Sample records for compatible computers running

  1. Windows Program For Driving The TDU-850 Printer

    NASA Technical Reports Server (NTRS)

    Parrish, Brett T.

    1995-01-01

    Program provides WYSIWYG compatibility between video display and printout. PDW is Microsoft Windows printer-driver computer program for use with Raytheon TDU-850 printer. Provides previously unavailable linkage between printer and IBM PC-compatible computers running Microsoft Windows. Enhances capabilities of Raytheon TDU-850 hardcopier by emulating all textual and graphical features normally supported by laser/ink-jet printers and makes printer compatible with any Microsoft Windows application. Also provides capabilities not found in laser/ink-jet printer drivers by providing certain Windows applications with ability to render high quality, true gray-scale photographic hardcopy on TDU-850. Written in C language.

  2. The Functional Measurement Experiment Builder suite: two Java-based programs to generate and run functional measurement experiments.

    PubMed

    Mairesse, Olivier; Hofmans, Joeri; Theuns, Peter

    2008-05-01

We propose a free, easy-to-use computer program that does not require prior knowledge of computer programming to generate and run experiments using textual or pictorial stimuli. Although the FM Experiment Builder suite was initially programmed for building and conducting FM experiments, it can also be applied to non-FM experiments that necessitate randomized, single, or multifactorial designs. The program is highly configurable, allowing multilingual use and a wide range of different response formats. The outputs of the experiments are Microsoft Excel-compatible .xls files that allow easy copy-paste of the results into Weiss's FM CalSTAT program (2006) or any other statistical package. Its Java-based structure is compatible with both Windows and Macintosh operating systems, and its compactness (< 1 MB) makes it easily distributable over the Internet.

  3. Program Processes Thermocouple Readings

    NASA Technical Reports Server (NTRS)

    Quave, Christine A.; Nail, William, III

    1995-01-01

    Digital Signal Processor for Thermocouples (DART) computer program implements precise and fast method of converting voltage to temperature for large-temperature-range thermocouple applications. Written using LabVIEW software. DART available only as object code for use on Macintosh II FX or higher-series computers running System 7.0 or later and IBM PC-series and compatible computers running Microsoft Windows 3.1. Macintosh version of DART (SSC-00032) requires LabVIEW 2.2.1 or 3.0 for execution. IBM PC version (SSC-00031) requires LabVIEW 3.0 for Windows 3.1. LabVIEW software product of National Instruments and not included with program.
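    The abstract does not disclose DART's internals; a common way to implement such a voltage-to-temperature conversion (assumed here purely for illustration) is a piecewise polynomial in the measured voltage, evaluated by Horner's rule. The coefficients below are placeholders, not NIST values; real use needs the published inverse coefficients for the thermocouple type and voltage range.

    ```c
    /* Sketch of the usual polynomial approach to thermocouple
       voltage-to-temperature conversion. Coefficients are placeholders,
       not NIST ITS-90 values. */
    #include <stdio.h>

    /* Evaluate T = sum c[i] * v^i by Horner's rule; v in mV, T in deg C. */
    static double mv_to_degc(const double c[], int n, double v)
    {
        double t = 0.0;
        for (int i = n - 1; i >= 0; i--)
            t = t * v + c[i];
        return t;
    }

    int main(void)
    {
        /* placeholder inverse coefficients for one voltage range */
        const double c[] = { 0.0, 25.0, -0.5, 0.01 };
        double v_mv = 4.096;                 /* example reading, mV */
        printf("T = %.2f C\n", mv_to_degc(c, 4, v_mv));
        return 0;
    }
    ```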

  4. Pyrolaser Operating System

    NASA Technical Reports Server (NTRS)

    Roberts, Floyd E., III

    1994-01-01

Software provides for control and acquisition of data from optical pyrometer. There are six individual programs in PYROLASER package. Provides quick and easy way to set up, control, and program standard Pyrolaser. Temperature and emissivity measurements either collected as if Pyrolaser in manual operating mode or displayed on real-time strip charts and stored in standard spreadsheet format for posttest analysis. Shell supplied to allow test-specific macros to be added to system easily. Written using LabVIEW software for use on Macintosh-series computers running System 6.0.3 or later, Sun SPARC-series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatible computers running Microsoft Windows 3.1 or later.

  5. Controlling Laboratory Processes From A Personal Computer

    NASA Technical Reports Server (NTRS)

    Will, H.; Mackin, M. A.

    1991-01-01

Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  6. REFERENCE MANUAL FOR RASSMIT VERSION 2.1: SUB-SLAB DEPRESSURIZATION SYSTEM DESIGN PERFORMANCE SIMULATION PROGRAM

    EPA Science Inventory

The report is a reference manual for RASSMIT Version 2.1, a computer program that was developed to simulate and aid in the design of sub-slab depressurization systems used for indoor radon mitigation. The program was designed to run on DOS-compatible personal computers to ensure ...

  7. SSL - THE SIMPLE SOCKETS LIBRARY

    NASA Technical Reports Server (NTRS)

    Campbell, C. E.

    1994-01-01

    The Simple Sockets Library (SSL) allows C programmers to develop systems of cooperating programs using Berkeley streaming Sockets running under the TCP/IP protocol over Ethernet. The SSL provides a simple way to move information between programs running on the same or different machines and does so with little overhead. The SSL can create three types of Sockets: namely a server, a client, and an accept Socket. The SSL's Sockets are designed to be used in a fashion reminiscent of the use of FILE pointers so that a C programmer who is familiar with reading and writing files will immediately feel comfortable with reading and writing with Sockets. The SSL consists of three parts: the library, PortMaster, and utilities. The user of the SSL accesses it by linking programs to the SSL library. The PortMaster initializes connections between clients and servers. The PortMaster also supports a "firewall" facility to keep out socket requests from unapproved machines. The "firewall" is a file which contains Internet addresses for all approved machines. There are three utilities provided with the SSL. SKTDBG can be used to debug programs that make use of the SSL. SPMTABLE lists the servers and port numbers on requested machine(s). SRMSRVR tells the PortMaster to forcibly remove a server name from its list. The package also includes two example programs: multiskt.c, which makes multiple accepts on one server, and sktpoll.c, which repeatedly attempts to connect a client to some server at one second intervals. SSL is a machine independent library written in the C-language for computers connected via Ethernet using the TCP/IP protocol. It has been successfully compiled and implemented on a variety of platforms, including Sun series computers running SunOS, DEC VAX series computers running VMS, SGI computers running IRIX, DECstations running ULTRIX, DEC alpha AXPs running OSF/1, IBM RS/6000 computers running AIX, IBM PC and compatibles running BSD/386 UNIX and HP Apollo 3000/4000/9000/400T computers running HP-UX. SSL requires 45K of RAM to run under SunOS and 80K of RAM to run under VMS. For use on IBM PC series computers and compatibles running DOS, SSL requires Microsoft C 6.0 and the Wollongong TCP/IP package. Source code for sample programs and debugging tools are provided. The documentation is available on the distribution medium in TeX and PostScript formats. The standard distribution medium for SSL is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 5.25 inch 360K MS-DOS format diskette. The SSL was developed in 1992 and was updated in 1993.
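    The SSL's own identifiers are not given in the abstract, so the sketch below does not reproduce its API; it only illustrates the FILE-pointer style of socket use the library is said to provide, using plain Berkeley sockets plus fdopen() on a POSIX system. The address and port are hypothetical test values.

    ```c
    /* Minimal sketch (not the SSL itself): open a TCP client connection,
       then read and write it with ordinary stdio calls via fdopen(). */
    #include <stdio.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static FILE *client_open(const char *ip, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP stream socket */
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_port   = htons(port);
        inet_pton(AF_INET, ip, &a.sin_addr);
        if (fd < 0 || connect(fd, (struct sockaddr *)&a, sizeof a) < 0)
            return NULL;
        return fdopen(fd, "r+");                    /* socket as a FILE* */
    }

    int main(void)
    {
        FILE *s = client_open("127.0.0.1", 5000);   /* assumed test server */
        if (!s) return 1;
        fprintf(s, "hello\n");                      /* write like a file */
        fflush(s);                                  /* flush before reading */
        char reply[128];
        if (fgets(reply, sizeof reply, s))          /* read like a file */
            printf("got: %s", reply);
        fclose(s);
        return 0;
    }
    ```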

  8. Potential-Field Geophysical Software for the PC

    USGS Publications Warehouse

    1995-01-01

    The computer programs of the Potential-Field Software Package run under the DOS operating system on IBM-compatible personal computers. They are used for the processing, display, and interpretation of potential-field geophysical data (gravity- and magnetic-field measurements) and other data sets that can be represented as grids or profiles. These programs have been developed on a variety of computer systems over a period of 25 years by the U.S. Geological Survey.

  9. Computer Simulation Of Cyclic Oxidation

    NASA Technical Reports Server (NTRS)

    Probst, H. B.; Lowell, C. E.

    1990-01-01

    Computer model developed to simulate cyclic oxidation of metals. With relatively few input parameters, kinetics of cyclic oxidation simulated for wide variety of temperatures, durations of cycles, and total numbers of cycles. Program written in BASICA and run on any IBM-compatible microcomputer. Used in variety of ways to aid experimental research. In minutes, effects of duration of cycle and/or number of cycles on oxidation kinetics of material surveyed.
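    The abstract does not give the model's equations; a minimal sketch under common textbook assumptions (parabolic scale growth while hot, a fixed fraction of the oxide spalling on each cooldown; all parameter values illustrative, not the published model) might look like:

    ```c
    /* Cyclic-oxidation loop under assumed textbook kinetics. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double kp = 0.02;      /* parabolic rate constant, (mg/cm^2)^2 / h */
        double dt = 1.0;       /* hot time per cycle, h */
        double q  = 0.05;      /* fraction of oxide spalled per cooldown */
        double oxide = 0.0;    /* retained oxide mass, mg/cm^2 */
        double spalled = 0.0;  /* cumulative spalled oxide, mg/cm^2 */

        for (int cycle = 1; cycle <= 100; cycle++) {
            /* parabolic growth while hot: w^2 = w0^2 + kp*dt */
            oxide = sqrt(oxide * oxide + kp * dt);
            double loss = q * oxide;          /* spallation on cooldown */
            oxide   -= loss;
            spalled += loss;
            if (cycle % 20 == 0)
                printf("cycle %3d: retained %.3f, spalled %.3f\n",
                       cycle, oxide, spalled);
        }
        return 0;
    }
    ```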

  10. Computing Operating Characteristics Of Bearing/Shaft Systems

    NASA Technical Reports Server (NTRS)

    Moore, James D.

    1996-01-01

    SHABERTH computer program predicts operating characteristics of bearings in multibearing load-support system. Lubricated and nonlubricated bearings modeled. Calculates loads, torques, temperatures, and fatigue lives of ball and/or roller bearings on single shaft. Provides for analysis of reaction of system to termination of supply of lubricant to bearings and other lubricated mechanical elements. Valuable in design and analysis of shaft/bearing systems. Two versions of SHABERTH available. Cray version (LEW-14860), "Computing Thermal Performances Of Shafts and Bearings". IBM PC version (MFS-28818), written for IBM PC-series and compatible computers running MS-DOS.

  11. IRDS prototyping with applications to the representation of EA/RA models

    NASA Technical Reports Server (NTRS)

    Lekkos, Anthony A.; Greenwood, Bruce

    1988-01-01

The requirements and system overview for the Information Resources Dictionary System (IRDS) are described. A formal design specification for a scaled-down IRDS implementation compatible with the proposed FIPS IRDS standard is included. The major design objectives for this IRDS will include a menu driven user interface, implementation of basic IRDS operations, and PC compatibility. The IRDS was implemented using the Smalltalk/V object-oriented programming system and an AT&T 6300 personal computer running under MS-DOS 3.1. The difficulties encountered in using Smalltalk are discussed.

  12. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    PubMed Central

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H.A.; Hlavacek, William S.; Posner, Richard G.

    2016-01-01

    Summary: Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. Availability and implementation: BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary information: Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com PMID:26556387

  13. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments.

    PubMed

    Thomas, Brandon R; Chylek, Lily A; Colvin, Joshua; Sirimulla, Suman; Clayton, Andrew H A; Hlavacek, William S; Posner, Richard G

    2016-03-01

Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. Here, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e. optimization of parameter values for consistency with data) when simulations are computationally expensive. BioNetFit can be used on stand-alone Mac, Windows/Cygwin, and Linux platforms and on Linux-based clusters running SLURM, Torque/PBS, or SGE. The BioNetFit source code (Perl) is freely available (http://bionetfit.nau.edu). Supplementary data are available at Bioinformatics online. Contact: bionetgen.help@gmail.com.

  14. Operating System For Numerically Controlled Milling Machine

    NASA Technical Reports Server (NTRS)

    Ray, R. B.

    1992-01-01

OPMILL program is operating system for Kearney and Trecker milling machine providing fast, easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.

  15. Portability studies of modular data base managers. Interim reports. [Running CDC's DATATRAN 2 on IBM 360/370 and IBM's JOSHUA on CDC computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopp, H.J.; Mortensen, G.A.

    1978-04-01

Approximately 60% of the full CDC 6600/7600 DATATRAN 2.0 capability was made operational on IBM 360/370 equipment. Sufficient capability was made operational to demonstrate adequate performance for modular program linking applications. Also demonstrated were the basic capabilities and performance required to support moderate-sized data base applications and moderately active scratch input/output applications. Approximately one to two calendar years are required to develop DATATRAN 2.0 capabilities fully for the entire spectrum of applications proposed. Included in the next stage of conversion should be syntax checking and syntax conversion features that would foster greater FORTRAN compatibility between IBM and CDC developed modules. The batch portion of the JOSHUA Modular System, which was developed by Savannah River Laboratory to run on an IBM computer, was examined for the feasibility of conversion to run on a Control Data Corporation (CDC) computer. Portions of the JOSHUA Precompiler were changed so as to be operable on the CDC computer. The Data Manager and Batch Monitor were also examined for conversion feasibility, but no changes were made in them. It appears to be feasible to convert the batch portion of the JOSHUA Modular System to run on a CDC computer with an estimated additional two to three man-years of effort. 9 tables.

  16. Program Helps Generate And Manage Graphics

    NASA Technical Reports Server (NTRS)

    Truong, L. V.

    1994-01-01

Living Color Frame Maker (LCFM) computer program generates computer-graphics frames. Graphical frames saved as text files, in readable and disclosed format, easily retrieved and manipulated by user programs for wide range of real-time visual information applications. LCFM implemented in frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or control, diagrams of circuits or systems brought to "life" by use of designated video colors and intensities to symbolize status of hardware components (via real-time feedback from sensors). Status of systems can be displayed. Written in C++ using Borland C++ 2.0 compiler for IBM PC-series computers and compatible computers running MS-DOS.

  17. NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (AMDAHL VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. E.

    1994-01-01

    The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).

  18. NASADIG - NASA DEVICE INDEPENDENT GRAPHICS LIBRARY (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Rogers, J. E.

    1994-01-01

    The NASA Device Independent Graphics Library, NASADIG, can be used with many computer-based engineering and management applications. The library gives the user the opportunity to translate data into effective graphic displays for presentation. The software offers many features which allow the user flexibility in creating graphics. These include two-dimensional plots, subplot projections in 3D-space, surface contour line plots, and surface contour color-shaded plots. Routines for three-dimensional plotting, wireframe surface plots, surface plots with hidden line removal, and surface contour line plots are provided. Other features include polar and spherical coordinate plotting, world map plotting utilizing either cylindrical equidistant or Lambert equal area projection, plot translation, plot rotation, plot blowup, splines and polynomial interpolation, area blanking control, multiple log/linear axes, legends and text control, curve thickness control, and multiple text fonts (18 regular, 4 bold). NASADIG contains several groups of subroutines. Included are subroutines for plot area and axis definition; text set-up and display; area blanking; line style set-up, interpolation, and plotting; color shading and pattern control; legend, text block, and character control; device initialization; mixed alphabets setting; and other useful functions. The usefulness of many routines is dependent on the prior definition of basic parameters. The program's control structure uses a serial-level construct with each routine restricted for activation at some prescribed level(s) of problem definition. NASADIG provides the following output device drivers: Selanar 100XL, VECTOR Move/Draw ASCII and PostScript files, Tektronix 40xx, 41xx, and 4510 Rasterizer, DEC VT-240 (4014 mode), IBM AT/PC compatible with SmartTerm 240 emulator, HP Lasergrafix Film Recorder, QMS 800/1200, DEC LN03+ Laserprinters, and HP LaserJet (Series III). NASADIG is written in FORTRAN and is available for several platforms. NASADIG 5.7 is available for DEC VAX series computers running VMS 5.0 or later (MSC-21801), Cray X-MP and Y-MP series computers running UNICOS (COS-10049), and Amdahl 5990 mainframe computers running UTS (COS-10050). NASADIG 5.1 is available for UNIX-based operating systems (MSC-22001). The UNIX version has been successfully implemented on Sun4 series computers running SunOS, SGI IRIS computers running IRIX, Hewlett Packard 9000 computers running HP-UX, and Convex computers running Convex OS (MSC-22001). The standard distribution medium for MSC-21801 is a set of two 6250 BPI 9-track magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. The standard distribution medium for COS-10049 and COS-10050 is a 6250 BPI 9-track magnetic tape in UNIX tar format. Other distribution media and formats may be available upon request. The standard distribution medium for MSC-22001 is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. Alternate distribution media and formats are available upon request. With minor modification, the UNIX source code can be ported to other platforms including IBM PC/AT series computers and compatibles. NASADIG is also available bundled with TRASYS, the Thermal Radiation Analysis System (COS-10026, DEC VAX version; COS-10040, CRAY version).

  19. The RANDOM computer program: A linear congruential random number generator

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
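    The linear congruential recurrence under discussion is x_{n+1} = (a*x_n + c) mod m. A minimal C sketch, using the widely published Park-Miller constants (a = 16807, c = 0, m = 2^31 - 1) rather than the parameters selected in the report:

    ```c
    /* Minimal linear congruential generator. Constants are the
       "minimal standard" Park-Miller values, shown for illustration. */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t lcg_state = 1;      /* seed; must be nonzero here */

    /* x_{n+1} = (16807 * x_n) mod (2^31 - 1), via 64-bit intermediate */
    static uint32_t lcg_next(void)
    {
        lcg_state = (uint32_t)(((uint64_t)lcg_state * 16807u) % 2147483647u);
        return lcg_state;
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++)     /* uniform draws on (0,1) */
            printf("%f\n", lcg_next() / 2147483647.0);
        return 0;
    }
    ```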

  20. Simple and powerful visual stimulus generator.

    PubMed

    Kremlácek, J; Kuba, M; Kubová, Z; Vít, F

    1999-02-01

We describe a cheap, simple, portable and efficient approach to visual stimulation for neurophysiology which does not need any special hardware equipment. The method, based on an animation technique, uses the FLI Autodesk Animator format. This form of animation is replayed by a special program ('player') that provides synchronisation pulses to the recording system via the parallel port. The 'player' runs on an IBM-compatible personal computer under the MS-DOS operating system, and the stimulus is displayed on a VGA computer monitor. Various stimuli created with this technique for visual evoked potentials (VEPs) are presented.

  1. Estimating aquifer transmissivity from specific capacity using MATLAB.

    PubMed

    McLin, Stephen G

    2005-01-01

    Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage.
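    The abstract does not reproduce the governing relation; a standard starting point for such estimates (assumed here, before the paper's partial-penetration and well-efficiency corrections) is a Cooper-Jacob-type expression, in which transmissivity appears on both sides and is found by iteration from an initial guess such as T_0 = Q/s:

    ```latex
    % Assumed Cooper-Jacob-type relation between transmissivity T
    % and specific capacity Q/s:
    T \;=\; \frac{Q}{4\pi s}\,
            \ln\!\left(\frac{2.25\,T\,t}{r_w^{2}\,S}\right)
    % Q: pumping rate, s: drawdown at pumping time t,
    % r_w: well radius, S: storativity.
    % Iterate T_{k+1} = f(T_k) from T_0 = Q/s until convergence.
    ```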

  2. InfoTrac TFD: a microcomputer implementation of the Transcription Factor Database TFD with a graphical user interface.

    PubMed

    Hoeck, W G

    1994-06-01

InfoTrac TFD provides a graphical user interface (GUI) for viewing and manipulating datasets in the Transcription Factor Database, TFD. The interface was developed in FileMaker Pro 2.0 by Claris Corporation, which provides cross-platform compatibility between Apple Macintosh computers running System 7.0 and higher and IBM-compatibles running Microsoft Windows 3.0 and higher. TFD ASCII tables were formatted to fit data into several custom data tables using Add/Strip, a shareware utility, and FileMaker Pro's lookup feature. The lookup feature was also put to use to allow TFD data tables to become linked within a flat-file database management system. The 'Navigator', consisting of several pop-up menus listing transcription factor abbreviations, facilitates the search for transcription factor entries. Data are presented onscreen in several layouts that can be further customized by the user. InfoTrac TFD makes the transcription factor database accessible to a much wider community of scientists by making it available on two popular microcomputer platforms.

  3. GPR data processing computer software for the PC

    USGS Publications Warehouse

    Lucius, Jeffrey E.; Powers, Michael H.

    2002-01-01

    The computer software described in this report is designed for processing ground penetrating radar (GPR) data on Intel-compatible personal computers running the MS-DOS operating system or MS Windows 3.x/95/98/ME/2000. The earliest versions of these programs were written starting in 1990. At that time, commercially available GPR software did not meet the processing and display requirements of the USGS. Over the years, the programs were refined and new features and programs were added. The collection of computer programs presented here can perform all basic processing of GPR data, including velocity analysis and generation of CMP stacked sections and data volumes, as well as create publication quality data images.

  4. Aviation Design Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    DARcorporation developed a General Aviation CAD package through a Small Business Innovation Research contract from Langley Research Center. This affordable, user-friendly preliminary design system for General Aviation aircraft runs on the popular 486 IBM-compatible personal computers. Individuals taking the home-built approach, small manufacturers of General Aviation airplanes, as well as students and others interested in the analysis and design of aircraft are possible users of the package. The software can cut design and development time in half.

  5. Integrating a Trusted Computing Base Extension Server and Secure Session Server into the LINUX Operating System

    DTIC Science & Technology

    2001-09-01

Readily Available Linux has been copyrighted under the terms of the GNU General Public License (GPL). This is a license written by the Free...GNOME and KDE. d. Portability Linux is highly compatible with many common operating systems. For...using suitable libraries, Linux is able to run programs written for other operating systems. [Ref. 8] The GNU Project is coordinated by the

  6. HAL/S-FC compiler system specifications

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This document specifies the informational interfaces within the HAL/S-FC compiler, and between the compiler and the external environment. This Compiler System Specification is for the HAL/S-FC compiler and its associated run time facilities which implement the full HAL/S language. The HAL/S-FC compiler is designed to operate stand-alone on any compatible IBM 360/370 computer and within the Software Development Laboratory (SDL) at NASA/JSC, Houston, Texas.

  7. VTGRAPH - GRAPHIC SOFTWARE TOOL FOR VT TERMINALS

    NASA Technical Reports Server (NTRS)

    Wang, C.

    1994-01-01

VTGRAPH is a graphics software tool for DEC/VT or VT compatible terminals which are widely used by government and industry. It is a FORTRAN or C-language callable library designed to allow the user to deal with many computer environments which use VT terminals for window management and graphic systems. It also provides a PLOT10-like package plus color or shade capability for VT240, VT241, and VT300 terminals. The program is transportable to many different computers which use VT terminals. With this graphics package, the user can easily design friendlier user-interface programs and design PLOT10 programs on VT terminals with different computer systems. VTGRAPH was developed using the ReGIS graphics set which provides a full range of graphics capabilities. The basic VTGRAPH capabilities are as follows: window management, PLOT10 compatible drawing, generic program routines for two and three dimensional plotting, and color graphics or shaded graphics capability. The program was developed in VAX FORTRAN in 1988. VTGRAPH requires a ReGIS graphics set terminal and a FORTRAN compiler. The program has been run on a DEC MicroVAX 3600 series computer operating under VMS 5.0, and has a virtual memory requirement of 5KB.

  8. Speed-limited particle-in-cell (SLPIC) simulation

    NASA Astrophysics Data System (ADS)

    Werner, Gregory; Cary, John; Jenkins, Thomas

    2016-10-01

    Speed-limited particle-in-cell (SLPIC) simulation is a new method for particle-based plasma simulation that allows increased timesteps in cases where the timestep is determined (e.g., in standard PIC) not by the smallest timescale of interest, but rather by an even smaller physical timescale that affects numerical stability. For example, SLPIC need not resolve the plasma frequency if plasma oscillations do not play a significant role in the simulation; in contrast, standard PIC must usually resolve the plasma frequency to avoid instability. Unlike fluid approaches, SLPIC retains a fully-kinetic description of plasma particles and includes all the same physical phenomena as PIC; in fact, if SLPIC is run with a PIC-compatible timestep, it is identical to PIC. However, unlike PIC, SLPIC can run stably with larger timesteps. SLPIC has been shown to be effective for finding steady-state solutions for 1D collisionless sheath problems, greatly speeding up computation despite a large ion/electron mass ratio. SLPIC is a relatively small modification of standard PIC, with no complexities that might degrade parallel efficiency (compared to PIC), and is similarly compatible with PIC field solvers and boundary conditions.

  9. The Walter Reed performance assessment battery.

    PubMed

    Thorne, D R; Genser, S G; Sing, H C; Hegge, F W

    1985-01-01

    This paper describes technical details of a computerized psychological test battery designed for examining the effects of various state-variables on a representative sample of normal psychomotor, perceptual and cognitive tasks. The duration, number and type of tasks can be customized to different experimental needs, and then administered and analyzed automatically, at intervals as short as one hour. The battery can be run on either the Apple-II family of computers or on machines compatible with the IBM-PC.

  10. CrocoBLAST: Running BLAST efficiently in the age of next-generation sequencing.

    PubMed

    Tristão Ramos, Ravi José; de Azevedo Martins, Allan Cézar; da Silva Delgado, Gabrielle; Ionescu, Crina-Maria; Ürményi, Turán Peter; Silva, Rosane; Koca, Jaroslav

    2017-11-15

CrocoBLAST is a tool for dramatically speeding up BLAST+ execution on any computer. Alignments that would take days or weeks with NCBI BLAST+ can be run overnight with CrocoBLAST. Additionally, CrocoBLAST provides features critical for NGS data analysis, including: results identical to those of BLAST+; compatibility with any BLAST+ version; real-time information regarding calculation progress and remaining run time; access to partial alignment results; queueing, pausing, and resuming BLAST+ calculations without information loss. CrocoBLAST is freely available online, with ample documentation (webchem.ncbr.muni.cz/Platform/App/CrocoBLAST). No installation or user registration is required. CrocoBLAST is implemented in C, while the graphical user interface is implemented in Java. CrocoBLAST is supported under Linux and Windows, and can be run under Mac OS X in a Linux virtual machine. Supplementary data are available at Bioinformatics online. Contact: jkoca@ceitec.cz.

  11. QDENSITY—A Mathematica quantum computer simulation

    NASA Astrophysics Data System (ADS)

    Juliá-Díaz, Bruno; Burdis, Joseph M.; Tabakin, Frank

    2009-03-01

This Mathematica 6.0 package is a simulation of a Quantum Computer. The program provides a modular, instructive approach for generating the basic elements that make up a quantum circuit. The main emphasis is on using the density matrix, although an approach using state vectors is also implemented in the package. The package commands are defined in Qdensity.m which contains the tools needed in quantum circuits, e.g., multiqubit kets, projectors, gates, etc. New version program summary. Program title: QDENSITY 2.0. Catalogue identifier: ADXH_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXH_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 26 055. No. of bytes in distributed program, including test data, etc.: 227 540. Distribution format: tar.gz. Programming language: Mathematica 6.0. Operating system: any which supports Mathematica; tested under Microsoft Windows XP, Macintosh OS X, and Linux FC4. Catalogue identifier of previous version: ADXH_v1_0. Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 914. Classification: 4.15. Does the new version supersede the previous version?: offers an alternative, more up-to-date implementation. Nature of problem: analysis and design of quantum circuits, quantum algorithms and quantum clusters. Solution method: a Mathematica package is provided which contains commands to create and analyze quantum circuits. Several Mathematica notebooks containing relevant examples (Teleportation, Shor's Algorithm and Grover's search) are explained in detail. A tutorial, Tutorial.nb, is also enclosed. Reasons for new version: the package has been updated to make it fully compatible with Mathematica 6.0. Summary of revisions: the package has been updated to make it fully compatible with Mathematica 6.0. Running time: most examples included in the package, e.g., the tutorial, Shor's examples, Teleportation examples and Grover's search, run in less than a minute on a Pentium 4 processor (2.6 GHz). The running time for a quantum computation depends crucially on the number of qubits employed.

  12. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek

Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  13. A graph-based computational framework for simulation and optimisation of coupled infrastructure networks

    DOE PAGES

    Jalving, Jordan; Abhyankar, Shrirang; Kim, Kibaek; ...

    2017-04-24

Here, we present a computational framework that facilitates the construction, instantiation, and analysis of large-scale optimization and simulation applications of coupled energy networks. The framework integrates the optimization modeling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools use a common graph-based abstraction that enables us to achieve compatibility between data structures and to build applications that use network models of different physical fidelity. We also describe how to embed these tools within complex computational workflows using SWIFT, which is a tool that facilitates parallel execution of multiple simulation runs and management of input and output data. We discuss how to use these capabilities to target coupled natural gas and electricity systems.

  14. Aircraft /Stores Compatibility Symposium Proceedings, Held at Arlington, Virginia on 2-4 September 1975. Volume 1

    DTIC Science & Technology

    1975-09-04

for a large number of candidate fairings would have been too time consuming and the computer time costly for so many runs. This necessitated a paring...factors affecting a store's separation behavior are the forces and moments on the store while in the captive carriage position. Tests have shown that...oscillation started by an initial yaw angle-of-attack would not. The theoretical expression, Eq. (7), confirms this behavior for undamped (T-o.o

  15. Data acquisition, processing and firing aid software for multichannel EMP simulation

    NASA Astrophysics Data System (ADS)

    Eumurian, Gregoire; Arbaud, Bruno

    1986-08-01

    Electromagnetic compatibility testing yields a large quantity of data for systematic analysis. An automated data acquisition system has been developed. It is based on standard EMP instrumentation which allows a pre-established program to be followed whilst orientating the measurements according to the results obtained. The system is controlled by a computer running interactive programs (multitask windows, scrollable menus, mouse, etc.) which handle the measurement channels, files, displays and process data in addition to providing an aid to firing.

  16. Acceptance of direct physician access to a computer-based patient record in a managed care setting.

    PubMed

    Dewey, J B; Manning, P; Brandt, S

    1993-01-01

Kaiser Permanente Mid-Atlantic States has developed a fully integrated outpatient information system, written in MUMPS, which currently runs on an IBM ES9000 on a VM platform. The applications include Lab, Radiology, Transcription, Appointments, Pharmacy, Encounter tracking, Hospitalizations, Referrals, Phone Advice, Pap tracking, Problem list, Immunization tracking, and Patient demographics. They are department specific and require input and output from a dumb terminal. We have developed a physician's workstation to access this information using PC-compatible computers running Microsoft Windows and a custom Microsoft Visual Basic 2.0 environment which draws from these 14 applications, giving the physician a comprehensive view of all electronic medical records. Through rapid prototyping, voluntary participation, formal training and gradual implementation we have created an enthusiastic response. 95% of our physician PC users access the system each month. The use ranges from 0.2 to 3.0 screens of data viewed per patient visit. This response continues to drive the process toward still greater user acceptance and further practice enhancement.

  17. Guidelines for developing vectorizable computer programs

    NASA Technical Reports Server (NTRS)

    Miner, E. W.

    1982-01-01

    Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
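    As a concrete illustration of the structuring principle (shown here in C for brevity rather than the report's FORTRAN): a loop whose iterations are independent can be vectorized, while a recurrence between iterations cannot, regardless of language.

    ```c
    /* Independent iterations vectorize; a recurrence does not. */
    #include <stdio.h>
    #define N 1000

    int main(void)
    {
        static double a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { b[i] = i; c[i] = 2.0 * i; }

        /* Vectorizable: each iteration is independent of the others. */
        for (int i = 0; i < N; i++)
            a[i] = b[i] + c[i];

        /* Not vectorizable as written: a[i] depends on a[i-1]
           computed in the previous iteration (a recurrence). */
        for (int i = 1; i < N; i++)
            a[i] = a[i - 1] + b[i];

        printf("%f\n", a[N - 1]);
        return 0;
    }
    ```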

  18. Microelectromechanical reprogrammable logic device.

    PubMed

    Hafiz, M A A; Kosuru, L; Younis, M I

    2016-03-29

    In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As the miniaturization in the component level to enhance the computational power is rapidly approaching physical limits, alternative computing methods are vigorously pursued. One of the desired aspects in the future computing approaches is the provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c.-driving frequency. The device is fabricated using complementary metal oxide semiconductor compatible mass fabrication process, suitable for on-chip integration, and promises an alternative electromechanical computing scheme.

  19. Microelectromechanical reprogrammable logic device

    PubMed Central

    Hafiz, M. A. A.; Kosuru, L.; Younis, M. I.

    2016-01-01

    In modern computing, the Boolean logic operations are set by interconnect schemes between the transistors. As the miniaturization in the component level to enhance the computational power is rapidly approaching physical limits, alternative computing methods are vigorously pursued. One of the desired aspects in the future computing approaches is the provision for hardware reconfigurability at run time to allow enhanced functionality. Here we demonstrate a reprogrammable logic device based on the electrothermal frequency modulation scheme of a single microelectromechanical resonator, capable of performing all the fundamental 2-bit logic functions as well as n-bit logic operations. Logic functions are performed by actively tuning the linear resonance frequency of the resonator operated at room temperature and under modest vacuum conditions, reprogrammable by the a.c.-driving frequency. The device is fabricated using complementary metal oxide semiconductor compatible mass fabrication process, suitable for on-chip integration, and promises an alternative electromechanical computing scheme. PMID:27021295

  20. [Use of computer programs in the education and self-management of patients with insulin-dependent diabetes mellitus].

    PubMed

    Picco, P; Di Rocco, M; Buoncompagni, A; Gandullia, P; Lattere, M; Borrone, C

    1991-01-01

A computerized program for children and adolescents with insulin-dependent diabetes mellitus (IDDM) and their parents has been developed. Our program consists of computer-assisted education, of an aid to routine insulin-dosage self-adjustment, and of records of home and hospital controls. Technically, it has been implemented in dBASE III Plus; it runs on IBM PC computers (and compatible computers) under MS-DOS (version 3.0 and later). Computer-assisted education consists of 80 multiple-choice questions divided in 2 parts: the first concerns basic information about diabetes while the second concerns behavioral attitudes of the patient in particular situations. Explanations are displayed after every question, whether the choice was correct or incorrect. Help for self-adjustment of routine insulin dosage is offered in the third part. Finally, daily home urine and/or blood controls and results of hospital admissions are stored in a database.

  1. Atomdroid: a computational chemistry tool for mobile platforms.

    PubMed

    Feldt, Jonas; Mata, Ricardo A; Dieterich, Johannes M

    2012-04-23

    We present the implementation of a new molecular mechanics program designed for use in mobile platforms, the first specifically built for these devices. The software is designed to run on Android operating systems and is compatible with several modern tablet-PCs and smartphones available in the market. It includes molecular viewer/builder capabilities with integrated routines for geometry optimizations and Monte Carlo simulations. These functionalities allow it to work as a stand-alone tool. We discuss some particular development aspects, as well as the overall feasibility of using computational chemistry software packages in mobile platforms. Benchmark calculations show that through efficient implementation techniques even hand-held devices can be used to simulate midsized systems using force fields.

  2. Automation photometer of Hitachi U–2000 spectrophotometer with RS–232C–based computer

    PubMed Central

    Kumar, K. Senthil; Lakshmi, B. S.; Pennathur, Gautam

    1998-01-01

The interfacing of a commonly used spectrophotometer, the Hitachi U-2000, through its RS-232C port to an IBM-compatible computer is described. The hardware for data acquisition was designed by suitably modifying readily available materials, and the software was written using the C programming language. The various steps involved in these procedures are elucidated in detail. The efficacy of the procedure was tested experimentally by running the visible spectrum of a cyanine dye. The spectrum was plotted through a printer hooked to the computer. The spectrum was also plotted by transforming the abscissa to the wavenumber scale. This was carried out by using another module written in C. The efficiency of the whole set-up has been calculated using standard procedures. PMID:18924834

  3. Electromagnetic Compatibility Design of the Computer Circuits

    NASA Astrophysics Data System (ADS)

    Zitai, Hong

    2018-02-01

Computers and the Internet have penetrated nearly every aspect of daily work, but as electronic equipment and electrical systems have improved, the electromagnetic environment has become much more complex. Electromagnetic interference has become an important factor hindering the normal operation of electronic equipment. To analyse electromagnetic compatibility in computer circuits, this paper starts from computer electromagnetics and the concept of electromagnetic compatibility. It then analyses the main electromagnetic compatibility problems of computer circuits and systems, and on that basis discusses how to design computer circuits for electromagnetic compatibility. Finally, the basic contents and methods of EMC testing are described so as to ensure the electromagnetic compatibility of equipment.

  4. Computer program for prediction of capture maneuver probability for an on-off reaction controlled upper stage

    NASA Technical Reports Server (NTRS)

    Knauber, R. N.

    1982-01-01

A FORTRAN-coded computer program which computes the capture transient of a launch vehicle upper stage at the ignition and/or separation event is presented. It is for a single-degree-of-freedom on-off reaction-jet attitude control system. The Monte Carlo method is used to determine the statistical value of key parameters at the outcome of the event. Aerodynamic and booster-induced disturbances, vehicle and control system characteristics, and initial conditions are treated as random variables. By appropriate selection of input data, pitch, yaw, and roll axes can be analyzed. Transient response of a single deterministic case can be computed. The program is currently set up on a CDC CYBER 175 computer system but is compatible with ANSI FORTRAN computer language. This routine has been used over the past fifteen (15) years for the SCOUT Launch Vehicle and has been run on RECOMP III, IBM 7090, IBM 360/370, CDC 6600 and CDC CYBER 175 computers with little modification.
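    A minimal sketch of the Monte Carlo scheme the abstract describes, in C: the uncertain inputs are drawn at random, the deterministic transient is evaluated for each draw, and the capture probability is the observed success fraction. The one-line capture criterion below is a placeholder, not the program's control dynamics; all names and bounds are hypothetical.

    ```c
    /* Monte Carlo estimate of a capture probability (illustrative). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* uniform draw on [lo, hi] */
    static double uniform(double lo, double hi)
    {
        return lo + (hi - lo) * rand() / (double)RAND_MAX;
    }

    int main(void)
    {
        const int trials = 10000;
        int captures = 0;
        srand(12345);                            /* fixed seed: repeatable */

        for (int i = 0; i < trials; i++) {
            double rate0 = uniform(-2.0, 2.0);   /* initial rate, deg/s   */
            double att0  = uniform(-5.0, 5.0);   /* initial attitude, deg */
            /* placeholder capture criterion standing in for the
               on-off reaction-jet control transient */
            double cost = fabs(att0) + 3.0 * fabs(rate0);
            if (cost < 8.0)
                captures++;
        }
        printf("estimated capture probability: %.3f\n",
               (double)captures / trials);
        return 0;
    }
    ```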

  5. Expert systems identify fossils and manage large paleontological databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beightol, D.S.; Conrad, M.A.

    EXPAL is a computer program permitting creation and maintenance of comprehensive databases in marine paleontology. It is designed to assist specialists and non-specialists. EXPAL includes a powerful expert system based on the morphological descriptors specific to a given group of fossils. The expert system may be used, for example, to describe and automatically identify an unknown specimen. EXPAL was first applied to Dasycladales (Calcareous green algae). Projects are under way for corresponding expert systems and databases on planktonic foraminifers and calpionellids. EXPAL runs on an IBM XT or compatible microcomputer.

  6. SedMob: A mobile application for creating sedimentary logs in the field

    NASA Astrophysics Data System (ADS)

    Wolniewicz, Pawel

    2014-05-01

    SedMob is an open-source, mobile software package for creating sedimentary logs, targeted for use in tablets and smartphones. The user can create an unlimited number of logs, save data from each bed in the log as well as export and synchronize the data with a remote server. SedMob is designed as a mobile interface to SedLog: a free multiplatform package for drawing graphic logs that runs on PC computers. Data entered into SedMob are saved in the CSV file format, fully compatible with SedLog.

  7. KNET - DISTRIBUTED COMPUTING AND/OR DATA TRANSFER PROGRAM

    NASA Technical Reports Server (NTRS)

    Hui, J.

    1994-01-01

    KNET facilitates distributed computing between a UNIX compatible local host and a remote host which may or may not be UNIX compatible. It is capable of automatic remote login. That is, it performs on the user's behalf the chore of handling host selection, user name, and password to the designated host. Once the login has been successfully completed, the user may interactively communicate with the remote host. Data output from the remote host may be directed to the local screen, to a local file, and/or to a local process. Conversely, data input from the keyboard, a local file, or a local process may be directed to the remote host. KNET takes advantage of the multitasking and terminal mode control features of the UNIX operating system. A parent process is used as the upper layer for interfacing with the local user. A child process is used for a lower layer for interfacing with the remote host computer, and optionally one or more child processes can be used for the remote data output. Output may be directed to the screen and/or to the local processes under the control of a data pipe switch. In order for KNET to operate, the local and remote hosts must observe a common communications protocol. KNET is written in ANSI standard C-language for computers running UNIX. It has been successfully implemented on several Sun series computers and a DECstation 3100 and used to run programs remotely on VAX VMS and UNIX based computers. It requires 100K of RAM under SunOS and 120K of RAM under DEC RISC ULTRIX. An electronic copy of the documentation is provided on the distribution medium. The standard distribution medium for KNET is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. KNET was developed in 1991 and is a copyrighted work with all copyright vested in NASA. UNIX is a registered trademark of AT&T Bell Laboratories. Sun and SunOS are trademarks of Sun Microsystems, Inc. DECstation, VAX, VMS, and ULTRIX are trademarks of Digital Equipment Corporation.
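    The parent/child layering the abstract describes can be pictured with plain POSIX primitives; the sketch below (not KNET's code) shows a parent acting as the user-facing upper layer reading output piped from a child that stands in for the remote-host interface layer.

    ```c
    /* Parent/child layering over a pipe, illustrating the pattern. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) < 0) return 1;

        if (fork() == 0) {                 /* child: "remote host" layer */
            close(fd[0]);
            const char *msg = "data from remote side\n";
            write(fd[1], msg, strlen(msg));
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);                      /* parent: user-facing layer */
        char buf[128];
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            fputs(buf, stdout);            /* route output to the screen */
        }
        close(fd[0]);
        wait(NULL);
        return 0;
    }
    ```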

  8. MATH77 - A LIBRARY OF MATHEMATICAL SUBPROGRAMS FOR FORTRAN 77, RELEASE 4.0

    NASA Technical Reports Server (NTRS)

    Lawson, C. L.

    1994-01-01

MATH77 is a high quality library of ANSI FORTRAN 77 subprograms implementing contemporary algorithms for the basic computational processes of science and engineering. The portability of MATH77 meets the needs of present-day scientists and engineers who typically use a variety of computing environments. Release 4.0 of MATH77 contains 454 user-callable and 136 lower-level subprograms. Usage of the user-callable subprograms is described in 69 sections of the 416-page users' manual. The topics covered by MATH77 are indicated by the following list of chapter titles in the users' manual: Mathematical Functions, Pseudo-random Number Generation, Linear Systems of Equations and Linear Least Squares, Matrix Eigenvalues and Eigenvectors, Matrix Vector Utilities, Nonlinear Equation Solving, Curve Fitting, Table Look-Up and Interpolation, Definite Integrals (Quadrature), Ordinary Differential Equations, Minimization, Polynomial Rootfinding, Finite Fourier Transforms, Special Arithmetic, Sorting, Library Utilities, Character-based Graphics, and Statistics. Besides subprograms that are adaptations of public domain software, MATH77 contains a number of unique packages developed by the authors of MATH77. Instances of the latter type include (1) adaptive quadrature, allowing for exceptional generality in multidimensional cases, (2) the ordinary differential equations solver used in spacecraft trajectory computation for JPL missions, (3) univariate and multivariate table look-up and interpolation, allowing for "ragged" tables, and providing error estimates, and (4) univariate and multivariate derivative-propagation arithmetic. MATH77 release 4.0 is a subroutine library which has been carefully designed to be usable on any computer system that supports the full ANSI standard FORTRAN 77 language. It has been successfully implemented on a CRAY Y/MP computer running UNICOS, a UNISYS 1100 computer running EXEC 8, a DEC VAX series computer running VMS, a Sun4 series computer running SunOS, a Hewlett-Packard 720 computer running HP-UX, a Macintosh computer running MacOS, and an IBM PC compatible computer running MS-DOS. Accompanying the library is a set of 196 "demo" drivers that exercise all of the user-callable subprograms. The FORTRAN source code for MATH77 comprises 109K lines of code in 375 files with a total size of 4.5Mb. The demo drivers comprise 11K lines of code with a total size of 418K. Forty-four percent of the lines of the library code and 29% of those in the demo code are comment lines. The standard distribution medium for MATH77 is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 9-track 1600 BPI magnetic tape in VAX BACKUP format and a TK50 tape cartridge in VAX BACKUP format. An electronic copy of the documentation is included on the distribution media. Previous releases of MATH77 have been used over a number of years in a variety of JPL applications. MATH77 Release 4.0 was completed in 1992. MATH77 is a copyrighted work with all copyright vested in NASA.

  9. Program helps quickly calculate deviated well path

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, M.P.

    1993-11-22

    A BASIC computer program quickly calculates the angle and measured depth of a simple directional well given only the true vertical depth and total displacement of the target. Many petroleum engineers and geologists need a quick, easy method to calculate the angle and measured depth necessary to reach a target in a proposed deviated well bore. Too many of the existing programs are large and require much input data. The drilling literature is full of equations and methods to calculate the course of well paths from surveys taken after a well is drilled. Very little information, however, covers how to calculate well bore trajectories for proposed wells from limited data. Furthermore, many of the equations are quite complex and difficult to use. A figure lists a computer program with the equations to calculate the well bore trajectory necessary to reach a given displacement and true vertical depth (TVD) for a simple build plan. It can be run on an IBM compatible computer with MS-DOS version 5 or higher, QBasic, or any BASIC that does not require line numbers. The QBasic 4.5 compiler will also run the program. The equations are based on conventional geometry and trigonometry.
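
    The article's BASIC listing is not reproduced in this record, but the conventional geometry it refers to can be sketched. The Python below computes the hold angle and measured depth for a simple build-and-hold profile, assuming the target lies beyond the end of the build arc; the kick-off depth and build rate in the example are invented inputs for illustration, since the record names only TVD and displacement.

```python
import math

def build_and_hold(tvd, disp, kop, build_rate_deg_per_100ft):
    """Hold angle (deg) and measured depth (ft) for a build-and-hold path.

    A sketch of the conventional geometry: a vertical section to the
    kick-off point, a circular build arc of radius R, then a straight hold
    to the target. Assumes the target lies beyond the end of the build
    (so V^2 + (H - R)^2 >= R^2 and the hold length is positive).
    """
    R = 18000.0 / (math.pi * build_rate_deg_per_100ft)   # build radius, ft
    V = tvd - kop               # vertical distance below the kick-off point
    H = disp                    # horizontal displacement of the target
    # From V = R*sin(t) + L*cos(t) and H = R*(1 - cos(t)) + L*sin(t):
    phi = math.atan2(H - R, V)
    theta = phi + math.asin(R / math.hypot(V, H - R))    # hold angle, rad
    hold = (H - R * (1.0 - math.cos(theta))) / math.sin(theta)
    md = kop + R * theta + hold                          # measured depth
    return math.degrees(theta), md

angle, md = build_and_hold(tvd=9000.0, disp=3000.0, kop=2000.0,
                           build_rate_deg_per_100ft=2.0)
print(f"hold angle = {angle:.2f} deg, measured depth = {md:.0f} ft")
```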

  10. Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing

    NASA Astrophysics Data System (ADS)

    Tang, Jingyin; Matyas, Corene J.

    2018-02-01

    Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility between the market-dominating ArcGIS software stack and the Linux operating system. This manuscript details a cross-platform geospatial library "arc4nix" to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server and other functions to run in the native Python environment. It uses functional programming and metaprogramming techniques to dynamically construct Python code containing the actual geospatial calculations, send it to a server, and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arc4nix scales linearly in a distributed environment. Arc4nix is open-source software.

  11. The Top Six Compatibles: A Closer Look at the Machines That Are Most Compatible with the IBM PC.

    ERIC Educational Resources Information Center

    McMullen, Barbara E.; And Others

    1984-01-01

    Reviews six operationally compatible microcomputers that are most able to run IBM software without modifications--Compaq, Columbia, Corona, Hyperion, Eagle PC, and Chameleon. Information given for each includes manufacturer, uses, standard features, base list price, typical system price, and options and accessories. (MBR)

  12. LCFM - LIVING COLOR FRAME MAKER: PC GRAPHICS GENERATION AND MANAGEMENT TOOL FOR REAL-TIME APPLICATIONS

    NASA Technical Reports Server (NTRS)

    Truong, L. V.

    1994-01-01

    Computer graphics are often applied for better understanding and interpretation of data under observation. These graphics become more complicated when animation is required during "run-time", as found in many typical modern artificial intelligence and expert systems. Living Color Frame Maker is a solution to many of these real-time graphics problems. Living Color Frame Maker (LCFM) is a graphics generation and management tool for IBM or IBM compatible personal computers. To eliminate graphics programming, the graphic designer can use LCFM to generate computer graphics frames. The graphical frames are then saved as text files, in a readable and disclosed format, which can be easily accessed and manipulated by user programs for a wide range of "real-time" visual information applications. For example, LCFM can be implemented in a frame-based expert system for visual aids in management of systems. For monitoring, diagnosis, and/or controlling purposes, circuit or system diagrams can be brought to "life" by using designated video colors and intensities to symbolize the status of hardware components (via real-time feedback from sensors). Thus the status of the system itself can be displayed. The Living Color Frame Maker is user-friendly, with graphical interfaces, and provides on-line help instructions. All options are executed using mouse commands and are displayed on a single menu for fast and easy operation. LCFM is written in C++ using the Borland C++ 2.0 compiler for IBM PC series computers and compatible computers running MS-DOS. The program requires a mouse and an EGA/VGA display. A minimum of 77K of RAM is also required for execution. The documentation is provided in electronic form on the distribution medium in WordPerfect format. A sample MS-DOS executable is provided on the distribution medium. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. The contents of the diskette are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. The Living Color Frame Maker tool was developed in 1992.

  13. COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Smith, J. P.

    1994-01-01

    The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.
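
    Since the abstract leans on the power method, a toy illustration may help. The sketch below is plain Python/NumPy rather than COMPPAP's FORTRAN, and the 2x2 matrix is an invented example; it shows how repeated matrix-vector products converge to the dominant eigenvalue (here, the buckling load factor) and its eigenvector (the mode shape).

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Dominant eigenvalue and eigenvector by repeated multiplication.

    Simple sketch for a matrix whose dominant eigenvalue is positive;
    the norm of A @ x converges to that eigenvalue and x to its vector.
    """
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new                  # renormalize each iteration
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])              # eigenvalues 5 and 2
lam, mode = power_method(A)
print(lam, mode)                        # converges to the dominant value 5
```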

  14. COMPPAP - COMPOSITE PLATE BUCKLING ANALYSIS PROGRAM (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Smith, J. P.

    1994-01-01

    The Composite Plate Buckling Analysis Program (COMPPAP) was written to help engineers determine buckling loads of orthotropic (or isotropic) irregularly shaped plates without requiring hand calculations from design curves or extensive finite element modeling. COMPPAP is a one element finite element program that utilizes high-order displacement functions. The high order of the displacement functions enables the user to produce results more accurate than traditional h-finite elements. This program uses these high-order displacement functions to perform a plane stress analysis of a general plate followed by a buckling calculation based on the stresses found in the plane stress solution. The current version assumes a flat plate (constant thickness) subject to a constant edge load (normal or shear) on one or more edges. COMPPAP uses the power method to find the eigenvalues of the buckling problem. The power method provides an efficient solution when only one eigenvalue is desired. Once the eigenvalue is found, the eigenvector, which corresponds to the plate buckling mode shape, results as a by-product. A positive feature of the power method is that the dominant eigenvalue is the first found, which in this case is the plate buckling load. The reported eigenvalue expresses a load factor to induce plate buckling. COMPPAP is written in ANSI FORTRAN 77. Two machine versions are available from COSMIC: a PC version (MSC-22428), which is for IBM PC 386 series and higher computers and compatibles running MS-DOS; and a UNIX version (MSC-22286). The distribution medium for both machine versions includes source code for both single and double precision versions of COMPPAP. The PC version includes source code which has been optimized for implementation within DOS memory constraints as well as sample executables for both the single and double precision versions of COMPPAP. The double precision versions of COMPPAP have been successfully implemented on an IBM PC 386 compatible running MS-DOS, a Sun4 series computer running SunOS, an HP-9000 series computer running HP-UX, and a CRAY X-MP series computer running UNICOS. COMPPAP requires 1Mb of RAM and the BLAS and LINPACK math libraries, which are included on the distribution medium. The COMPPAP documentation provides instructions for using the commercial post-processing package PATRAN for graphical interpretation of COMPPAP output. The UNIX version includes two electronic versions of the documentation: one in LaTeX format and one in PostScript format. The standard distribution medium for the PC version (MSC-22428) is a 5.25 inch 1.2Mb MS-DOS format diskette. The standard distribution medium for the UNIX version (MSC-22286) is a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. COMPPAP was developed in 1992.

  15. PFLOTRAN User Manual: A Massively Parallel Reactive Flow and Transport Model for Describing Surface and Subsurface Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lichtner, Peter C.; Hammond, Glenn E.; Lu, Chuan

    PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object-oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003; at the time of this writing this means gcc 4.7.x, Intel 12.1.x, and PGC compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into the memory allotted to a single processor core. The current limitation on the problem size PFLOTRAN can handle is the restriction of the HDF5 file format, used for parallel I/O, to 32-bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can currently be run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.

  16. Strategy for an Extensible Microcomputer-Based Mumps System for Private Practice

    PubMed Central

    Walters, Richard F.; Johnson, Stephen L.

    1979-01-01

    A macro expander technique has been adopted to generate a machine independent single user version of ANSI Standard MUMPS running on an 8080 Microcomputer. This approach makes it possible to have the medically oriented MUMPS language available on inexpensive systems suitable for small group practice settings. Substitution of another macro expansion set allows the same interpreter to be implemented on another computer, thereby providing compatibility with comparable or larger scale systems. Furthermore, since the global file handler can be separated from the interpreter, this approach permits development of a distributed MUMPS system with no change in applications software.

  17. A Linux Workstation for High Performance Graphics

    NASA Technical Reports Server (NTRS)

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  18. TFSSRA - THICK FREQUENCY SELECTIVE SURFACE WITH RECTANGULAR APERTURES

    NASA Technical Reports Server (NTRS)

    Chen, J. C.

    1994-01-01

    Thick Frequency Selective Surface with Rectangular Apertures (TFSSRA) was developed to calculate the scattering parameters for a thick frequency selective surface with rectangular apertures on a skew grid at oblique angles of incidence. The method of moments is used to transform the integral equation into a matrix equation suitable for evaluation on a digital computer. TFSSRA predicts the reflection and transmission characteristics of a thick frequency selective surface for both TE and TM orthogonal linearly polarized plane waves. A model of a half-space infinite array is used in the analysis. A complete set of basis functions with unknown coefficients is developed for the waveguide region (waveguide modes) and for the free-space region (Floquet modes) in order to represent the electromagnetic fields. To ensure the convergence of the solutions, the number of waveguide modes is adjustable. The method of moments is used to compute the unknown mode coefficients. Then, the scattering matrix of the half-space infinite array is calculated. Next, the reference plane of the scattering matrix is moved half a plate thickness in the negative z-direction, and a frequency selective surface of finite thickness is synthesized by positioning two plates of half-thickness back-to-back. The total scattering matrix is obtained by cascading the scattering matrices of the two half-space infinite arrays. TFSSRA is written in FORTRAN 77 with single precision. It has been successfully implemented on a Sun4 series computer running SunOS, an IBM PC compatible running MS-DOS, and a CRAY series computer running UNICOS, and should run on other systems with slight modifications. Double precision is recommended for running on a PC if many modes are used or if high accuracy is required. This package requires the LINPACK math library, which is included. TFSSRA requires 1Mb of RAM for execution. The standard distribution medium for this program is one 5.25 inch 360K MS-DOS format diskette. It is also available on a .25 inch streaming magnetic tape cartridge (Sun QIC-24) in UNIX tar format. This program was developed in 1992 and is a copyrighted work with all copyright vested in NASA.
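
    The final cascading step lends itself to a compact illustration. Below is a hedged Python sketch of the standard two-port scattering-matrix cascade formulas in their scalar (single-mode) form; TFSSRA itself cascades full multimode matrices, and the sample numbers are invented.

```python
import numpy as np

def cascade(sa, sb):
    """Cascade two 2-port scattering matrices (2x2 complex arrays).

    Standard scalar cascade: the denominator sums the multiple
    reflections bouncing between the two junctions.
    """
    d = 1.0 - sa[1, 1] * sb[0, 0]
    s11 = sa[0, 0] + sa[0, 1] * sb[0, 0] * sa[1, 0] / d
    s21 = sb[1, 0] * sa[1, 0] / d
    s12 = sa[0, 1] * sb[0, 1] / d
    s22 = sb[1, 1] + sb[1, 0] * sa[1, 1] * sb[0, 1] / d
    return np.array([[s11, s12], [s21, s22]])

# Example: two identical half-thickness sections with a little reflection.
half = np.array([[0.2 + 0.1j, 0.95j],
                 [0.95j,      0.2 + 0.1j]])
total = cascade(half, half)      # scattering matrix of the full thickness
print(total)
```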

  19. Alcator C-Mod Digital Plasma Control System

    NASA Astrophysics Data System (ADS)

    Wolfe, S. M.

    2005-10-01

    A new digital plasma control system (DPCS) has been implemented for Alcator C-Mod. The new system was put into service at the start of the 2005 run campaign and has been in routine operation since. The system consists of two 64-input, 16-output cPCI digitizers attached to a rack-mounted single-CPU Linux server, which performs both the I/O and the computation. During initial operation, the system was set up to directly emulate the original C-Mod ``Hybrid'' MIMO linear control system. Compatibility with the previous control system allows the existing user interface software and data structures to be used with the new hardware. The control program is written in IDL and runs under standard Linux. Interrupts are disabled during the plasma pulses to achieve real-time operation. A synchronous loop is executed with a nominal cycle rate of 10 kHz. Emulation of the original linear control algorithms requires 50 μsec per iteration, with the time evenly split between I/O and computation, so rates of about 20 kHz are achievable. Reliable vertical position control has been demonstrated with cycle rates as low as 5 kHz. Additional computations, including non-linear algorithms and adaptive response, are implemented as optional procedure calls within the main real-time loop.
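
    The synchronous loop described above has a simple shape: read, compute, write, then wait for the cycle boundary. Here is a schematic Python sketch with invented stubs (the real system is an IDL program under Linux with interrupts disabled; Python cannot hold hard 100 μs deadlines, so this only illustrates the structure).

```python
import time
import numpy as np

CYCLE = 1.0e-4                       # 100 us per iteration at 10 kHz
K = 0.5 * np.eye(16, 64)             # placeholder 64-in/16-out gain matrix

def read_inputs():                   # stand-in for the cPCI digitizer read
    return np.zeros(64)

def write_outputs(u):                # stand-in for the analog output write
    pass

deadline = time.perf_counter()
for _ in range(1000):                # a short bounded run for this sketch
    x = read_inputs()
    u = K @ x                        # linear MIMO control computation
    write_outputs(u)
    deadline += CYCLE
    while time.perf_counter() < deadline:
        pass                         # spin to the cycle boundary; the real
                                     # system achieves this by disabling
                                     # interrupts during the pulse
```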

  20. Microcomputer pollution model for civilian airports and Air Force bases. Model description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Segal, H.M.; Hamilton, P.L.

    1988-08-01

    This is one of three reports describing the Emissions and Dispersion Modeling System (EDMS). EDMS is a complex source emissions/dispersion model for use at civilian airports and Air Force bases. It operates in both a refined and a screening mode and is programmed for an IBM-XT (or compatible) computer. This report--MODEL DESCRIPTION--provides the technical description of the model. It first identifies the key design features of both the emissions (EMISSMOD) and dispersion (GIMM) portions of EDMS. It then describes the type of meteorological information the dispersion model can accept and identifies the manner in which it preprocesses National Climatic Center (NCC) data prior to a refined-model run. The report presents the results of running EDMS on a number of different microcomputers and compares EDMS results with those of comparable models. The appendices elaborate on the information noted above and list the source code.

  1. A Multi-purpose Brain-Computer Interface Output Device

    PubMed Central

    Thompson, David E; Huggins, Jane E

    2012-01-01

    While brain-computer interfaces (BCIs) are a promising alternative access pathway for individuals with severe motor impairments, many BCI systems are designed as standalone communication and control systems, rather than as interfaces to existing systems built for these purposes. While an individual communication and control system may be powerful or flexible, no single system can compete with the variety of options available in the commercial assistive technology (AT) market. BCIs could instead be used as an interface to these existing AT devices and products, which are designed for improving access and agency of people with disabilities and are highly configurable to individual user needs. However, interfacing with each AT device and program requires significant time and effort on the part of researchers and clinicians. This work presents the Multi-Purpose BCI Output Device (MBOD), a tool to help researchers and clinicians provide BCI control of many forms of AT in a plug-and-play fashion, i.e. without the installation of drivers or software on the AT device, and a proof-of-concept of the practicality of such an approach. The MBOD was designed to meet the goals of target device compatibility, BCI input device compatibility, convenience, and intuitive command structure. The MBOD was successfully used to interface a BCI with multiple AT devices (including two wheelchair seating systems), as well as computers running Windows (XP and 7), Mac and Ubuntu Linux operating systems. PMID:22208120

  2. A multi-purpose brain-computer interface output device.

    PubMed

    Thompson, David E; Huggins, Jane E

    2011-10-01

    While brain-computer interfaces (BCIs) are a promising alternative access pathway for individuals with severe motor impairments, many BCI systems are designed as stand-alone communication and control systems, rather than as interfaces to existing systems built for these purposes. An individual communication and control system may be powerful or flexible, but no single system can compete with the variety of options available in the commercial assistive technology (AT) market. BCIs could instead be used as an interface to these existing AT devices and products, which are designed for improving access and agency of people with disabilities and are highly configurable to individual user needs. However, interfacing with each AT device and program requires significant time and effort on the part of researchers and clinicians. This work presents the Multi-Purpose BCI Output Device (MBOD), a tool to help researchers and clinicians provide BCI control of many forms of AT in a plug-and-play fashion, i.e., without the installation of drivers or software on the AT device, and a proof-of-concept of the practicality of such an approach. The MBOD was designed to meet the goals of target device compatibility, BCI input device compatibility, convenience, and intuitive command structure. The MBOD was successfully used to interface a BCI with multiple AT devices (including two wheelchair seating systems), as well as computers running Windows (XP and 7), Mac and Ubuntu Linux operating systems.

  3. Elfin: An algorithm for the computational design of custom three-dimensional structures from modular repeat protein building blocks.

    PubMed

    Yeh, Chun-Ting; Brunette, T J; Baker, David; McIntosh-Smith, Simon; Parmeggiani, Fabio

    2018-02-01

    Computational protein design methods have enabled the design of novel protein structures, but they are often still limited to small proteins and symmetric systems. To expand the size of designable proteins while controlling the overall structure, we developed Elfin, a genetic algorithm for the design of novel proteins with custom shapes using structural building blocks derived from experimentally verified repeat proteins. By combining building blocks with compatible interfaces, it is possible to rapidly build non-symmetric large structures (>1000 amino acids) that match three-dimensional geometric descriptions provided by the user. A run time of about 20 min on a laptop computer for a 3000 amino acid structure makes Elfin accessible to users with limited computational resources. Protein structures with controlled geometry will allow the systematic study of the effect of spatial arrangement of enzymes and signaling molecules, and provide new scaffolds for functional nanomaterials. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Fortran Program for X-Ray Photoelectron Spectroscopy Data Reformatting

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.

    1989-01-01

    A FORTRAN program has been written for use on an IBM PC/XT or AT or compatible microcomputer (personal computer, PC) that converts a column of ASCII-format numbers into a binary-format file suitable for interactive analysis on a Digital Equipment Corporation (DEC) computer running the VGS-5000 Enhanced Data Processing (EDP) software package. The incompatible floating-point number representations of the two computers were compared, and a subroutine was created to correctly store floating-point numbers on the IBM PC, which can be directly read by the DEC computer. Any file transfer protocol having provision for binary data can be used to transmit the resulting file from the PC to the DEC machine. The data file header required by the EDP programs for an X-ray photoelectron spectrum is also written to the file. The user is prompted for the relevant experimental parameters, which are then properly coded into the format used internally by all of the VGS-5000 series EDP packages.
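
    The core reformatting task is simple to sketch. Below is a hypothetical Python equivalent (the original is FORTRAN); the file names and the minimal count-prefix header are invented for illustration, and the real program instead writes the VGS-5000 EDP header and converts to the DEC floating-point bit layout.

```python
import struct

# Read a column of ASCII numbers (one per line) from an assumed input file.
with open("spectrum.txt") as f:
    values = [float(line) for line in f if line.strip()]

# Write them as binary floats with an explicit little-endian IEEE layout.
with open("spectrum.bin", "wb") as out:
    out.write(struct.pack("<i", len(values)))          # toy header: count
    out.write(struct.pack(f"<{len(values)}f", *values))
```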

  5. Space-Shuttle Emulator Software

    NASA Technical Reports Server (NTRS)

Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram

    2007-01-01

    A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations. Heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer [consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements]; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center. The software package is portable to several host platforms.

  6. TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1

    NASA Technical Reports Server (NTRS)

    Bellenot, S. F.

    1994-01-01

    The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.

  7. Program For Evaluation Of Reliability Of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, N.; Janosik, L. A.; Gyekenyesi, J. P.; Powers, Lynn M.

    1996-01-01

    CARES/LIFE predicts probability of failure of monolithic ceramic component as function of service time. Assesses risk that component fractures prematurely as result of subcritical crack growth (SCG). Effect of proof testing of components prior to service also considered. Coupled to such commercially available finite-element programs as ANSYS, ABAQUS, MARC, MSC/NASTRAN, and COSMOS/M. Also retains all capabilities of previous CARES code, which includes estimation of fast-fracture component reliability and Weibull parameters from inert strength (without SCG contributing to failure) specimen data. Estimates parameters that characterize SCG from specimen data as well. Written in ANSI FORTRAN 77 to be machine-independent. Program runs on any computer in which sufficient addressable memory (at least 8MB) and FORTRAN 77 compiler available. For IBM-compatible personal computer with minimum 640K memory, limited program available (CARES/PC, COSMIC number LEW-15248).

  8. Muons in the CMS High Level Trigger System

    NASA Astrophysics Data System (ADS)

    Verwilligen, Piet; CMS Collaboration

    2016-04-01

    The trigger systems of LHC detectors play a fundamental role in defining the physics capabilities of the experiments. A reduction of several orders of magnitude in the rate of collected events, with respect to the proton-proton bunch crossing rate generated by the LHC, is mandatory to cope with the limits imposed by the readout and storage system. An accurate and efficient online selection mechanism is thus required to fulfill this task while keeping the acceptance for physics signals maximal. The CMS experiment operates using a two-level trigger system. First, a Level-1 Trigger (L1T) system, implemented using custom-designed electronics, is designed to reduce the event rate to a limit compatible with the CMS Data Acquisition (DAQ) capabilities. A High Level Trigger (HLT) system follows, aimed at further reducing the rate of collected events finally stored for analysis purposes. The latter consists of a streamlined version of the CMS offline reconstruction software and operates on a computer farm. It runs algorithms optimized to make a trade-off between computational complexity, rate reduction, and high selection efficiency. With the computing power available in 2012, the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. An efficient selection of muons at HLT, as well as an accurate measurement of their properties, such as transverse momentum and isolation, is fundamental for the CMS physics programme. The performance of the muon HLT for single and double muon triggers achieved in Run I will be presented. Results from new developments, aimed at improving the performance of the algorithms for the harsher scenarios of collisions per event (pile-up) and luminosity expected for Run II, will also be discussed.

  9. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    NASA Astrophysics Data System (ADS)

    Loring, B.; Karimabadi, H.; Rortershteyn, V.

    2015-10-01

    The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.
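
    The LIC technique itself is compact enough to sketch. The following toy Python/NumPy version (a CPU reference on a regular grid, not the paper's screen-space GPU implementation) averages a noise texture along streamlines traced through the vector field; the grid size and kernel length are arbitrary choices.

```python
import numpy as np

def lic(vx, vy, noise, length=15, step=0.5):
    """Average the noise texture along local streamlines at every pixel."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for sign in (+1.0, -1.0):          # trace both directions
                x, y = float(j), float(i)
                for _ in range(length):
                    ix, iy = int(round(x)), int(round(y))
                    if not (0 <= ix < w and 0 <= iy < h):
                        break
                    total += noise[iy, ix]
                    count += 1
                    mag = np.hypot(vx[iy, ix], vy[iy, ix]) or 1.0
                    x += sign * step * vx[iy, ix] / mag   # step along field
                    y += sign * step * vy[iy, ix] / mag
            out[i, j] = total / max(count, 1)
    return out

# Example: circular flow over a 64x64 grid of white noise.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
vx, vy = -(yy - 32), (xx - 32)
img = lic(vx, vy, np.random.rand(64, 64))
```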

  10. A Screen Space GPGPU Surface LIC Algorithm for Distributed Memory Data Parallel Sort Last Rendering Infrastructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loring, Burlen; Karimabadi, Homa; Rortershteyn, Vadim

    2014-07-01

    The surface line integral convolution (LIC) visualization technique produces dense visualization of vector fields on arbitrary surfaces. We present a screen space surface LIC algorithm for use in distributed memory data parallel sort last rendering infrastructures. The motivations for our work are to support analysis of datasets that are too large to fit in the main memory of a single computer and compatibility with prevalent parallel scientific visualization tools such as ParaView and VisIt. By working in screen space using OpenGL we can leverage the computational power of GPUs when they are available and run without them when they are not. We address efficiency and performance issues that arise from the transformation of data from physical to screen space by selecting an alternate screen space domain decomposition. We analyze the algorithm's scaling behavior with and without GPUs on two high performance computing systems using data from turbulent plasma simulations.

  11. ZOOM: a generic personal computer-based teaching program for public health and its application in schistosomiasis control.

    PubMed Central

    Martin, G. T.; Yoon, S. S.; Mott, K. E.

    1991-01-01

    Schistosomiasis, a group of parasitic diseases caused by Schistosoma parasites, is associated with water resources development and affects more than 200 million people in 76 countries. Depending on the species of parasite involved, disease of the liver, spleen, gastrointestinal or urinary tract, or kidneys may result. A computer-assisted teaching package has been developed by WHO for use in the training of public health workers involved in schistosomiasis control. The package consists of the software, ZOOM, and a schistosomiasis information file, Dr Schisto, and uses hypermedia technology to link pictures and text. ZOOM runs on the IBM-PC and IBM-compatible computers, is user-friendly, requires a minimal hardware configuration, and can interact with the user in English, French, Spanish or Portuguese. The information files for ZOOM can be created or modified by the instructor using a word processor, and thus can be designed to suit the need of students. No programming knowledge is required to create the stacks. PMID:1786618

  12. Generation and physical characteristics of the ERTS MSS system corrected computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Thomas, V. L.

    1973-01-01

    The generation and format of the ERTS system-corrected multispectral scanner computer compatible tapes are discussed. The discussion includes spacecraft sensors, scene characteristics, data transmission, and conversion of data to computer compatible tapes at the NASA Data Processing Facility. Geometric and radiometric corrections, tape formats, and the physical characteristics of the tapes are also included.

  13. External audio for IBM-compatible computers

    NASA Technical Reports Server (NTRS)

    Washburn, David A.

    1992-01-01

    Numerous applications benefit from the presentation of computer-generated auditory stimuli at points discontiguous with the computer itself. Modification of an IBM-compatible computer for use of an external speaker is relatively easy but not intuitive. This modification is briefly described.

  14. Internet Distribution of Spacecraft Telemetry Data

    NASA Technical Reports Server (NTRS)

    Specht, Ted; Noble, David

    2006-01-01

    Remote Access Multi-mission Processing and Analysis Ground Environment (RAMPAGE) is a Java-language server computer program that enables near-real-time display of spacecraft telemetry data on any authorized client computer that has access to the Internet and is equipped with Web-browser software. In addition to providing a variety of displays of the latest available telemetry data, RAMPAGE can deliver notification of an alarm by electronic mail. Subscribers can then use RAMPAGE displays to determine the state of the spacecraft and formulate a response to the alarm, if necessary. A user can query spacecraft mission data in either binary or comma-separated-value format by use of a Web form or a Practical Extraction and Reporting Language (PERL) script to automate the query process. RAMPAGE runs on Linux and Solaris server computers in the Ground Data System (GDS) of NASA's Jet Propulsion Laboratory and includes components designed specifically to make it compatible with legacy GDS software. The client/server architecture of RAMPAGE and the use of the Java programming language make it possible to utilize a variety of competitive server and client computers, thereby also helping to minimize costs.

  15. Computational Analysis of Dynamic SPK(S8)-JP8 Fueled Combustor-Sector Performance

    NASA Technical Reports Server (NTRS)

    Ryder, R.; Hendricks, Roberts C.; Huber, M. L.; Shouse, D. T.

    2010-01-01

    Blends of synthetic and biomass-derived fuels with jet fuel, at ratios up to 50:50, are currently being evaluated in civil and military flight tests as "drop-in" fuels. They are fully compatible with aircraft performance, emissions and fueling systems, yet the design and operations of such fueling systems and combustors must be capable of running fuels from a range of feedstock sources. This paper provides Smart Combustor or Fuel Flexible Combustor designers with computational tools, preliminary performance, emissions and particulates combustor sector data. The baseline fuel is kerosene-JP-8+100 (military) or Jet A (civil). Results for synthetic paraffinic kerosene (SPK) fuel blends show little change with respect to baseline performance, yet do show lower emissions. The evolution of a validated combustor design procedure is fundamental to the development of dynamic fueling of combustor systems for gas turbine engines that comply with multiple feedstock sources satisfying both new and legacy systems.

  16. A computationally efficient modelling of laminar separation bubbles

    NASA Technical Reports Server (NTRS)

    Maughmer, Mark D.

    1988-01-01

    The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting the prediction of the long bubble. Second, an integral boundary-layer method is believed to be more desirable than a finite-difference approach. While these two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than the integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic energy integral equations, a short-bubble model compatible with these equations is most preferable.
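
    For readers unfamiliar with integral methods, the kind of relation such a method marches downstream can be written out; the standard von Kármán momentum integral equation is shown below (the exact formulation used in the Eppler and Somers code is not given in this abstract).

```latex
% Von Karman momentum integral equation (generic form). Here theta is the
% momentum thickness, H the shape factor, U_e the boundary-layer edge
% velocity, and C_f the skin-friction coefficient.
\frac{d\theta}{dx} + (2 + H)\,\frac{\theta}{U_e}\,\frac{dU_e}{dx} = \frac{C_f}{2}
```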

  17. A General Simulator Using State Estimation for a Space Tug Navigation System. [computerized simulation, orbital position estimation and flight mechanics

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1975-01-01

    A general simulation program (GSP) is presented, involving nonlinear state estimation for space vehicle flight navigation systems. A complete explanation of the iterative guidance mode guidance law, derivation of the dynamics, coordinate frames, and state estimation routines is given to clarify the assumptions and approximations involved, so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions as well as explanations of the subroutines used in the GSP simulator are included. To facilitate input/output, a complete set of compatible numbers, with units, is included to aid in data development. Format specifications, output data phrase meanings and purposes, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to determine the validity of the simulator itself as well as various data runs.

  18. OpenKIM - Building a Knowledgebase of Interatomic Models

    NASA Astrophysics Data System (ADS)

    Bierbaum, Matthew; Tadmor, Ellad; Elliott, Ryan; Wennblom, Trevor; Alemi, Alexander; Chen, Yan-Jiun; Karls, Daniel; Ludvik, Adam; Sethna, James

    2014-03-01

    The Knowledgebase of Interatomic Models (KIM) is an effort by the computational materials community to provide a standard interface for the development, characterization, and use of interatomic potentials. The KIM project has developed an API between simulation codes and interatomic models written in several different languages including C, Fortran, and Python. This interface is already supported in popular simulation environments such as LAMMPS and ASE, giving quick access to over a hundred compatible potentials that have been contributed so far. To compare and characterize models, we have developed a computational processing pipeline which automatically runs a series of tests for each model in the system, such as phonon dispersion relations and elastic constant calculations. To view the data from these tests, we created a rich set of interactive visualization tools located online. Finally, we created a Web repository to store and share these potentials, tests, and visualizations, which can be found at https://openkim.org along with further information.

  19. NLM microcomputer-based tutorials (for microcomputers). Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, M.

    1990-04-01

    The package consists of TOXLEARN--a microcomputer-based training package for TOXLINE (Toxicology Information Online), CHEMLEARN-a microcomputer-based training package for CHEMLINE (Chemical Information Online), MEDTUTOR--a microcomputer-based training package for MEDLINE (Medical Information Online), and ELHILL LEARN--a microcomputer-based training package for the ELHILL search and retrieval software that supports the above-mentioned databases...Software Description: The programs were developed under PILOTplus using the NLM LEARN Programmer. They run on IBM-PC, XT, AT, PS/2, and fully compatible computers. The programs require 512K RAM memory, one disk drive, and DOS 2.0 or higher. The software supports most monochrome, color graphics, enhanced color graphics, or visual graphics displays.

  20. AQUIS: A PC-based air inventory and permit manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Huber, C.C.; Tschanz, J.

    1992-01-01

    The Air Quality Utility Information System (AQUIS) was developed to calculate and track sources, emissions, stacks, permits, and related information. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at nine US Air Force facilities that have up to 1,000 sources. The system provides a flexible reporting capability that permits users who are unfamiliar with database structure to design and prepare reports containing user-specified information. In addition to six criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.

  1. Low Cost Night Vision System for Intruder Detection

    NASA Astrophysics Data System (ADS)

    Ng, Liang S.; Yusoff, Wan Azhar Wan; R, Dhinesh; Sak, J. S.

    2016-02-01

    The growth in production of Android devices has resulted in greater functionalities as well as lower costs. This has made previously more expensive systems such as night vision affordable for more businesses and end users. We designed and implemented robust and low cost night vision systems based on red-green-blue (RGB) colour histogram for a static camera as well as a camera on an unmanned aerial vehicle (UAV), using OpenCV library on Intel compatible notebook computers, running Ubuntu Linux operating system, with less than 8GB of RAM. They were tested against human intruders under low light conditions (indoor, outdoor, night time) and were shown to have successfully detected the intruders.
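
    The abstract does not give the detection pipeline in detail, so the following OpenCV sketch is only one plausible reading of "RGB colour histogram" detection: compare each frame's normalized histogram against a background reference and flag large deviations. The camera index, bin count, frame budget, and chi-square threshold are invented placeholders.

```python
import cv2
import numpy as np

def rgb_histogram(frame, bins=32):
    """Normalized, concatenated per-channel histogram of a BGR frame."""
    chans = [cv2.calcHist([frame], [c], None, [bins], [0, 256])
             for c in range(3)]
    hist = np.concatenate(chans).ravel()
    return (hist / hist.sum()).astype(np.float32)

cap = cv2.VideoCapture(0)                 # assumed static camera at index 0
ok, background = cap.read()
if not ok:
    raise SystemExit("camera unavailable")
ref = rgb_histogram(background)           # reference histogram of the scene

for _ in range(300):                      # bounded run for this sketch
    ok, frame = cap.read()
    if not ok:
        break
    dist = cv2.compareHist(ref, rgb_histogram(frame), cv2.HISTCMP_CHISQR)
    if dist > 0.25:                       # assumed detection threshold
        print("possible intruder")
cap.release()
```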

  2. ARCAS (ACACIA Regional Climate-data Access System) -- a Web Access System for Climate Model Data Access, Visualization and Comparison

    NASA Astrophysics Data System (ADS)

Hakkarinen, C.; Brown, D.; Callahan, J.; Hankin, S.; de Koningh, M.; Middleton-Link, D.; Wigley, T.

    2001-05-01

    A Web-based access system to climate model output data sets for intercomparison and analysis has been produced, using the NOAA-PMEL developed Live Access Server software as host server and Ferret as the data serving and visualization engine. Called ARCAS ("ACACIA Regional Climate-data Access System"), and publicly accessible at http://dataserver.ucar.edu/arcas, the site currently serves climate model outputs from runs of the NCAR Climate System Model for the 21st century, for Business as Usual and Stabilization of Greenhouse Gas Emission scenarios. Users can select, download, and graphically display single variables or comparisons of two variables from either or both of the CSM model runs, averaged for monthly, seasonal, or annual time resolutions. The time length of the averaging period, and the geographical domain for download and display, are fully selectable by the user. A variety of arithmetic operations on the data variables can be computed "on-the-fly", as defined by the user. Expansions of the user-selectable options for defining analysis options, and for accessing other DODS-compatible ("Distributed Ocean Data System"-compatible) data sets, residing at locations other than the NCAR hardware server on which ARCAS operates, are planned for this year. These expansions are designed to allow users quick and easy-to-operate web-based access to the largest possible selection of climate model output data sets available throughout the world.

  3. Program package for multicanonical simulations of U(1) lattice gauge theory-Second version

    NASA Astrophysics Data System (ADS)

    Bazavov, Alexei; Berg, Bernd A.

    2013-03-01

    A new version STMCMUCA_V1_1 of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers.
    New version program summary:
    Program title: STMC_U1MUCA_v1_1
    Catalogue identifier: AEET_v1_1
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
    Programming language: Fortran 77, compatible with Fortran 90 and 95
    Computers: Any capable of compiling and executing Fortran code
    Operating systems: Any capable of compiling and executing Fortran code
    RAM: 10 MB and up, depending on the lattice size used
    No. of lines in distributed program, including test data, etc.: 15059
    No. of bytes in distributed program, including test data, etc.: 215733
    Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems
    Classification: 11.5
    Catalogue identifier of previous version: AEET_v1_0
    Journal reference of previous version: Computer Physics Communications 180 (2009) 2339-2347
    Does the new version supersede the previous version?: Yes
    Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors.
    Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars.
    Reasons for the new version: The previous version was developed for the g77 compiler's Fortran 77 dialect; compiler errors were encountered with Fortran 90 and Fortran 95 compilers (specified below).
    Summary of revisions: epsilon=one/10**10 is replaced by epsilon=one/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl_backup.f, and u1wlread_backup.f of the folder Libs/U1_par. For the tested compilers, script files are added in the folder ExampleRuns, and readme.txt files are now provided in all subfolders of ExampleRuns. The gnuplot driver files produced by the routine hist_gnu.f of Libs/Fortran are adapted to the syntax required by gnuplot version 4.0 and higher.
    Restrictions: Due to the use of explicit real*8 initialization, conversion to real*4 will require extra changes besides replacing the implicit.sta file with its real*4 version.
    Unusual features: The programs have to be compiled with script files like those contained in the folder ExampleRuns, as explained in the original paper.
    Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
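
    The solution method named above (a Wang-Landau recursion feeding multicanonical weights) can be illustrated in miniature. The sketch below applies the recursion to a tiny 2D Ising model in Python rather than to U(1) lattice gauge theory, with invented run parameters; the package itself is Fortran and far more general.

```python
import numpy as np

# Wang-Landau recursion on a small 2D Ising model: build ln g(E), the log
# density of states, from which multicanonical weights are taken as 1/g(E).
rng = np.random.default_rng(1)
L = 4
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    return -int(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))

n_bins = L * L + 1                    # energies -2L^2 .. 2L^2 in steps of 4
bin_of = lambda E: (E + 2 * L * L) // 4

ln_g = np.zeros(n_bins)               # running estimate of ln g(E)
H = np.zeros(n_bins)                  # visit histogram for the flatness test
ln_f, E = 1.0, energy(spins)

for _ in range(200):                  # bounded outer loop for this sketch
    for _ in range(5000):             # single spin-flip updates
        i, j = rng.integers(L, size=2)
        dE = 2 * spins[i, j] * (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                                + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        # Accept with probability min(1, g(E)/g(E + dE)).
        if np.log(rng.random()) < ln_g[bin_of(E)] - ln_g[bin_of(E + dE)]:
            spins[i, j] *= -1
            E += dE
        ln_g[bin_of(E)] += ln_f
        H[bin_of(E)] += 1
    visited = H > 0
    if H[visited].min() > 0.8 * H[visited].mean():   # crude flatness check
        ln_f /= 2.0                   # ln f -> ln f / 2, i.e. f -> sqrt(f)
        H[:] = 0.0
    if ln_f < 1.0e-4:
        break

print(ln_g[ln_g > 0] - ln_g[ln_g > 0].min())   # relative ln g over visited bins
```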

  4. Active local control of propeller-aircraft run-up noise.

    PubMed

    Hodgson, Murray; Guo, Jingnan; Germain, Pierre

    2003-12-01

    Engine run-ups are part of the regular maintenance schedule at Vancouver International Airport. The noise generated by the run-ups propagates into neighboring communities, disturbing the residents. Active noise control is a potentially cost-effective alternative to passive methods, such as enclosures. Propeller aircraft generate low-frequency tonal noise that is highly compatible with active control. This paper presents a preliminary investigation of the feasibility and effectiveness of controlling run-up noise from propeller aircraft using local active control. Computer simulations for different configurations of multi-channel active-noise-control systems, aimed at reducing run-up noise in adjacent residential areas using a local-control strategy, were performed. These were based on an optimal configuration of a single-channel control system studied previously. The variations of the attenuation and amplification zones with the number of control channels, and with source/control-system geometry, were studied. Here, the aircraft was modeled using one or two sources, with monopole or multipole radiation patterns. Both free-field and half-space conditions were considered: for the configurations studied, results were similar in the two cases. In both cases, large triangular quiet zones, with local attenuations of 10 dB or more, were obtained when nine or more control channels were used. Increases of noise were predicted outside of these areas, but these were minimized as more control channels were employed. By combining predicted attenuations with measured noise spectra, noise levels after implementation of an active control system were estimated.

  5. Generation and physical characteristics of the LANDSAT-1, -2 and -3 MSS computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Thomas, V. L.

    1977-01-01

    The generation and format of the LANDSAT 1, 2, and 3 system corrected multispectral scanner computer compatible tapes are discussed. Included in the discussion are the spacecraft sensors, scene characteristics, the transmission of data, and the conversion of the data to computer compatible tapes. Also included in the discussion are geometric and radiometric corrections, tape formats, and the physical characteristics of the tape.

  6. Generation and physical characteristics of the Landsat 1 and 2 MSS computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Thomas, V. L.

    1975-01-01

    The generation and format of the Landsat 1 and 2 system-corrected multispectral scanner computer compatible tapes are discussed. Included in the discussion are the spacecraft sensors, scene characteristics, the transmission of data, and the conversion of the data to computer compatible tapes at the NASA Data Processing Facility. Geometric and radiometric corrections, tape formats, and the physical characteristics of the tape are also described.

  7. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
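
    The "fill in the body of pre-defined model routines" pattern can be made concrete with a toy example. The class and method names below are hypothetical Python stand-ins, not Biocellion's actual C++ interface; they only illustrate the inversion of control the abstract describes, in which the framework owns the (parallelized) simulation loop.

```python
import random

class ModelRoutines:
    """User-supplied model specifics: override the hook bodies."""
    def init_cells(self):
        raise NotImplementedError
    def update_cell(self, cell, step):
        raise NotImplementedError

class Framework:
    """Owns the loop; in Biocellion this part is parallelized for the user."""
    def __init__(self, model):
        self.model = model
    def run(self, steps):
        cells = self.model.init_cells()
        for step in range(steps):
            cells = [self.model.update_cell(c, step) for c in cells]
        return cells

class RandomWalk(ModelRoutines):          # a trivial example model
    def init_cells(self):
        return [(0.0, 0.0)] * 10
    def update_cell(self, cell, step):
        x, y = cell
        return (x + random.uniform(-1, 1), y + random.uniform(-1, 1))

print(Framework(RandomWalk()).run(100)[0])
```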

  8. On Software Compatibility.

    ERIC Educational Resources Information Center

    Ershov, Andrei P.

    The problem of compatibility of software hampers the development of computer applications. One solution lies in standardization of languages, terms, peripherals, operating systems and computer characteristics. (AB)

  9. LENMODEL: A forward model for calculating length distributions and fission-track ages in apatite

    NASA Astrophysics Data System (ADS)

    Crowley, Kevin D.

    1993-05-01

    The program LENMODEL is a forward model for annealing of fission tracks in apatite. It provides estimates of the track-length distribution, fission-track age, and areal track density for any user-supplied thermal history. The program approximates the thermal history, in which temperature is represented as a continuous function of time, by a series of isothermal steps of various durations. Equations describing the production of tracks as a function of time and annealing of tracks as a function of time and temperature are solved for each step. The step calculations are summed to obtain estimates for the entire thermal history. Computational efficiency is maximized by performing the step calculations backwards in model time. The program incorporates an intuitive and easy-to-use graphical interface. Thermal history is input to the program using a mouse. Model options are specified by selecting context-sensitive commands from a bar menu. The program allows for considerable selection of equations and parameters used in the calculations. The program was written for PC-compatible computers running DOS 3.0 and above (and Windows 3.0 or above) with VGA or SVGA graphics and a Microsoft-compatible mouse. Single copies of a runtime version of the program are available from the author by written request as explained in the last section of this paper.
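
    The isothermal-step bookkeeping described above is simple to state in code. The sketch below is a plain O(n**2) forward version in Python with an invented Arrhenius-style annealing law; the constants are illustrative only, not LENMODEL's calibrated equations, and the paper's backwards-in-model-time optimization is omitted for clarity.

        import math

        # Illustrative annealing law: r = exp[-(t/tau(T))**BETA] with an
        # Arrhenius time scale tau(T). Constants are invented for the sketch.
        TAU0, EA, RGAS, BETA = 1.0e-18, 2.0e5, 8.314, 0.35  # s, J/mol, J/(mol K), -

        def tau(T):
            """Arrhenius annealing time scale at absolute temperature T (K)."""
            return TAU0 * math.exp(EA / (RGAS * T))

        def anneal(r, dt, T):
            """Advance reduced length r through one isothermal step (dt s at T K)
            via the equivalent-time principle: find the time at T that would
            have produced the current r, add dt, and re-evaluate."""
            if r <= 1e-12:
                return 0.0                      # cohort already fully annealed
            t_eq = tau(T) * (-math.log(r)) ** (1.0 / BETA)
            return math.exp(-((t_eq + dt) / tau(T)) ** BETA)

        def length_distribution(steps):
            """steps: (duration_s, temperature_K) isothermal steps, oldest first.
            Returns the present-day reduced length of the track cohort born in
            each step (LENMODEL orders the same arithmetic backwards in model
            time for efficiency)."""
            out = []
            for k in range(len(steps)):
                r = 1.0                         # a freshly formed track
                for dt, T in steps[k:]:         # anneal through all younger steps
                    r = anneal(r, dt, T)
                out.append(r)
            return out

        MYR = 3.156e13                          # seconds per million years
        # 100 Myr of linear cooling from 120 C to 30 C in ten 10-Myr steps.
        history = [(10 * MYR, 393.15 - 10.0 * k) for k in range(10)]
        print([round(r, 3) for r in length_distribution(history)])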

  10. The Cloud-Based Integrated Data Viewer (IDV)

    NASA Astrophysics Data System (ADS)

    Fisher, Ward

    2015-04-01

    Maintaining software compatibility across new computing environments and the associated underlying hardware is a common problem for software engineers and scientific programmers. While there is a suite of tools and methodologies used in traditional software engineering environments to mitigate this issue, they are typically ignored by developers lacking a background in software engineering. The result is a large body of software which is simultaneously critical and difficult to maintain. Visualization software is particularly vulnerable to this problem, given the inherent dependency on particular graphics hardware and software APIs. The advent of cloud computing has provided a solution to this problem that was not previously practical on a large scale: application streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations, with little-to-no re-engineering required. Through application streaming we are able to bring the same visualization to a desktop, a netbook, a smartphone, and the next generation of hardware, whatever it may be. Unidata has been able to harness application streaming to provide a tablet-compatible version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform, and include a brief discussion of the underlying technologies involved. We will also discuss the differences between local software and software-as-a-service.

  11. Turning a cylindrical treadmill with feet: an MR-compatible device for assessment of the neural correlates of lower-limb movement.

    PubMed

    Toyomura, Akira; Yokosawa, Koichi; Shimojo, Atsushi; Fujii, Tetsunoshin; Kuriki, Shinya

    2018-06-17

    Locomotion, which is one of the most basic motor functions, is critical for performing various daily-life activities. Despite its essential function, assessment of brain activity during lower-limb movement is still limited because of the constraints of existing brain imaging methods. Here, we describe an MR-compatible, cylindrical treadmill device that allows participants to perform stepping movements on an MRI scanner table. The device was constructed from wood and all of the parts were handmade by the authors. We confirmed the MR-compatibility of the device by evaluating the temporal signal-to-noise ratio of 64 voxels of a phantom during scanning. Brain activity was measured while twenty participants turned the treadmill with their feet in sync with metronome sounds. The rotary speed of the cylinder was encoded by optical fibers. The post/pre-central gyrus and cerebellum showed significant activity during the movements, which was comparable to the activity patterns reported in previous studies. Head movement on the y- and z-axes was influenced more by lower-limb movement than was head movement on the x-axis. Among the 60 runs (3 runs × 20 participants), head movement during two of the runs (3.3%) was excessive due to the lower-limb movement. Compared with MR-compatible devices proposed in previous studies, the advantage of this device may be its simple structure and the ease with which it can be replicated to realize stepping movements in a supine position. Collectively, our results suggest that the treadmill device is useful for evaluating lower-limb-related neural activity. Copyright © 2018. Published by Elsevier B.V.

  12. PRELIMINARY DESIGN ANALYSIS OF AXIAL FLOW TURBINES

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1994-01-01

    A computer program has been developed for the preliminary design analysis of axial-flow turbines. Rapid approximate generalized procedures requiring minimum input are used to provide turbine overall geometry and performance adequate for screening studies. The computations are based on mean-diameter flow properties and a stage-average velocity diagram. Gas properties are assumed constant throughout the turbine. For any given turbine, all stages, except the first, are specified to have the same shape velocity diagram. The first stage differs only in the value of inlet flow angle. The velocity diagram shape depends upon the stage work factor value and the specified type of velocity diagram. Velocity diagrams can be specified as symmetrical, zero exit swirl, or impulse; or by inputting stage swirl split. Exit turning vanes can be included in the design. The 1991 update includes a generalized velocity diagram, a more flexible meanline path, a reheat model, a radial component of velocity, and a computation of free-vortex hub and tip velocity diagrams. Also, a loss-coefficient calibration was performed to provide recommended values for airbreathing engine turbines. Input design requirements include power or pressure ratio, mass flow rate, inlet temperature and pressure, and rotative speed. The design variables include inlet and exit diameters, stator angle or exit radius ratio, and number of stages. Gas properties are input as gas constant, specific heat ratio, and viscosity. The program output includes inlet and exit annulus dimensions, exit temperature and pressure, total and static efficiencies, flow angles, blading angles, and last stage absolute and relative Mach numbers. This program is written in FORTRAN 77 and can be ported to any computer with a standard FORTRAN compiler which supports NAMELIST. It was originally developed on an IBM 7000 series computer running VM and has been implemented on IBM PC computers and compatibles running MS-DOS under Lahey FORTRAN, and DEC VAX series computers running VMS. Format statements in the code may need to be rewritten depending on your FORTRAN compiler. The source code and sample data are available on a 5.25 inch 360K MS-DOS format diskette. This program was developed in 1972 and was last updated in 1991. IBM and IBM PC are registered trademarks of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. DEC VAX, and VMS are trademarks of Digital Equipment Corporation.
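
    The mean-diameter approach reduces each stage to a handful of algebraic relations. A minimal sketch of that bookkeeping in Python, using the Euler turbine work equation and a zero-exit-swirl diagram; the requirement values are invented, and this is only the skeleton of what the program automates.

        import math

        # Hypothetical requirements for a screening-level check (illustrative).
        power_w  = 2.0e6      # required shaft power, W
        mdot     = 10.0       # mass flow rate, kg/s
        rpm      = 30000.0    # rotative speed
        d_mean   = 0.30       # mean blade diameter, m
        n_stages = 2

        # Mean blade speed and specific work from the Euler turbine equation.
        U = math.pi * d_mean * rpm / 60.0      # m/s
        dh0_total = power_w / mdot             # J/kg
        dh0_stage = dh0_total / n_stages

        # The stage work factor (psi) fixes the velocity-diagram shape; for a
        # zero-exit-swirl diagram the rotor-inlet swirl carries all the work.
        psi = dh0_stage / U**2
        v_theta_in = dh0_stage / U             # rotor-inlet tangential velocity

        print(f"U = {U:.1f} m/s, stage work factor = {psi:.2f}, "
              f"inlet swirl = {v_theta_in:.1f} m/s")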

  13. AQUIS: A PC-based source information manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Huber, C.C.; Tschanz, J.

    1993-05-01

    The Air Quality Utility Information System (AQUIS) was developed to calculate emissions and track them along with related information about sources, stacks, controls, and permits. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at 11 US Air Force facilities, which have up to 1,000 sources, and two headquarters. The system provides a flexible reporting capability that permits users who are unfamiliar with database structure to design and prepare reports containing user-specified information. In addition to the criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.

  14. AQUIS: A PC-based source information manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Huber, C.C.; Tschanz, J.

    1993-01-01

    The Air Quality Utility Information System (AQUIS) was developed to calculate emissions and track them along with related information about sources, stacks, controls, and permits. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at 11 US Air Force facilities, which have up to 1,000 sources, and two headquarters. The system provides a flexible reporting capability that permits users who are unfamiliar with database structure to design and prepare reports containing user-specified information. In addition to the criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.

  15. Biocellion: accelerating computer simulation of multicellular biological system models.

    PubMed

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. ELM - A SIMPLE TOOL FOR THERMAL-HYDRAULIC ANALYSIS OF SOLID-CORE NUCLEAR ROCKET FUEL ELEMENTS

    NASA Technical Reports Server (NTRS)

    Walton, J. T.

    1994-01-01

    ELM is a simple computational tool for modeling the steady-state thermal-hydraulics of propellant flow through fuel element coolant channels in nuclear thermal rockets. Written for the nuclear propulsion project of the Space Exploration Initiative, ELM evaluates the various heat transfer coefficient and friction factor correlations available for turbulent pipe flow with heat addition. In the past, these correlations were found in different reactor analysis codes, but now comparisons are possible within one program. The logic of ELM is based on the one-dimensional conservation of energy in combination with Newton's Law of Cooling to determine the bulk flow temperature and the wall temperature across a control volume. Since the control volume is an incremental length of tube, the corresponding pressure drop is determined by application of the Law of Conservation of Momentum. The size, speed, and accuracy of ELM make it a simple tool for use in fuel element parametric studies. ELM is a machine independent program written in FORTRAN 77. It has been successfully compiled on an IBM PC compatible running MS-DOS using Lahey FORTRAN 77, a DEC VAX series computer running VMS, and a Sun4 series computer running SunOS UNIX. ELM requires 565K of RAM under SunOS 4.1, 360K of RAM under VMS 5.4, and 406K of RAM under MS-DOS. Because this program is machine independent, no executable is provided on the distribution media. The standard distribution medium for ELM is one 5.25 inch 360K MS-DOS format diskette. ELM was developed in 1991. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. Sun4 and SunOS are trademarks of Sun Microsystems, Inc. IBM PC is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation.
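
    The marching scheme the abstract describes is compact enough to sketch. The fragment below walks one channel in Python, assuming a uniform wall heat flux, constant properties, and a Dittus-Boelter-type coefficient (one of several correlations ELM lets the user compare); all numbers are illustrative, not ELM's models.

        import math

        # Illustrative channel and propellant values (hydrogen-like cp).
        mdot, cp = 0.005, 14300.0            # kg/s, J/(kg K)
        d = 2.5e-3                           # channel diameter, m
        area, perim = math.pi * d**2 / 4.0, math.pi * d
        rho, mu, k_gas = 4.0, 1.3e-5, 0.2    # kg/m^3, Pa s, W/(m K)
        qflux = 5.0e6                        # applied wall heat flux, W/m^2
        L, n = 1.0, 50                       # channel length (m), control volumes
        dx = L / n

        T_bulk, p = 300.0, 5.0e6             # inlet bulk temperature (K), pressure (Pa)
        v = mdot / (rho * area)              # held constant for simplicity
        Re, Pr = rho * v * d / mu, 0.7

        for _ in range(n):
            # Energy balance over the control volume: mdot*cp*dT = q''*P*dx.
            T_bulk += qflux * perim * dx / (mdot * cp)
            # Newton's law of cooling with a Dittus-Boelter-type coefficient.
            h = (0.023 * Re**0.8 * Pr**0.4) * k_gas / d
            T_wall = T_bulk + qflux / h
            # Momentum: Darcy friction pressure drop over the increment.
            p -= (0.184 * Re**-0.2) * (dx / d) * 0.5 * rho * v**2

        print(f"exit bulk T = {T_bulk:.0f} K, wall T = {T_wall:.0f} K, "
              f"exit p = {p/1e6:.2f} MPa")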

  17. PCB tester selection for future systems. Volume 2. Final report, August 1989-June 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmitt, W.

    1992-06-01

    This report describes a computer program (to run on an IBM compatible PC) designed to aid in the selection of a PCB tester, given the characteristics of the PC board to be tested. The program contains a limited data base of PCB testers, and others may be added easily. This report also provides a specification for a limited family of PCB testers to fill the gap between what the U.S. Air Force is expected to need and what is expected to be available within the next four to six years. The parameters used in the computer program and the specification are based on a survey of military and commercial PCBs - both those now available and those expected to come on line within the next four to six years. The results of the survey are covered in volume 2 - available from DTIC. Automatic Test Equipment, Technology Forecast, Air Force Avionics.

  18. PCB tester selection for future systems. Volume 1. Final report, August 1989-June 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmitt, W.

    1992-06-01

    This report describes a computer program (to run on an IBM compatible PC) designed to aid in the selection of a PCB tester, given the characteristics of the PC board to be tested. The program contains a limited data base of PCB testers, and others may be added easily. This report also provides a specification for a limited family of PCB testers to fill the gap between what the U.S. Air Force is expected to need and what is expected to be available within the next four to six years. The parameters used in the computer program and the specification are based on a survey of military and commercial PCBs - both those now available and those expected to come on line within the next four to six years. The results of the survey are covered in volume 2 - available from DTIC. Automatic Test Equipment, Technology Forecast, Air Force Avionics.

  19. Calculation of Weibull strength parameters, Batdorf flaw density constants and related statistical quantities using PC-CARES

    NASA Technical Reports Server (NTRS)

    Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1990-01-01

    This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running PC-DOS/MS-DOS or IBM/MS OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants are also described.
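
    The least-squares branch of the estimation is easy to illustrate. The sketch below fits the two-parameter Weibull distribution to a complete (uncensored) sample using median-rank plotting positions; the strength values are invented, and the censored-sample and maximum-likelihood paths handled by PC-CARES are not shown.

        import math

        strengths = [312.0, 348.0, 355.0, 371.0, 380.0, 395.0, 407.0, 424.0]  # MPa, assumed data

        n = len(strengths)
        x = [math.log(s) for s in sorted(strengths)]
        # Median-rank failure probability for the i-th ordered specimen.
        F = [(i + 1 - 0.3) / (n + 0.4) for i in range(n)]
        y = [math.log(-math.log(1.0 - p)) for p in F]

        # Least-squares line y = m*x + c gives shape m and scale exp(-c/m).
        xbar, ybar = sum(x) / n, sum(y) / n
        m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
            sum((xi - xbar) ** 2 for xi in x)
        c = ybar - m * xbar
        shape = m                      # Weibull modulus
        scale = math.exp(-c / m)       # characteristic strength, MPa

        print(f"Weibull modulus = {shape:.2f}, "
              f"characteristic strength = {scale:.1f} MPa")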

  20. INPE LANDSAT-D thematic mapper computer compatible tape format specification

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Desouza, R. C. M.

    1982-01-01

    The format of the computer compatible tapes (CCT) which contain Thematic Mapper (TM) imagery data acquired from the LANDSAT D and D Prime satellites by the Instituto de Pesquisas Espaciais (CNPq-INPE/Brazil) is defined.

  1. Gravitational baryogenesis in running vacuum models

    NASA Astrophysics Data System (ADS)

    Oikonomou, V. K.; Pan, Supriya; Nunes, Rafael C.

    2017-08-01

    We study the gravitational baryogenesis mechanism for generating baryon asymmetry in the context of running vacuum models. Regardless of whether these models can produce a viable cosmological evolution, we demonstrate that they produce a nonzero baryon-to-entropy ratio even if the universe is filled with conformal matter. This is a notable difference between the running vacuum gravitational baryogenesis and the Einstein-Hilbert one, since in the latter case the predicted baryon-to-entropy ratio is zero. We consider two well-known and widely used running vacuum models and show that the resulting baryon-to-entropy ratio is compatible with the observational data. Moreover, we also show that the mechanism of gravitational baryogenesis may constrain the running vacuum models.
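
    For orientation, the mechanism couples the baryon current J^\mu to the derivative of the Ricci scalar through the CP-violating interaction (1/M_*^2) \int d^4x \sqrt{-g}\,(\partial_\mu R) J^\mu, giving the standard estimate at the decoupling temperature T_D (textbook form with the usual symbols; not quoted from this abstract):

        \frac{n_B}{s} \simeq -\frac{15\, g_b}{4\pi^2 g_*}\,
        \frac{\dot{R}}{M_*^2\, T}\bigg|_{T = T_D}

    In Einstein-Hilbert gravity the trace of the field equations gives R \propto (1 - 3w)\rho, so conformal matter (w = 1/3) forces R = \dot{R} = 0 and the ratio above vanishes; a running vacuum contributes extra terms to R that survive this limit, which is the difference the abstract highlights.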

  2. PYROLASER - PYROLASER OPTICAL PYROMETER OPERATING SYSTEM

    NASA Technical Reports Server (NTRS)

    Roberts, F. E.

    1994-01-01

    The PYROLASER package is an operating system for the Pyrometer Instrument Company's Pyrolaser. There are 6 individual programs in the PYROLASER package: two main programs, two lower level subprograms, and two programs which, although independent, function predominantly as macros. The package provides a quick and easy way to set up, control, and program a standard Pyrolaser. Temperature and emissivity measurements may be either collected as if the Pyrolaser were in the manual operations mode, or displayed on real time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow macros, which are test-specific, to be easily added to the system. The Pyrolaser Simple Operation program provides full on-screen remote operation capabilities, thus allowing the user to operate the Pyrolaser from the computer just as it would be operated manually. The Pyrolaser Simple Operation program also allows the use of "quick starts". Quick starts provide an easy way to permit routines to be used as setup macros for specific applications or tests. The specific procedures required for a test may be ordered in a sequence structure and then the sequence structure can be started with a simple button in the cluster structure provided. One quick start macro is provided for continuous Pyrolaser operation. A subprogram, Display Continuous Pyr Data, is used to display and store the resulting data output. Using this macro, the system is set up for continuous operation and the subprogram is called to display the data in real time on strip charts. The data is simultaneously stored in a spreadsheet format. The resulting spreadsheet file can be opened in any one of a number of commercially available spreadsheet programs. The Read Continuous Pyrometer program is provided as a continuously run subprogram for incorporation of the Pyrolaser software into a process control or feedback control scheme in a multi-component system. The program requires the Pyrolaser to be set up using the Pyrometer String Transfer macro. It requires no inputs and provides temperature and emissivity as outputs. The Read Continuous Pyrometer program can be run continuously and the data can be sampled as often or as seldom as updates of temperature and emissivity are required. PYROLASER is written using the Labview software for use on Macintosh series computers running System 6.0.3 or later, Sun Sparc series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatibles running Microsoft Windows 3.1 or later. Labview requires a minimum of 5 MB of RAM on a Macintosh, 24 MB of RAM on a Sun, and 8 MB of RAM on an IBM PC or compatible. The Labview software is a product of National Instruments (Austin, TX; 800-433-3488), and is not included with this program. The standard distribution medium for PYROLASER is a 3.5 inch 800K Macintosh format diskette. It is also available on a 3.5 inch 720K MS-DOS format diskette, a 3.5 inch diskette in UNIX tar format, and a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation in Macintosh WordPerfect version 2.0.4 format is included on the distribution medium. Printed documentation is included in the price of the program. PYROLASER was developed in 1992.

  3. LABORATORY PROCESS CONTROLLER USING NATURAL LANGUAGE COMMANDS FROM A PERSONAL COMPUTER

    NASA Technical Reports Server (NTRS)

    Will, H.

    1994-01-01

    The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Sometimes process control schedules require changes frequently, even several times per day. These changes may include adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator, or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to identify the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands which are to be executed by the computer. To set the system up the operator writes device driver routines for all of the controlled devices. Once set up, this system requires only an input file containing natural language command lines which tell the system what to do and when to do it. The operator can make up custom commands for operating and taking data from external research equipment at any time of the day or night without the operator in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.
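
    The command-file pattern is worth a sketch. Below, natural-language lines are dispatched to user-written device drivers, in Python rather than the original FORTRAN 77/Pascal pair; the command vocabulary and driver names are hypothetical.

        # Each line of the input file is a natural-language command such as
        #   at 14:30 set furnace to 450
        #   at 14:35 read thermocouple 3
        # The dispatcher maps (verb, device) pairs to user-written drivers.

        def set_furnace(temp):            # hypothetical device driver
            print(f"furnace setpoint -> {temp} C")

        def read_thermocouple(channel):   # hypothetical device driver
            print(f"reading thermocouple {channel}")

        DRIVERS = {
            ("set", "furnace"): lambda words: set_furnace(float(words[-1])),
            ("read", "thermocouple"): lambda words: read_thermocouple(int(words[-1])),
        }

        def run_schedule(lines):
            for line in lines:
                words = line.split()
                when, verb, device = words[1], words[2], words[3]
                action = DRIVERS.get((verb, device))
                if action is None:
                    print(f"unknown command: {line!r}")
                    continue
                print(f"[{when}]", end=" ")
                action(words)

        run_schedule([
            "at 14:30 set furnace to 450",
            "at 14:35 read thermocouple 3",
        ])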

  4. Bringing the Unidata IDV to the Cloud

    NASA Astrophysics Data System (ADS)

    Fisher, W. I.; Oxelson Ganter, J.

    2015-12-01

    Maintaining software compatibility across new computing environments and the associated underlying hardware is a common problem for software engineers and scientific programmers. While traditional software engineering provides a suite of tools and methodologies which may mitigate this issue, they are typically ignored by developers lacking a background in software engineering. Causing further problems, these methodologies are best applied at the start of a project; trying to apply them to an existing, mature project can require an immense effort. Visualization software is particularly vulnerable to this problem, given the inherent dependency on particular graphics hardware and software APIs. As a result of these issues, there exists a large body of software which is simultaneously critical to the scientists who are dependent upon it, and yet increasingly difficult to maintain. The solution to this problem was partially provided with the advent of cloud computing: application streaming. This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations, with little-to-no re-engineering required. When coupled with containerization technology such as Docker, we are able to easily bring the same visualization software to a desktop, a netbook, a smartphone, and the next generation of hardware, whatever it may be. Unidata has been able to harness application streaming to provide a tablet-compatible version of our visualization software, the Integrated Data Viewer (IDV). This work will examine the challenges associated with adapting the IDV to an application streaming platform, and include a brief discussion of the underlying technologies involved.

  5. Low-Bandwidth and Non-Compute Intensive Remote Identification of Microbes from Raw Sequencing Reads

    PubMed Central

    Gautier, Laurent; Lund, Ole

    2013-01-01

    Cheap DNA sequencing may soon become routine not only for human genomes but also for practically anything requiring the identification of living organisms from their DNA: tracking of infectious agents, control of food products, bioreactors, or environmental samples. We propose a novel general approach to the analysis of sequencing data where a reference genome does not have to be specified. Using a distributed architecture we are able to query a remote server for hints about what the reference might be, transferring a relatively small amount of data. Our system consists of a server with known reference DNA indexed, and a client with raw sequencing reads. The client sends a sample of unidentified reads, and in return receives a list of matching references. Sequences for the references can be retrieved and used for exhaustive computation on the reads, such as alignment. To demonstrate this approach we have implemented a web server, indexing tens of thousands of publicly available genomes and genomic regions from various organisms and returning lists of matching hits from query sequencing reads. We have also implemented two clients: one running in a web browser, and one as a python script. Both are able to handle a large number of sequencing reads, even from portable devices (the browser-based client running on a tablet), performing the task within seconds and consuming an amount of bandwidth compatible with mobile broadband networks. Such client-server approaches could develop in the future, allowing fully automated processing of sequencing data and routine instant quality checks of sequencing runs from desktop sequencers. Web access is available at http://tapir.cbs.dtu.dk. The source code for a python command-line client, a server, and supplementary data are available at http://bit.ly/1aURxkc. PMID:24391826
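
    On the client side the protocol is little more than sampling plus one HTTP round trip. A sketch follows, with a hypothetical endpoint URL and JSON response shape; the actual API of the tapir.cbs.dtu.dk service is not documented in this abstract.

        import json
        import random
        import urllib.request

        def sample_reads(fastq_path, n=100):
            """Reservoir-sample n read sequences from a FASTQ file."""
            sample = []
            with open(fastq_path) as fh:
                for i, line in enumerate(fh):
                    if i % 4 != 1:        # sequence lines are every 4th line
                        continue
                    k = i // 4            # index of this read
                    if len(sample) < n:
                        sample.append(line.strip())
                    elif random.random() < n / (k + 1):
                        sample[random.randrange(n)] = line.strip()
            return sample

        def query_server(reads, url="https://example.org/identify"):  # hypothetical
            body = json.dumps({"reads": reads}).encode()
            req = urllib.request.Request(
                url, data=body, headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)    # e.g. [{"reference": ..., "hits": ...}]

        # matches = query_server(sample_reads("run42.fastq"))
        # print(matches[:5])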

  6. Low-bandwidth and non-compute intensive remote identification of microbes from raw sequencing reads.

    PubMed

    Gautier, Laurent; Lund, Ole

    2013-01-01

    Cheap DNA sequencing may soon become routine not only for human genomes but also for practically anything requiring the identification of living organisms from their DNA: tracking of infectious agents, control of food products, bioreactors, or environmental samples. We propose a novel general approach to the analysis of sequencing data where a reference genome does not have to be specified. Using a distributed architecture we are able to query a remote server for hints about what the reference might be, transferring a relatively small amount of data. Our system consists of a server with known reference DNA indexed, and a client with raw sequencing reads. The client sends a sample of unidentified reads, and in return receives a list of matching references. Sequences for the references can be retrieved and used for exhaustive computation on the reads, such as alignment. To demonstrate this approach we have implemented a web server, indexing tens of thousands of publicly available genomes and genomic regions from various organisms and returning lists of matching hits from query sequencing reads. We have also implemented two clients: one running in a web browser, and one as a python script. Both are able to handle a large number of sequencing reads, even from portable devices (the browser-based client running on a tablet), performing the task within seconds and consuming an amount of bandwidth compatible with mobile broadband networks. Such client-server approaches could develop in the future, allowing fully automated processing of sequencing data and routine instant quality checks of sequencing runs from desktop sequencers. Web access is available at http://tapir.cbs.dtu.dk. The source code for a python command-line client, a server, and supplementary data are available at http://bit.ly/1aURxkc.

  7. IPSL-CM5A2. An Earth System Model designed to run long simulations for past and future climates.

    NASA Astrophysics Data System (ADS)

    Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier

    2017-04-01

    The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model-years/day on 32 cores of the Curie supercomputer (at the TGCC computing center, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g. CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical for Quaternary or "deep-time" paleoclimate studies, in which a fully equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Based on IPSL-CM5A, technical developments have been performed both on separate components and on the coupling system in order to speed up the whole coupled model. These developments include the integration of hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, the use of a new input-output library to perform parallel asynchronous input/output by using computing cores as "IO servers", and the use of a parallel coupling library between the ocean and the atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening the way to multi-millennial simulations. Apart from obtaining better computing performance, one aim of setting up IPSL-CM5A2 was also to overcome the cold bias in global surface air temperature (t2m) seen in IPSL-CM5A. We present the tuning strategy used to overcome this bias as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2. Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configuration was accommodated by a new design of the ocean tripolar grid.

  8. HANSF 1.3 Users Manual FAI/98-40-R2 Hanford Spent Nuclear Fuel (SNF) Safety Analysis Model [SEC 1 and 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DUNCAN, D.R.

    The HANSF analysis tool is an integrated model considering phenomena inside a multi-canister overpack (MCO) spent nuclear fuel container such as fuel oxidation, convective and radiative heat transfer, and the potential for fission product release. This manual reflects HANSF version 1.3.2, a revised version of 1.3.1. HANSF 1.3.2 was written to correct minor errors and to allow modeling of condensate flow on the MCO inner surface. HANSF 1.3.2 is intended for use on personal computers such as IBM-compatible machines with Intel processors running under Lahey TI or Digital Visual Fortran, Version 6.0, but this does not preclude operation in other environments.

  9. Computer Aided Drafting Packages for Secondary Education. Edition 2. PC DOS Compatible Programs. A MicroSIFT Quarterly Report.

    ERIC Educational Resources Information Center

    Pollard, Jim

    This report reviews eight IBM-compatible software packages that are available to secondary schools to teach computer-aided drafting (CAD). Software packages to be considered were selected following reviews of CAD periodicals, computers in education periodicals, advertisements, and recommendations of teachers. The packages were then rated by…

  10. NASA Applications for Computational Electromagnetic Analysis

    NASA Technical Reports Server (NTRS)

    Lewis, Catherine C.; Trout, Dawn H.; Krome, Mark E.; Perry, Thomas A.

    2011-01-01

    Computational Electromagnetic Software is used by NASA to analyze the compatibility of systems too large or too complex for testing. Recent advances in software packages and computer capabilities have made it possible to determine the effects of a transmitter inside a launch vehicle fairing, better analyze the environment threats, and perform on-orbit replacements with assured electromagnetic compatibility.

  11. The Linux operating system: An introduction

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.

    1995-01-01

    Linux is a Unix-like operating system for Intel 386/486/Pentium based IBM-PCs and compatibles. The kernel of this operating system was written from scratch by Linus Torvalds and, although copyrighted by the author, may be freely distributed. A world-wide group has collaborated in developing Linux on the Internet. Linux can run the powerful set of compilers and programming tools of the Free Software Foundation, and XFree86, a port of the X Window System from MIT. Most capabilities associated with high performance workstations, such as networking, shared file systems, electronic mail, TeX, LaTeX, etc. are freely available for Linux. It can thus transform cheap IBM-PC compatible machines into Unix workstations with considerable capabilities. The author explains how Linux may be obtained, installed and networked. He also describes some interesting applications for Linux that are freely available. The enormous consumer market for IBM-PC compatible machines continually drives down prices of CPU chips, memory, hard disks, CDROMs, etc. Linux can convert such machines into powerful workstations that can be used for teaching, research and software development. For professionals who use Unix based workstations at work, Linux permits virtually identical working environments on their personal home machines. For cost conscious educational institutions Linux can create world-class computing environments from cheap, easily maintained, PC clones. Finally, for university students, it provides an essentially cost-free path away from DOS into the world of Unix and X Windows.

  12. The use of the bicycle compatibility index in identifying gaps and deficiencies in bicycle networks

    NASA Astrophysics Data System (ADS)

    Ilie, A.; Oprea, C.; Costescu, D.; Roşca, E.; Dinu, O.; Ghionea, F.

    2016-11-01

    Currently, no methodology is widely accepted by engineers, planners, or bicycle coordinators that allows them to determine how compatible a roadway is in providing efficient operation of both bicycles and motor vehicles. Previous studies reported a number of approaches to obtain an appropriate level of service; some authors developed the bicycle level of service (BLOS) and other authors developed bicycle compatibility indexes (BCI). The level of service (BLOS) for a bicycle route represents an evaluation of the safety and comfort perceived by a bicyclist relative to the motorized traffic while riding on the road surface. The bicycle compatibility index (BCI) is used by bicycle coordinators, transportation planners, and traffic engineers to evaluate the capability of specific roadways to accommodate both motorists and bicyclists and to plan for and design roadways that are bicycle compatible. After applying the BCI and BLOS models to the designed bicycle infrastructure network in the city of Dej, one can see that only a few streets are Moderately Low compatible, compared with the others, whose high degree of compatibility recommends their inclusion in the bicycle infrastructure network.
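
    In spirit, a compatibility index of this kind is a calibrated linear combination of roadway attributes mapped onto level-of-service grades. The sketch below shows only the shape of such a screening calculation; the coefficients and cutoffs are placeholders, not the calibrated FHWA values nor those used in the Dej study.

        # Illustrative BCI-style screening. Coefficients and thresholds are
        # placeholders, NOT the calibrated FHWA model.

        def bci(bike_lane, lane_width_m, traffic_vph, speed_kmh, parking):
            score = 3.6
            score -= 1.0 if bike_lane else 0.0          # a bike lane helps
            score -= 0.4 * max(0.0, lane_width_m - 3.0) # wider curb lane helps
            score += 0.002 * traffic_vph                # traffic volume hurts
            score += 0.02 * (speed_kmh - 50)            # speed hurts
            score += 0.5 if parking else 0.0            # curb parking hurts
            return score

        def level(score):
            # Lower scores mean a more bicycle-compatible roadway segment.
            for grade, cutoff in [("A", 1.5), ("B", 2.3), ("C", 3.4),
                                  ("D", 4.4), ("E", 5.3)]:
                if score <= cutoff:
                    return grade
            return "F"

        s = bci(bike_lane=False, lane_width_m=3.3, traffic_vph=600,
                speed_kmh=60, parking=True)
        print(f"BCI = {s:.2f}, LOS = {level(s)}")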

  13. A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.

    2015-12-01

    Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules allowing maximal flexibility in deployment choices. The current set of module managers includes: Staging Manager: runs the computation locally on the WPS server or remotely using tools such as celery or SLURM. Compute Engine Manager: runs the computation serially or distributed over nodes using a parallelization framework such as celery or spark. Decomposition Manager: manages strategies for distributing the data over nodes. Data Manager: handles the import of domain data from long term storage and manages the in-memory and disk-based caching architectures. Kernel Manager: a kernel is an encapsulated computational unit which executes a processor's compute task. Each kernel is implemented in python exploiting existing analysis packages (e.g. CDAT) and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be executed using either direct web service calls, a python script or application, or a javascript-based web application. Client packages in python or javascript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends, compare multiple reanalysis datasets, and examine variability.
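
    Because the API follows the OGC WPS convention, a request can be as plain as an HTTP GET with key-value parameters. A sketch follows; the endpoint URL, operation identifier, and input names are hypothetical, not CDAS's published interface.

        import urllib.parse
        import urllib.request

        # OGC WPS 1.0.0 key-value encoding of an Execute request.
        base = "https://example.nasa.gov/wps"            # hypothetical endpoint
        params = {
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": "timeseries.average",          # hypothetical kernel
            "datainputs": "variable=tas;domain=[-90,90,0,360];years=1980-2010",
        }
        url = base + "?" + urllib.parse.urlencode(params)

        # with urllib.request.urlopen(url) as resp:
        #     print(resp.read().decode())                # WPS ExecuteResponse XML
        print(url)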

  14. Comparison of the results of several heat transfer computer codes when applied to a hypothetical nuclear waste repository

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claiborne, H.C.; Wagner, R.S.; Just, R.A.

    1979-12-01

    A direct comparison of transient thermal calculations was made with the heat transfer codes HEATING5, THAC-SIP-3D, ADINAT, SINDA, TRUMP, and TRANCO for a hypothetical nuclear waste repository. With the exception of TRUMP and SINDA (actually closer to the earlier CINDA3G version), the other codes agreed to within +-5% for the temperature rises as a function of time. The TRUMP results agreed within +-5% up to about 50 years, where the maximum temperature occurs, and then began an oscillatory behavior with up to 25% deviations at longer times. This could have resulted from time steps that were too large or from some unknown system problems. The available version of the SINDA code was not compatible with the IBM compiler without using an alternative method for handling a variable thermal conductivity. The results were about 40% low, but a reasonable agreement was obtained by assuming a uniform thermal conductivity; however, a programming error was later discovered in the alternative method. Some work is required on the IBM version to make it compatible with the system and still use the recommended method of handling variable thermal conductivity. TRANCO can only be run as a 2-D model, and TRUMP and CINDA apparently required longer running times and did not agree in the 2-D case; therefore, only HEATING5, THAC-SIP-3D, and ADINAT were used for the 3-D model calculations. The codes agreed within +-5%; at distances of about 1 ft from the waste canister edge, temperature rises were also close to that predicted by the 3-D model.

  15. Processable Data Making in the Remote Server Sent by Android Phone as a GIS Data Collecting Tool

    NASA Astrophysics Data System (ADS)

    Karaagac, Abdullah; Bostancı, Bulent

    2016-04-01

    Mobile technologies are improving and getting cheaper every day. Not only have smart phones improved greatly, but new types of mobile applications and sensors arrive with them. Maps and navigation applications are among the most popular of these, and most of them use location services including GNSS, Wi-Fi, cellular data and beacon services. Although the coordinate precision is not very high, it is adequate for many applications. Android is a mobile operating system based on the Linux kernel. It is compatible with various mobile devices such as smart phones, tablets, smart TVs, wearable technologies, etc. Android offers large scope for application development using open-source libraries and device sensors such as the gyroscope, GNSS, etc. Android Studio is the most popular integrated development environment (IDE) for Android devices, developed mainly by Google; it was announced on May 16, 2013 at the Google I/O conference. Android Studio is built upon the Gradle architecture, which is written in the Java language. SQLite is a relational database management system in very common use on mobile devices. It is implemented as a C programming library, is mostly used by embedding it into a software application, and supports many operating systems including Android. Remote servers come in several forms, from highly complex to simple. For this project we use a low-cost quad-core single-board computer named Raspberry Pi 2. This device includes a 900 MHz ARMv7-compatible quad-core CPU, a VideoCore IV GPU and 1 GB of RAM. Although Raspberry Pi 2's main operating system is Raspbian, we use Debian; both are Linux-based operating systems. The Raspberry Pi supports many programming languages, some of which are optimized for the device: Python, Java, C, C++, Ruby, Perl and Squeak Smalltalk. In this paper, a mobile application is developed to send coordinate and string data to a SQL database embedded in a remote server. The application runs on an Android mobile phone. It obtains location information from GNSS and cellular data; the user enters the other information manually. Clicking a button sends the information to the remote server, which runs SQLite. The stored coordinates can be converted to other reference frames, for example from WGS 84 to ITRF.
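
    The receiving end of such a setup can be very small. Below is a sketch of the server side using Python's standard library, inserting each posted point into a SQLite table; the table and field names are assumptions, since the authors' implementation is not shown in the abstract.

        import json
        import sqlite3
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # One shared connection; field names (lat, lon, label, recorded)
        # are assumed for illustration.
        db = sqlite3.connect("points.db", check_same_thread=False)
        db.execute("""CREATE TABLE IF NOT EXISTS points
                      (lat REAL, lon REAL, label TEXT, recorded TEXT)""")

        class Handler(BaseHTTPRequestHandler):
            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                rec = json.loads(self.rfile.read(length))
                db.execute("INSERT INTO points VALUES (?, ?, ?, ?)",
                           (rec["lat"], rec["lon"], rec.get("label", ""),
                            rec.get("time", "")))
                db.commit()
                self.send_response(201)   # created
                self.end_headers()

        # HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()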

  16. Transient Turbine Engine Modeling with Hardware-in-the-Loop Power Extraction (PREPRINT)

    DTIC Science & Technology

    2008-07-01

    Furthermore, it must be compatible with a real-time operating system that is capable of running the simulation. For some models, especially those that use... problem of interfacing the engine/control model to a real-time operating system and associated lab hardware becomes a problem of interfacing these... model in real-time. This requires the use of a real-time operating system and a compatible I/O (input/output) board. Figure 1 illustrates the HIL

  17. Build your own low-cost seismic/bathymetric recorder annotator

    USGS Publications Warehouse

    Robinson, W.

    1994-01-01

    An inexpensive programmable annotator, completely compatible with at least three models of widely used graphic recorders (Raytheon LSR-1811, Raytheon LSR-1807 M, and EDO 550), has been developed to automatically write event marks and print up to sixteen numbers on the paper record. Event mark and character printout intervals, character height and character position are all selectable with front panel switches. Operation is completely compatible with recorders running in either continuous or start-stop mode. © 1994.

  18. Analysis and control of supersonic vortex breakdown flows

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1990-01-01

    Analysis and computation of steady, compressible, quasi-axisymmetric flow of an isolated, slender vortex are considered. The compressible Navier-Stokes equations are reduced to a simpler set by using the slenderness and quasi-axisymmetry assumptions. The resulting set along with a compatibility equation are transformed from the diverging physical domain to a rectangular computational domain. Solving for a compatible set of initial profiles and specifying a compatible set of boundary conditions, the equations are solved using a type-differencing scheme. Vortex breakdown locations are detected by the failure of the scheme to converge. Computational examples include isolated vortex flows at different Mach numbers, external axial-pressure gradients and swirl ratios.

  19. Desk-top publishing using IBM-compatible computers.

    PubMed

    Grencis, P W

    1991-01-01

    This paper sets out to describe one Medical Illustration Department's experience of the introduction of computers for desk-top publishing. In this particular case, after careful consideration of all the options open, an IBM-compatible system was installed rather than the often popular choice of an Apple Macintosh.

  20. Development of a UNIX network compatible reactivity computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez, R.F.; Edwards, R.M.

    1996-12-31

    A state-of-the-art UNIX network compatible controller and UNIX host workstation with MATLAB/SIMULINK software were used to develop, implement, and validate a digital reactivity calculation. An objective of the development was to determine why the reactivity output of a Macintosh-based reactivity computer drifted intolerably.

  1. Enhanced Graphics for Extended Scale Range

    NASA Technical Reports Server (NTRS)

    Hanson, Andrew J.; Chi-Wing Fu, Philip

    2012-01-01

    Enhanced Graphics for Extended Scale Range is a computer program for rendering fly-through views of scene models that include visible objects differing in size by large orders of magnitude. An example would be a scene showing a person in a park at night with the moon, stars, and galaxies in the background sky. Prior graphical computer programs exhibit arithmetic and other anomalies when rendering scenes containing objects that differ enormously in scale and distance from the viewer. The present program dynamically repartitions distance scales of objects in a scene during rendering to eliminate almost all such anomalies in a way compatible with implementation in other software and in hardware accelerators. By assigning depth ranges corresponding to rendering precision requirements, either automatically or under program control, this program spaces out object scales to match the precision requirements of the rendering arithmetic. This action includes an intelligent partition of the depth buffer ranges to avoid known anomalies from this source. The program is written in C++, using OpenGL, GLUT, and GLUI standard libraries, and nVidia GeForce Vertex Shader extensions. The program has been shown to work on several computers running UNIX and Windows operating systems.
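
    The repartitioning idea can be sketched independently of any graphics API: split the eye-to-horizon range into slabs whose near/far ratio stays within what the depth buffer resolves, then render the slabs back to front, each with its own projection. The fragment below shows only that bookkeeping; the slab ratio bound and the draw pass are placeholders, not the program's actual scheme.

        import math

        def depth_slabs(near, far, max_ratio=1.0e4):
            """Split [near, far] into geometric slabs with far/near <= max_ratio."""
            n = max(1, math.ceil(math.log(far / near) / math.log(max_ratio)))
            edges = [near * (far / near) ** (i / n) for i in range(n + 1)]
            return list(zip(edges[:-1], edges[1:]))

        def render(scene_near=0.1, scene_far=1.0e18):
            # Far slabs first (back to front); in OpenGL terms one would set
            # the projection's near/far to each slab, clear the depth buffer,
            # and draw only the objects whose distance falls inside the slab.
            for slab_near, slab_far in reversed(depth_slabs(scene_near, scene_far)):
                print(f"draw slab [{slab_near:.3g}, {slab_far:.3g}]")

        render()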

  2. Benchmarked analyses of gamma skyshine using MORSE-CGA-PC and the DABL69 cross-section set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reichert, P.T.; Golshani, M.

    1991-01-01

    Design for gamma-ray skyshine is a common consideration for a variety of nuclear and accelerator facilities. Many of these designs can benefit from a more accurate and complete treatment than can be provided by simple skyshine analysis tools. Those methods typically require a number of conservative, simplifying assumptions in modeling the radiation source and shielding geometry. This paper considers the benchmarking of one analytical option. The MORSE-CGA Monte Carlo radiation transport code system provides the capability for detailed treatment of virtually any source and shielding geometry. Unfortunately, the mainframe computer costs of MORSE-CGA analyses can prevent cost-effective application to small projects. For this reason, the MORSE-CGA system was converted to run on IBM personal computer (PC)-compatible computers using the Intel 80386 or 80486 microprocessors. The DLC-130/DABL69 cross-section set (46n,23g) was chosen as the most suitable, readily available, broad-group library. The most important reason is the relatively high (P5) Legendre order of expansion for angular distribution. This is likely to be beneficial in the deep-penetration conditions modeled in some skyshine problems.

  3. PC-CUBE: A Personal Computer Based Hypercube

    NASA Technical Reports Server (NTRS)

    Ho, Alex; Fox, Geoffrey; Walker, David; Snyder, Scott; Chang, Douglas; Chen, Stanley; Breaden, Matt; Cole, Terry

    1988-01-01

    PC-CUBE is an ensemble of IBM PCs or close compatibles connected in the hypercube topology with ordinary computer cables. Communication occurs at the rate of 115.2 kbaud via the RS-232 serial links. Available for PC-CUBE are the Crystalline Operating System III (CrOS III), the Mercury Operating System, and CUBIX and PLOTIX, which are parallel I/O and graphics libraries. A CrOS performance monitor was developed to facilitate the measurement of communication and computation time of a program and their effects on performance. Also available are CXLISP, a parallel version of the XLISP interpreter; GRAFIX, some graphics routines for the EGA and CGA; and a general execution profiler for determining execution time spent by program subroutines. PC-CUBE provides a programming environment similar to all hypercube systems running CrOS III, Mercury and CUBIX. In addition, every node (personal computer) has its own graphics display monitor and storage devices. These allow data to be displayed or stored at every processor, which has much instructional value and enables easier debugging of applications. Some application programs which are taken from the book Solving Problems on Concurrent Processors (Fox 88) were implemented with graphics enhancement on PC-CUBE. The applications range from solving the Mandelbrot set, Laplace equation, wave equation, and long range force interaction, to WaTor, an ecological simulation.
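
    The cabling rule for a hypercube is compact: node i connects to the d nodes whose binary labels differ from i in exactly one bit. A quick sketch of the neighbor computation (illustrative only):

        # In a d-dimensional hypercube, node i links to i XOR 2**b for each
        # bit b -- the wiring rule for PC-CUBE's point-to-point serial cables.

        def neighbors(node, dim):
            return [node ^ (1 << b) for b in range(dim)]

        # A 3-cube (8 PCs): node 5 (binary 101) links to 4, 7, and 1.
        for n in range(8):
            print(n, "->", neighbors(n, 3))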

  4. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event driven network interface, and a free running or frame synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software based real time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions because they enable different computer platforms to exchange encoded video information without requiring on-board protocol compatible video codec hardware. Software based solutions enable true low cost video conferencing that fits the `open systems' model of interoperability that is so important for building portable hardware and software applications.

  5. Tracking at High Level Trigger in CMS

    NASA Astrophysics Data System (ADS)

    Tosi, M.

    2016-04-01

    The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capability. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking, the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Track reconstruction algorithms are widely used in the HLT, for the reconstruction of the physics objects as well as in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II. We will present the performance of HLT tracking algorithms, discussing their impact on the CMS physics program, as well as new developments made towards the next data taking in 2015.

  6. Training Software in Artificial-Intelligence Computing Techniques

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna; Rogstad, Eric; Chalfant, Eugene

    2005-01-01

    The Artificial Intelligence (AI) Toolkit is a computer program for training scientists, engineers, and university students in three soft-computing techniques (fuzzy logic, neural networks, and genetic algorithms) used in artificial-intelligence applications. The program provides an easily understandable tutorial interface, including an interactive graphical component through which the user can gain hands-on experience in soft-computing techniques applied to realistic example problems. The tutorial provides step-by-step instructions on the workings of soft-computing technology, whereas the hands-on examples allow interaction and reinforcement of the techniques explained throughout the tutorial. In the fuzzy-logic example, a user can interact with a robot and an obstacle course to verify how fuzzy logic is used to command a rover traverse from an arbitrary start to the goal location. For the genetic algorithm example, the problem is to determine the minimum-length path for visiting a user-chosen set of planets in the solar system. For the neural-network example, the problem is to decide, on the basis of input data on physical characteristics, whether a person is a man, woman, or child. The AI Toolkit is compatible with the Windows 95, 98, ME, NT 4.0, 2000, and XP operating systems. A computer having a processor speed of at least 300 MHz and random-access memory of at least 56 MB is recommended for optimal performance. The program can be run on a slower computer having less memory, but some functions may not be executed properly.

  7. Another Program For Generating Interactive Graphics

    NASA Technical Reports Server (NTRS)

    Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl; hide

    1991-01-01

    VAX/Ultrix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. When used throughout company for wide range of applications, makes both application program and computer seem transparent, with noticeable improvements in learning curve. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC's and PS/2 computers running AIX, and HP 9000 S

  8. SuperState: a computer program for the control of operant behavioral experimentation.

    PubMed

    Zhang, Fuqiang

    2006-09-15

    Operant behavioral researches require precise control of experimental devices for delivering stimuli and monitoring behavioral responses. The author developed a software solution named SuperState for controlling hardware devices and running reinforcement schedules. The Microsoft Windows compatible software was written by use of an object-oriented programming language Borland Delphi 5.0, which has simplified the programming of the application. SuperState is a stand-alone easy-to-use green software, without the need for the experimenter to master any scripting languages. It features: (1) control of multiple operant cages running independent reinforcement schedules; (2) enough cage devices (16 digital inputs and 16 digital outputs for each cage) suitable for the need of most operant behavioral equipments; (3) control of most standard ISA-type digital interface cards including Med-Associates Super-port cards and a PCI-type card AC6412, and highly expandable to support other PCI-type interface cards; (4) high-resolution device control (1ms); (5) a built-in real-time cumulative recorder; (6) extensive data analyzing including event recorder, cumulative recorder, block analyzing; the summarized results can be transferred easily to Microsoft Excel spreadsheets through the Clipboard.

  9. Axionic landscape for Higgs coupling near-criticality

    NASA Astrophysics Data System (ADS)

    Cline, James M.; Espinosa, José R.

    2018-02-01

    The measured value of the Higgs quartic coupling λ is peculiarly close to the critical value above which the Higgs potential becomes unstable, when extrapolated to high scales by renormalization group running. It is tempting to speculate that there is an anthropic reason behind this near-criticality. We show how an axionic field can provide a landscape of vacuum states in which λ scans. These states are populated during inflation to create a multiverse with different quartic couplings, with a probability distribution P that can be computed. If P is peaked in the anthropically forbidden region of Higgs instability, then the most probable universe compatible with observers would be close to the boundary, as observed. We discuss three scenarios depending on the Higgs vacuum selection mechanism: decay by quantum tunneling, by thermal fluctuations, or by inflationary fluctuations.

  10. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    NASA Astrophysics Data System (ADS)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA, so that the code can be run on GPUs, leads to a speed-up of up to two orders of magnitude.
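
    The bottleneck being ported to the GPU is the structure-factor evaluation, a sum over all particle positions for each scattering vector. A rough NumPy illustration of that data-parallel pattern follows (it uses the standard definition of S(q); this is not NRMC's own code, and the configuration is invented):

        # S(q) = |sum_j exp(i q.r_j)|^2 / N for many q-vectors at once;
        # each q is independent, which is what maps onto GPU threads.
        import numpy as np

        rng = np.random.default_rng(0)
        r = rng.uniform(0.0, 10.0, size=(1000, 3))   # N toy particle positions
        q = rng.normal(size=(256, 3))                # scattering vectors

        phases = np.exp(1j * (q @ r.T))              # shape (n_q, N)
        S = np.abs(phases.sum(axis=1))**2 / r.shape[0]
        print(S.shape, S.mean())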

  11. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    PubMed

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games, and gender, followed by reported alcohol consumption, age, and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.

  12. LANDSAT-D data format control book. Volume 6, appendix D: Thematic mapper Computer Compatible Tape (CCT-AT/PT)

    NASA Technical Reports Server (NTRS)

    Ahmed, H.

    1981-01-01

    The format of computer compatible tapes which contain LANDSAT 4 and D Prime thematic mapper data is defined. A complete specification of the CCT-AT (radiometric corrections applied and geometric matrices appended) and the CCT-PT (radiometric and geometric corrections) data formats is provided.

  13. Development of a Low-Cost Sub-Scale Aircraft for Flight Research: The FASER Project

    NASA Technical Reports Server (NTRS)

    Owens, Donald B.; Cox, David E.; Morelli, Eugene A.

    2006-01-01

    An inexpensive unmanned sub-scale aircraft was developed to conduct frequent flight test experiments for research and demonstration of advanced dynamic modeling and control design concepts. This paper describes the aircraft, flight systems, flight operations, and data compatibility, including details of some practical problems encountered and the solutions found. The aircraft, named Free-flying Aircraft for Sub-scale Experimental Research, or FASER, was outfitted with high-quality instrumentation to measure aircraft inputs and states, as well as vehicle health parameters. Flight data are stored onboard, but can also be telemetered to a ground station in real time for analysis. Commercial-off-the-shelf hardware and software were used as often as possible. The flight computer is based on the PC104 platform, and runs xPC-Target software. Extensive wind tunnel testing was conducted with the same aircraft used for flight testing, and a six degree-of-freedom simulation with nonlinear aerodynamics was developed to support flight tests. Flight tests to date have been conducted to mature the flight operations, validate the instrumentation, and check the flight data for kinematic consistency. Data compatibility analysis showed that the flight data are accurate and consistent after corrections are made for estimated systematic instrumentation errors.

  14. Delivering integrated HAZUS-MH flood loss analyses and flood inundation maps over the Web.

    PubMed

    Hearn, Paul P; Longenecker, Herbert E; Aguinaldo, John J; Rahav, Ami N

    2013-01-01

    Catastrophic flooding is responsible for more loss of life and damages to property than any other natural hazard. Recently developed flood inundation mapping technologies make it possible to view the extent and depth of flooding on the land surface over the Internet; however, by themselves these technologies are unable to provide estimates of losses to property and infrastructure. The Federal Emergency Management Agency's (FEMA's) HAZUS-MH software is extensively used to conduct flood loss analyses in the United States, providing a nationwide database of population and infrastructure at risk. Unfortunately, HAZUS-MH requires a dedicated Geographic Information System (GIS) workstation and a trained operator, and analyses are not adapted for convenient delivery over the Web. This article describes a cooperative effort by the US Geological Survey (USGS) and FEMA to make HAZUS-MH output GIS and Web compatible and to integrate these data with digital flood inundation maps in USGS's newly developed Inundation Mapping Web Portal. By running the computationally intensive HAZUS-MH flood analyses offline and converting the output to a Web-GIS compatible format, detailed estimates of flood losses can now be delivered to anyone with Internet access, thus dramatically increasing the availability of these forecasts to local emergency planners and first responders.

  15. Delivering integrated HAZUS-MH flood loss analyses and flood inundation maps over the Web

    USGS Publications Warehouse

    Hearn, Paul P.; Longenecker, Herbert E.; Aguinaldo, John J.; Rahav, Ami N.

    2013-01-01

    Catastrophic flooding is responsible for more loss of life and damages to property than any other natural hazard. Recently developed flood inundation mapping technologies make it possible to view the extent and depth of flooding on the land surface over the Internet; however, by themselves these technologies are unable to provide estimates of losses to property and infrastructure. The Federal Emergency Management Agency’s (FEMA's) HAZUS-MH software is extensively used to conduct flood loss analyses in the United States, providing a nationwide database of population and infrastructure at risk. Unfortunately, HAZUS-MH requires a dedicated Geographic Information System (GIS) workstation and a trained operator, and analyses are not adapted for convenient delivery over the Web. This article describes a cooperative effort by the US Geological Survey (USGS) and FEMA to make HAZUS-MH output GIS and Web compatible and to integrate these data with digital flood inundation maps in USGS’s newly developed Inundation Mapping Web Portal. By running the computationally intensive HAZUS-MH flood analyses offline and converting the output to a Web-GIS compatible format, detailed estimates of flood losses can now be delivered to anyone with Internet access, thus dramatically increasing the availability of these forecasts to local emergency planners and first responders.

  16. Maclisp extensions

    NASA Technical Reports Server (NTRS)

    Bawden, A.; Burke, G. S.; Hoffman, C. W.

    1981-01-01

    A common subset of selected facilities available in Maclisp and its derivatives (PDP-10 and Multics Maclisp, Lisp Machine Lisp (Zetalisp), and NIL) is described. The object is to aid in writing code which can run compatibly in more than one of these environments.

  17. Spinors: A Mathematica package for doing spinor calculus in General Relativity

    NASA Astrophysics Data System (ADS)

    Gómez-Lobo, Alfonso García-Parrado; Martín-García, José M.

    2012-10-01

    The Spinors software is a Mathematica package which implements 2-component spinor calculus as devised by Penrose for General Relativity in dimension 3+1. The Spinors software is part of the xAct system, which is a collection of Mathematica packages to do tensor analysis by computer. In this paper we give a thorough description of Spinors and present practical examples of use. Program summary Program title: Spinors Catalogue identifier: AEMQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 117039 No. of bytes in distributed program, including test data, etc.: 300404 Distribution format: tar.gz Programming language: Mathematica. Computer: Any computer running Mathematica 7.0 or higher. Operating system: Any operating system compatible with Mathematica 7.0 or higher. RAM: 94Mb in Mathematica 8.0. Classification: 1.5. External routines: Mathematica packages xCore, xPerm and xTensor which are part of the xAct system. These can be obtained at http://www.xact.es. Nature of problem: Manipulation and simplification of spinor expressions in General Relativity. Solution method: Adaptation of the tensor functionality of the xAct system for the specific situation of spinor calculus in four dimensional Lorentzian geometry. Restrictions: The software only works on 4-dimensional Lorentzian space-times with metric of signature (1, -1, -1, -1). There is no direct support for Dirac spinors. Unusual features: Easy rules to transform tensor expressions into spinor ones and back. Seamless integration of abstract index manipulation of spinor expressions with component computations. Running time: Under one second to handle and canonicalize standard spinorial expressions with a few dozen indices. (These expressions arise naturally in the transformation of a spinor expression into a tensor one or vice versa.)

  18. Digital data, composite video multiplexer and demultiplexer boards for an IBM PC/AT compatible computer

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    1993-01-01

    Work continued on the design of two IBM PC/AT compatible computer interface boards. The boards will permit digital data to be transmitted over a composite video channel from the Orbiter. One board combines data with a composite video signal. The other board strips the data from the video signal.

  19. Enforcing compatibility and constraint conditions and information retrieval at the design action

    NASA Technical Reports Server (NTRS)

    Woodruff, George W.

    1990-01-01

    The design of complex entities is a multidisciplinary process involving several interacting groups and disciplines. There is a need to integrate the data in such environments to enhance the collaboration between these groups and to enforce compatibility between dependent data entities. This paper discusses the implementation of a workstation-based CAD system that is integrated with a DBMS and an expert system (CLIPS), both implemented on a minicomputer, to provide such collaborative and compatibility-enforcement capabilities. The current implementation allows for a three-way link between the CAD system, the DBMS, and CLIPS. The engineering design process associated with the design and fabrication of sheet metal housings for computers in a large computer manufacturing facility provides the basis for this prototype system.

  20. 76 FR 23824 - Guidance for Industry: “Computer Crossmatch” (Computerized Analysis of the Compatibility Between...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-28

    ... the Compatibility Between the Donor's Cell Type and the Recipient's Serum or Plasma Type... Crossmatch' (Computerized Analysis of the Compatibility between the Donor's Cell Type and the Recipient's... donor's cell type and the recipient's serum or plasma type. The guidance describes practices that we...

  1. TRoPICALS: A Computational Embodied Neuroscience Model of Compatibility Effects

    ERIC Educational Resources Information Center

    Caligiore, Daniele; Borghi, Anna M.; Parisi, Domenico; Baldassarre, Gianluca

    2010-01-01

    Perceiving objects activates the representation of their affordances. For example, experiments on compatibility effects showed that categorizing objects by producing certain handgrips (power or precision) is faster if the requested responses are compatible with the affordance elicited by the size of objects (e.g., small or large). The article…

  2. Development of a GPU Compatible Version of the Fast Radiation Code RRTMG

    NASA Astrophysics Data System (ADS)

    Iacono, M. J.; Mlawer, E. J.; Berthiaume, D.; Cady-Pereira, K. E.; Suarez, M.; Oreopoulos, L.; Lee, D.

    2012-12-01

    The absorption of solar radiation and emission/absorption of thermal radiation are crucial components of the physics that drive Earth's climate and weather. Therefore, accurate radiative transfer calculations are necessary for realistic climate and weather simulations. Efficient radiation codes have been developed for this purpose, but their accuracy requirements still necessitate that as much as 30% of the computational time of a GCM is spent computing radiative fluxes and heating rates. The overall computational expense constitutes a limitation on a GCM's predictive ability if it becomes an impediment to adding new physics to or increasing the spatial and/or vertical resolution of the model. The emergence of Graphics Processing Unit (GPU) technology, which will allow the parallel computation of multiple independent radiative calculations in a GCM, will lead to a fundamental change in the competition between accuracy and speed. Processing time previously consumed by radiative transfer will now be available for the modeling of other processes, such as physics parameterizations, without any sacrifice in the accuracy of the radiative transfer. Furthermore, fast radiation calculations can be performed much more frequently and will allow the modeling of radiative effects of rapid changes in the atmosphere. The fast radiation code RRTMG, developed at Atmospheric and Environmental Research (AER), is utilized operationally in many dynamical models throughout the world. We will present the results from the first stage of an effort to create a version of the RRTMG radiation code designed to run efficiently in a GPU environment. This effort will focus on the RRTMG implementation in GEOS-5. RRTMG has an internal pseudo-spectral vector of length of order 100 that, when combined with the much greater length of the global horizontal grid vector from which the radiation code is called in GEOS-5, makes RRTMG/GEOS-5 particularly suited to achieving a significant speed improvement through GPU technology. This large number of independent cases will allow us to take full advantage of the computational power of the latest GPUs, ensuring that all thread cores in the GPU remain active, a key criterion for obtaining significant speedup. The CUDA (Compute Unified Device Architecture) Fortran compiler developed by PGI and Nvidia will allow us to construct this parallel implementation on the GPU while remaining in the Fortran language. This implementation will scale very well across various CUDA-supported GPUs such as the recently released Fermi Nvidia cards. We will present the computational speed improvements of the GPU-compatible code relative to the standard CPU-based RRTMG with respect to a very large and diverse suite of atmospheric profiles. This suite will also be utilized to demonstrate the minimal impact of the code restructuring on the accuracy of radiation calculations. The GPU-compatible version of RRTMG will be directly applicable to future versions of GEOS-5, but it is also likely to provide significant associated benefits for other GCMs that employ RRTMG.
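
    The speedup argument rests on exposing a very large number of independent cases, one per (grid column, pseudo-spectral element) pair. A schematic NumPy sketch of that flattening follows (the shapes and names are invented for illustration and are not RRTMG code):

        # Flatten (columns, g-points) into one long axis so every element
        # is an independent case, the way GPU thread cores would see it.
        import numpy as np

        n_col, n_g, n_lev = 10000, 112, 72           # invented dimensions
        tau = np.random.rand(n_col, n_g, n_lev)      # toy optical depths

        work = tau.reshape(n_col * n_g, n_lev)       # one case per row
        trans = np.exp(-work)                        # embarrassingly parallel step
        column_result = trans.prod(axis=1)           # toy per-case reduction
        print(column_result.reshape(n_col, n_g).shape)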

  3. TS-SRP/PACK - COMPUTER PROGRAMS TO CHARACTERIZE ALLOYS AND PREDICT CYCLIC LIFE USING THE TOTAL STRAIN VERSION OF STRAINRANGE PARTITIONING

    NASA Technical Reports Server (NTRS)

    Saltsman, J. F.

    1994-01-01

    TS-SRP/PACK is a set of computer programs for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the total strain version of the Strainrange Partitioning (TS-SRP). The user should be thoroughly familiar with the TS-SRP method before attempting to use any of these programs. The document for this program includes a theory manual as well as a detailed user's manual with a tutorial to guide the user in the proper use of TS-SRP. An extensive database has also been developed in a parallel effort. This database is an excellent source of high-temperature, creep-fatigue test data and can be used with other life-prediction methods as well. Five programs are included in TS-SRP/PACK along with the alloy database. The TABLE program is used to print the datasets, which are in NAMELIST format, in a reader friendly format. INDATA is used to create new datasets or add to existing ones. The FAIL program is used to characterize the failure behavior of an alloy as given by the constants in the strainrange-life relations used by the total strain version of SRP (TS-SRP) and the inelastic strainrange-based version of SRP. The program FLOW is used to characterize the flow behavior (the constitutive response) of an alloy as given by the constants in the flow equations used by TS-SRP. Finally, LIFE is used to predict the life of a specified cycle, using the constants characterizing failure and flow behavior determined by FAIL and FLOW. LIFE is written in interpretive BASIC to avoid compiling and linking every time the equation constants are changed. Four out of five programs in this package are written in FORTRAN 77 for IBM PC series and compatible computers running MS-DOS and are designed to read data using the NAMELIST format statement. The fifth is written in BASIC version 3.0 for IBM PC series and compatible computers running MS-DOS version 3.10. The executables require at least 239K of memory and DOS 3.1 or higher. To compile the source, a Lahey FORTRAN compiler is required. Source code modifications will be necessary if the compiler to be used does not support NAMELIST input. Probably the easiest revision to make is to use a list-directed READ statement. The standard distribution medium for this program is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. TS-SRP/PACK was developed in 1992.

  4. Gfargo: Fargo for Gpu

    NASA Astrophysics Data System (ADS)

    Masset, Frédéric

    2015-09-01

    GFARGO is a GPU version of FARGO. It is written in C and C for CUDA and runs only on NVIDIA’s graphics cards. Though it corresponds to the standard, isothermal version of FARGO, not all functionalities of the CPU version have been translated to CUDA. The code is available in single and double precision versions, the latter compatible with FERMI architectures. GFARGO can run on a graphics card connected to the display, allowing the user to see in real time how the fields evolve.

  5. Experience in Using a Finite Element Stress and Vibration Package on a Minicomputer,

    DTIC Science & Technology

    1982-01-01

    as the Graphics Oriented Interactive Finite Element Time Sharing Package (GIFTS). This package has been running on a PDP11/60 minicomputer... Unlike many other FEM packages, GIFTS consists of a collection of fully compatible special-purpose programs operating on a set of files on disk known... matrix is initiated by running the appropriate program from the GIFTS library. The following is a list of the major GIFTS library programs with a

  6. HAL/S-360 compiler system specification

    NASA Technical Reports Server (NTRS)

    Johnson, A. E.; Newbold, P. N.; Schulenberg, C. W.; Avakian, A. E.; Varga, S.; Helmers, P. H.; Helmers, C. T., Jr.; Hotz, R. L.

    1974-01-01

    A three phase language compiler is described which produces IBM 360/370 compatible object modules and a set of simulation tables to aid in run time verification. A link edit step augments the standard OS linkage editor. A comprehensive run time system and library provide the HAL/S operating environment, error handling, a pseudo real time executive, and an extensive set of mathematical, conversion, I/O, and diagnostic routines. The specifications of the information flow and content for this system are also considered.

  7. MONO FOR CROSS-PLATFORM CONTROL SYSTEM ENVIRONMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Timossi, Chris

    2006-10-19

    Mono is an independent implementation of the .NET Framework by Novell that runs on multiple operating systems (including Windows, Linux and Macintosh) and allows any .NET compatible application to run unmodified. For instance, Mono can run programs with graphical user interfaces (GUI) developed with the C# language on Windows with Visual Studio (a full port of WinForms for Mono is in progress). We present the results of tests we performed to evaluate the portability of our control-system .NET applications from MS Windows to Linux.

  8. Development of automated electromagnetic compatibility test facilities at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Harrison, Cecil A.

    1986-01-01

    The efforts to automate the electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center were examined. A battery of nine standard tests is to be integrated by means of a desktop computer-controller in order to provide near-real-time data assessment, store the data acquired during testing on flexible disk, and provide computer production of the certification report.

  9. Next generation keyboards: The importance of cognitive compatibility

    NASA Technical Reports Server (NTRS)

    Amell, John R.; Ewry, Michael E.; Colle, Herbert A.

    1988-01-01

    The computer keyboard of today is essentially the same as it has been for many years. Few advances have been made in keyboard design even though computer systems in general have made remarkable progress in improvements. This paper discusses the future of keyboards, their competition and compatibility with voice input systems, and possible special-application intelligent keyboards for controlling complex systems.

  10. The Weekly Fab Five: Things You Should Do Every Week To Keep Your Computer Running in Tip-Top Shape.

    ERIC Educational Resources Information Center

    Crispen, Patrick

    2001-01-01

    Describes five steps that school librarians should follow every week to keep their computers running at top efficiency. Explains how to update virus definitions; run Windows update; run ScanDisk to repair errors on the hard drive; run a disk defragmenter; and backup all data. (LRW)

  11. Universality of Generalized Bunching and Efficient Assessment of Boson Sampling.

    PubMed

    Shchesnovich, V S

    2016-03-25

    It is found that identical bosons (fermions) show a generalized bunching (antibunching) property in linear networks: the absolute maximum (minimum) of the probability that all N input particles are detected in a subset of K output modes of any nontrivial linear M-mode network is attained only by completely indistinguishable bosons (fermions). For fermions K is arbitrary; for bosons it is either (i) arbitrary for only classically correlated bosons or (ii) satisfies K≥N (or K=1) for arbitrary input states of N particles. The generalized bunching allows us to certify, in a number of runs polynomial in N, that a physical device realizing boson sampling with an arbitrary network operates in the regime of full quantum coherence compatible only with completely indistinguishable bosons. The protocol needs only polynomial classical computations for the standard boson sampling, whereas an analytic formula is available for the scattershot version.

  12. A Unified Framework for Periodic, On-Demand, and User-Specified Software Information

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.

    2004-01-01

    Although grid computing can increase the number of resources available to a user, not all resources on the grid may have a software environment suitable for running a given application. To provide users with the necessary assistance for selecting resources with compatible software environments and/or for automatically establishing such environments, it is necessary to have an accurate source of information about the software installed across the grid. This paper presents a new OGSI-compliant software information service that has been implemented as part of NASA's Information Power Grid project. This service is built on top of a general framework for reconciling information from periodic, on-demand, and user-specified sources. Information is retrieved using standard XPath queries over a single unified namespace independent of the information's source. Two consumers of the provided software information, the IPG Resource Broker and the IPG Neutralization Service, are briefly described.
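
    As a hedged illustration of the retrieval interface, the sketch below runs a standard XPath query against a small XML document in Python; the element and attribute names are invented, not the service's actual schema:

        # Query a software-information document with an XPath expression.
        import xml.etree.ElementTree as ET

        doc = ET.fromstring("""
        <software>
          <package name="gcc" version="3.2"/>
          <package name="mpich" version="1.2.5"/>
        </software>
        """)

        # ElementTree supports a useful XPath subset, including attribute
        # predicates; here we look up one package by name.
        for pkg in doc.findall(".//package[@name='mpich']"):
            print(pkg.get("name"), pkg.get("version"))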

  13. Updated System-Availability and Resource-Allocation Program

    NASA Technical Reports Server (NTRS)

    Viterna, Larry

    2004-01-01

    A second version of the Availability, Cost and Resource Allocation (ACARA) computer program has become available. The first version was reported in an earlier tech brief. To recapitulate: ACARA analyzes the availability, mean-time-between-failures of components, life-cycle costs, and scheduling of resources of a complex system of equipment. ACARA uses a statistical Monte Carlo method to simulate the failure and repair of components while complying with user-specified constraints on spare parts and resources. ACARA evaluates the performance of the system on the basis of a mathematical model developed from a block-diagram representation. The previous version utilized the MS-DOS operating system and could not be run by use of the most recent versions of the Windows operating system. The current version incorporates the algorithms of the previous version but is compatible with Windows and utilizes menus and a file-management approach typical of Windows-based software.
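
    The Monte Carlo idea at the heart of ACARA, repeatedly sampling component failure and repair histories to estimate availability, can be sketched with a toy single-component model (invented parameters; not ACARA's algorithm or data):

        # Toy availability estimate: exponential failures, fixed repair
        # time, and a limited stock of spares over one mission.
        import random

        MTBF, REPAIR, SPARES, MISSION = 500.0, 24.0, 3, 8760.0  # hours

        def run_once():
            t, up, spares = 0.0, 0.0, SPARES
            while t < MISSION:
                life = random.expovariate(1.0 / MTBF)
                up += min(life, MISSION - t)
                t += life
                if t >= MISSION or spares == 0:
                    break                  # out of time or out of spares
                spares -= 1
                t += REPAIR                # downtime while swapping a spare
            return up / MISSION

        avail = sum(run_once() for _ in range(10000)) / 10000
        print(f"estimated availability: {avail:.3f}")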

  14. An expert system for prediction of aquatic toxicity of contaminants

    USGS Publications Warehouse

    Hickey, James P.; Aldridge, Andrew J.; Passino, Dora R. May; Frank, Anthony M.; Hushon, Judith M.

    1990-01-01

    The National Fisheries Research Center-Great Lakes has developed an interactive computer program in muLISP that runs on an IBM-compatible microcomputer and uses a linear solvation energy relationship (LSER) to predict acute toxicity to four representative aquatic species from the detailed structure of an organic molecule. Using the SMILES formalism for a chemical structure, the expert system identifies all structural components and uses a knowledge base of rules based on an LSER to generate four structure-related parameter values. A separate module then relates these values to toxicity. The system is designed for rapid screening of potential chemical hazards before laboratory or field investigations are conducted and can be operated by users with little toxicological background. This is the first expert system based on LSER, relying on the first comprehensive compilation of rules and values for the estimation of LSER parameters.
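
    The final prediction step in any LSER-based system is a linear combination of the solvatochromic parameters. A hedged sketch of that step in the usual Kamlet-Taft form (the coefficients below are placeholders, not the Center's fitted values):

        # Generic LSER: log(1/LC50) = c0 + m*V + s*pi_star + b*beta + a*alpha.
        # Coefficient values are invented placeholders.
        def lser_toxicity(V, pi_star, beta, alpha,
                          c0=0.6, m=3.2, s=-0.6, b=-2.1, a=0.4):
            return c0 + m * V + s * pi_star + b * beta + a * alpha

        # Parameter values such as the rule base might emit for one molecule.
        print(lser_toxicity(V=0.49, pi_star=0.52, beta=0.14, alpha=0.0))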

  15. A combined system for measuring animal motion activities.

    PubMed

    Young, M S; Young, C W; Li, Y C

    2000-01-31

    In this study, we have developed a combined animal motion activity measurement system that combines an infrared light matrix subsystem with an ultrasonic phase shift subsystem for animal activity measurement. Accordingly, in conjunction with an IBM PC/AT compatible personal computer, the combined system has the advantages of both infrared and ultrasonic subsystems. That is, it can at once measure and directly analyze detailed changes in animal activity ranging from locomotion to tremor. The main advantages of this combined system are that it features real time data acquisition with the option of animated real time or recorded display/playback of the animal's motion. Additionally, under the multi-task operating condition of IBM PC, it can acquire and process behavior using both IR and ultrasound systems simultaneously. Traditional systems have had to make separate runs for gross and fine movement recording. This combined system can be profitably employed for normative behavioral activity studies and for neurological and pharmacological research.

  16. BioNetFit: a fitting tool compatible with BioNetGen, NFsim and distributed computing environments

    DOE PAGES

    Thomas, Brandon R.; Chylek, Lily A.; Colvin, Joshua; ...

    2015-11-09

    Rule-based models are analyzed with specialized simulators, such as those provided by the BioNetGen and NFsim open-source software packages. In this paper, we present BioNetFit, a general-purpose fitting tool that is compatible with BioNetGen and NFsim. BioNetFit is designed to take advantage of distributed computing resources. This feature facilitates fitting (i.e., optimization of parameter values for consistency with data) when simulations are computationally expensive.

  17. LightWAVE: Waveform and Annotation Viewing and Editing in a Web Browser.

    PubMed

    Moody, George B

    2013-09-01

    This paper describes LightWAVE, recently-developed open-source software for viewing ECGs and other physiologic waveforms and associated annotations (event markers). It supports efficient interactive creation and modification of annotations, capabilities that are essential for building new collections of physiologic signals and time series for research. LightWAVE is constructed of components that interact in simple ways, making it straightforward to enhance or replace any of them. The back end (server) is a common gateway interface (CGI) application written in C for speed and efficiency. It retrieves data from its data repository (PhysioNet's open-access PhysioBank archives by default, or any set of files or web pages structured as in PhysioBank) and delivers them in response to requests generated by the front end. The front end (client) is a web application written in JavaScript. It runs within any modern web browser and does not require installation on the user's computer, tablet, or phone. Finally, LightWAVE's scribe is a tiny CGI application written in Perl, which records the user's edits in annotation files. LightWAVE's data repository, back end, and front end can be located on the same computer or on separate computers. The data repository may be split across multiple computers. For compatibility with the standard browser security model, the front end and the scribe must be loaded from the same domain.
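
    The back-end pattern, a CGI program that parses a query and returns data for the browser front end, can be sketched in a few lines. This is a minimal Python illustration of the pattern only (the real LightWAVE server is written in C, and the parameter names here are hypothetical):

        # Minimal CGI-style responder: parse the query string, return JSON.
        import json
        import os
        import urllib.parse

        query = urllib.parse.parse_qs(os.environ.get("QUERY_STRING", ""))
        record = query.get("record", ["mitdb/100"])[0]   # hypothetical names
        t0 = float(query.get("t0", ["0"])[0])
        samples = [0.0, 0.1, 0.2]                        # stand-in signal data

        print("Content-Type: application/json\r\n")
        print(json.dumps({"record": record, "t0": t0, "samples": samples}))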

  18. Modifications of the U.S. Geological Survey modular, finite-difference, ground-water flow model to read and write geographic information system files

    USGS Publications Warehouse

    Orzol, Leonard L.; McGrath, Timothy S.

    1992-01-01

    This report documents modifications to the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model, commonly called MODFLOW, so that it can read and write files used by a geographic information system (GIS). The modified model program is called MODFLOWARC. Simulation programs such as MODFLOW generally require large amounts of input data and produce large amounts of output data. Viewing data graphically, generating head contours, and creating or editing model data arrays such as hydraulic conductivity are examples of tasks that currently are performed either by the use of independent software packages or by tedious manual editing, manipulating, and transferring data. Programs such as GIS programs are commonly used to facilitate preparation of the model input data and analyze model output data; however, auxiliary programs are frequently required to translate data between programs. Data translations are required when different programs use different data formats. Thus, the user might use GIS techniques to create model input data, run a translation program to convert input data into a format compatible with the ground-water flow model, run the model, run a translation program to convert the model output into the correct format for GIS, and use GIS to display and analyze this output. MODFLOWARC, avoids the two translation steps and transfers data directly to and from the ground-water-flow model. This report documents the design and use of MODFLOWARC and includes instructions for data input/output of the Basic, Block-centered flow, River, Recharge, Well, Drain, Evapotranspiration, General-head boundary, and Streamflow-routing packages. The modification to MODFLOW and the Streamflow-Routing package was minimized. Flow charts and computer-program code describe the modifications to the original computer codes for each of these packages. Appendix A contains a discussion on the operation of MODFLOWARC using a sample problem.

  19. Handling Input and Output for COAMPS

    NASA Technical Reports Server (NTRS)

    Fitzpatrick, Patrick; Tran, Nam; Li, Yongzuo; Anantharaj, Valentine

    2007-01-01

    Two suites of software have been developed to handle the input and output of the Coupled Ocean Atmosphere Prediction System (COAMPS), which is a regional atmospheric model developed by the Navy for simulating and predicting weather. Typically, the initial and boundary conditions for COAMPS are provided by a flat-file representation of the Navy's global model. Additional algorithms are needed for running the COAMPS software using global models. One of the present suites satisfies this need for running COAMPS using the Global Forecast System (GFS) model of the National Oceanic and Atmospheric Administration. The first step in running COAMPS, the downloading of GFS data from an Internet file-transfer-protocol (FTP) server computer of the National Centers for Environmental Prediction (NCEP), is performed by one of the programs (SSC-00273) in this suite. The GFS data, which are in gridded binary (GRIB) format, are then changed to a COAMPS-compatible format by another program in the suite (SSC-00278). Once a forecast is complete, still another program in the suite (SSC-00274) sends the output data to a different server computer. The second suite of software (SSC-00275) addresses the need to ingest up-to-date land-use-and-land-cover (LULC) data into COAMPS for use in specifying typical climatological values of such surface parameters as albedo, aerodynamic roughness, and ground wetness. This suite includes (1) a program to process LULC data derived from observations by the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Terra and Aqua satellites, (2) programs to derive new climatological parameters for the 17-land-use-category MODIS data; and (3) a modified version of a FORTRAN subroutine to be used by COAMPS. The MODIS data files are processed to reformat them into a compressed American Standard Code for Information Interchange (ASCII) format used by COAMPS for efficient processing.
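
    The GFS download step follows the standard FTP retrieval pattern; a hedged Python sketch (the host, directory, and file name below are placeholders, not the program's actual configuration):

        # Fetch one GRIB file from an FTP server with the standard library.
        from ftplib import FTP

        HOST = "ftp.example.gov"                 # placeholder NCEP host
        REMOTE_DIR = "/pub/gfs/latest"           # placeholder directory
        FILENAME = "gfs.t00z.pgrb2.0p50.f000"    # placeholder GRIB file

        with FTP(HOST) as ftp:
            ftp.login()                          # anonymous login
            ftp.cwd(REMOTE_DIR)
            with open(FILENAME, "wb") as out:
                ftp.retrbinary("RETR " + FILENAME, out.write)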

  20. Xyce parallel electronic simulator users guide, version 6.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  1. Xyce parallel electronic simulator users' guide, Version 6.0.1.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  2. Xyce parallel electronic simulator users guide, version 6.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  3. Program For Generating Interactive Displays

    NASA Technical Reports Server (NTRS)

    Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl; hide

    1991-01-01

    Sun/Unix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. Plus viewed as productivity tool for application developers and application end users, who benefit from resultant consistent and well-designed user interface sheltering them from intricacies of computer. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC and PS/2 compute

  4. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them toward running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
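
    The role of the relational database as a work-unit queue can be sketched simply; the schema and statuses below are invented for illustration (the abstract does not describe the platform's actual schema):

        # Toy work-unit queue for volunteer nodes, using sqlite3.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE jobs (
            id INTEGER PRIMARY KEY, params TEXT,
            status TEXT DEFAULT 'pending', result TEXT)""")
        db.executemany("INSERT INTO jobs (params) VALUES (?)",
                       [(f"subbasin={i}",) for i in range(100)])

        def checkout():
            # Hand the next pending job to a browser node asking for work.
            row = db.execute("SELECT id, params FROM jobs "
                             "WHERE status = 'pending' LIMIT 1").fetchone()
            if row:
                db.execute("UPDATE jobs SET status = 'running' WHERE id = ?",
                           (row[0],))
            return row

        print(checkout())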

  5. RP-1 Thermal Stability and Copper Based Materials Compatibility Study

    NASA Technical Reports Server (NTRS)

    Stiegemeier, B. R.; Meyer, M. L.; Driscoll, E.

    2005-01-01

    A series of electrically heated tube tests was performed at the NASA Glenn Research Center's Heated Tube Facility to investigate the effect that sulfur content, test duration, and tube material play in the overall thermal stability and materials compatibility characteristics of RP-1. Scanning electron microscopy (SEM) in conjunction with energy dispersive spectroscopy (EDS) was used to characterize the condition of the tube inner wall surface and any carbon deposition or corrosion formed during these runs. Results of the parametric study indicate that tests with standard RP-1 (total sulfur ~23 ppm) and pure copper tubing are characterized by a deposition/deposit-shedding process producing local wall temperature swings as high as 500 F. The effect of this shedding is to keep total carbon deposition levels relatively constant for run times from 20 minutes up to 5 hours, though increasing tube pressure drops were observed in all runs. Reduction of the total sulfur content of the fuel from 23 ppm to less than 0.1 ppm resulted in the elimination of deposit shedding, local wall temperature variation, and the tube pressure drop increases that were observed in standard-sulfur-level RP-1 tests. The copper alloy GRCop-84, developed specifically for high-heat-flux applications, was found to exhibit higher carbon deposition levels compared to identical tests performed in pure copper tubes. Results of the study are consistent with previously published heated tube data, which indicate that small changes in fuel total sulfur content can lead to significant differences in the thermal stability of kerosene-type fuels and their compatibility with copper-based materials. In conjunction with the existing thermal stability database, these findings give insight into the feasibility of cooling a long-life, high-performance, high-pressure liquid rocket combustor and nozzle with RP-1.

  6. Subtraction with hadronic initial states at NLO: an NNLO-compatible scheme

    NASA Astrophysics Data System (ADS)

    Somogyi, Gábor

    2009-05-01

    We present an NNLO-compatible subtraction scheme for computing QCD jet cross sections of hadron-initiated processes at NLO accuracy. The scheme is constructed specifically with those complications in mind, that emerge when extending the subtraction algorithm to next-to-next-to-leading order. It is therefore possible to embed the present scheme in a full NNLO computation without any modifications.
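
    Schematically, any such subtraction scheme reorganizes the NLO cross section with a local counterterm so that each integral is separately finite; in standard notation (generic NLO bookkeeping, not this paper's specific counterterms):

        \sigma^{NLO} = \int_{m+1} \left[ d\sigma^{R} - d\sigma^{A} \right]
                     + \int_{m} \left[ d\sigma^{V} + \int_{1} d\sigma^{A} \right]
                     + \int_{m} d\sigma^{C}

    Here d\sigma^{A} is the local approximation to the real-emission cross section d\sigma^{R}, and d\sigma^{C} is the collinear counterterm required by the hadronic initial states.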

  7. Robust integration schemes for junction-based modulators in a 200mm CMOS compatible silicon photonic platform (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Szelag, Bertrand; Abraham, Alexis; Brision, Stéphane; Gindre, Paul; Blampey, Benjamin; Myko, André; Olivier, Segolene; Kopp, Christophe

    2017-05-01

    Silicon photonics is becoming a reality for next-generation communication systems, addressing the increasing needs of HPC (High Performance Computing) systems and datacenters. CMOS compatible photonic platforms are developed in many foundries, integrating passive and active devices. The use of existing and qualified microelectronics processes guarantees cost-efficient and mature photonic technologies. Meanwhile, photonic devices have their own fabrication constraints, not similar to those of CMOS devices, which can affect their performances. In this paper, we address the integration of a PN-junction Mach-Zehnder modulator in a 200mm CMOS compatible photonic platform. Implantation-based device characteristics are affected by many process variations, among them screening-layer thickness, dopant diffusion, and implantation mask overlay. CMOS devices are generally quite robust with respect to these processes thanks to dedicated design rules. For photonic devices, the situation is different since, most of the time, doped areas must be carefully located within waveguides, and CMOS solutions like self-alignment to the gate cannot be applied. In this work, we present different robust integration solutions for junction-based modulators. A simulation setup has been built in order to optimize the process conditions. It consists of a Matlab interface coupling process and electro-optic device simulators in order to run many iterations. Variations of the modulator characteristics with process parameters are illustrated using this simulation setup. Parameters under study are, for instance, X- and Y-direction lithography shifts, and screening oxide and slab thicknesses. A robust process and design approach leading to a PN-junction Mach-Zehnder modulator insensitive to lithography misalignment is then proposed. Simulation results are compared with experimental data. Indeed, various modulators have been fabricated with different process conditions and integration schemes. Extensive electro-optic characterization of these components will be presented.

  8. Parallel noise barrier prediction procedure : report 2 user's manual revision 1

    DOT National Transportation Integrated Search

    1987-11-01

    This report defines the parameters which are used to input the data required to run Program Barrier and BarrierX on a microcomputer such as an IBM PC or compatible. Directions for setting up and operating a working disk are presented. Examples of inp...

  9. IOS: PDP 11/45 formatted input/output task stacker and processor. [In MACRO-11

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koschik, J.

    1974-07-08

    IOS allows the programmer to perform formatted Input/Output at assembly-language level to/from any peripheral device. It runs under DOS versions V8-O8 or V9-19, reading and writing DOS-compatible files. Additionally, IOS will run, with total transparency, in an environment with memory management enabled. Minimum hardware required is a 16K PDP 11/45, keyboard device, disk (DK, DF, or DC), and line frequency clock. The source language is MACRO-11 (3.3K decimal words).

  10. Video-Game-Like Engine for Depicting Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Upchurch, Paul R.

    2009-01-01

    GoView is a video-game-like software engine, written in the C and C++ computing languages, that enables real-time, three-dimensional (3D)-appearing visual representation of spacecraft and trajectories (1) from any perspective; (2) at any spatial scale from spacecraft to Solar-system dimensions; (3) in user-selectable time scales; (4) in the past, present, and/or future; (5) with varying speeds; and (6) forward or backward in time. GoView constructs an interactive 3D world by use of spacecraft-mission data from pre-existing engineering software tools. GoView can also be used to produce distributable application programs for depicting NASA orbital missions on personal computers running the Windows XP, Mac OsX, and Linux operating systems. GoView enables seamless rendering of Cartesian coordinate spaces with programmable graphics hardware, whereas prior programs for depicting spacecraft trajectories variously require non-Cartesian coordinates and/or are not compatible with programmable hardware. GoView incorporates an algorithm for nonlinear interpolation between arbitrary reference frames, whereas the prior programs are restricted to special classes of inertial and non-inertial reference frames. Finally, whereas the prior programs present complex user interfaces requiring hours of training, the GoView interface provides guidance, enabling use without any training.

  11. Update on radiation-hardened microcomputers for robotics and teleoperated systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sias, F.R. Jr.; Tulenko, J.S.

    1993-12-31

    Since many programs sponsored by the Department of Defense are being canceled, it is important to select carefully radiation-hardened microprocessors for projects that will mature (or will require continued support) several years in the future. At the present time there are seven candidate 32-bit processors that should be considered for long-range planning for high-performance radiation-hardened computer systems. For Department of Energy applications it is also important to consider efforts at standardization that require the use of the VxWorks operating system and hardware based on the VMEbus. Of the seven processors, one has been delivered and is operating, and other systems are scheduled to be delivered late in 1993 or early in 1994. At the present time the Honeywell-developed RH32, the Harris RH-3000, and the Harris RHC-3000 are leading contenders for meeting DOE requirements for a radiation-hardened advanced 32-bit microprocessor. These are all either compatible with or are derivatives of the MIPS R3000 Reduced Instruction Set Computer. It is anticipated that as few as two of the seven radiation-hardened processors will be supported by the space program in the long run.

  12. Computer Technology Directory.

    ERIC Educational Resources Information Center

    Exceptional Parent, 1990

    1990-01-01

    This directory lists approximately 300 commercial vendors that offer computer hardware, software, and communication aids for children with disabilities. The company listings indicate computer compatibility and specific disabilities served by their products. (JDD)

  13. Formal Specification and Verification of Concurrent Programs

    DTIC Science & Technology

    1993-02-01

    This book describes operating systems in general via the construction of MINIX, a UNIX look-alike that runs on IBM-PC compatibles, with examples from the emerging theory of programming languages. The book contains a complete MINIX manual and a complete listing of its C code.

  14. The contaminant analysis automation robot implementation for the automated laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younkin, J.R.; Igou, R.E.; Urenda, T.D.

    1995-12-31

    The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing task, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials requisite for SLM operations, initiate an SLM operation with the chemical-method-dependent operating parameters, and coordinate the robotic removal of materials from the SLM to ready them for transport operations. A standard set of commands and events has been established for these interactions, and the Supervisor and Subsystems (GENISAS) software governs events from the SLMs and the robot. The Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and the required material port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation involved the use of a VMEbus rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation.

  15. Scilab software package for the study of dynamical systems

    NASA Astrophysics Data System (ADS)

    Bordeianu, C. C.; Beşliu, C.; Jipa, Al.; Felea, D.; Grossu, I. V.

    2008-05-01

    This work presents a new software package for the study of chaotic flows and maps. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It was found that Scilab provides various functions for ordinary differential equation solving, Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behavior of the nonlinear dynamical systems was analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropy. Various well-known examples are implemented, and users can insert their own ODE systems.

    Program summary
    Program title: Chaos
    Catalogue identifier: AEAP_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAP_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 885
    No. of bytes in distributed program, including test data, etc.: 5925
    Distribution format: tar.gz
    Programming language: Scilab 3.1.1
    Computer: PC-compatible running Scilab on MS Windows or Linux
    Operating system: Windows XP, Linux
    RAM: below 100 Megabytes
    Classification: 6.2
    Nature of problem: Any physical model containing linear or nonlinear ordinary differential equations (ODE).
    Solution method: Numerical solving of ordinary differential equations. The chaotic behavior of the nonlinear dynamical system is analyzed using Poincaré sections, phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents and Kolmogorov-Sinai entropies.
    Restrictions: The package routines are normally able to handle ODE systems of high orders (up to order twelve and possibly higher), depending on the nature of the problem.
    Running time: 10 to 20 seconds for problems that do not involve Lyapunov exponent calculation; 60 to 1000 seconds for problems that involve high-order ODEs and Lyapunov exponent calculation.
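
    The kind of Lyapunov exponent calculation the package performs can be illustrated in a few lines. Below is a minimal sketch in Python (the package itself is written in Scilab), using the logistic map as a stand-in example: the largest Lyapunov exponent is estimated as the time average of ln|f'(x_k)| along an orbit.

      import math

      # Largest Lyapunov exponent of the logistic map f(x) = r*x*(1-x),
      # estimated as the time average of ln|f'(x_k)| along an orbit.
      def lyapunov_logistic(r, x0=0.3, transient=1000, n=100_000):
          x = x0
          for _ in range(transient):          # discard transient behaviour
              x = r * x * (1.0 - x)
          total = 0.0
          for _ in range(n):
              total += math.log(abs(r * (1.0 - 2.0 * x)))   # ln|f'(x)|
              x = r * x * (1.0 - x)
          return total / n

      print(lyapunov_logistic(4.0))   # ~ ln 2 = 0.693 > 0: chaotic
      print(lyapunov_logistic(3.2))   # < 0: stable period-2 orbit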

  16. Simple Queueing Model Applied to the City of Portland

    NASA Astrophysics Data System (ADS)

    Simon, Patrice M.; Esser, Jörg; Nagel, Kai

    We use a simple traffic micro-simulation model based on queueing dynamics as introduced by Gawron [IJMPC, 9(3):393, 1998] in order to simulate traffic in Portland, Oregon. Links have a flow capacity, that is, they do not release more vehicles per second than is possible according to their capacity. This leads to queue build-up if demand exceeds capacity. Links also have a storage capacity, which means that once a link is full, vehicles that want to enter the link need to wait. This leads to queue spill-back through the network. The model is compatible with route-plan-based approaches such as TRANSIMS, where each vehicle attempts to follow its pre-computed path. Yet, both the data requirements and the computational requirements are considerably lower than for the full TRANSIMS microsimulation. Indeed, the model uses standard emme/2 network data, and runs about eight times faster than real time with more than 100 000 vehicles simultaneously in the simulation on a single Pentium-type CPU. We derive the model's fundamental diagrams and explain them. The simulation is used to simulate traffic on the emme/2 network of the Portland (Oregon) metropolitan region (20 000 links). Demand is generated by a simplified home-to-work destination assignment which generates about half a million trips for the morning peak. Route assignment is done by iterative feedback between micro-simulation and router. An iterative solution of the route assignment for the above problem can be achieved within about half a day of computing time on a desktop workstation. We compare results with field data and with results of traditional assignment runs by the Portland Metropolitan Planning Organization. Thus, with a model such as this one, it is possible to use a dynamic, activities-based approach to transportation simulation (such as in TRANSIMS) with affordable data and hardware. This should enable systematic research about the coupling of demand generation, route assignment, and micro-simulation output.
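
    The queue dynamics described above amount to a simple per-time-step update. A minimal sketch follows (illustrative Python, not the study's code; the Link class and move_vehicles function are invented names): a link releases at most flow_cap vehicles per step, and a full downstream link blocks transfers, producing spill-back.

      from collections import deque

      class Link:
          """A road link with a flow capacity (vehicles released per step)
          and a storage capacity (maximum vehicles on the link)."""
          def __init__(self, flow_cap, storage_cap):
              self.flow_cap = flow_cap
              self.storage_cap = storage_cap
              self.queue = deque()            # vehicles in FIFO order

          def has_space(self):
              return len(self.queue) < self.storage_cap

      def move_vehicles(upstream, downstream):
          """One time step: release at most flow_cap vehicles, stopping
          when the downstream link is full (queue spill-back)."""
          moved = 0
          while (upstream.queue and moved < upstream.flow_cap
                 and downstream.has_space()):
              downstream.queue.append(upstream.queue.popleft())
              moved += 1
          return moved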

  17. System and method for controlling power consumption in a computer system based on user satisfaction

    DOEpatents

    Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok

    2014-04-22

    Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
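
    The selection step described in the claims can be illustrated with a small sketch; the frequencies, satisfaction scores, threshold and names below are invented example values, not taken from the patent.

      # Discrete CPU frequencies in MHz (example values).
      FREQS_MHZ = [800, 1200, 1600, 2000, 2400]

      # Learned relationship: (user, application) -> {frequency: satisfaction}.
      history = {("alice", "browser"): {800: 0.55, 1200: 0.80, 1600: 0.92,
                                        2000: 0.94, 2400: 0.95}}

      def select_frequency(user, app, threshold=0.9):
          """Pick the lowest frequency whose recorded satisfaction for this
          user and application meets the threshold."""
          ratings = history.get((user, app), {})
          for f in FREQS_MHZ:                 # lowest-power frequency first
              if ratings.get(f, 0.0) >= threshold:
                  return f
          return FREQS_MHZ[-1]                # fall back to maximum frequency

      print(select_frequency("alice", "browser"))   # -> 1600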

  18. METAGUI. A VMD interface for analyzing metadynamics and molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Biarnés, Xevi; Pietrucci, Fabio; Marinelli, Fabrizio; Laio, Alessandro

    2012-01-01

    We present a new computational tool, METAGUI, which extends the VMD program with a graphical user interface for constructing a thermodynamic and kinetic model of a given process simulated by large-scale molecular dynamics. The tool is specially designed for analyzing metadynamics-based simulations. The huge number of diverse structures generated during such a simulation is partitioned into a set of microstates (i.e. structures with similar values of the collective variables). Their relative free energies are then computed by a weighted-histogram procedure, and the most relevant free-energy wells are identified by diagonalization of the rate matrix followed by a committor analysis. This procedure leads to a convenient representation of the metastable states and long-time kinetics of the system which can be compared with experimental data. The tool allows seamless switching between a collective-variables-space representation of microstates and their atomic structure representation, which greatly facilitates the set-up and analysis of molecular dynamics simulations. METAGUI is based on the output format of the PLUMED plugin, making it compatible with a number of different molecular dynamics packages like AMBER, NAMD, GROMACS and several others. The METAGUI source files can be downloaded from the PLUMED web site (http://www.plumed-code.org).

    Program summary
    Program title: METAGUI
    Catalogue identifier: AEKH_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKH_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License version 3
    No. of lines in distributed program, including test data, etc.: 117 545
    No. of bytes in distributed program, including test data, etc.: 8 516 203
    Distribution format: tar.gz
    Programming language: TK/TCL, Fortran
    Computer: Any computer with a VMD installation and capable of running an executable produced by a gfortran compiler
    Operating system: Linux, Unix OSes
    RAM: 1 073 741 824 bytes
    Classification: 23
    External routines: A VMD installation (http://www.ks.uiuc.edu/Research/vmd/)
    Nature of problem: Extract thermodynamic data and build a kinetic model of a given process simulated by metadynamics or molecular dynamics simulations, and provide this information in a dual representation that allows navigating and exploring the molecular structures corresponding to each point along the multi-dimensional free-energy hypersurface.
    Solution method: Graphical user interface linked to VMD that clusters the simulation trajectories in the space of a set of collective variables and assigns each frame to a given microstate, determines the free energy of each microstate by a weighted-histogram analysis method, and identifies the most relevant free-energy wells (kinetic basins) by diagonalization of the rate matrix followed by a committor analysis.
    Restrictions: Input format files compatible with PLUMED and all the MD engines supported by PLUMED and VMD.
    Running time: A few minutes.
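
    One step of this pipeline, estimating relative free energies from microstate populations, reduces to a short computation. A minimal sketch under simplifying assumptions (unweighted frames binned on a uniform collective-variable grid; METAGUI itself uses a weighted-histogram procedure for metadynamics data):

      import math
      from collections import Counter

      KT = 2.494   # kT in kJ/mol at 300 K

      def microstate(cvs, bin_width=0.5):
          """Map a tuple of collective-variable values to a grid cell."""
          return tuple(int(v // bin_width) for v in cvs)

      def free_energies(trajectory_cvs):
          """Relative free energies F_i = -kT * ln(n_i / n_total) from the
          population n_i of each microstate."""
          counts = Counter(microstate(frame) for frame in trajectory_cvs)
          total = sum(counts.values())
          return {s: -KT * math.log(n / total) for s, n in counts.items()}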

  19. Prototype of a computer method for designing and analyzing heating, ventilating and air conditioning proportional, electronic control systems

    NASA Astrophysics Data System (ADS)

    Barlow, Steven J.

    1986-09-01

    The Air Force needs a better method of designing new and retrofit heating, ventilating and air conditioning (HVAC) control systems. Air Force engineers currently use manual design/predict/verify procedures taught at the Air Force Institute of Technology, School of Civil Engineering, HVAC Control Systems course. These existing manual procedures are iterative and time-consuming. The objectives of this research were to: (1) Locate and, if necessary, modify an existing computer-based method for designing and analyzing HVAC control systems that is compatible with the HVAC Control Systems manual procedures, or (2) Develop a new computer-based method of designing and analyzing HVAC control systems that is compatible with the existing manual procedures. Five existing computer packages were investigated in accordance with the first objective: MODSIM (for modular simulation), HVACSIM (for HVAC simulation), TRNSYS (for transient system simulation), BLAST (for building load and system thermodynamics) and Elite Building Energy Analysis Program. None were found to be compatible or adaptable to the existing manual procedures, and consequently, a prototype of a new computer method was developed in accordance with the second research objective.

  20. Multimission Modular Spacecraft Ground Support Software System (MMS/GSSS) state-of-the-art computer systems/ compatibility study

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The compatibility of the Multimission Modular Spacecraft (MMS) Ground Support Software System (GSSS), currently operational on a ModComp IV/35, with the VAX 11/780 system is discussed. The compatibility is examined in various key areas of the GSSS through the results of in-depth testing performed on the VAX 11/780 and ModComp IV/35 systems. The compatibility of the GSSS with the ModComp CLASSIC is presented based upon projections from ModComp-supplied literature.

  1. Effects of long-term voluntary exercise on learning and memory processes: dependency of the task and level of exercise.

    PubMed

    García-Capdevila, Sílvia; Portell-Cortés, Isabel; Torras-Garcia, Meritxell; Coll-Andreu, Margalida; Costa-Miserachs, David

    2009-09-14

    The effect of long-term voluntary exercise (running wheel) on anxiety-like behaviour (plus maze and open field) and on learning and memory processes (object recognition and two-way active avoidance) was examined in Wistar rats. Because major individual differences in running-wheel behaviour were observed, the data were analysed considering the exercising animals both as a whole and grouped according to the time spent in the running wheel (low, high, and very-high running). Although some variables related to anxiety-like behaviour seem to reflect an effect compatible with anxiogenesis, the complete set of variables could be interpreted as an enhancement of defensive and risk-assessment behaviours in exercised animals, without major differences depending on the exercise level. Effects on learning and memory processes were dependent on the task and the level of exercise. Two-way avoidance was not affected in either the acquisition or the retention session, while retention of the object recognition task was affected: an enhancement in low-running subjects and an impairment in high- and very-high-running animals were observed.

  2. Development of a low-cost virtual reality workstation for training and education

    NASA Technical Reports Server (NTRS)

    Phillips, James A.

    1996-01-01

    Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) it involves 3-dimensional computer graphics; (2) it includes real-time feedback and response to user actions; and (3) it must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, but the high cost of VR technology has limited its practical application to fields with big budgets, such as military combat simulation, commercial pilot training, and certain projects within the space program. However, in the last year there has been a revolution in the cost of VR technology. The speed of inexpensive personal computers has increased dramatically, especially with the introduction of the Pentium processor and the PCI bus for IBM-compatibles, and the cost of high-quality virtual reality peripherals has plummeted. The result is that many public schools, colleges, and universities can afford a PC-based workstation capable of running immersive virtual reality applications. My goal this summer was to assemble and evaluate such a system.

  3. PC-SEAPAK - ANALYSIS OF COASTAL ZONE COLOR SCANNER AND ADVANCED VERY HIGH RESOLUTION RADIOMETER DATA

    NASA Technical Reports Server (NTRS)

    Mcclain, C. R.

    1994-01-01

    PC-SEAPAK is a user-interactive satellite data analysis software package specifically developed for oceanographic research. The program is used to process and interpret data obtained from the Nimbus-7/Coastal Zone Color Scanner (CZCS), and the NOAA Advanced Very High Resolution Radiometer (AVHRR). PC-SEAPAK is a set of independent microcomputer-based image analysis programs that provide the user with a flexible, user-friendly, standardized interface, and facilitates relatively low-cost analysis of oceanographic satellite data. Version 4.0 includes 114 programs. PC-SEAPAK programs are organized into categories which include CZCS and AVHRR level-1 ingest, level-2 analyses, statistical analyses, data extraction, remapping to standard projections, graphics manipulation, image board memory manipulation, hardcopy output support and general utilities. Most programs allow user interaction through menu and command modes and also by the use of a mouse. Most programs also provide for ASCII file generation for further analysis in spreadsheets, graphics packages, etc. The CZCS scanning radiometer aboard the NIMBUS-7 satellite was designed to measure the concentration of photosynthetic pigments and their degradation products in the ocean. AVHRR data is used to compute sea surface temperatures and is supported for the NOAA 6, 7, 8, 9, 10, 11, and 12 satellites. The CZCS operated from November 1978 to June 1986. CZCS data may be obtained free of charge from the CZCS archive at NASA/Goddard Space Flight Center. AVHRR data may be purchased through NOAA's Satellite Data Service Division. Ordering information is included in the PC-SEAPAK documentation. Although PC-SEAPAK was developed on a COMPAQ Deskpro 386/20, it can be run on most 386-compatible computers with an AT bus, EGA controller, Intel 80387 coprocessor, and MS-DOS 3.3 or higher. A Matrox MVP-AT image board with appropriate monitor and cables is also required. Note that the authors have received some reports of incompatibilities between the MVP-AT image board and ZENITH computers. Also, the MVP-AT image board is not necessarily compatible with 486-based systems; users of 486-based systems should consult with Matrox about compatibility concerns. Other PC-SEAPAK requirements include a Microsoft mouse (serial version), 2Mb RAM, and 100Mb hard disk space. For data ingest and backup, 9-track tape, 8mm tape and optical disks are supported and recommended. PC-SEAPAK has been under development since 1988. Version 4.0 was updated in 1992, and is distributed without source code. It is available only as a set of 36 1.2Mb 5.25 inch IBM MS-DOS format diskettes. PC-SEAPAK is a copyrighted product with all copyright vested in the National Aeronautics and Space Administration. Phar Lap's DOS_Extender run-time version is integrated into several of the programs; therefore, the PC-SEAPAK programs may not be duplicated. Three of the distribution diskettes contain DOS_Extender files. One of the distribution diskettes contains Media Cybernetics' HALO88 font files, also licensed by NASA for dissemination but not duplication. IBM is a registered trademark of International Business Machines. MS-DOS is a registered trademark of Microsoft Corporation. HALO88 is a registered trademark of Media Cybernetics, but the product was discontinued in 1991.

  4. Antiplagiarism Software Takes on the Honor Code

    ERIC Educational Resources Information Center

    Wasley, Paula

    2008-01-01

    Among the 100-odd colleges with academic honor codes, plagiarism-detection services raise a knotty problem: Is software compatible with a system based on trust? The answer frequently devolves to the size and culture of the university. Colleges with traditional student-run honor codes tend to "forefront" trust, emphasizing it above all else. This…

  5. Mobility and Human Factors Evaluation of Three Prototype Assault Snowshoes.

    DTIC Science & Technology

    1994-01-01

    Fragments of user comments from the evaluation: the toe of the snowshoe strikes the shins (bruising) when running; the snowshoe tails strike the wearer's back when falling into the prone position ((n) denotes the number of subjects reporting each problem). A binding feature intended to secure the user's boot and cradle his forefoot in the proper position worked adequately with the vapor barrier boot but was not compatible with the…

  6. Using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.

    2016-12-01

    Cloud-based infrastructure may offer the key benefits of scalability, built-in redundancy and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and software systems developed for NASA data repositories were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Object storage services are provided by all the leading public clouds (Amazon Web Services, Microsoft Azure, Google Cloud, etc.) and private clouds (OpenStack), and may provide a more cost-effective means of storing large data collections online. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective but also shows superior performance for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API-compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
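
    Object stores serve byte ranges over HTTP, which is what makes chunked HDF5/NetCDF-style access efficient over them. A minimal sketch using boto3 (the bucket and key names are made up, and this is not the client library the abstract describes):

      import boto3

      # Read only the first 8 KiB of an object via an HTTP range request.
      s3 = boto3.client("s3")
      resp = s3.get_object(Bucket="nasa-nex-demo", Key="tasmax_day.nc",
                           Range="bytes=0-8191")
      header = resp["Body"].read()
      print(len(header), "bytes read")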

  7. Simulating three dimensional wave run-up over breakwaters covered by antifer units

    NASA Astrophysics Data System (ADS)

    Najafi-Jilani, A.; Niri, M. Zakiri; Naderi, Nader

    2014-06-01

    The paper presents the numerical analysis of wave run-up over rubble-mound breakwaters covered by antifer units using a technique integrating Computer-Aided Design (CAD) and Computational Fluid Dynamics (CFD) software. Direct application of the Navier-Stokes equations within armour blocks is used to provide a more reliable approach to simulate wave run-up over breakwaters. A well-tested Reynolds-averaged Navier-Stokes (RANS) Volume of Fluid (VOF) code (Flow-3D) was adopted for CFD computations. The computed results were compared with experimental data to check the validity of the model. Numerical results showed that the direct three-dimensional (3D) simulation method can deliver accurate results for wave run-up over rubble-mound breakwaters. The results showed that the placement pattern of antifer units had a great impact on values of wave run-up, such that changing the placement pattern from regular to double pyramid can reduce the wave run-up by approximately 30%. Analysis was done to investigate the influences of surface roughness, energy dissipation in the pores of the armour layer and reduced wave run-up due to inflow into the armour and stone layer.

  8. WinSCP for Windows File Transfers | High-Performance Computing | NREL

    Science.gov Websites

    WinSCP can be used to securely transfer files between your local computer running Microsoft Windows and a remote computer running Linux.

  9. Expert systems for fault diagnosis in nuclear reactor control

    NASA Astrophysics Data System (ADS)

    Jalel, N. A.; Nicholson, H.

    1990-11-01

    An expert system for accident analysis and fault diagnosis for the Loss Of Fluid Test (LOFT) reactor, a small-scale pressurized water reactor, was developed for a personal computer. The knowledge of the system is represented using a production-rule approach with a backward-chaining inference engine. The data base of the system includes simulated dependent state variables of the LOFT reactor model. Another system is designed to assist the operator in choosing the appropriate cooling mode and to diagnose faults in the selected cooling system. The response tree, which links a list of very specific accident sequences to a set of generic emergency procedures that help the operator monitor system status, differentiate between accident sequences and select the correct procedures, is used to build the system knowledge base. Both systems are written in the TURBO PROLOG language and can be run on an IBM PC compatible with 640K RAM, a 40-Mbyte hard disk and color graphics.
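
    A production-rule system with backward chaining, as used in these tools, can be sketched compactly. The rules and facts below are invented for illustration and are not taken from the LOFT knowledge base (the originals were written in Turbo Prolog, not Python):

      # Hypothetical rules: a goal maps to one or more antecedent lists.
      RULES = {
          "loss_of_coolant": [["low_pressure", "high_containment_humidity"]],
          "select_emergency_cooling": [["loss_of_coolant"]],
      }

      def prove(goal, facts, rules=RULES):
          """A goal holds if it is a known fact, or if every antecedent of
          some rule concluding it can itself be proved."""
          if goal in facts:
              return True
          return any(all(prove(p, facts, rules) for p in body)
                     for body in rules.get(goal, []))

      facts = {"low_pressure", "high_containment_humidity"}
      print(prove("select_emergency_cooling", facts))   # True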

  10. Mathematical programming formulations for satellite synthesis

    NASA Technical Reports Server (NTRS)

    Bhasin, Puneet; Reilly, Charles H.

    1987-01-01

    The problem of satellite synthesis can be described as optimally allotting locations, and sometimes frequencies and polarizations, to communication satellites so that interference from unwanted satellite signals does not exceed a specified threshold. In this report, mathematical programming models and optimization methods are used to solve satellite synthesis problems. A nonlinear programming formulation, which is solved using Zoutendijk's method and a gradient search method, is described. Nine mixed-integer programming models are considered. Results of computer runs with these nine models and five geographically compatible scenarios are presented and evaluated. A heuristic solution procedure is also used to solve two of the models studied. Heuristic solutions to three large synthesis problems are presented. The results of our analysis show that the heuristic performs very well, both in terms of solution quality and solution time, on the two models to which it was applied. It is concluded that the heuristic procedure is the best of the methods considered for solving satellite synthesis problems.

  11. Design considerations of CareWindows, a Windows 3.0-based graphical front end to a Medical Information Management System using a pass-through-requester architecture.

    PubMed Central

    Ward, R. E.; Purves, T.; Feldman, M.; Schiffman, R. M.; Barry, S.; Christner, M.; Kipa, G.; McCarthy, B. D.; Stiphout, R.

    1991-01-01

    The CareWindows development project demonstrated the feasibility of an approach designed to add the benefits of an event-driven, graphically-oriented user interface to an existing Medical Information Management System (MIMS) without overstepping economic and logistic constraints. The design solution selected for the CareWindows project incorporates four important design features: (1) the effective de-coupling of servers from requesters, permitting the use of an extensive pre-existing library of MIMS servers, (2) the off-loading of program control functions of the requesters to the workstation processor, reducing the load per transaction on central resources and permitting the use of object-oriented development environments available for microcomputers, (3) the selection of a low-end, GUI-capable workstation consisting of a PC-compatible personal computer running Microsoft Windows 3.0, and (4) the development of a highly layered, modular workstation application, permitting the development of interchangeable modules to ensure portability and adaptability. PMID:1807665

  12. DSISoft—a MATLAB VSP data processing package

    NASA Astrophysics Data System (ADS)

    Beaty, K. S.; Perron, G.; Kay, I.; Adam, E.

    2002-05-01

    DSISoft is a public-domain vertical seismic profile processing software package developed at the Geological Survey of Canada. DSISoft runs under MATLAB version 5.0 and above and hence is portable between computer operating systems supported by MATLAB (i.e. Unix, Windows, Macintosh, Linux). The package includes modules for reading and writing various standard seismic data formats, and for data editing, sorting, filtering, and other basic processing steps. The processing sequence can be scripted, allowing batch processing and easy documentation. A structured format has been developed to ensure future additions to the package are compatible with existing modules. Interactive modules have been created using MATLAB's graphical user interface builder for displaying seismic data, picking first-break times, examining frequency spectra, doing f-k filtering, and plotting the trace header information. DSISoft's modular design facilitates the incorporation of new processing algorithms as they are developed. This paper gives an overview of the scope of the software and serves as a guide for the addition of new modules.

  13. RAPPORT: running scientific high-performance computing applications on the cloud.

    PubMed

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  14. GEANT4 distributed computing for compact clusters

    NASA Astrophysics Data System (ADS)

    Harrawood, Brian P.; Agasthya, Greeshma A.; Lakshmanan, Manu N.; Raterman, Gretchen; Kapadia, Anuj J.

    2014-11-01

    A new technique for distribution of GEANT4 processes is introduced to simplify running a simulation in a parallel environment such as a tightly coupled computer cluster. Using a new C++ class derived from the GEANT4 toolkit, multiple runs forming a single simulation are managed across a local network of computers with a simple inter-node communication protocol. The class is integrated with the GEANT4 toolkit and is designed to scale from a single symmetric multiprocessing (SMP) machine to compact clusters ranging in size from tens to thousands of nodes. User-designed 'work tickets' are distributed to clients using a client-server work-flow model to specify the parameters for each individual run of the simulation. The new g4DistributedRunManager class was developed and well tested in the course of our Neutron Stimulated Emission Computed Tomography (NSECT) experiments. It will be useful for anyone running GEANT4 for large discrete data sets such as covering a range of angles in computed tomography, calculating dose delivery with multiple fractions or simply speeding the throughput of a single model.
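
    The work-ticket pattern described above can be sketched as follows. This is illustrative Python, not the g4DistributedRunManager API; a local process pool stands in for the cluster nodes, and the ticket fields are invented:

      from multiprocessing import Pool

      def run_simulation(ticket):
          """Run one simulation with the parameters carried by the ticket
          (a real worker would configure and run GEANT4 here)."""
          seed, n_events = ticket["seed"], ticket["n_events"]
          return {"seed": seed, "events": n_events}

      # One ticket per run; seeds differ so the runs are independent.
      tickets = [{"seed": 1000 + i, "n_events": 50_000} for i in range(16)]

      if __name__ == "__main__":
          with Pool(processes=4) as pool:     # stands in for cluster nodes
              results = pool.map(run_simulation, tickets)
          print(len(results), "runs ready to merge")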

  15. Interactional Personality, Mathematical Simulation, and Prediction of Interpersonal Compatibility.

    ERIC Educational Resources Information Center

    Kunce, Joseph T.; And Others

    1981-01-01

    Used a mathematical simulation procedure adaptable to an interactional concept of personality to predict the interpersonal compatibility of couples. Strife scores derived from computer simulation of interactional personality data correlated significantly with partner ratings for the quality and the stability of their relationship. Significance…

  16. A user-friendly, menu-driven, language-free laser characteristics curves graphing program for desk-top IBM PC compatible computers

    NASA Technical Reports Server (NTRS)

    Klutz, Glenn

    1989-01-01

    A facility was established that takes collected data and feeds it into mathematical models that generate improved data arrays by correcting for various losses and baseline drift and by converting to unity scaling. These data arrays have headers and other identifying information affixed and are subsequently stored in a Laser Materials and Characteristics data base which is accessible to various users. The two-part data base (absorption and emission spectra, and tabulated data) is developed around twelve laser models. The tabulated section of the data base is divided into several parts: crystalline, optical, mechanical, and thermal properties; absorption and emission spectra information; chemical names and formulas; and miscellaneous. A menu-driven, language-free graphing program reduces or removes the requirement that users become competent FORTRAN programmers, the concomitant requirement that they spend several days to a few weeks becoming conversant with the GEOGRAF library and its sequence of calls, and the need for continual refreshers of both. The work included becoming thoroughly conversant with, or at least very familiar with, GEOGRAF by GEOCOMP Corp. The development of the graphing program involved trial runs of the various callable library routines on dummy data in order to become familiar with actual implementation and sequencing. This was followed by trial runs with actual data base files and some additional data from current research that was not in the data base but needed graphs. After successful runs with dummy and real data using actual FORTRAN instructions, steps were undertaken to develop the menu-driven, language-free implementation of a program which requires only that the user know how to use microcomputers. The user simply responds to items displayed on the video screen. To assist the user in arriving at the optimum values needed for a specific graph, a paper-and-pencil checklist was made available for use on the trial runs.

  17. Computational Methods for Feedback Controllers for Aerodynamics Flow Applications

    DTIC Science & Technology

    2007-08-15

    Fragment of the report's MATLAB post-processing, assembling iteration number, y-force, and y-translation across runs:

      Fy   = [unf(:,8);  runA(:,8);  runB(:,8);  runC(:,8);  runD(:,8);  runE(:,8)];
      Oy   = [unf(:,23); runA(:,23); runB(:,23); runC(:,23); runD(:,23); runE(:,23)];
      Iter = [unf(:,1);  runA(:,1);  runB(:,1);  runC(:,1);  runD(:,1);  runE(:,1)];
      plot(Fy)

    Cobalt version 4.0.

  18. Proposal for grid computing for nuclear applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Idris, Faridah Mohamad; Ismail, Saaidi; Haris, Mohd Fauzi B.

    2014-02-12

    The use of computer clusters for computational sciences, including computational physics, is vital as it provides the computing power to crunch big numbers at a faster rate. In compute-intensive applications that require high resolution, such as Monte Carlo simulation, the use of computer clusters in a grid form, which supplies computational power to any node within the grid that needs it, has now become a necessity. In this paper, we describe how clusters running a specific application can use resources within the grid to speed up the computing process.

  19. Effective Design of Multifunctional Peptides by Combining Compatible Functions

    PubMed Central

    Diener, Christian; Garza Ramos Martínez, Georgina; Moreno Blas, Daniel; Castillo González, David A.; Corzo, Gerardo; Castro-Obregon, Susana; Del Rio, Gabriel

    2016-01-01

    Multifunctionality is a common trait of many natural proteins and peptides, yet the rules to generate such multifunctionality remain unclear. We propose that the rules defining some protein/peptide functions are compatible. To explore this hypothesis, we trained a computational method to predict cell-penetrating peptides (CPPs) at the sequence level and learned that antimicrobial peptides (AMPs) and DNA-binding proteins are compatible with the rules of our predictor. Based on this finding, we expected that designing peptides for CPP activity might also confer AMP and DNA-binding activities. To test this prediction, we designed peptides that embedded two independent functional domains (nuclear localization and yeast pheromone activity), linked by optimizing their composition to fit the rules characterizing cell-penetrating peptides. These peptides presented effective cell penetration, DNA-binding, pheromone and antimicrobial activities, thus confirming the effectiveness of our computational approach to design multifunctional peptides with potential therapeutic uses. Our computational implementation is available at http://bis.ifc.unam.mx/en/software/dcf. PMID:27096600

  20. Integrated Force Method for Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Hopkins, Dale A.; Halford, Gary R.; Patnaik, Surya N.

    2008-01-01

    Two methods of solving indeterminate structural-mechanics problems have been developed as products of research on the theory of strain compatibility. In these methods, stresses are considered to be the primary unknowns (in contrast to strains and displacements being considered as the primary unknowns in some prior methods). One of these methods, denoted the integrated force method (IFM), makes it possible to compute stresses, strains, and displacements with high fidelity by use of modest finite-element models that entail relatively small amounts of computation. The other method, denoted the completed Beltrami-Michell formulation (CBMF), enables direct determination of stresses in an elastic continuum with general boundary conditions, without the need to first calculate displacements as in traditional methods. The equilibrium equation, the compatibility condition, and the material law are the three fundamental concepts of the theory of structures. For almost 150 years, it has been commonly supposed that the theory is complete. However, until now, the understanding of the compatibility condition remained incomplete, and the compatibility condition was confused with the continuity condition. Furthermore, the compatibility condition as applied to structures in its previous incomplete form was inconsistent with the strain formulation in elasticity.

  1. CUDA compatible GPU cards as efficient hardware accelerators for Smith-Waterman sequence alignment

    PubMed Central

    Manavski, Svetlin A; Valle, Giorgio

    2008-01-01

    Background: Searching for similarities in protein and DNA databases has become a routine procedure in Molecular Biology. The Smith-Waterman algorithm has been available for more than 25 years. It is based on a dynamic programming approach that explores all the possible alignments between two sequences; as a result it returns the optimal local alignment. Unfortunately, the computational cost is very high, requiring a number of operations proportional to the product of the lengths of the two sequences. Furthermore, the exponential growth of protein and DNA databases makes the Smith-Waterman algorithm unrealistic for searching similarities in large sets of sequences. For these reasons heuristic approaches such as those implemented in FASTA and BLAST tend to be preferred, allowing faster execution times at the cost of reduced sensitivity. The main motivation of our work is to exploit the huge computational power of commonly available graphics cards, to develop high-performance solutions for sequence alignment.

    Results: In this paper we present what we believe is the fastest solution of the exact Smith-Waterman algorithm running on commodity hardware. It is implemented in the recently released CUDA programming environment by NVidia. CUDA allows direct access to the hardware primitives of the last-generation Graphics Processing Units (GPU) G80. Speeds of more than 3.5 GCUPS (Giga Cell Updates Per Second) are achieved on a workstation running two GeForce 8800 GTX cards. Exhaustive tests have been done to compare our implementation to SSEARCH and BLAST, running on a 3 GHz Intel Pentium IV processor. Our solution was also compared to a recently published GPU implementation and to a Single Instruction Multiple Data (SIMD) solution. These tests show that our implementation performs from 2 to 30 times faster than any other previous attempt available on commodity hardware.

    Conclusions: The results show that graphics cards are now sufficiently advanced to be used as efficient hardware accelerators for sequence alignment. Their performance is better than any alternative available on commodity hardware platforms. The solution presented in this paper allows large-scale alignments to be performed at low cost, using the exact Smith-Waterman algorithm instead of the largely adopted heuristic approaches. PMID:18387198
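
    For reference, the recurrence that the GPU kernels accelerate is the standard Smith-Waterman dynamic program. A plain-Python sketch (the scoring parameters are illustrative, and this reference version is of course far slower than the CUDA implementation):

      def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
          """Score of the best local alignment between sequences a and b."""
          H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
          best = 0
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  H[i][j] = max(0,                    # local alignment floor
                                H[i - 1][j - 1] + s,  # match / mismatch
                                H[i - 1][j] + gap,    # gap in b
                                H[i][j - 1] + gap)    # gap in a
                  best = max(best, H[i][j])
          return best

      print(smith_waterman("ACACACTA", "AGCACACA"))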

  2. The N2HDM under theoretical and experimental scrutiny

    NASA Astrophysics Data System (ADS)

    Mühlleitner, Margarete; Sampaio, Marco O. P.; Santos, Rui; Wittbrodt, Jonas

    2017-03-01

    The N2HDM is based on the CP-conserving 2HDM extended by a real scalar singlet field. Its enlarged parameter space and its fewer symmetry conditions as compared to supersymmetric models allow for an interesting phenomenology compatible with current experimental constraints, while adding to the 2HDM sector the possibility of Higgs-to-Higgs decays with three different Higgs bosons. In this paper the N2HDM is subjected to detailed scrutiny. Regarding the theoretical constraints we implement tests of tree-level perturbativity and vacuum stability. Moreover, we present, for the first time, a thorough analysis of the global minimum of the N2HDM. The model and the theoretical constraints have been implemented in ScannerS, and we provide N2HDECAY, a code based on HDECAY, for the computation of the N2HDM branching ratios and total widths including the state-of-the-art higher-order QCD corrections and off-shell decays. We then perform an extensive parameter scan in the N2HDM parameter space, with all theoretical and experimental constraints applied, and analyse its allowed regions. We find that large singlet admixtures are still compatible with the Higgs data and investigate which observables will most effectively constrain the singlet nature in the next runs of the LHC. Similarly to the 2HDM, the N2HDM exhibits a wrong-sign parameter regime, which will be constrained by future Higgs precision measurements.

  3. Automation of electromagnetic compatibility (EMC) test facilities

    NASA Technical Reports Server (NTRS)

    Harrison, C. A.

    1986-01-01

    Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near-real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedure, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.

  4. Korthagen's ALACT Model: Application and Modification in the Science Project "Kolumbus-Kids"

    ERIC Educational Resources Information Center

    Wegner, Claas; Weber, Phillip; Ohlberger, Stephanie

    2014-01-01

    In order to improve one's teaching in the long run, reflection on lessons is an integral part of developing sufficient reflection competence. The problem, however, is how this reflection competence can be established in student teachers in the first place, and whether that concept is compatible with current systems of teacher education…

  5. Bringing Interactivity to the Web: The JAVA Solution.

    ERIC Educational Resources Information Center

    Knee, Richard H.; Cafolla, Ralph

    Java is an object-oriented programming language for the Internet. Its popularity lies in its ability to create interactive Web sites across platforms. The most common Java programs are applications and applets, which adhere to a set of conventions that lets them run within a Java-compatible browser. Java is becoming an essential subject matter and…

  6. Design and analysis of a tendon-based computed tomography-compatible robot with remote center of motion for lung biopsy.

    PubMed

    Yang, Yunpeng; Jiang, Shan; Yang, Zhiyong; Yuan, Wei; Dou, Huaisu; Wang, Wei; Zhang, Daguang; Bian, Yuan

    2017-04-01

    Biopsy is currently a decisive method of lung cancer diagnosis, but lung biopsy is time-consuming, complex and inaccurate, so a computed tomography-compatible robot for rapid and precise lung biopsy is developed in this article. According to the actual operation process, the robot is divided into two modules: a 4-degree-of-freedom position module for locating the puncture point, which accommodates almost all patient positions, and a 3-degree-of-freedom tendon-based orientation module with remote center of motion, which is compact and computed tomography-compatible and orients and inserts the needle automatically inside the computed tomography bore. The workspace of the robot surrounds the patient's thorax, and the needle tip forms a cone under the patient's skin. A new error model of the robot based on screw theory is proposed in view of structure error and actuation error, which are regarded as screw motions. Simulation is carried out to verify the precision of the error model, contrasted with compensation via inverse kinematics. The results of an insertion experiment on a specific phantom prove the feasibility of the robot, with a mean error of 1.373 mm in a laboratory environment, which is accurate enough to replace manual operation.

  7. Simulation of LHC events on a million threads

    NASA Astrophysics Data System (ADS)

    Childers, J. T.; Uram, T. D.; LeCompte, T. J.; Papka, M. E.; Benjamin, D. P.

    2015-12-01

    Demand for Grid resources is expected to double during LHC Run II as compared to Run I; the capacity of the Grid, however, will not double. The HEP community must consider how to bridge this computing gap by targeting larger compute resources and using the available compute resources as efficiently as possible. Argonne's Mira, the fifth fastest supercomputer in the world, can run roughly five times the number of parallel processes that the ATLAS experiment typically uses on the Grid. We ported Alpgen, a serial x86 code, to run as a parallel application under MPI on the Blue Gene/Q architecture. By analysis of the Alpgen code, we reduced the memory footprint to allow running 64 threads per node, utilizing the four hardware threads available per core on the PowerPC A2 processor. Event generation and unweighting, typically run as independent serial phases, are coupled together in a single job in this scenario, reducing intermediate writes to the filesystem. By these optimizations, we have successfully run LHC proton-proton physics event generation at the scale of a million threads, filling two-thirds of Mira.
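
    The pattern of running many coupled copies of a serial generator under MPI can be sketched as follows (illustrative Python/mpi4py, not the actual Alpgen port; generate_events is a hypothetical stand-in for the serial generator):

      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      seed = 12345 + rank                # independent random stream per rank
      n_events = 10_000
      # generate_events(seed, n_events)  # hypothetical serial generator call

      comm.Barrier()                     # wait for all ranks before merging
      if rank == 0:
          print(comm.Get_size(), "ranks finished event generation")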

  8. Running Jobs on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    Documentation for running jobs on the Peregrine high-performance computing (HPC) system: running different types of jobs, batch job scheduling policies (queue names, limits, etc.), requesting different node types, and sample batch scripts.

  9. Brain-Compatible Learning: Principles and Applications in Athletic Training

    PubMed Central

    2003-01-01

    Objective: To discuss the principles of brain-compatible learning research and provide insights into how this research may be applied in athletic training education to benefit the profession. Background: In the past decade, new brain-imaging techniques have allowed us to observe the brain while it is learning. The field of neuroscience has produced a body of empirical data that provides a new understanding of how we learn. This body of data has implications in education, although the direct study of these implications is in its infancy. Description: An overview of how the brain learns at a cellular level is provided, followed by a discussion of the principles of brain-compatible learning. Applications of these principles and implications for the field of athletic training education are also offered. Application: Many educational-reform fads have garnered attention in the past. Brain-compatible learning will not likely be one of those, as its origin is in neuroscience, not education. Brain-compatible learning is not an educational-reform movement. It does not prescribe how to run your classroom or offer specific techniques to use. Rather, it provides empirical data about how the brain learns and suggests guidelines to be considered while preparing lessons for your students. These guidelines may be incorporated into every educational setting, with every type of curriculum and every age group. The field of athletic training lends itself well to many of the basic principles of brain-compatible learning. PMID:16558681

  10. Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories

    NASA Technical Reports Server (NTRS)

    Ng, Hok Kwan; Sridhar, Banavar

    2016-01-01

    This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); all three are compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs, ranging from 80 to 10,240 units, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to gauge the potential computational enhancement from parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm, for further reduction of computational time through algorithm modifications, and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field for use with existing FACET applications. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing computational efficiencies, based on the potential application of optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.

  11. WinHPC System | High-Performance Computing | NREL

    Science.gov Websites

    NREL's WinHPC system is a computing cluster running the Microsoft Windows operating system. It allows users to run jobs requiring a Windows environment, such as ANSYS and MATLAB.

  12. Missileborne Artificial Vision System (MAVIS)

    NASA Technical Reports Server (NTRS)

    Andes, David K.; Witham, James C.; Miles, Michael D.

    1994-01-01

    Several years ago, when INTEL and China Lake designed the ETANN chip, analog VLSI appeared to be the only way to do high-density neural computing. In the last five years, however, digital parallel processing chips capable of performing neural computation functions have evolved to the point of rough equality with analog chips in system-level computational density. The Naval Air Warfare Center, China Lake, has developed a real-time hardware and software system designed to implement and evaluate biologically inspired retinal and cortical models. The hardware is based on the Adaptive Solutions Inc. massively parallel CNAPS system COHO boards. Each COHO board is a standard-size 6U VME card featuring 256 fixed-point RISC processors running at 20 MHz in a SIMD configuration. Each COHO board has a companion board built to support a real-time VSB interface to an imaging seeker, an NTSC camera, and other COHO boards. The system is designed to have multiple SIMD machines, each performing different corticomorphic functions. System-level software has been developed which allows a high-level description of corticomorphic structures to be translated into the native microcode of the CNAPS chips. Corticomorphic structures are those neural structures with a form similar to that of the retina, the lateral geniculate nucleus, or the visual cortex. This real-time hardware system is designed to be shrunk into a volume compatible with air-launched tactical missiles. Initial versions of the software and hardware have been completed and are in the early stages of integration with a missile seeker.

  13. A graphical simulation software for instruction in cardiovascular mechanics physiology.

    PubMed

    Wildhaber, Reto A; Verrey, François; Wenger, Roland H

    2011-01-25

    Computer-supported, interactive e-learning systems are widely used in the teaching of physiology. However, the currently available complementary software tools in the field of the physiology of cardiovascular mechanics have not yet been adapted to the latest systems software. Therefore, a simple-to-use replacement for undergraduate and graduate students' education was needed, including up-to-date graphical software that is validated and field-tested. Software compatible with Windows, based on modified versions of existing mathematical algorithms, has been newly developed. Testing was performed during a full term of physiological lecturing to medical and biology students. The newly developed CLabUZH software models a reduced human cardiovascular loop containing all basic compartments: an isolated heart including an artificial electrical stimulator, main vessels and the peripheral resistive components. Students can alter several physiological parameters interactively. The resulting output variables are plotted in x-y diagrams and, in addition, shown in an animated graphical model. CLabUZH offers insight into the relations of volume, pressure and time dependency in the circulation and their correlation to the electrocardiogram (ECG). Established mechanisms such as the Frank-Starling Law or the Windkessel Effect are considered in this model. The CLabUZH software is self-contained, with no extra installation required, and runs on most of today's personal computer systems. CLabUZH is a user-friendly interactive computer programme that has proved useful in teaching the basic physiological principles of heart mechanics.

  14. Many-core graph analytics using accelerated sparse linear algebra routines

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron L.; Fox, Paul; Kelmelis, Eric

    2016-05-01

    Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis, as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and Tinkerpop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run time, for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without requiring the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
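
    As an example of the linear-algebra formulation, breadth-first search reduces to repeated sparse matrix-vector products: one multiplication by the adjacency matrix advances the frontier one hop. A small SciPy sketch follows (a single-threaded stand-in for the many-core GraphBLAS back end described above; the graph is made up):

      import numpy as np
      import scipy.sparse as sp

      # Small undirected graph as a sparse adjacency matrix.
      edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
      rows, cols = zip(*edges)
      A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(5, 5))
      A = A + A.T

      frontier = np.zeros(5); frontier[0] = 1   # start BFS at vertex 0
      visited = frontier.copy()
      while frontier.any():
          frontier = A.dot(frontier)            # advance the frontier one hop
          frontier = np.where((frontier > 0) & (visited == 0), 1.0, 0.0)
          visited += frontier
      print(np.nonzero(visited)[0])             # vertices reachable from 0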

  15. Xyce™ Parallel Electronic Simulator Users' Guide, Version 6.5.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R.; Aadithya, Karthik V.; Mei, Ting

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message-passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The information herein is subject to change without notice. Copyright © 2002-2016 Sandia Corporation. All rights reserved.

  16. Analyzing Spacecraft Telecommunication Systems

    NASA Technical Reports Server (NTRS)

    Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric

    2004-01-01

    Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
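
    At the core of a telecommunication link analysis of the kind MMTAT automates is a decibel link budget. The snippet below is a generic sketch of such a computation; the function name and all parameter values are illustrative assumptions, not MMTAT's API.

        # Generic link-budget sketch: received Eb/N0 and link margin in dB.
        import math

        def link_margin_db(eirp_dbw, path_loss_db, rx_gt_dbk, data_rate_bps,
                           required_ebn0_db, misc_losses_db=2.0):
            """Eb/N0 = EIRP - losses + G/T - 10log10(k) - 10log10(R)."""
            boltzmann_db = 10 * math.log10(1.380649e-23)   # ~ -228.6 dBW/K/Hz
            ebn0 = (eirp_dbw - path_loss_db - misc_losses_db + rx_gt_dbk
                    - boltzmann_db - 10 * math.log10(data_rate_bps))
            return ebn0 - required_ebn0_db

        # Free-space path loss for an X-band deep-space link (illustrative values).
        f_hz, d_m = 8.4e9, 2.0e11
        fspl = 20 * math.log10(4 * math.pi * d_m * f_hz / 3.0e8)
        print(f"FSPL = {fspl:.1f} dB, margin = "
              f"{link_margin_db(60.0, fspl, 35.0, 1.0e5, 2.5):.1f} dB")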

  17. Moisture balance over the Iberian Peninsula computed using a high resolution regional climate model. The impact of 3DVAR data assimilation.

    NASA Astrophysics Data System (ADS)

    González-Rojí, Santos J.; Sáenz, Jon; Ibarra-Berastegi, Gabriel

    2016-04-01

    A numerical downscaling exercise over the Iberian Peninsula has been run nesting the WRF model inside ERA Interim. The Iberian Peninsula has been covered by a 15 km x 15 km grid with 51 vertical levels. Two model configurations have been tested in two experiments spanning the period 2010-2014 after a one-year spin-up (2009). In both cases, the model uses high-resolution daily-varying SST fields and the Noah land surface model. In the first experiment (N), after the model is initialised, boundary conditions drive the model, as usual in numerical downscaling experiments. The second experiment (D) is configured the same way as the N case, but 3DVAR data assimilation is run every six hours (00Z, 06Z, 12Z and 18Z) using observations obtained from the PREPBUFR dataset (NCEP ADP Global Upper Air and Surface Weather Observations) within a 120-minute window around analysis times. For the data assimilation experiment (D), seasonally (monthly) varying background error covariance matrices have been prepared according to the parameterisations used and the mesoscale model domain. For both N and D runs, the moisture balance of the model runs has been evaluated over the Iberian Peninsula, both internally according to the model results (moisture balance in the model) and in terms of observed moisture fields from observational datasets (particularly precipitable water and precipitation). Verification has been performed at both the daily and monthly time scales, and also for ERA Interim, the coarse-scale dataset used to drive the regional model. Results show that the leading terms that must be considered over the area are the tendency in the precipitable water column, the divergence of moisture flux, evaporation (computed from latent heat flux at the surface) and precipitation. In the case of ERA Interim, the divergence of Qc is also relevant, although still a minor player in the moisture balance. Both mesoscale model runs are more effective at closing the moisture balance over the whole Iberian Peninsula than ERA Interim. The N experiment (no data assimilation) shows a better closure than the D case, as could be expected from the lack of analysis increments in it. This result is robust at both the daily and monthly time scales. Both ERA Interim and the D experiment produce a negative residual in the balance equation (compatible with excess evaporation or increased convergence of moisture over the Iberian Peninsula). This is a result of the data assimilation process in the D dataset, since in the N experiment the residual is mainly positive. The seasonal cycle of evaporation in the D experiment is much closer to that of ERA Interim than in the N case, with higher evaporation during summer months. However, both regional climate model runs show a lower evaporation rate than ERA Interim, particularly during summer months.
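
    For reference, the balance being closed in these experiments can be written compactly. A standard form of the vertically integrated moisture budget is shown below; the notation is assumed, not quoted from the abstract.

        % Vertically integrated atmospheric moisture balance (standard form):
        % tendency of precipitable water W plus divergence of the vertically
        % integrated moisture flux Q equals evaporation E minus precipitation P,
        % up to a residual RES (closure error, analysis increments, etc.).
        \frac{\partial W}{\partial t} + \nabla \cdot \mathbf{Q} = E - P + \mathrm{RES},
        \qquad
        W = \frac{1}{g}\int_{p_t}^{p_s} q \, dp,
        \qquad
        \mathbf{Q} = \frac{1}{g}\int_{p_t}^{p_s} q\,\mathbf{v} \, dp .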

  18. Models of Educational Computing @ Home: New Frontiers for Research on Technology in Learning.

    ERIC Educational Resources Information Center

    Kafai, Yasmin B.; Fishman, Barry J.; Bruckman, Amy S.; Rockman, Saul

    2002-01-01

    Discusses models of home educational computing that are linked to learning in school and recommends the need for research that addresses the home as a computer-based learning environment. Topics include a history of research on educational computing at home; technological infrastructure, including software and compatibility; Internet access;…

  19. High Performance Computing (HPC) Innovation Service Portal Pilots Cloud Computing (HPC-ISP Pilot Cloud Computing)

    DTIC Science & Technology

    2011-08-01

    [Extraction residue from report figures; recoverable captions: Figure 4, "Architectural diagram of running Blender on Amazon EC2 through Nimbis," and a figure on classification of streaming data showing example input images (top left) and all digit prototypes (cluster centers) found, with size proportional to frequency.]

  20. Design for Run-Time Monitor on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design Run-Time Monitor (RTM), which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data.
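
    The library-instrumentation idea can be illustrated generically: wrap the calls of interest, collect timings at run time, and let an optimizer act on the aggregates. The sketch below is a plain Python stand-in for that pattern, not the RTM described by the authors.

        # Generic run-time monitoring sketch via call instrumentation.
        import time
        from collections import defaultdict
        from functools import wraps

        stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

        def monitored(fn):
            """Record call count and cumulative wall time for fn."""
            @wraps(fn)
            def wrapper(*args, **kwargs):
                t0 = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    s = stats[fn.__name__]
                    s["calls"] += 1
                    s["seconds"] += time.perf_counter() - t0
            return wrapper

        @monitored
        def kernel(n):
            return sum(i * i for i in range(n))

        for _ in range(3):
            kernel(100_000)
        print(dict(stats))  # feed these aggregates to a resource optimizer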

  1. Cloud Computing for Complex Performance Codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial, large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  2. MORPH-I (Ver 1.0) a software package for the analysis of scanning electron micrograph (binary formatted) images for the assessment of the fractal dimension of enclosed pore surfaces

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf; Oscarson, Robert

    1998-01-01

    MORPH-I is a set of C-language computer programs for the IBM PC and compatible microcomputers. The programs in MORPH-I are used for the fractal analysis of scanning electron microscope and electron microprobe images of pore profiles exposed in cross-section. The program isolates and traces the cross-sectional profiles of exposed pores and computes the Richardson fractal dimension for each pore. Other programs in the set provide for image calibration, display, and statistical analysis of the computed dimensions for highly complex porous materials. Requirements: IBM PC or compatible; minimum 640 K RAM; math coprocessor; SVGA graphics board providing mode 103 display.
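
    The fractal dimension MORPH-I computes comes from measuring how a pore outline fills space at decreasing scales. A generic box-counting variant over a binary outline is sketched below as a NumPy illustration; it is not the original C code, and the test image is assumed.

        # Box-counting estimate of the fractal dimension of a binary outline.
        import numpy as np

        def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
            """Fit log(count of occupied boxes) against log(1/box size)."""
            counts = []
            for s in sizes:
                h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
                blocks = img[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope

        # A smooth circular outline should give a dimension near 1.
        yy, xx = np.mgrid[:256, :256]
        r = np.hypot(xx - 128, yy - 128)
        outline = np.abs(r - 100) < 1.0
        print(f"estimated dimension: {box_count_dimension(outline):.2f}")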

  3. A Modular Environment for Geophysical Inversion and Run-time Autotuning using Heterogeneous Computing Systems

    NASA Astrophysics Data System (ADS)

    Myre, Joseph M.

    Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes. Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
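
    At its simplest, run-time autotuning of the sort described reduces to timing a kernel under candidate configurations and keeping the fastest. A minimal generic sketch follows; the tunable block_size parameter and the toy kernel are assumptions for illustration, not CUSH's interface.

        # Minimal run-time autotuner sketch: empirically pick the fastest config.
        import time

        def kernel(data, block_size):
            """Toy stand-in for a tunable compute kernel."""
            total = 0.0
            for start in range(0, len(data), block_size):
                total += sum(data[start:start + block_size])
            return total

        def autotune(data, candidates=(64, 256, 1024, 4096), trials=5):
            best, best_t = None, float("inf")
            for bs in candidates:
                t0 = time.perf_counter()
                for _ in range(trials):
                    kernel(data, bs)
                elapsed = (time.perf_counter() - t0) / trials
                if elapsed < best_t:
                    best, best_t = bs, elapsed
            return best

        data = list(range(200_000))
        print("selected block size:", autotune(data))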

  4. 37 CFR 1.96 - Submission of computer program listings.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Submission of computer... Models, Exhibits, Specimens § 1.96 Submission of computer program listings. (a) General. Descriptions of...

  5. 37 CFR 1.96 - Submission of computer program listings.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Submission of computer... Models, Exhibits, Specimens § 1.96 Submission of computer program listings. (a) General. Descriptions of...

  6. 37 CFR 1.96 - Submission of computer program listings.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false Submission of computer... Models, Exhibits, Specimens § 1.96 Submission of computer program listings. (a) General. Descriptions of...

  7. 37 CFR 1.96 - Submission of computer program listings.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Submission of computer... Models, Exhibits, Specimens § 1.96 Submission of computer program listings. (a) General. Descriptions of...

  8. 37 CFR 1.96 - Submission of computer program listings.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Apple Macintosh; (ii) Operating System Compatibility: MS-DOS, MS-Windows, Unix, or Macintosh; (iii) Line... 37 Patents, Trademarks, and Copyrights 1 2012-07-01 2012-07-01 false Submission of computer... Models, Exhibits, Specimens § 1.96 Submission of computer program listings. (a) General. Descriptions of...

  9. Desktop Publishing: Its Impact on Community College Journalism.

    ERIC Educational Resources Information Center

    Grzywacz-Gray, John; And Others

    1987-01-01

    Illustrates the kinds of copy that can be created on Apple Macintosh computers and laser printers. Shows font and type specification options. Discusses desktop publishing costs, potential problems, and computer compatibility. Considers the use of computers in college journalism in production, graphics, accounting, advertising, and promotion. (AYC)

  10. Nonlinear Analysis of a Bolted Marine Riser Connector Using NASTRAN Substructuring

    NASA Technical Reports Server (NTRS)

    Fox, G. L.

    1984-01-01

    Results of an investigation of the behavior of a bolted, flange-type marine riser connector are reported. The method used to account for the nonlinear effect of connector separation due to bolt preload and axial tension load is described. The automated multilevel substructuring capability of COSMIC/NASTRAN was employed at considerable savings in computer run time. Simplified formulas for computer resources, i.e., computer run times for modules SDCOMP, FBS, and MPYAD, as well as disk storage space, are presented. Actual run-time data on a VAX-11/780 are compared with the formulas presented.

  11. Compatibility of buffered uranium carbides with tungsten.

    NASA Technical Reports Server (NTRS)

    Phillips, W. M.

    1971-01-01

    Results are reported of compatibility tests between tungsten and hyperstoichiometric uranium carbide alloys run at 1800 C for 1000 and 2500 hours. These tests compared tungsten-buffered uranium carbide with tungsten-buffered uranium-zirconium carbide. The zirconium carbide addition appeared to widen the homogeneity range of the uranium carbide, making additional carbon available for reaction. Reaction layers could be formed by either of two diffusion paths, one producing UWC2, while the second resulted in the formation of W2C. UWC2 acts as a diffusion barrier for carbon and slows the growth of the reaction layer with time, while carbon diffusion is relatively rapid in W2C, allowing equilibrium to be reached in less than 2500 hours at a temperature of 1800 C.

  12. Magnesium fluoride as energy storage medium for spacecraft solar thermal power systems

    NASA Technical Reports Server (NTRS)

    Lurio, Charles A.

    1992-01-01

    MgF2 was investigated as a phase-change energy-storage material for LEO power systems using solar heat to run thermal cycles. It provides a high heat of fusion per unit mass at a high melting point (1536 K). Theoretical evaluation showed the basic chemical compatibility of liquid MgF2 with refractory metals at 1600 K, though transient high pressures of H2 can occur in a closed container due to reaction with residual moisture. The compatibility was tested in two refractory metal containers for over 2000 h. Some showed no deterioration, while there was evidence that the fluoride reacted with hafnium in others. Corollary tests showed that the MgF2 supercooled by 10-30 K and 50-90 K.

  13. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Alongside the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives on creating and building such images.
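
    The "poor man's parallelization" described, running whole programs as separate processes, needs nothing more than a worker pool around subprocess calls. A generic sketch on a Unix-like system follows; the command lines are hypothetical stand-ins for real tools such as BLAST.

        # Poor man's parallelization sketch: run whole programs as processes.
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical command lines, e.g., one alignment run per query file.
        commands = [
            ["echo", f"analysing sample_{i}.fasta"]  # stand-in for a real tool
            for i in range(8)
        ]

        def run(cmd):
            return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

        with ThreadPoolExecutor(max_workers=4) as pool:  # 4 concurrent processes
            for line in pool.map(run, commands):
                print(line)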

  14. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second-order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion-based preconditioner for scattering-dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.

  15. Fingerprinting Communication and Computation on HPC Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peisert, Sean

    2010-06-02

    How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine if the code is consistent with certain other types of codes, what a user usually runs, or what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.

  16. The strategies WDK: a graphical search interface and web development kit for functional genomics databases

    PubMed Central

    Fischer, Steve; Aurrecoechea, Cristina; Brunk, Brian P.; Gao, Xin; Harb, Omar S.; Kraemer, Eileen T.; Pennington, Cary; Treatman, Charles; Kissinger, Jessica C.; Roos, David S.; Stoeckert, Christian J.

    2011-01-01

    Web sites associated with the Eukaryotic Pathogen Bioinformatics Resource Center (EuPathDB.org) have recently introduced a graphical user interface, the Strategies WDK, intended to make advanced searching and set and interval operations easy and accessible to all users. With a design guided by usability studies, the system helps motivate researchers to perform dynamic computational experiments and explore relationships across data sets. For example, PlasmoDB users seeking novel therapeutic targets may wish to locate putative enzymes that distinguish pathogens from their hosts, and that are expressed during appropriate developmental stages. When a researcher runs one of the approximately 100 searches available on the site, the search is presented as a first step in a strategy. The strategy is extended by running additional searches, which are combined with set operators (union, intersect or minus), or genomic interval operators (overlap, contains). A graphical display uses Venn diagrams to make the strategy’s flow obvious. The interface facilitates interactive adjustment of the component searches with changes propagating forward through the strategy. Users may save their strategies, creating protocols that can be shared with colleagues. The strategy system has now been deployed on all EuPathDB databases, and successfully deployed by other projects. The Strategies WDK uses a configurable MVC architecture that is compatible with most genomics and biological warehouse databases, and is available for download at code.google.com/p/strategies-wdk. Database URL: www.eupathdb.org PMID:21705364
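
    The strategy combinators described (union, intersect, minus over search results) map directly onto set algebra. Below is a minimal sketch over hypothetical gene-ID result sets, not actual EuPathDB output.

        # Strategy-style set operations over search results (hypothetical IDs).
        enzymes      = {"PF3D7_0417200", "PF3D7_1036300", "PF3D7_0810800"}
        stage_expr   = {"PF3D7_1036300", "PF3D7_0810800", "PF3D7_1133400"}
        host_homolog = {"PF3D7_0810800"}

        # "enzymes INTERSECT stage-expressed MINUS host homologs"
        strategy = (enzymes & stage_expr) - host_homolog
        print(sorted(strategy))  # candidate targets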

  17. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the TENEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.

  18. BALANCER: A Computer Program for Balancing Chemical Equations.

    ERIC Educational Resources Information Center

    Jones, R. David; Schwab, A. Paul

    1989-01-01

    Describes the theory and operation of a computer program which was written to balance chemical equations. Software consists of a compiled file of 46K for use under MS-DOS 2.0 or later on IBM PC or compatible computers. Additional specifications of courseware and availability information are included. (Author/RT)
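
    Balancing a chemical equation can be posed as finding an integer vector in the null space of the element-composition matrix. The sketch below is a generic linear-algebra illustration of that idea, not the BALANCER source.

        # Balance C2H6 + O2 -> CO2 + H2O via the nullspace of the element matrix.
        from fractions import Fraction
        import numpy as np
        from numpy.linalg import svd

        # Rows: elements C, H, O; columns: C2H6, O2, CO2 (negated), H2O (negated).
        A = np.array([[2, 0, -1,  0],   # carbon
                      [6, 0,  0, -2],   # hydrogen
                      [0, 2, -2, -1]])  # oxygen

        _, _, vt = svd(A.astype(float))
        null = vt[-1]                    # basis vector of the 1-D nullspace
        coeffs = [Fraction(x / null[0]).limit_denominator(100) for x in null]
        lcm = int(np.lcm.reduce([c.denominator for c in coeffs]))
        print([int(c * lcm) for c in coeffs])  # [2, 7, 4, 6]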

  19. Active Nodal Task Seeking for High-Performance, Ultra-Dependable Computing

    DTIC Science & Technology

    1994-07-01

    implementation. Figure 1 shows a hardware organization of ANTS: stand-alone computing nodes inter-connected by buses. 2.1 Run Time Partitioning The...nodes in 14 respond to changing loads [27] or system reconfiguration [26]. Existing techniques are all source-initiated or server-initiated [27]. 5.1...short-running task segments. The task segments must be short-running in order that processors will become available often enough to satisfy changing

  20. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null-cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed-computing, cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null-cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
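
    A calibration campaign of the kind described, many model executions over sampled parameters with a small statistical summary from each, fits a simple map-and-reduce pattern. The sketch below is generic; the toy model_error function stands in for a real model run and is purely an assumption.

        # Generic parallel auto-calibration sketch: sample parameters, run the
        # model in parallel processes, keep the best-scoring parameter set.
        import random
        from concurrent.futures import ProcessPoolExecutor

        def model_error(params):
            """Toy stand-in: pretend the optimum is k=0.5, c=2.0."""
            k, c = params
            return (k - 0.5) ** 2 + (c - 2.0) ** 2

        def main():
            rng = random.Random(42)
            samples = [(rng.uniform(0, 1), rng.uniform(0, 4)) for _ in range(10_000)]
            with ProcessPoolExecutor() as pool:
                errors = list(pool.map(model_error, samples, chunksize=500))
            best = min(zip(errors, samples))
            print("best error %.4f at params %s" % best)

        if __name__ == "__main__":   # required for process pools on some platforms
            main()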

  1. MACMULTIVIEW 5.1

    NASA Technical Reports Server (NTRS)

    Norikane, L.

    1994-01-01

    MacMultiview is an interactive tool for the Macintosh II family which allows one to display and make computations utilizing polarimetric radar data collected by the Jet Propulsion Laboratory's imaging SAR (synthetic aperture radar) polarimeter system. The system includes the single-frequency L-band sensor mounted on the NASA CV990 aircraft and its replacement, the multi-frequency P-, L-, and C-band sensors mounted on the NASA DC-8. MacMultiview provides two basic functions: computation of synthesized polarimetric images and computation of polarization signatures. The radar data can be used to compute a variety of images. The total power image displays the sum of the polarized and unpolarized components of the backscatter for each pixel. The magnitude/phase difference image displays the HH (horizontal transmit and horizontal receive polarization) to VV (vertical transmit and vertical receive polarization) phase difference using color. Magnitude is displayed using intensity. The user may also select any transmit and receive polarization combination from which an image is synthesized. This image displays the backscatter which would have been observed had the sensor been configured using the selected transmit and receive polarizations. MacMultiview can also be used to compute polarization signatures, three dimensional plots of backscatter versus transmit and receive polarizations. The standard co-polarization signatures (transmit and receive polarizations are the same) and cross-polarization signatures (transmit and receive polarizations are orthogonal) can be plotted for any rectangular subset of pixels within a radar data set. In addition, the ratio of co- and cross-polarization signatures computed from different subsets within the same data set can also be computed. Computed images can be saved in a variety of formats: byte format (headerless format which saves the image as a string of byte values), MacMultiview (a byte image preceded by an ASCII header), and PICT2 format (standard format readable by MacMultiview and other image processing programs for the Macintosh). Images can also be printed on PostScript output devices. Polarization signatures can be saved in either a PICT format or as a text file containing PostScript commands and printed on any QuickDraw output device. The associated Stokes matrices can be stored in a text file. MacMultiview is written in C-language for Macintosh II series computers. MacMultiview will only run on Macintosh II series computers with 8-bit video displays (gray shades or color). The program also requires a minimum configuration of System 6.0, Finder 6.1, and 1Mb of RAM. MacMultiview is NOT compatible with System 7.0. It requires 32-Bit QuickDraw. Note: MacMultiview may not be fully compatible with preliminary versions of 32-Bit QuickDraw. Macintosh Programmer's Workshop and Macintosh Programmer's Workshop C (version 3.0) are required for recompiling and relinking. The standard distribution medium for this package is a set of three 800K 3.5 inch diskettes in Macintosh format. This program was developed in 1989 and updated in 1991. MacMultiview is a copyrighted work with all copyright vested in NASA. QuickDraw, Finder, Macintosh, and System 7 are trademarks of Apple Computer, Inc.

  2. The Impact and Promise of Open-Source Computational Material for Physics Teaching

    NASA Astrophysics Data System (ADS)

    Christian, Wolfgang

    2017-01-01

    A computer-based modeling approach to teaching must be flexible because students and teachers have different skills and varying levels of preparation. Learning how to run the "software du jour" is not the objective for integrating computational physics material into the curriculum. Learning computational thinking, how to use computation and computer-based visualization to communicate ideas, how to design and build models, and how to use ready-to-run models to foster critical thinking is the objective. Our computational modeling approach to teaching is a research-proven pedagogy that predates computers. It attempts to enhance student achievement through the Modeling Cycle. This approach was pioneered by Robert Karplus and the SCIS Project in the 1960s and 70s and later extended by the Modeling Instruction Program led by Jane Jackson and David Hestenes at Arizona State University. This talk describes a no-cost open-source computational approach aligned with a Modeling Cycle pedagogy. Our tools, curricular material, and ready-to-run examples are freely available from the Open Source Physics Collection hosted on the AAPT-ComPADRE digital library. Examples will be presented.

  3. Colt: an experiment in wormhole run-time reconfiguration

    NASA Astrophysics Data System (ADS)

    Bittner, Ray; Athanas, Peter M.; Musgrove, Mark

    1996-10-01

    Wormhole run-time reconfiguration (RTR) is an attempt to create a refined computing paradigm for high performance computational tasks. By combining concepts from field programmable gate array (FPGA) technologies with data flow computing, the Colt/Stallion architecture achieves high utilization of hardware resources, and facilitates rapid run-time reconfiguration. Targeted mainly at DSP-type operations, the Colt integrated circuit -- a prototype wormhole RTR device -- compares favorably to contemporary DSP alternatives in terms of silicon area consumed per unit computation and in computing performance. Although emphasis has been placed on signal processing applications, general purpose computation has not been overlooked. Colt is a prototype that defines an architecture not only at the chip level but also in terms of an overall system design. As this system is realized, the concept of wormhole RTR will be applied to numerical computation and DSP applications including those common to image processing, communications systems, digital filters, acoustic processing, real-time control systems and simulation acceleration.

  4. Development and Evaluation of an Expert System for Diagnosing Pest Damage of Red Pine

    Treesearch

    Daniel L Schmoldt; George L. Martin

    1989-01-01

    An expert system for diagnosing pest damage of red pine stands in Wisconsin, PREDICT, runs on IBM or compatible microcomputers and is designed to be useful for field foresters with no advanced training in forest pathology or entomology. PREDICT recognizes 28 damaging agents including species of mammals, insects, and pathogens, as well as two types of abiotic damage....

  5. Automated Report Generation for Research Data Repositories: From i2b2 to PDF.

    PubMed

    Thiemann, Volker S; Xu, Tingyan; Röhrig, Rainer; Majeed, Raphael W

    2017-01-01

    We developed an automated toolchain to generate reports of i2b2 data. It is based on free open-source software and runs on a Java application server. It is successfully used in an ED registry project. The solution is highly configurable and portable to other projects based on i2b2 or compatible factual data sources.

  6. A Registrar Administration System Requirements Analysis and Product Recommendation for Marine Corps University, Quantico, VA

    DTIC Science & Technology

    2011-09-01

    rate Python's maturity as "High." Python is nine years old and has been continuously developed and enhanced since then. During fiscal year 2010...We rate Python's developer toolkit availability/extensibility as "Yes." Python runs on a SQL database and is compatible with Oracle database...

  7. Service Test of the Airfield Specialized Trailer System

    DTIC Science & Technology

    1966-10-31

    universal trailer is a lightweight, air-transportable, four-wheel trailer. It is capable of transferring loads to compatible maintenance and storage...transverse beams). The suspension system is a specially designed, three-point system which protects loads from excessive wheel displacement when...lightweight steel and can accommodate hoist and lift facilities. Sockets are provided to permit attachment of several accessory kits (running gear caster

  8. Virtualization and cloud computing in dentistry.

    PubMed

    Chow, Frank; Muftu, Ali; Shorter, Richard

    2014-01-01

    The use of virtualization and cloud computing has changed the way we use computers. Virtualization is a method of placing software called a hypervisor on the hardware of a computer or a host operating system. It allows a guest operating system to run on top of the physical computer as a virtual machine (i.e., a virtual computer). Virtualization allows multiple virtual computers to run on top of one physical computer and to share its hardware resources, such as printers, scanners, and modems. This increases the efficient use of the computer by decreasing costs (e.g., hardware, electricity, administration, and management) since only one physical computer needs to be running. This virtualization platform is the basis for cloud computing, and it has expanded into areas of server and storage virtualization. One of the commonly used dental storage systems is cloud storage. Patient information is encrypted as required by the Health Insurance Portability and Accountability Act (HIPAA) and stored on off-site private cloud services for a monthly service fee. As computer costs continue to increase, so too will the need for more storage and processing power. Virtual and cloud computing will be a method for dentists to minimize costs and maximize computer efficiency in the near future. This article will provide some useful information on current uses of cloud computing.

  9. Framework for architecture-independent run-time reconfigurable applications

    NASA Astrophysics Data System (ADS)

    Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.

    2000-10-01

    Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.

  10. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo

    Here, computing systems for LHC experiments have developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and its recent evolutions can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of LHC experiments in Run-1 and Run-2 so far.

  11. Progress in Machine Learning Studies for the CMS Computing Infrastructure

    DOE PAGES

    Bonacorsi, Daniele; Kuznetsov, Valentin; Magini, Nicolo; ...

    2017-12-06

    Here, computing systems for LHC experiments have developed together with Grids worldwide. While a complete description of the original Grid-based infrastructure and services for LHC experiments and its recent evolutions can be found elsewhere, it is worth mentioning here the scale of the computing resources needed to fulfill the needs of LHC experiments in Run-1 and Run-2 so far.

  12. Multitasking the code ARC3D. [for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Barton, John T.; Hsiung, Christopher C.

    1986-01-01

    The CRAY multitasking system was developed in order to utilize all four processors and sharply reduce the wall clock run time. This paper describes the techniques used to modify the computational fluid dynamics code ARC3D for this run and analyzes the achieved speedup. The ARC3D code solves either the Euler or thin-layer N-S equations using an implicit approximate factorization scheme. Results indicate that multitask processing can be used to achieve wall clock speedup factors of over three times, depending on the nature of the program code being used. Multitasking appears to be particularly advantageous for large-memory problems running on multiple CPU computers.

  13. Study of the mapping of Navier-Stokes algorithms onto multiple-instruction/multiple-data-stream computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.; Stevens, K.

    1984-01-01

    Implicit approximate-factored algorithms have certain properties that are suitable for parallel processing. A particular computational fluid dynamics (CFD) code, using this algorithm, is mapped onto a multiple-instruction/multiple-data-stream (MIMD) computer architecture. An explanation of this mapping procedure is presented, as well as some of the difficulties encountered when trying to run the code concurrently. Timing results are given for runs on the Ames Research Center's MIMD test facility which consists of two VAX 11/780's with a common MA780 multi-ported memory. Speedups exceeding 1.9 for characteristic CFD runs were indicated by the timing results.

  14. Holograms of a dynamical top quark

    NASA Astrophysics Data System (ADS)

    Clemens, Will; Evans, Nick; Scott, Marc

    2017-09-01

    We present holographic descriptions of dynamical electroweak symmetry breaking models that incorporate the top mass generation mechanism. The models allow computation of the spectrum in the presence of large anomalous dimensions due to walking and strong Nambu-Jona-Lasinio interactions. Technicolor and QCD dynamics are described by the bottom-up Dynamic AdS/QCD model for arbitrary gauge groups and numbers of quark flavors. An assumption about the running of the anomalous dimension of the quark bilinear operator is input, and the model then predicts the spectrum and decay constants for the mesons. We add Nambu-Jona-Lasinio interactions responsible for flavor physics from extended technicolor, top-color, etc., using Witten's multitrace prescription. We show the key behaviors of a top condensation model can be reproduced. We study generation of the top mass in (walking) one doublet and one family technicolor models and with strong extended technicolor interactions. The models clearly reveal the tensions between the large top mass and precision data for δρ. The tunings needed to generate a model compatible with precision constraints are simply demonstrated.

  15. BlochSolver: A GPU-optimized fast 3D MRI simulator for experimentally compatible pulse sequences

    NASA Astrophysics Data System (ADS)

    Kose, Ryoichi; Kose, Katsumi

    2017-08-01

    A magnetic resonance imaging (MRI) simulator, which reproduces MRI experiments using computers, has been developed using two graphic-processor-unit (GPU) boards (GTX 1080). The MRI simulator was developed to run according to pulse sequences used in experiments. Experiments and simulations were performed to demonstrate the usefulness of the MRI simulator for three types of pulse sequences, namely, three-dimensional (3D) gradient-echo, 3D radio-frequency spoiled gradient-echo, and gradient-echo multislice with practical matrix sizes. The results demonstrated that the calculation speed using two GPU boards was typically about 7 TFLOPS and about 14 times faster than the calculation speed using CPUs (two 18-core Xeons). We also found that MR images acquired by experiment could be reproduced using an appropriate number of subvoxels, and that 3D isotropic and two-dimensional multislice imaging experiments for practical matrix sizes could be simulated using the MRI simulator. Therefore, we concluded that such powerful MRI simulators are expected to become an indispensable tool for MRI research and development.
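
    At the heart of such simulators is the repeated rotation and relaxation of magnetization vectors according to the Bloch equations. A minimal CPU sketch of one free-precession step follows; it is a generic illustration, not the GPU kernels described, and the tissue parameters are assumed.

        # One Bloch free-precession step: rotate about z, then relax (T1, T2).
        import math

        def bloch_step(m, dt, off_hz, t1, t2, m0=1.0):
            """Advance magnetization m = (mx, my, mz) by dt seconds."""
            mx, my, mz = m
            phi = 2.0 * math.pi * off_hz * dt          # precession angle
            mx, my = (mx * math.cos(phi) + my * math.sin(phi),
                      -mx * math.sin(phi) + my * math.cos(phi))
            e1, e2 = math.exp(-dt / t1), math.exp(-dt / t2)
            return (mx * e2, my * e2, mz * e1 + m0 * (1.0 - e1))

        m = (1.0, 0.0, 0.0)                            # after a 90-degree pulse
        for _ in range(1000):
            m = bloch_step(m, dt=1e-4, off_hz=50.0, t1=1.0, t2=0.08)
        print("after 100 ms:", tuple(round(x, 4) for x in m))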

  16. GELATIO: a general framework for modular digital analysis of high-purity Ge detector signals

    NASA Astrophysics Data System (ADS)

    Agostini, M.; Pandola, L.; Zavarise, P.; Volynets, O.

    2011-08-01

    GELATIO is a new software framework for advanced data analysis and digital signal processing developed for the GERDA neutrinoless double beta decay experiment. The framework is tailored to handle the full analysis flow of signals recorded by high purity Ge detectors and photo-multipliers from the veto counters. It is designed to support a multi-channel modular and flexible analysis, widely customizable by the user either via human-readable initialization files or via a graphical interface. The framework organizes the data into a multi-level structure, from the raw data up to the condensed analysis parameters, and includes tools and utilities to handle the data stream between the different levels. GELATIO is implemented in C++. It relies upon ROOT and its extension TAM, which provides compatibility with PROOF, enabling the software to run in parallel on clusters of computers or many-core machines. It was tested on different platforms and benchmarked in several GERDA-related applications. A stable version is presently available for the GERDA Collaboration and it is used to provide the reference analysis of the experiment data.

  17. BIDS apps: Improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods

    PubMed Central

    Auer, Tibor; Churchill, Nathan W.; Flandin, Guillaume; Guntupalli, J. Swaroop; Raffelt, David; Quirion, Pierre-Olivier; Smith, Robert E.; Strother, Stephen C.; Varoquaux, Gaël

    2017-01-01

    The rate of progress in human neurosciences is limited by the inability to easily apply a wide range of analysis methods to the plethora of different datasets acquired in labs around the world. In this work, we introduce a framework for creating, testing, versioning and archiving portable applications for analyzing neuroimaging data organized and described in compliance with the Brain Imaging Data Structure (BIDS). The portability of these applications (BIDS Apps) is achieved by using container technologies that encapsulate all binary and other dependencies in one convenient package. BIDS Apps run on all three major operating systems with no need for complex setup and configuration and thanks to the comprehensiveness of the BIDS standard they require little manual user input. Previous containerized data processing solutions were limited to single user environments and not compatible with most multi-tenant High Performance Computing systems. BIDS Apps overcome this limitation by taking advantage of the Singularity container technology. As a proof of concept, this work is accompanied by 22 ready to use BIDS Apps, packaging a diverse set of commonly used neuroimaging algorithms. PMID:28278228

  18. IgRepertoireConstructor: a novel algorithm for antibody repertoire construction and immunoproteogenomics analysis

    PubMed Central

    Safonova, Yana; Bonissone, Stefano; Kurpilyansky, Eugene; Starostina, Ekaterina; Lapidus, Alla; Stinson, Jeremy; DePalatis, Laura; Sandoval, Wendy; Lill, Jennie; Pevzner, Pavel A.

    2015-01-01

    The analysis of concentrations of circulating antibodies in serum (antibody repertoire) is a fundamental, yet poorly studied, problem in immunoinformatics. The two current approaches to the analysis of antibody repertoires [next generation sequencing (NGS) and mass spectrometry (MS)] present difficult computational challenges since antibodies are not directly encoded in the germline but are extensively diversified by somatic recombination and hypermutations. Therefore, the protein database required for the interpretation of spectra from circulating antibodies is custom for each individual. Although such a database can be constructed via NGS, the reads generated by NGS are error-prone and even a single nucleotide error precludes identification of a peptide by the standard proteomics tools. Here, we present the IgRepertoireConstructor algorithm that performs error-correction of immunosequencing reads and uses mass spectra to validate the constructed antibody repertoires. Availability and implementation: IgRepertoireConstructor is open source and freely available as a C++ and Python program running on all Unix-compatible platforms. The source code is available from http://bioinf.spbau.ru/igtools. Contact: ppevzner@ucsd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26072509

  19. GPU-accelerated Tersoff potentials for massively parallel Molecular Dynamics simulations

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Dac

    2017-03-01

    The Tersoff potential is one of the empirical many-body potentials that has been widely used in simulation studies at atomic scales. Unlike pair-wise potentials, the Tersoff potential involves three-body terms, which require much more arithmetic operations and data dependency. In this contribution, we have implemented the GPU-accelerated version of several variants of the Tersoff potential for LAMMPS, an open-source massively parallel Molecular Dynamics code. Compared to the existing MPI implementation in LAMMPS, the GPU implementation exhibits a better scalability and offers a speedup of 2.2X when run on 1000 compute nodes on the Titan supercomputer. On a single node, the speedup ranges from 2.0 to 8.0 times, depending on the number of atoms per GPU and hardware configurations. The most notable features of our GPU-accelerated version include its design for MPI/accelerator heterogeneous parallelism, its compatibility with other functionalities in LAMMPS, its ability to give deterministic results and to support both NVIDIA CUDA- and OpenCL-enabled accelerators. Our implementation is now part of the GPU package in LAMMPS and accessible for public use.

  20. Networking the Global Maritime Partnership

    DTIC Science & Technology

    2008-06-01

    how do the navies of disparate nations that desire to operate together at sea obtain the requisite, compatible C4ISR (command, control, communications, computers, intelligence, surveillance, and reconnaissance) systems that will enable them to truly...partnership. Coalition Naval Operations: Maritime coalitions have existed for two and one-half millennia and navies have communicated at sea for

  1. A Method for Transferring Photoelectric Photometry Data from Apple II+ to IBM PC

    NASA Astrophysics Data System (ADS)

    Powell, Harry D.; Miller, James R.; Stephenson, Kipp

    1989-06-01

    A method is presented for transferring photoelectric photometry data files from an Apple II computer to an IBM PC computer in a form which is compatible with the AAVSO Photoelectric Photometry data collection process.

  2. Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing

    PubMed Central

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data. PMID:22163811

  3. Design and development of a run-time monitor for multi-core architectures in cloud computing.

    PubMed

    Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon

    2011-01-01

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter, optimizing its computing configuration based on the analyzed data.

  4. EnergyPlus Run Time Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run-time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the largest amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and on adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.

  5. Compressed quantum computation using a remote five-qubit quantum computer

    NASA Astrophysics Data System (ADS)

    Hebenstreit, M.; Alsina, D.; Latorre, J. I.; Kraus, B.

    2017-05-01

    The notion of compressed quantum computation is employed to simulate the Ising interaction of a one-dimensional chain consisting of n qubits using the universal IBM cloud quantum computer running on log2(n) qubits. The external field parameter that controls the quantum phase transition of this model translates into particular settings of the quantum gates that generate the circuit. We measure the magnetization, which displays the quantum phase transition, on a two-qubit system, which simulates a four-qubit Ising chain, and show its agreement with the theoretical prediction within a certain error. We also discuss the relevant point of how to assess errors when using a cloud quantum computer with a limited number of runs. As a solution, we propose to use validating circuits, that is, to run independent controlled quantum circuits of similar complexity to the circuit of interest.

  6. Running of the spectral index in deformed matter bounce scenarios with Hubble-rate-dependent dark energy

    NASA Astrophysics Data System (ADS)

    Arab, M.; Khodam-Mohammadi, A.

    2018-03-01

    As a deformed matter bounce scenario with a dark energy component, we propose a deformation with the running vacuum model (RVM), in which the dark energy density ρ_Λ is written as a power series of H^2 and Ḣ with a constant equation-of-state parameter, the same as the cosmological constant, w = -1. Our analytical and numerical results show that in some cases, as in the ΛCDM bounce scenario, although the spectral index may achieve good consistency with observations, a positive value of the running of the spectral index (α_s) is obtained, which is not compatible with the inflationary paradigm, where a small negative value for α_s is predicted. However, by extending the power series up to H^4, ρ_Λ = n_0 + n_2 H^2 + n_4 H^4, and estimating a set of consistent parameters, we obtain the spectral index n_s, a small negative running α_s, and a tensor-to-scalar ratio r, revealing a degeneracy between the deformed matter bounce scenario with RVM-DE and inflationary cosmology.
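
    For reference, the garbled inline notation above corresponds to the following expressions, reconstructed in LaTeX; the definition of the running α_s is the standard convention, supplied here for context rather than taken from the abstract:

        % RVM dark-energy density, extended to fourth order in H:
        \[ \rho_\Lambda(H) = n_0 + n_2 H^2 + n_4 H^4, \qquad w = -1 . \]
        % Spectral index n_s and its running (standard definition):
        \[ \alpha_s \equiv \frac{d n_s}{d \ln k} . \]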

  7. Interface Provides Standard-Bus Communication

    NASA Technical Reports Server (NTRS)

    Culliton, William G.

    1995-01-01

    Microprocessor-controlled interface (IEEE-488/LVABI) incorporates service-request and direct-memory-access features. Is circuit card enabling digital communication between system called "laser auto-covariance buffer interface" (LVABI) and compatible personal computer via general-purpose interface bus (GPIB) conforming to Institute of Electrical and Electronics Engineers (IEEE) Standard 488. Interface serves as second interface enabling first interface to exploit advantages of GPIB, via utility software written specifically for GPIB. Advantages include compatibility with multitasking and support of communication among multiple computers. Basic concept also applied in designing interfaces for circuits other than LVABI for unidirectional or bidirectional handling of parallel data up to 16 bits wide.

  8. Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP

    NASA Technical Reports Server (NTRS)

    Long, Lyle N.; Brentner, Kenneth S.

    2000-01-01

    This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
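
    A minimal self-scheduling sketch (Python with numpy, chosen here only for brevity; the paper's codes are Fortran-based, so this merely illustrates the idea): a pool of workers dynamically pulls the next serial job, here one dense linear solve per hypothetical angle of attack, so fast jobs never hold up slow ones.

        # Self-scheduling many independent serial jobs across a worker pool.
        import numpy as np
        from multiprocessing import Pool

        def solve_case(seed):
            """One serial job: build and solve a small dense linear system."""
            rng = np.random.default_rng(seed)
            A = rng.standard_normal((200, 200))
            b = rng.standard_normal(200)
            return seed, np.linalg.solve(A, b)[0]

        if __name__ == "__main__":
            cases = range(64)  # e.g., 64 hypothetical angles of attack
            with Pool(processes=4) as pool:
                # imap_unordered hands out jobs as workers free up
                # (self-scheduling), rather than in fixed blocks.
                for seed, x0 in pool.imap_unordered(solve_case, cases):
                    print(f"case {seed}: x[0] = {x0:.6f}")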

  9. DEC/SOL: solution of dense systems of linear algebraic equations. [In Fortran IV and compass for CDC 7600; DECST and SOLST for CDC STAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hindmarsh, A.C.; Sloan, L.J.; Dubois, P.F.

    1978-12-01

    This report supersedes the original version, dated June 1976. It describes four versions of a pair of subroutines for solving N x N systems of linear algebraic equations. In each case, the first routine, DEC, performs an LU decomposition of the matrix with partial pivoting, and the second, SOL, computes the solution vector by back-substitution. The first version is in Fortran IV, and is derived from routines DECOMP and SOLVE written by C.B. Moler. The second is a version for the CDC 7600 computer using STACKLIB. The third is a hand-coded (Compass) version for the 7600. The fourth is a vectorized version for the CDC STAR, renamed DECST and SOLST. Comparative tests on these routines are also described. The Compass version is faster than the others on the 7600 by factors of up to 5. The major revisions to the original report, and to the subroutines described, are an updated description of the availability of each version of DEC/SOL; correction of some errors in the Compass version, as altered so as to be compatible with FTN; and a new STAR version, which runs much faster than the earlier one. The standard Fortran version, the Fortran/STACKLIB version, and the object code generated from the Compass version and available in STACKLIB have not been changed.
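
    The DEC/SOL pair follows the classic factor-then-substitute pattern; a compact numpy sketch of that algorithm (an illustration only, not the Fortran or Compass routines themselves):

        import numpy as np

        def dec(A):
            """DEC analog: LU decomposition with partial pivoting."""
            A = A.astype(float).copy()
            n = A.shape[0]
            piv = np.arange(n)
            for k in range(n - 1):
                p = k + np.argmax(np.abs(A[k:, k]))   # pivot row
                if p != k:
                    A[[k, p]] = A[[p, k]]
                    piv[[k, p]] = piv[[p, k]]
                A[k+1:, k] /= A[k, k]                 # store L multipliers
                A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
            return A, piv

        def sol(LU, piv, b):
            """SOL analog: solve LUx = Pb by forward/backward substitution."""
            n = LU.shape[0]
            x = b[piv].astype(float)
            for i in range(1, n):                     # forward: L y = P b
                x[i] -= LU[i, :i] @ x[:i]
            for i in range(n - 1, -1, -1):            # backward: U x = y
                x[i] = (x[i] - LU[i, i+1:] @ x[i+1:]) / LU[i, i]
            return x

        A = np.array([[4., 3.], [6., 3.]])
        b = np.array([10., 12.])
        LU, piv = dec(A)
        print(sol(LU, piv, b))  # [1. 2.], matching np.linalg.solve(A, b)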

  10. The STARLINK software collection

    NASA Astrophysics Data System (ADS)

    Penny, A. J.; Wallace, P. T.; Sherman, J. C.; Terret, D. L.

    1993-12-01

    A demonstration will be given of some recent Starlink software. STARLINK is: a network of computers used by UK astronomers; a collection of programs for the calibration and analysis of astronomical data; a team of people giving hardware, software and administrative support. The Starlink Project has been in operation since 1980 to provide UK astronomers with interactive image processing and data reduction facilities. There are now Starlink computer systems at 25 UK locations, serving about 1500 registered users. The Starlink software collection now has about 25 major packages covering a wide range of astronomical data reduction and analysis techniques, as well as many smaller programs and utilities. At the core of most of the packages is a common `software environment', which provides many of the functions which applications need and offers standardized methods of structuring and accessing data. The software environment simplifies programming and support, and makes it easy to use different packages for different stages of the data reduction. Users see a consistent style, and can mix applications without hitting problems of differing data formats. The Project group coordinates the writing and distribution of this software collection, which is Unix based. Outside the UK, Starlink is used at a large number of places, which range from installations at major UK telescopes, which are Starlink-compatible and managed like Starlink sites, to individuals who run only small parts of the Starlink software collection.

  11. Concentrations of indoor pollutants database: User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-05-01

    This manual describes the computer-based database on indoor air pollutants. This comprehensive database helps utility personnel perform rapid searches of literature related to indoor air pollutants. Besides general information, it provides guidance for finding specific information on concentrations of indoor air pollutants. The manual includes information on installing and using the database as well as a tutorial to assist the user in becoming familiar with the procedures involved in doing bibliographic and summary section searches. The manual demonstrates how to search for information by going through a series of questions that provide search parameters such as pollutant type, year, building type, keywords (from a specific list), country, geographic region, author's last name, and title. As more and more parameters are specified, the list of references found in the data search becomes smaller and more specific to the user's needs. Appendixes list types of information that can be input into the database when making a request. The CIP database allows individual utilities to obtain information on indoor air quality based on building types and other factors in their own service territory. This information is useful for utilities with concerns about indoor air quality and the control of indoor air pollutants. The CIP database itself is distributed by the Electric Power Software Center and runs on IBM PC-compatible computers.

  13. Xyce Parallel Electronic Simulator Users' Guide Version 6.8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keiter, Eric R.; Aadithya, Karthik Venkatraman; Mei, Ting

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase: a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms, including serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  14. Some observations on uranium carbide alloy/tungsten compatibility.

    NASA Technical Reports Server (NTRS)

    Phillips, W. M.

    1972-01-01

    Results are reported of chemical compatibility tests in which both pure tungsten and thoriated tungsten were run at 1800 C for up to 3300 hours with uranium carbide alloys. Alloying with zirconium carbide appeared to widen the homogeneity range of uranium carbide, making additional carbon available for reaction with the tungsten. Reaction layers were formed both by vapor-phase reaction and by physical contact, producing either or both UWC2 and W2C, depending upon the phases present in the starting fuel alloy. Formation of UWC2 results in slow growth of the reaction layer with time, while W2C reaction layers grow rapidly, allowing equilibrium to be reached in less than 2500 hours at 1800 C. Neither the presence of a thermal gradient nor the presence of thoria in the tungsten clad affects the reactions observed.

  15. Magnesium fluoride as energy storage medium for spacecraft solar thermal power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lurio, C.A.

    1992-10-01

    MgF2 was investigated as a phase-change energy-storage material for LEO power systems using solar heat to run thermal cycles. It provides a high heat of fusion per unit mass at a high melting point (1536 K). Theoretical evaluation showed the basic chemical compatibility of liquid MgF2 with refractory metals at 1600 K, though transient high pressures of H2 can occur in a closed container due to reaction with residual moisture. The compatibility was tested in two refractory metal containers for over 2000 h. Some showed no deterioration, while there was evidence that the fluoride reacted with hafnium in others. Corollary tests showed that the MgF2 supercooled by 10-30 K and 50-90 K. 24 refs.

  16. Opening the Duke electronic health record to apps: Implementing SMART on FHIR.

    PubMed

    Bloomfield, Richard A; Polo-Wood, Felipe; Mandel, Joshua C; Mandl, Kenneth D

    2017-03-01

    Recognizing a need for our EHR to be highly interoperable, our team at Duke Health enabled our Epic-based electronic health record to be compatible with the Boston Children's project called Substitutable Medical Apps and Reusable Technologies (SMART), which employed Health Level Seven International's (HL7) Fast Healthcare Interoperability Resources (FHIR), commonly known as SMART on FHIR. We created a custom SMART on FHIR-compatible server infrastructure written in Node.js that served two primary functions. First, it handled API management activities such as rate-limiting, authorization, auditing, logging, and analytics. Second, it retrieved the EHR data and made it available in a FHIR-compatible format. Finally, we made the required changes to the EHR user interface to allow us to integrate several compatible apps into the provider- and patient-facing EHR workflows. After integrating SMART on FHIR into our Epic-based EHR, we demonstrated several types of apps running on the infrastructure. These included provider- and patient-facing apps as well as closed-source, open-source, and internally developed apps. We integrated the apps into the testing environment of our desktop EHR as well as our patient portal. We also demonstrated the integration of a native iOS app. In this paper, we demonstrate the successful implementation of the SMART and FHIR technologies on our Epic-based EHR and the subsequent integration of several compatible provider- and patient-facing apps. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
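
    For flavor, a minimal sketch of the kind of request such an infrastructure serves, assuming Python's `requests` library; the base URL, token, and patient id are hypothetical placeholders, not Duke's actual endpoints:

        import requests

        FHIR_BASE = "https://fhir.example.org"  # hypothetical FHIR server
        TOKEN = "eyJ..."  # OAuth2 bearer token obtained via the SMART launch

        resp = requests.get(
            f"{FHIR_BASE}/Patient/123",
            headers={
                "Authorization": f"Bearer {TOKEN}",
                "Accept": "application/fhir+json",  # FHIR's JSON media type
            },
            timeout=10,
        )
        resp.raise_for_status()
        patient = resp.json()
        print(patient.get("resourceType"), patient.get("id"))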

  17. Reviews, Software.

    ERIC Educational Resources Information Center

    Science Teacher, 1988

    1988-01-01

    Reviews two computer software packages for use in physical science, physics, and chemistry classes. Includes "Physics of Model Rocketry" for Apple II, and "Black Box" for Apple II and IBM compatible computers. "Black Box" is designed to help students understand the concept of indirect evidence. (CW)

  18. 47 CFR 1.734 - Specifications as to pleadings, briefs, and other documents; subscription.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... submitted both as hard copies and on computer disk formatted to be compatible with the Commission's computer... copies of tariffs or reports with their hard copies need not include such tariffs or reports on the disk...

  19. A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.

    2017-12-01

    Cloud-based infrastructure may offer several key benefits of scalability, built-in redundancy, security mechanisms, and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided through all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), and they can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
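
    The "API compatible" idea means code written against h5py can target the service by swapping the import; a sketch along those lines, with a hypothetical domain path and endpoint (exact h5pyd arguments may differ by version):

        import h5pyd as h5py  # drop-in for `import h5py`

        f = h5py.File("/shared/sample/data.h5", "r",
                      endpoint="http://hsds.example.org:5101")
        dset = f["temperature"]   # datasets address just like local HDF5
        print(dset.shape, dset.dtype)
        print(dset[0, :10])       # server-side selection; only this slice moves
        f.close()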

  20. Counterfactual quantum computation through quantum interrogation

    NASA Astrophysics Data System (ADS)

    Hosten, Onur; Rakher, Matthew T.; Barreiro, Julio T.; Peters, Nicholas A.; Kwiat, Paul G.

    2006-02-01

    The logic underlying the coherent nature of quantum information processing often deviates from intuitive reasoning, leading to surprising effects. Counterfactual computation constitutes a striking example: the potential outcome of a quantum computation can be inferred, even if the computer is not run. Relying on similar arguments to interaction-free measurements (or quantum interrogation), counterfactual computation is accomplished by putting the computer in a superposition of `running' and `not running' states, and then interfering the two histories. Conditional on the as-yet-unknown outcome of the computation, it is sometimes possible to counterfactually infer information about the solution. Here we demonstrate counterfactual computation, implementing Grover's search algorithm with an all-optical approach. It was believed that the overall probability of such counterfactual inference is intrinsically limited, so that it could not perform better on average than random guesses. However, using a novel `chained' version of the quantum Zeno effect, we show how to boost the counterfactual inference probability to unity, thereby beating the random guessing limit. Our methods are general and apply to any physical system, as illustrated by a discussion of trapped-ion systems. Finally, we briefly show that, in certain circumstances, counterfactual computation can eliminate errors induced by decoherence.

  1. 4273π: Bioinformatics education on low cost ARM hardware

    PubMed Central

    2013-01-01

    Background: Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results: We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions: 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194

  2. 4273π: bioinformatics education on low cost ARM hardware.

    PubMed

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  3. PinaColada: peptide-inhibitor ant colony ad-hoc design algorithm.

    PubMed

    Zaidman, Daniel; Wolfson, Haim J

    2016-08-01

    Design of protein-protein interaction (PPI) inhibitors is a major challenge in Structural Bioinformatics. Peptides, especially short ones (5-15 amino acids long), are natural candidates for inhibition of protein-protein complexes due to several attractive features such as high structural compatibility with the protein binding site (mimicking the surface of one of the proteins), small size, and the ability to form strong hotspot binding connections with the protein surface. Efficient rational peptide design is still a major challenge in computer-aided drug design, due to the huge space of possible sequences, which is exponential in the length of the peptide, and the high flexibility of peptide conformations. In this article we present PinaColada, a novel computational method for the design of peptide inhibitors for protein-protein interactions. We employ a version of the ant colony optimization heuristic, which is used to explore the exponential space (20^n) of length-n peptide sequences, in combination with our fast, robotics-motivated PepCrawler algorithm, which explores the conformational space for each candidate sequence. PinaColada is run in parallel, on a DELL PowerEdge 2.8 GHz computer with 20 cores and 256 GB memory, and takes up to 24 h to design a peptide of 5-15 amino acids length. An online server is available at: http://bioinfo3d.cs.tau.ac.il/PinaColada/. danielza@post.tau.ac.il; wolfson@tau.ac.il. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
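
    A toy sketch of ant-colony search over peptide sequences (plain Python; the scoring function is a stand-in for PinaColada's structural evaluation via PepCrawler, and every constant here is invented):

        import random

        AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino acids
        LENGTH, ANTS, ITERS, RHO = 8, 30, 50, 0.1

        def score(seq):
            """Placeholder objective: reward a made-up target motif."""
            return sum(1.0 for a, b in zip(seq, "DEKRLFWY") if a == b)

        # One pheromone table per position over the amino-acid alphabet.
        pher = [{a: 1.0 for a in AA} for _ in range(LENGTH)]
        best, best_s = None, float("-inf")
        for _ in range(ITERS):
            for _ in range(ANTS):
                seq = "".join(
                    random.choices(AA, weights=[pher[i][a] for a in AA])[0]
                    for i in range(LENGTH))
                s = score(seq)
                if s > best_s:
                    best, best_s = seq, s
            for i in range(LENGTH):  # evaporate, then reinforce the best
                for a in AA:
                    pher[i][a] *= 1.0 - RHO
                pher[i][best[i]] += best_s
        print(best, best_s)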

  4. Statistical fingerprinting for malware detection and classification

    DOEpatents

    Prowell, Stacy J.; Rathgeb, Christopher T.

    2015-09-15

    A system detects malware in a computing architecture with an unknown pedigree. The system includes a first computing device having a known pedigree and operating free of malware. The first computing device executes a series of instrumented functions that, when executed, provide a statistical baseline representative of the time it takes a known software application to run on a computing device having a known pedigree. A second computing device executes a second series of instrumented functions that, when executed, provides an actual time representative of the time the known software application takes to run on the second computing device. The system detects malware when there is a difference in execution times between the first and the second computing devices.
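
    A bare-bones sketch of the timing-baseline idea in Python (the workload and threshold are invented; the patent's instrumented functions and statistics are more involved):

        import statistics
        import time

        def instrumented_workload():
            """Stand-in for one instrumented function of the application."""
            return sum(i * i for i in range(200_000))

        def sample_times(runs=30):
            out = []
            for _ in range(runs):
                t0 = time.perf_counter()
                instrumented_workload()
                out.append(time.perf_counter() - t0)
            return out

        baseline = sample_times()  # gathered on the known-pedigree device
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline)

        suspect = sample_times()   # gathered on the device under test
        z = (statistics.mean(suspect) - mu) / sigma
        print("possible interference" if abs(z) > 3 else
              "consistent with baseline")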

  5. Design and calibration of a vacuum compatible scanning tunneling microscope

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.

    1990-01-01

    A vacuum compatible scanning tunneling microscope was designed and built, capable of imaging solid surfaces with atomic resolution. The single piezoelectric tube design is compact, and makes use of sample mounting stubs standard to a commercially available surface analysis system. Image collection and display is computer controlled, allowing storage of images for further analysis. Calibration results from atomic scale images are presented.

  6. International Symposium on Electromagnetic Compatibility, Wakefield, MA, August 20-22, 1985, Record

    NASA Astrophysics Data System (ADS)

    Various papers on electromagnetic compatibility are presented. The general topics addressed include: EMI transient/impulsive disturbances, electromagnetic shielding, antennas and propagation, measurement technology, anechoic chamber/open site measurements, communications systems, electrostatic discharge, and cables/transmission lines. Also considered are: electromagnetic environments, antennas, electromagnetic pulse, nonlinear effects, computer/data transmission systems, EMI standards and requirements, enclosures/TEM cells, systems EMC, and test site measurements.

  7. Computing Mass Properties From AutoCAD

    NASA Technical Reports Server (NTRS)

    Jones, A.

    1990-01-01

    Mass properties of structures computed from data in drawings. AutoCAD to Mass Properties (ACTOMP) computer program developed to facilitate quick calculations of mass properties of structures containing many simple elements in such complex configurations as trusses or sheet-metal containers. Mathematically modeled in AutoCAD or compatible computer-aided design (CAD) system in minutes by use of three-dimensional elements. Written in Microsoft Quick-Basic (Version 2.0).

  8. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, has been developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline, we split the input files into chunks which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output file; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total wall time through automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid can reduce payload execution time for mammoth DNA samples from weeks to days.
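
    The split/merge pattern described above is easy to sketch; the following plain-Python illustration (chunk size and file names are invented, and the PALEOMIX step and PanDA submission are omitted) shows the shape of it:

        from pathlib import Path

        CHUNK_LINES = 4  # tiny for this demo; real chunks hold millions of reads

        def split(path, out_dir):
            """Split a large input into line-aligned chunks, one per job."""
            out_dir.mkdir(exist_ok=True)
            chunks, buf, n = [], [], 0
            with open(path) as f:
                for line in f:
                    buf.append(line)
                    if len(buf) == CHUNK_LINES:
                        chunks.append(_flush(out_dir, n, buf))
                        buf, n = [], n + 1
            if buf:
                chunks.append(_flush(out_dir, n, buf))
            return chunks

        def _flush(out_dir, n, buf):
            p = out_dir / f"chunk_{n:04d}.txt"
            p.write_text("".join(buf))
            return p

        def merge(outputs, merged):
            """Concatenate per-chunk results, in order, into one file."""
            with open(merged, "w") as out:
                for p in sorted(outputs):
                    out.write(Path(p).read_text())

        demo = Path("demo_input.txt")
        demo.write_text("".join(f"record {i}\n" for i in range(10)))
        parts = split(demo, Path("chunks"))  # each part would be one PanDA job
        merge(parts, "merged_output.txt")    # pipeline step omitted here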

  9. Magnetic resonance imaging-compatible circular mapping catheter: an in vivo feasibility and safety study.

    PubMed

    Elbes, Delphine; Magat, Julie; Govari, Assaf; Ephrath, Yaron; Vieillot, Delphine; Beeckler, Christopher; Weerasooriya, Rukshen; Jais, Pierre; Quesson, Bruno

    2017-03-01

    Interventional cardiac catheter mapping is routinely guided by X-ray fluoroscopy, although radiation exposure remains a significant concern. Feasibility of catheter ablation for common flutter has recently been demonstrated under magnetic resonance imaging (MRI) guidance. The benefit of catheter ablation under MRI could be significant for complex arrhythmias such as atrial fibrillation (AF), but MRI-compatible multi-electrode catheters such as Lasso have not yet been developed. This study aimed at demonstrating the feasibility and safety of using a multi-electrode catheter [magnetic resonance (MR)-compatible Lasso] during MRI for cardiac mapping. We also aimed at measuring the level of interference between MR and electrophysiological (EP) systems. Experiments were performed in vivo in sheep (N = 5) using a multi-electrode, circular, steerable, MR-compatible diagnostic catheter. The most common MRI sequences (1.5T) relevant for cardiac examination were run with the catheter positioned in the right atrium. High-quality electrograms were recorded while imaging with a maximal signal-to-noise ratio (peak-to-peak signal amplitude/peak-to-peak noise amplitude) ranging from 5.8 to 165. Importantly, MRI image quality was unchanged. Artefacts induced by MRI sequences during mapping were demonstrated to be compatible with clinical use. Phantom data demonstrated that this 10-pole circular catheter can be used safely with a maximum of 4°C increase in temperature. This new MR-compatible 10-pole catheter appears to be safe and effective. Combining MR and multipolar EP in a single session offers the possibility to correlate substrate information (scar, fibrosis) and EP mapping as well as online monitoring of lesion formation and electrical endpoint. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For permissions please email: journals.permissions@oup.com.

  10. BITNET: Past, Present, and Future.

    ERIC Educational Resources Information Center

    Oberst, Daniel J.; Smith, Sheldon B.

    1986-01-01

    Discusses history and development of the academic computer network BITNET, including BITNET Network Support Center's growth and services, and international expansion. Network users, reasons for growth, and future developments are reviewed. A BITNET applications sampler and listings of compatible computers and operating systems, sites, and…

  11. Running R Statistical Computing Environment Software on the Peregrine

    Science.gov Websites

    … for the development of new statistical methodologies and enjoys a large user base. Please consult the distribution details. … "Natural language support but running in an English locale." … "R is a collaborative project …" … programming paradigms to better leverage modern HPC systems. The CRAN task view for High Performance Computing …

  12. Operations analysis (study 2.1). Program listing for the LOVES computer code

    NASA Technical Reports Server (NTRS)

    Wray, S. T., Jr.

    1974-01-01

    A listing of the LOVES computer program is presented. The program is coded partially in SIMSCRIPT and FORTRAN. This version of LOVES is compatible with both the CDC 7600 and the UNIVAC 1108 computers. The code has been compiled, loaded, and executed successfully on the EXEC 8 system for the UNIVAC 1108.

  13. Theory and Programs for Dynamic Modeling of Tree Rings from Climate

    Treesearch

    Paul C. van Deusen; Jennifer Koretz

    1988-01-01

    Computer programs written in GAUSS(TM) for IBM-compatible personal computers are described that perform dynamic tree-ring modeling with climate data; the underlying theory is also described. The programs and a separate user's manual are available from the authors, although users must have the GAUSS software package on their personal computer. An example application of...

  14. NFDRSPC: The National Fire-Danger Rating System on a Personal Computer

    Treesearch

    Bryan G. Donaldson; James T. Paul

    1990-01-01

    This user's guide is an introductory manual for using the 1988 version (Burgan 1988) of the National Fire-Danger Rating System on an IBM PC or compatible computer. NFDRSPC is a window-oriented, interactive computer program that processes observed and forecast weather with fuels data to produce NFDRS indices. Other program features include user-designed display...

  15. Software for computing plant biomass—BIOPAK users guide.

    Treesearch

    Joseph E. Means; Heather A. Hansen; Greg J. Koerper; Paul B Alaback; Mark W. Klopsch

    1994-01-01

    BIOPAK is a menu-driven package of computer programs for IBM-compatible personal computers that calculates the biomass, area, height, length, or volume of plant components (leaves, branches, stem, crown, and roots). The routines were written in FoxPro, Fortran, and C. BIOPAK was created to facilitate linking of a diverse array of vegetation datasets with the...

  16. High Resolution Nature Runs and the Big Data Challenge

    NASA Technical Reports Server (NTRS)

    Webster, W. Phillip; Duffy, Daniel Q.

    2015-01-01

    NASA's Global Modeling and Assimilation Office at Goddard Space Flight Center is undertaking a series of very computationally intensive Nature Runs and a downscaled reanalysis. The Nature Runs use GEOS-5 as an Atmospheric General Circulation Model (AGCM), while the reanalysis uses GEOS-5 in data assimilation mode. This paper will present computational challenges from three runs, two of which are AGCM runs and one a downscaled reanalysis using the full DAS. The Nature Runs will be completed at two surface grid resolutions, 7 and 3 kilometers, and 72 vertical levels. The 7 km run spanned 2 years (2005-2006) and produced 4 PB of data, while the 3 km run will span one year and generate 4 PB of data. The downscaled reanalysis (MERRA-2, the Modern-Era Retrospective analysis for Research and Applications) will cover 15 years and generate 1 PB of data. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS), a specialization of the concept of business process-as-a-service that is an evolving extension of IaaS, PaaS, and SaaS enabled by cloud computing. In this presentation, we will describe two projects that demonstrate this shift. MERRA Analytic Services (MERRA/AS) is an example of cloud-enabled CAaaS. MERRA/AS enables MapReduce analytics over the MERRA reanalysis data collection by bringing together high-performance computing, scalable data management, and a domain-specific climate data services API. NASA's High-Performance Science Cloud (HPSC) is an example of the type of compute-storage fabric required to support CAaaS. The HPSC comprises a high-speed InfiniBand network, high-performance file systems and object storage, and virtual system environments specific to data-intensive science applications. These technologies are providing a new tier in the data and analytic services stack that helps connect earthbound, enterprise-level data and computational resources to new customers and new mobility-driven applications and modes of work. In our experience, CAaaS lowers the barriers and risk to organizational change, fosters innovation and experimentation, and provides the agility required to meet our customers' increasing and changing needs.

  17. Improved programs for DNA and protein sequence analysis on the IBM personal computer and other standard computer systems.

    PubMed Central

    Mount, D W; Conrad, B

    1986-01-01

    We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780

  18. Why not make a PC cluster of your own? 5. AppleSeed: A Parallel Macintosh Cluster for Scientific Computing

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor K.; Dauger, Dean E.

    We have constructed a parallel cluster consisting of Apple Macintosh G4 computers running both Classic Mac OS and the Unix-based Mac OS X, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. Unlike other Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  19. TRL - A FORMAL TEST REPRESENTATION LANGUAGE AND TOOL FOR FUNCTIONAL TEST DESIGNS

    NASA Technical Reports Server (NTRS)

    Hops, J. M.

    1994-01-01

    A Formal Test Representation Language and Tool for Functional Test Designs (TRL) is an automatic tool and a formal language used to implement the Category-Partition Method and produce the specification of test cases in the testing phase of software development. The Category-Partition Method is particularly useful in defining the inputs, outputs, and purpose of the test design phase, and combines the benefits of choosing normal cases with error-exposing properties. Traceability can be maintained quite easily by creating a test design for each objective in the test plan. The effort to transform the test cases into procedures is simplified by using an automatic tool to create the cases based on the test design. The method allows the rapid elimination of undesired test cases from consideration, and easy review of test designs by peer groups. The first step in the Category-Partition Method is functional decomposition, in which the specification and/or requirements are decomposed into functional units that can be tested independently. A secondary purpose of this step is to identify the parameters that affect the behavior of the system for each functional unit. The second step, category analysis, carries the work done in the previous step further by determining the properties or sub-properties of the parameters that would make the system behave in different ways. The designer should analyze the requirements to determine the features or categories of each parameter and how the system may behave if the category were to vary its value. If the parameter undergoing refinement is a data item, then categories of this data item may be any of its attributes, such as type, size, value, units, frequency of change, or source. After all the categories for the parameters of the functional unit have been determined, the next step is to partition each category's range space into mutually exclusive values that the category can assume. In choosing partition values, all possible kinds of values should be included, especially the ones that will maximize error detection. The purpose of the final step, partition constraint analysis, is to refine the test design specification so that only the technically effective and economically feasible test cases are implied. TRL is written in the C language to be machine-independent. It has been successfully implemented on an IBM PC compatible running MS-DOS, a Sun4 series computer running SunOS, an HP 9000/700 series workstation running HP-UX, a DECstation running DEC RISC ULTRIX, and a DEC VAX series computer running VMS. TRL requires 1 MB of disk space and a minimum of 84K of RAM. The documentation is available in electronic form in WordPerfect format. The standard distribution medium for TRL is a 5.25 inch 360K MS-DOS format diskette. Alternate distribution media and formats are available upon request. TRL was developed in 1993 and is a copyrighted work with all copyright vested in NASA.
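
    The heart of the Category-Partition Method is mechanical enough to sketch: take the cross product of each parameter's partition choices, then prune frames that violate constraints. A small Python illustration with invented categories:

        from itertools import product

        categories = {
            "file_size":   ["empty", "small", "huge"],
            "permissions": ["readable", "unreadable"],
            "format":      ["valid", "corrupt"],
        }

        def feasible(frame):
            # Example constraint: an empty file cannot be corrupt.
            return not (frame["file_size"] == "empty"
                        and frame["format"] == "corrupt")

        names = list(categories)
        frames = [dict(zip(names, combo))
                  for combo in product(*categories.values())]
        tests = [f for f in frames if feasible(f)]
        print(len(frames), "raw frames ->", len(tests), "test frames")
        for t in tests[:3]:
            print(t)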

  20. Determination of Turboprop Reduction Gearbox System Fatigue Life and Reliability

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.; Lewicki, David G.; Savage, Michael; Vlcek, Brian L.

    2007-01-01

    Two computational models to determine the fatigue life and reliability of a commercial turboprop gearbox are compared with each other and with field data. These models are (1) Monte Carlo simulation of randomly selected lives of the individual bearings and gears comprising the system and (2) a two-parameter Weibull distribution function for the bearings and gears comprising the system, using strict-series system reliability to combine the calculated individual component lives in the gearbox. The Monte Carlo simulation included the virtual testing of 744,450 gearboxes. Two sets of field data were obtained from 64 gearboxes that were first run to removal for cause, were refurbished and placed back in service, and then were second run until removal for cause. A series of equations was empirically developed from the Monte Carlo simulation to determine the statistical variation in predicted life and Weibull slope as a function of the number of gearboxes failed. The resultant L10 life from the field data was 5,627 hr. From strict-series system reliability, the predicted L10 life was 774 hr. From the Monte Carlo simulation, the median value for the L10 gearbox lives equaled 757 hr: half of the gearbox L10 lives will be less than this value and the other half more. The resultant L10 life of the second-run (refurbished) gearboxes was 1,334 hr. The apparent load-life exponent p for the roller bearings is 5.2. Were the bearing lives to be recalculated with a load-life exponent p equal to 5.2, the predicted L10 life of the gearbox would be equal to the actual life obtained in the field. The component failure distribution of the gearbox from the Monte Carlo simulation was nearly identical to that from the strict-series system reliability analysis, proving the compatibility of these methods.
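
    A compact Monte Carlo sketch of the strict-series idea (numpy; the component Weibull parameters below are invented, not the paper's gearbox data): sample each component's life, take the minimum as the system life, and read off L10 as the 10th percentile.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 744_450  # virtual gearboxes, matching the paper's count

        components = [   # (Weibull slope beta, characteristic life eta [hr])
            (1.5, 9_000),    # bearing A
            (1.5, 12_000),   # bearing B
            (2.5, 20_000),   # gear mesh
        ]

        lives = np.min(
            np.column_stack([eta * rng.weibull(beta, N)
                             for beta, eta in components]),
            axis=1,  # strict series: first component failure ends the system
        )
        print(f"system L10 ~ {np.percentile(lives, 10):,.0f} hr")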

  1. A Computationally Efficient Visual Saliency Algorithm Suitable for an Analog CMOS Implementation.

    PubMed

    D'Angelo, Robert; Wood, Richard; Lowry, Nathan; Freifeld, Geremy; Huang, Haiyao; Salthouse, Christopher D; Hollosi, Brent; Muresan, Matthew; Uy, Wes; Tran, Nhut; Chery, Armand; Poppe, Dorothy C; Sonkusale, Sameer

    2018-06-27

    Computer vision algorithms are often limited in their application by the large amount of data that must be processed. Mammalian vision systems mitigate this high bandwidth requirement by prioritizing certain regions of the visual field with neural circuits that select the most salient regions. This work introduces a novel and computationally efficient visual saliency algorithm for performing this neuromorphic attention-based data reduction. The proposed algorithm has the added advantage that it is compatible with an analog CMOS design while still achieving comparable performance to existing state-of-the-art saliency algorithms. This compatibility allows for direct integration with the analog-to-digital conversion circuitry present in CMOS image sensors. This integration leads to power savings in the converter by quantizing only the salient pixels. Further system-level power savings are gained by reducing the amount of data that must be transmitted and processed in the digital domain. The analog CMOS compatible formulation relies on a pulse width (i.e., time mode) encoding of the pixel data that is compatible with pulse-mode imagers and slope based converters often used in imager designs. This letter begins by discussing this time-mode encoding for implementing neuromorphic architectures. Next, the proposed algorithm is derived. Hardware-oriented optimizations and modifications to this algorithm are proposed and discussed. Next, a metric for quantifying saliency accuracy is proposed, and simulation results of this metric are presented. Finally, an analog synthesis approach for a time-mode architecture is outlined, and postsynthesis transistor-level simulations that demonstrate functionality of an implementation in a modern CMOS process are discussed.

  2. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G' for a high-performance architecture. Key computational and data-movement kernels of the application were analyzed and optimized for parallel execution using the mapping G → G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in the solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military, and commercial sectors; the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphics Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - The Department of Energy has many simulation codes that must compute faster to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion for high-performance computing systems.

  3. Home Learning, Technology, and Tomorrow's Workplace.

    ERIC Educational Resources Information Center

    Rieseberg, Rhonda L.

    1995-01-01

    Discusses characteristics and trends of home schools and workplaces. Use of computers and computer applications (CD-ROMS, interactive software, and networking) in home schooling provides a compatible environment for future home-based businesses and telecommuting trends. Sidebars include information on home schools on line; standardized test…

  4. User's instructions for the cardiovascular Walters model

    NASA Technical Reports Server (NTRS)

    Croston, R. C.

    1973-01-01

    The model is a combined, steady-state cardiovascular and thermal model. It was originally developed for interactive use, but was converted to batch-mode simulation on the Sigma 3 computer. The purpose of the model is to compute steady-state circulatory and thermal variables in response to exercise work loads and environmental factors. During a computer simulation run, several selected variables are printed at each time step. End conditions are also printed at the completion of the run.

  5. A Quantum Computing Approach to Model Checking for Advanced Manufacturing Problems

    DTIC Science & Technology

    2014-07-01

    … amount of time. In summary, the tool we developed succeeded in allowing us to produce good solutions for optimization problems that did not fit … We compared the value of the objective obtained in each run with the known optimal value, and used this information to compute the probability of success for each given instance. Then we used this information to compute the expected number of repetitions (or runs) needed to obtain the optimal …

  6. Running climate model on a commercial cloud computing environment: A case study using Community Earth System Model (CESM) on Amazon AWS

    NASA Astrophysics Data System (ADS)

    Chen, Xiuhong; Huang, Xianglei; Jiao, Chaoyi; Flanner, Mark G.; Raeker, Todd; Palen, Brock

    2017-01-01

    The suites of numerical models used for simulating the climate of our planet are usually run on dedicated high-performance computing (HPC) resources. This study investigates an alternative to the usual approach, i.e., carrying out climate model simulations in a commercially available cloud computing environment. We test the performance and reliability of running the CESM (Community Earth System Model), a flagship climate model in the United States developed by the National Center for Atmospheric Research (NCAR), on Amazon Web Services (AWS) EC2, the cloud computing environment by Amazon.com, Inc. StarCluster is used to create a virtual computing cluster on AWS EC2 for the CESM simulations. The wall-clock time for one year of CESM simulation on the AWS EC2 virtual cluster is comparable to the time spent for the same simulation on a local dedicated high-performance computing cluster with InfiniBand connections. The CESM simulation can be efficiently scaled with the number of CPU cores on the AWS EC2 virtual cluster environment up to 64 cores. For the standard configuration of the CESM at a spatial resolution of 1.9° latitude by 2.5° longitude, increasing the number of cores from 16 to 64 reduces the wall-clock running time by more than 50% and the scaling is nearly linear. Beyond 64 cores, the communication latency starts to outweigh the benefit of distributed computing and the parallel speedup becomes nearly unchanged.

  7. OpenSimulator Interoperability with DRDC Simulation Tools: Compatibility Study

    DTIC Science & Technology

    2014-09-01

    … into two components: (1) backend data services consisting of user accounts, login service, assets, and inventory; and (2) the simulator server, which … components are combined into a single OpenSimulator process. In grid mode, the two components are separated, placing the backend services into a ROBUST … mobile devices. Potential points of compatibility between Unity and OpenSimulator include: a Unity-based desktop computer OpenSimulator viewer; a …

  8. XMM-Newton Science Analysis Software: How to Bring New Technologies to Long-life Satellite Missions

    NASA Astrophysics Data System (ADS)

    Ibarra, A.; Calle, I.; Gabriel, C.; Salgado, J.; Osuna, P.

    2009-09-01

    We present here the beta version of the Remote Interface to SAS Analysis (RISA), a web-service-based system that allows users to analyse XMM-Newton data making use of all existing SAS functionality. RISA takes advantage of GRID architecture to run SAS, achieving high performance in resource management. We are also making the SAS remote analysis compatible with present and future VO standards.

  9. Message Prioritization for Routing in a DTN Environment

    DTIC Science & Technology

    2011-03-01

    … goal was to effectively connect a remote village with a connected area, in this case a city, by using Android phones given to people who regularly … an Android phone or iPhone would involve developing or recoding the entire system in a language that is compatible with the phones. While certainly a … due to the fact it currently runs on an Android platform as an application. This showed promise initially; however, we discovered that the actual …

  10. Running Batch Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    Using Resource Feature to Request Different Node Types: Peregrine has several types of compute nodes … incompatibility and get the job running. More information about requesting different node types on Peregrine is available. Queues: In order to meet the needs of different types of jobs, nodes on Peregrine are available …

  11. Host-Nation Operations: Soldier Training on Governance (HOST-G) Training Support Package

    DTIC Science & Technology

    2011-07-01

    … "restricted this webpage from running scripts or ActiveX controls that could access your computer. Click here for options…" • If this occurs, select that … "scripts and ActiveX controls can be useful, but active content might also harm your computer. Are you sure you want to let this file run active …"

  12. 24 CFR 15.110 - What fees will HUD charge?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... duplicating machinery. The computer run time includes the cost of operating a central processing unit for that... Applies. (6) Computer run time (includes only mainframe search time not printing) The direct cost of... estimated fee is more than $250.00 or you have a history of failing to pay FOIA fees to HUD in a timely...

  13. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based language and control system which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  14. CMOS-compatible photonic devices for single-photon generation

    NASA Astrophysics Data System (ADS)

    Xiong, Chunle; Bell, Bryn; Eggleton, Benjamin J.

    2016-09-01

    Sources of single photons are one of the key building blocks for quantum photonic technologies such as quantum secure communication and powerful quantum computing. To bring the proof-of-principle demonstration of these technologies from the laboratory to the real world, complementary metal-oxide-semiconductor (CMOS)-compatible photonic chips are highly desirable for photon generation, manipulation, processing and even detection because of their compactness, scalability, robustness, and the potential for integration with electronics. In this paper, we review the development of photonic devices made from materials (e.g., silicon) and processes that are compatible with CMOS fabrication facilities for the generation of single photons.

  15. Identification of Program Signatures from Cloud Computing System Telemetry Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.

    Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously this novel detection method has been evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
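
    The abstract does not spell out the classification method, so the following is a hedged illustration only: one simple way to match telemetry against known program signatures is a nearest-centroid rule over feature vectors, sketched below in Python. All program names and numbers are hypothetical.

      import numpy as np

      # Hypothetical per-program "signatures": mean telemetry vectors
      # (e.g., normalized CPU, disk I/O, and network counters) learned
      # from labeled runs reported by a billing tool such as Ceilometer.
      signatures = {
          "prog_a": np.array([0.90, 0.10, 0.05]),
          "prog_b": np.array([0.20, 0.80, 0.10]),
          "prog_c": np.array([0.30, 0.10, 0.85]),
      }

      def identify(sample):
          """Return the program whose signature is nearest to the sample."""
          return min(signatures, key=lambda n: np.linalg.norm(sample - signatures[n]))

      print(identify(np.array([0.85, 0.15, 0.05])))  # -> prog_a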

  16. Compact, high-speed algorithm for laying out printed circuit board runs

    NASA Astrophysics Data System (ADS)

    Zapolotskiy, D. Y.

    1985-09-01

    A high-speed printed circuit connection layout algorithm is described which was developed within the framework of an interactive system for designing two-sided printed circuit boards. For this reason, algorithm speed was considered, a priori, a requirement equally as important as the inherent demand for minimizing circuit run lengths and the number of junction openings. This resulted from the fact that, in order to provide psychological man/machine compatibility in the design process, real-time dialog during the layout phase is possible only within limited time frames (on the order of several seconds) for each circuit run. The work was carried out for use on an ARM-R automated work site complex based on an SM-4 minicomputer with a 32K-word memory. This limited memory capacity heightened the demand for algorithm speed and also tightened data file structure and size requirements. The layout algorithm's design logic is analyzed. The structure and organization of the data files are described.

  17. Providing Assistive Technology Applications as a Service Through Cloud Computing.

    PubMed

    Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially when the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.

  18. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of... these format requirements: (1) Computer Compatibility: IBM PC/XT/AT or Apple Macintosh; (2) Operating...

  19. Computer Simulation of Great Lakes-St. Lawrence Seaway Icebreaker Requirements.

    DTIC Science & Technology

    1980-01-01

    (Excerpt from the report's list of tables: results of Runs No. 1, 2, and 3 for the Taconite and Oil Can Task Commands, and the predicted icebreaker fleet by home port and period.)

  20. Is Computer Science Compatible with Technological Literacy?

    ERIC Educational Resources Information Center

    Buckler, Chris; Koperski, Kevin; Loveland, Thomas R.

    2018-01-01

    Although technology education evolved over time, and pressure increased to infuse more engineering principles and increase links to STEM (science, technology, engineering, and mathematics) initiatives, there has never been an official alignment between technology and engineering education and computer science. There is movement at the federal level…

  1. MULGRES: a computer program for stepwise multiple regression analysis

    Treesearch

    A. Jeff Martin

    1971-01-01

    MULGRES is a computer program source deck that is designed for multiple regression analysis employing the technique of stepwise deletion in the search for most significant variables. The features of the program, along with inputs and outputs, are briefly described, with a note on machine compatibility.

  2. OPMILL - MICRO COMPUTER PROGRAMMING ENVIRONMENT FOR CNC MILLING MACHINES THREE AXIS EQUATION PLOTTING CAPABILITIES

    NASA Technical Reports Server (NTRS)

    Ray, R. B.

    1994-01-01

    OPMILL is a computer operating system for a Kearney and Trecker milling machine that provides a fast and easy way to program machine part manufacture with an IBM compatible PC. The program gives the machinist an "equation plotter" feature which plots any set of equations that define axis moves (up to three axes simultaneously) and converts those equations to a machine milling program that will move a cutter along a defined path. Other supported functions include: drill with peck, bolt circle, tap, mill arc, quarter circle, circle, circle 2 pass, frame, frame 2 pass, rotary frame, pocket, loop and repeat, and copy blocks. The system includes a tool manager that can handle up to 25 tools and automatically adjusts tool length for each tool. It will display all tool information and stop the milling machine at the appropriate time. Information for the program is entered via a series of menus and compiled to the Kearney and Trecker format. The program can then be loaded into the milling machine, the tool path graphically displayed, and tool change information or the program in Kearney and Trecker format viewed. The program has a complete file handling utility that allows the user to load the program into memory from the hard disk, save the program to the disk with comments, view directories, merge a program on the disk with one in memory, save a portion of a program in memory, and change directories. OPMILL was developed on an IBM PS/2 running DOS 3.3 with 1 MB of RAM. OPMILL was written for an IBM PC or compatible 8088 or 80286 machine connected via an RS-232 port to a Kearney and Trecker Data Mill 700/C Control milling machine. It requires a "D:" drive (fixed-disk or virtual), a browse or text display utility, and an EGA or better display. Users wishing to modify and recompile the source code will also need Turbo BASIC, Turbo C, and Crescent Software's QuickPak for Turbo BASIC. IBM PC and IBM PS/2 are registered trademarks of International Business Machines. Turbo BASIC and Turbo C are trademarks of Borland International.
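
    The Kearney and Trecker program format itself is not reproduced in the abstract; as a hedged sketch of the "equation plotter" idea only, the hypothetical Python fragment below samples a set of parametric axis equations into straight-line cutter moves, written here as generic G1 moves rather than OPMILL's actual output format.

      import math

      def equation_to_moves(fx, fy, fz, t0, t1, steps):
          """Sample parametric axis equations into straight-line moves."""
          moves = []
          for i in range(steps + 1):
              t = t0 + (t1 - t0) * i / steps
              moves.append("G1 X%.3f Y%.3f Z%.3f" % (fx(t), fy(t), fz(t)))
          return moves

      # Quarter-circle of radius 20 at constant depth, as 8 line segments.
      for line in equation_to_moves(lambda t: 20 * math.cos(t),
                                    lambda t: 20 * math.sin(t),
                                    lambda t: -2.0,
                                    0.0, math.pi / 2, 8):
          print(line)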

  3. The rid-redundant procedure in C-Prolog

    NASA Technical Reports Server (NTRS)

    Chen, Huo-Yan; Wah, Benjamin W.

    1987-01-01

    C-Prolog can conveniently be used for logical inferences on knowledge bases. However, as with many search methods that use backward chaining, a large number of redundant computations may be produced in recursive calls. To overcome this problem, the 'rid-redundant' procedure was designed to rid multi-recursive procedures of all redundant computations. Experimental results obtained for C-Prolog on the VAX 11/780 computer show an order of magnitude improvement in the running time and solvable problem size.
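
    The rid-redundant procedure itself is not listed in the abstract; the general idea — caching the results of recursive calls so repeated subproblems are evaluated only once — can be sketched in Python with a memoizing decorator. The Fibonacci example below is illustrative, not taken from the paper.

      from functools import lru_cache

      calls = 0

      @lru_cache(maxsize=None)      # the cache rids redundant recursive calls
      def fib(n):
          global calls
          calls += 1
          return n if n < 2 else fib(n - 1) + fib(n - 2)

      fib(30)
      print(calls)  # 31 calls with the cache; ~2.7 million without it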

  4. An Upgrade of the Aeroheating Software ''MINIVER''

    NASA Technical Reports Server (NTRS)

    Louderback, Pierce

    2013-01-01

    Detailed computational modeling: CFD is often used to create and execute computational domains. Complexity increases when moving from 2D to 3D geometries. Computational time increases as finer grids are used (accuracy). A strong tool, but it takes time to set up and run. MINIVER: uses theoretical and empirical correlations. Orders of magnitude faster to set up and run. Not as accurate as CFD, but gives reasonable estimations. MINIVER's drawbacks: rigid command-line interface; lackluster, unorganized documentation; no central control; multiple versions exist and have diverged.

  5. A Functional Description of the Geophysical Data Acquisition System

    DTIC Science & Technology

    1990-08-10

    less than 50 SPS nor greater than 250 SPS 3.0 SENSORS/TRANSDUCERS 3.1 CHAPTER OVERVIEW Most of the research supported by GDAS has primarily involved two...signal for the computer. The SRUN signal from the computer is fed to a retriggerable oneshot multivibrator on the board. SRUN consists of a pulse train...that is present when the computer is running. The oneshot output drives the RUN lamp on the front panel. Finally, one pin on the board edge connector is

  6. Network support for system initiated checkpoints

    DOEpatents

    Chen, Dong; Heidelberger, Philip

    2013-01-29

    A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.

  7. TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series

    NASA Astrophysics Data System (ADS)

    Czerwinski, Fabian; Oddershede, Lene B.

    2011-02-01

    With modern data acquisition devices that are fast and very precise, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series at MHz sampling rates. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: for a photodiode detection system that tracks the position of an optically trapped particle, and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile, as the input can be any type of analog signal. Also, the data streaming software is simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification.

    Program summary
    Program title: TimeSeriesStreaming.VI
    Catalogue identifier: AEHT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 250
    No. of bytes in distributed program, including test data, etc.: 63 259
    Distribution format: tar.gz
    Programming language: LabVIEW (http://www.ni.com/labview/)
    Computer: Any machine running LabVIEW 8.6 or higher
    Operating system: Windows XP and Windows 7
    RAM: 60-360 Mbyte
    Classification: 3
    Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled at high frequencies and possibly over long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods.
    Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken at high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses.
    Restrictions: Only tested in Windows-operating LabVIEW environments; must use TDMS format; acquisition cards must be LabVIEW compatible; driver DAQmx installed.
    Running time: As desired: microseconds to hours
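
    The program itself is written in LabVIEW; as a language-neutral illustration of the underlying producer-consumer pattern (a bounded buffer decoupling acquisition from disk writes), here is a hedged Python sketch. The block size, block count, and file name are placeholders.

      import queue
      import threading

      def stream_to_disk(path, q):
          """Consumer: drain acquired blocks from the queue onto disk."""
          with open(path, "wb") as f:
              while True:
                  block = q.get()
                  if block is None:        # sentinel: acquisition finished
                      break
                  f.write(block)

      q = queue.Queue(maxsize=64)          # bounded buffer between threads
      writer = threading.Thread(target=stream_to_disk, args=("series.bin", q))
      writer.start()

      for _ in range(1000):                # stand-in for the DAQ read loop
          q.put(b"\x00" * 4096)            # one acquired block
      q.put(None)
      writer.join()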

  8. Convergence properties of simple genetic algorithms

    NASA Technical Reports Server (NTRS)

    Bethke, A. D.; Zeigler, B. P.; Strauss, D. M.

    1974-01-01

    The essential parameters determining the behaviour of genetic algorithms were investigated. Computer runs were made while systematically varying the parameter values. Results based on the progress curves obtained from these runs are presented along with results based on the variability of the population as the run progresses.

  9. Modeling Subsurface Reactive Flows Using Leadership-Class Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, Richard T; Hammond, Glenn; Lichtner, Peter

    2009-01-01

    We describe our experiences running PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - on leadership-class supercomputers, including initial experiences running on the petaflop incarnation of Jaguar, the Cray XT5 at the National Center for Computational Sciences at Oak Ridge National Laboratory. PFLOTRAN utilizes fully implicit time-stepping and is built on top of the Portable, Extensible Toolkit for Scientific Computation (PETSc). We discuss some of the hurdles to 'at scale' performance with PFLOTRAN and the progress we have made in overcoming them on leadership-class computer architectures.

  10. Non-exchangeability of running vs. other exercise in their association with adiposity, and its implications for public health recommendations.

    PubMed

    Williams, Paul T

    2012-01-01

    Current physical activity recommendations assume that different activities can be exchanged to produce the same weight-control benefits so long as total energy expended remains the same (exchangeability premise). To this end, they recommend calculating energy expenditure as the product of the time spent performing each activity and the activity's metabolic equivalents (MET), which may be summed to achieve target levels. The validity of the exchangeability premise was assessed using data from the National Runners' Health Study. Physical activity dose was compared to body mass index (BMI) and body circumferences in 33,374 runners who reported usual distance run and pace, and usual times spent running and other exercises per week. MET hours per day (METhr/d) from running was computed from: a) time and intensity, and b) reported distance run (1.02 MET·hours per km). When computed from time and intensity, the declines (slope±SE) per METhr/d were significantly greater (P < 10⁻¹⁵) for running than non-running exercise for BMI (slopes±SE, male: -0.12 ± 0.00 vs. 0.00 ± 0.00; female: -0.12 ± 0.00 vs. -0.01 ± 0.01 kg/m² per METhr/d) and waist circumference (male: -0.28 ± 0.01 vs. -0.07 ± 0.01; female: -0.31 ± 0.01 vs. -0.05 ± 0.01 cm per METhr/d). Reported METhr/d of running was 38% to 43% greater when calculated from time and intensity than from distance. Moreover, the declines per METhr/d run were significantly greater when estimated from reported distance for BMI (males: -0.29 ± 0.01; females: -0.27 ± 0.01 kg/m² per METhr/d) and waist circumference (males: -0.67 ± 0.02; females: -0.69 ± 0.02 cm per METhr/d) than when computed from time and intensity (cited above). The exchangeability premise was not supported for running vs. non-running exercise. Moreover, distance-based running prescriptions may provide better weight control than time-based prescriptions for running or other activities. Additional longitudinal studies and randomized clinical trials are required to verify these results prospectively.
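
    To make the two dose computations concrete, here is a small worked example using the conversion stated in the abstract, 1.02 MET·hours per km run; the runner's numbers are illustrative, not from the study.

      def met_hours_from_distance(km_per_day):
          return 1.02 * km_per_day            # conversion from the abstract

      def met_hours_from_time(hours_per_day, met):
          return hours_per_day * met

      # A runner covering 8 km per day:
      print(met_hours_from_distance(8.0))     # 8.16 METhr/d
      # The same runner self-reporting 1 h/day at 11 MET:
      print(met_hours_from_time(1.0, 11.0))   # 11.0 METhr/d, ~35% higher

    The gap between the two estimates in this illustration is of the same order as the 38% to 43% discrepancy reported above.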

  11. A PICKSC Science Gateway for enabling the common plasma physicist to run kinetic software

    NASA Astrophysics Data System (ADS)

    Hu, Q.; Winjum, B. J.; Zonca, A.; Youn, C.; Tsung, F. S.; Mori, W. B.

    2017-10-01

    Computer simulations offer tremendous opportunities for studying plasmas, ranging from simulations for students that illuminate fundamental educational concepts to research-level simulations that advance scientific knowledge. Nevertheless, there is a significant hurdle to using simulation tools. Users must navigate codes and software libraries, determine how to wrangle output into meaningful plots, and oftentimes confront a significant cyberinfrastructure with powerful computational resources. Science gateways offer a Web-based environment to run simulations without needing to learn or manage the underlying software and computing cyberinfrastructure. We discuss our progress on creating a Science Gateway for the Particle-in-Cell and Kinetic Simulation Software Center that enables users to easily run and analyze kinetic simulations with our software. We envision that this technology could benefit a wide range of plasma physicists, both in the use of our simulation tools as well as in its adaptation for running other plasma simulation software. Supported by NSF under Grant ACI-1339893 and by the UCLA Institute for Digital Research and Education.

  12. Creating a Parallel Version of VisIt for Microsoft Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlock, B J; Biagas, K S; Rawson, P L

    2011-12-07

    VisIt is a popular, free interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers from modest desktops up to massively parallel clusters. VisIt is comprised of a set of cooperating programs. All programs can be run locally or in client/server mode in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes, coordinating using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores and in many cases up to 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.

  13. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    NASA Astrophysics Data System (ADS)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes was available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven very efficient in optimizing the running systems and detecting misbehaving processes or nodes.

  14. Computational steering of GEM based detector simulations

    NASA Astrophysics Data System (ADS)

    Sheharyar, Ali; Bouhali, Othmane

    2017-10-01

    Gas based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. Such long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This can result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (i.e., live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as it is produced by the simulation.

  15. CERN openlab: Engaging industry for innovation in the LHC Run 3-4 R&D programme

    NASA Astrophysics Data System (ADS)

    Girone, M.; Purcell, A.; Di Meglio, A.; Rademakers, F.; Gunne, K.; Pachou, M.; Pavlou, S.

    2017-10-01

    LHC Run 3 and Run 4 represent an unprecedented challenge for HEP computing in terms of both data volume and complexity. New approaches are needed for how data is collected and filtered, processed, moved, stored and analysed if these challenges are to be met with a realistic budget. To develop innovative techniques, we are fostering relationships with industry leaders. CERN openlab is a unique resource for public-private partnership between CERN and leading Information and Communication Technology (ICT) companies. Its mission is to accelerate the development of cutting-edge solutions to be used by the worldwide HEP community. In 2015, CERN openlab started its phase V with a strong focus on tackling the upcoming LHC challenges. Several R&D programs are ongoing in the areas of data acquisition, networks and connectivity, data storage architectures, computing provisioning, computing platforms and code optimisation, and data analytics. This paper gives an overview of the various innovative technologies that are currently being explored by CERN openlab V and discusses the long-term strategies that are pursued by the LHC communities with the help of industry in closing the technological gap in processing and storage needs expected in Run 3 and Run 4.

  16. 47 CFR 1.734 - Specifications as to pleadings, briefs, and other documents; subscription.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Specifications as to pleadings, briefs, and other documents; subscription. 1.734 Section 1.734 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... submitted both as hard copies and on computer disk formatted to be compatible with the Commission's computer...

  17. 47 CFR 1.734 - Specifications as to pleadings, briefs, and other documents; subscription.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Specifications as to pleadings, briefs, and other documents; subscription. 1.734 Section 1.734 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... submitted both as hard copies and on computer disk formatted to be compatible with the Commission's computer...

  18. 47 CFR 1.734 - Specifications as to pleadings, briefs, and other documents; subscription.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Specifications as to pleadings, briefs, and other documents; subscription. 1.734 Section 1.734 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... submitted both as hard copies and on computer disk formatted to be compatible with the Commission's computer...

  19. Math Activities Using LogoWriter--Investigations.

    ERIC Educational Resources Information Center

    Flewelling, Gary

    This book is one in a series of teacher resource books developed to: (1) rescue students from the clutches of computers that drill and control; and (2) supply teachers with computer activities compatible with a mathematics program that emphasizes investigation, problem solving, creativity, and hypothesis making and testing. This is not a book…

  20. Web-Compatible Graphics Visualization Framework for Online Instruction and Assessment of Hardware Concepts

    ERIC Educational Resources Information Center

    Chandramouli, Magesh; Chittamuru, Siva-Teja

    2016-01-01

    This paper explains the design of a graphics-based virtual environment for instructing computer hardware concepts to students, especially those at the beginner level. Photorealistic visualizations and simulations are designed and programmed with interactive features allowing students to practice, explore, and test themselves on computer hardware…

  1. REST: a computer system for estimating logging residue by using the line-intersect method

    Treesearch

    A. Jeff Martin

    1975-01-01

    A computer program was designed to accept logging-residue measurements obtained by line-intersect sampling and transform them into summaries useful for the land manager. The features of the program, along with inputs and outputs, are briefly described, with a note on machine compatibility.

  2. Real-World Physics: A Portable MBL for Field Measurements.

    ERIC Educational Resources Information Center

    Albergotti, Clifton

    1994-01-01

    Uses a moderately priced digital multimeter that has output and software compatible with personal computers to make a portable, computer-based data-acquisition system. The system can measure voltage, current, frequency, capacitance, transistor hFE, and temperature. Describes field measures of velocity, acceleration, and temperature as function of…

  3. APQ-102 imaging radar digital image quality study

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1982-01-01

    A modified APQ-102 sidelooking radar collected synthetic aperture radar (SAR) data which was digitized and recorded on wideband magnetic tape. These tapes were then ground processed into computer compatible tapes (CCT's). The CCT's may then be processed into high resolution radar images by software on the CYBER computer.

  4. Computer Applications for Alternative Assessment: An Instructional and Organization Dilemma.

    ERIC Educational Resources Information Center

    Mills, Ed; Brown, John A.

    1997-01-01

    Describes the possibilities and problems that computer-generated portfolios will soon present to instructors across America. Highlights include the history of portfolio assessment, logistical problems of handling portfolios in the traditional three-ring binder format, use of the zip drive for storage, and software/hardware compatibility problems.…

  5. Memoized Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Yang, Guowei; Pasareanu, Corina S.; Khurshid, Sarfraz

    2012-01-01

    This paper introduces memoized symbolic execution (Memoise), a novel approach for more efficient application of forward symbolic execution, which is a well-studied technique for systematic exploration of program behaviors based on bounded execution paths. Our key insight is that application of symbolic execution often requires several successive runs of the technique on largely similar underlying problems, e.g., running it once to check a program to find a bug, fixing the bug, and running it again to check the modified program. Memoise introduces a trie-based data structure that stores the key elements of a run of symbolic execution. Maintenance of the trie during successive runs allows re-use of previously computed results of symbolic execution without the need for re-computing them as is traditionally done. Experiments using our prototype embodiment of Memoise show the benefits it holds in various standard scenarios of using symbolic execution, e.g., with iterative deepening of exploration depth, to perform regression analysis, or to enhance coverage.
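
    Memoise's actual trie stores more per node than shown here; as a hedged sketch of the core idea only, the hypothetical Python structure below records explored branch sequences so that a later run can skip paths already analyzed.

      class TrieNode:
          def __init__(self):
              self.children = {}      # branch decision -> child node
              self.explored = False   # path fully analyzed in a prior run

      class PathTrie:
          def __init__(self):
              self.root = TrieNode()

          def record(self, path):
              """Store one execution path (a sequence of branch choices)."""
              node = self.root
              for choice in path:
                  node = node.children.setdefault(choice, TrieNode())
              node.explored = True

          def seen(self, path):
              """True if this exact path was explored in an earlier run."""
              node = self.root
              for choice in path:
                  node = node.children.get(choice)
                  if node is None:
                      return False
              return node.explored

      trie = PathTrie()
      trie.record(("b1:true", "b2:false"))
      print(trie.seen(("b1:true", "b2:false")))  # True  -> reuse prior result
      print(trie.seen(("b1:true", "b2:true")))   # False -> must explore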

  6. Reference manual for generation and analysis of Habitat Time Series: version II

    USGS Publications Warehouse

    Milhous, Robert T.; Bartholow, John M.; Updike, Marlys A.; Moos, Alan R.

    1990-01-01

    The selection of an instream flow requirement for water resource management often requires the review of how the physical habitat changes through time. This review is referred to as 'Time Series Analysis.' The Time Series Library (TSLIB) is a group of programs to enter, transform, analyze, and display time series data for use in stream habitat assessment. A time series may be defined as a sequence of data recorded or calculated over time. Examples might be historical monthly flow, predicted monthly weighted usable area, daily electrical power generation, annual irrigation diversion, and so forth. The time series can be analyzed, both descriptively and analytically, to understand the importance of the variation in the events over time. This is especially useful in the development of instream flow needs based on habitat availability. The TSLIB group of programs assumes that you have an adequate study plan to guide you in your analysis. You need to already have knowledge about such things as time period and time step, species and life stages to consider, and appropriate comparisons or statistics to be produced and displayed or tabulated. Knowing your destination, you must first evaluate whether TSLIB can get you there. Remember, data are not answers. This publication is a reference manual for TSLIB and is intended to be a guide to the process of using the various programs in TSLIB. This manual is essentially limited to the hands-on use of the various programs. A TSLIB user interface program (called RTSM) has been developed to provide an integrated working environment where the user has a brief on-line description of each TSLIB program with the capability to run the TSLIB program while in the user interface. For information on the RTSM program, refer to Appendix F. Before applying the computer models described herein, it is recommended that the user enroll in the short course "Problem Solving with the Instream Flow Incremental Methodology (IFIM)." This course is offered by the Aquatic Systems Branch of the National Ecology Research Center. For more information about the TSLIB software, refer to the Memorandum of Understanding. Chapter 1 provides a brief introduction to the Instream Flow Incremental Methodology and TSLIB. Other chapters in this manual provide information on the different aspects of using the models. The information contained in the other chapters includes (2) acquisition, entry, manipulation, and listing of streamflow data; (3) entry, manipulation, and listing of the habitat-versus-streamflow function; (4) transferring streamflow data; (5) water resources systems analysis; (6) generation and analysis of daily streamflow and habitat values; (7) generation of the time series of monthly habitats; (8) manipulation, analysis, and display of monthly time series data; and (9) generation, analysis, and display of annual time series data. Each section includes documentation for the programs therein with at least one page of information for each program, including a program description, instructions for running the program, and sample output.
The number for this version of TSLIB (Version II) is somewhat arbitrary, as the TSLIB programs were collected into a library some time ago, but operators tended to use and manage them as individual programs. Therefore, we will consider the group of programs from the past that were only on the CDC Cyber computer as Version 0; the programs from the past that were on both the Cyber and the IBM-compatible microcomputer as Version I; and the programs contained in this reference manual as Version II.

  7. Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI

    USGS Publications Warehouse

    Donato, David I.

    2017-01-01

    In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
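
    The paper's reference implementation is in C; a hedged Python sketch of the same worker-initiated "bag of tasks" pattern, using the mpi4py bindings, might look as follows (task contents and tags are placeholders):

      # Run with e.g.: mpiexec -n 4 python bag_of_tasks.py  (requires mpi4py)
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      tasks = list(range(20))            # stand-ins for modelling runs

      if rank == 0:                      # manager: serve tasks on demand
          status = MPI.Status()
          stopped = 0
          while stopped < comm.Get_size() - 1:
              comm.recv(source=MPI.ANY_SOURCE, tag=1, status=status)
              worker = status.Get_source()
              if tasks:
                  comm.send(tasks.pop(), dest=worker, tag=2)
              else:
                  comm.send(None, dest=worker, tag=2)   # bag is empty
                  stopped += 1
      else:                              # worker: pull until told to stop
          while True:
              comm.send(rank, dest=0, tag=1)            # "give me work"
              task = comm.recv(source=0, tag=2)
              if task is None:
                  break
              # ... perform modelling run `task` here ...

    Because idle workers request work as soon as they finish, faster nodes naturally take on more tasks, which is what keeps makespans short on heterogeneous clusters.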

  8. Evaluating the Efficacy of the Cloud for Cluster Computation

    NASA Technical Reports Server (NTRS)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Cloud Compute (EC2) that promises improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29 instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.

  9. MLJ Computer Corner

    ERIC Educational Resources Information Center

    Brink, Dan

    1987-01-01

    Reviews the current state of printing software and printing hardware compatibility and capacity. Discusses the changing relationship between author and publisher resulting from the advent of desktop publishing. (LMO)

  10. Strategies for responding to RAC requests electronically.

    PubMed

    Schramm, Michael

    2012-04-01

    Providers that would like to respond to complex RAC reviews electronically should consider three strategies: Invest in an EHR software package or a high-powered scanner that can quickly scan large amounts of paper. Implement an audit software platform that will allow providers to manage the entire audit process in one place. Use a CONNECT-compatible gateway capable of accessing the Nationwide Health Information Network (the network on which the electronic submission of medical documentation program runs).

  11. A Step into an eco-Compatible Future: Iron- and Cobalt-catalyzed Borrowing Hydrogen Transformation.

    PubMed

    Quintard, Adrien; Rodriguez, Jean

    2016-01-08

    Living on borrowed hydrogen: Recent developments in iron- and cobalt-catalyzed borrowing hydrogen have shown that economically reliable catalysts can be used in this type of waste-free reaction. By using well-defined inexpensive catalysts, known reactions can now be run efficiently without the use of noble metals, and, in addition, new types of reactivity can be discovered. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. A Tutorial for SPSS/PC+ Studentware. Study Guide for the Doctor of Arts in Computer-Based Learning.

    ERIC Educational Resources Information Center

    MacFarland, Thomas W.; Hou, Cheng-I

    The purpose of this tutorial is to provide the basic information needed for success with SPSS/PC+ Studentware, a student version of the statistical analysis software offered by SPSS, Inc., for the IBM PC and compatible computers. It is intended as a convenient summary of how to organize and conduct the most common computer-based statistical…

  13. Manual on characteristics of Landsat computer-compatible tapes produced by the EROS Data Center digital image processing system

    USGS Publications Warehouse

    Holkenbrink, Patrick F.

    1978-01-01

    Landsat data are received by National Aeronautics and Space Administration (NASA) tracking stations and converted into digital form on high-density tapes (HDTs) by the Image Processing Facility (IPF) at the Goddard Space Flight Center (GSFC), Greenbelt, Maryland. The HDTs are shipped to the EROS Data Center (EDC) where they are converted into customer products by the EROS Data Center digital image processing system (EDIPS). This document describes in detail one of these products: the computer-compatible tape (CCT) produced from Landsat-1, -2, and -3 multispectral scanner (MSS) data and Landsat-3 only return-beam vidicon (RBV) data. Landsat-1 and -2 RBV data will not be processed by IPF/EDIPS to CCT format.

  14. A resistive magnetohydrodynamics solver using modern C++ and the Boost library

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas

    2016-09-01

    In this paper we describe the implementation of our C++ resistive magnetohydrodynamics solver. The framework developed facilitates the separation of the code implementing the specific numerical method and the physical model from the handling of boundary conditions and the management of the computational domain. In particular, this allows us to use finite difference stencils which are only defined in the interior of the domain (the boundary conditions are handled automatically). We discuss this and other design considerations and their impact on performance in some detail. In addition, we provide documentation of the code developed and demonstrate that a performance comparable to Fortran can be achieved, while still maintaining a maximum of code readability and extensibility.

    Program summary
    Catalogue identifier: AFAH_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFAH_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 592774
    No. of bytes in distributed program, including test data, etc.: 43771395
    Distribution format: tar.gz
    Programming language: C++03
    Computer: PC, HPC systems
    Operating system: POSIX compatible (extensively tested on various Linux systems). In fact, only the timing class requires POSIX routines; all other parts of the program can be run on any system where a C++ compiler, Boost, CVODE, and an implementation of BLAS are available.
    RAM: Hundreds of kilobytes to gigabytes (depending on the problem size)
    Classification: 19.10, 4.3
    External routines: Boost, CVODE, either a BLAS library or Intel MKL
    Nature of problem: An approximate solution to the equations of resistive magnetohydrodynamics for a given initial value and given boundary conditions is computed.
    Solution method: The discretization is performed using a finite difference approximation in space and the CVODE library in time (which employs a scheme based on the backward differentiation formulas).
    Restrictions: We consider the 2.5 dimensional case; that is, the magnetic field and the velocity field are three dimensional but all quantities depend only on x and y (but not z).
    Unusual features: We provide an implementation in C++ using the Boost library that combines high level techniques (which greatly increases code maintainability and extensibility) with performance that is comparable to Fortran implementations.
    Running time: From seconds to weeks (depending on the problem size).

  15. WinHPC System Programming | High-Performance Computing | NREL

    Science.gov Websites

    WinHPC System Programming: learn how to build and run an MPI (Message Passing Interface) program, and where the MPI header (mpi.h) and library (msmpi.lib) are. To build from the command line, run... Start > Intel Software Development Tools > Intel C++ Compiler Professional... > C++ Build Environment for applications running

  16. Locomotion Mode Affects the Physiological Strain during Exercise at Walk-Run Transition Speed in Elderly Men.

    PubMed

    Freire, Raul; Farinatti, Paulo; Cunha, Felipe; Silva, Brenno; Monteiro, Walace

    2017-07-01

    This study investigated cardiorespiratory responses and rating of perceived exertion (RPE) during prolonged walking and running exercise performed at the walk-run transition speed (WRTS) in untrained healthy elderly men. 20 volunteers (mean±SE, age: 68.4±1.2 yrs; height: 170.0±0.02 cm; body mass: 74.7±2.3 kg) performed the following bouts of exercise: a) maximal cardiopulmonary exercise test (CPET); b) specific protocol to detect WRTS; and c) two 30-min walking and running bouts at WRTS. Expired gases were collected during exercise bouts via the Ultima CardiO2 metabolic analyzer. Compared to walking, running at the WRTS resulted in higher oxygen uptake (>0.27 L·min⁻¹), pulmonary ventilation (>7.7 L·min⁻¹), carbon dioxide output (>0.23 L·min⁻¹), heart rate (>15 beats·min⁻¹), oxygen pulse (>0.88 mL·beat⁻¹), energy expenditure (>27 kcal) and cost of oxygen transport (>43 mL·kg⁻¹·km⁻¹·bout⁻¹). The increase of overall and local RPEs with exercise duration was similar across locomotion modes (P<0.001). In all participants, %HRR and %VO₂R throughout walking and running bouts were around or above the gas exchange threshold. In conclusion, elderly men exhibited higher cardiorespiratory responses during 30-min bouts of running than walking at WRTS. Nevertheless, walking corresponded to relative metabolic intensities compatible with preservation or improvement of cardiorespiratory fitness and should be preferred over running at WRTS in the untrained elderly, characterized by poor fitness and reduced exercise tolerance. © Georg Thieme Verlag KG Stuttgart · New York.

  17. Role of optical computers in aeronautical control applications

    NASA Technical Reports Server (NTRS)

    Baumbick, R. J.

    1981-01-01

    The role that optical computers play in aircraft control is determined. The optical computer has the potential to provide the high-speed capability required, especially for matrix/matrix operations. The optical computer also has the potential for handling nonlinear simulations in real time. Optical computers are also more compatible with fiber optic signal transmission. Optics also permit the use of passive sensors to measure process variables; no electrical energy need be supplied to the sensor. Complex interfacing between optical sensors and the optical computer is avoided if the optical sensor outputs can be directly processed by the optical computer.

  18. Computer-based testing of the modified essay question: the Singapore experience.

    PubMed

    Lim, Erle Chuen-Hian; Seet, Raymond Chee-Seong; Oh, Vernon M S; Chia, Boon-Lock; Aw, Marion; Quak, Seng-Hock; Ong, Benjamin K C

    2007-11-01

    The modified essay question (MEQ), featuring an evolving case scenario, tests a candidate's problem-solving and reasoning ability, rather than mere factual recall. Although it is traditionally conducted as a pen-and-paper examination, our university has run the MEQ using computer-based testing (CBT) since 2003. We describe our experience with running the MEQ examination using the IVLE, or integrated virtual learning environment (https://ivle.nus.edu.sg), provide a blueprint for universities intending to conduct computer-based testing of the MEQ, and detail how our MEQ examination has evolved since its inception. An MEQ committee, comprising specialists in key disciplines from the departments of Medicine and Paediatrics, was formed. We utilized the IVLE, developed for our university in 1998, as the online platform on which we ran the MEQ. We calculated the number of man-hours (academic and support staff) required to run the MEQ examination, using either a computer-based or pen-and-paper format. With the support of our university's information technology (IT) specialists, we have successfully run the MEQ examination online, twice a year, since 2003. Initially, we conducted the examination with short-answer questions only, but have since expanded the MEQ examination to include multiple-choice and extended matching questions. A total of 1268 man-hours was spent in preparing for, and running, the MEQ examination using CBT, compared to 236.5 man-hours to run it using a pen-and-paper format. Despite being more labour-intensive, our students and staff prefer CBT to the pen-and-paper format. The MEQ can be conducted using a computer-based testing scenario, which offers several advantages over a pen-and-paper format. We hope to increase the number of questions and incorporate audio and video files, featuring clinical vignettes, to the MEQ examination in the near future.

  19. Painting models

    NASA Astrophysics Data System (ADS)

    Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.

    2015-12-01

    The emergence of interactive art has blurred the line between electronics, computer graphics and art. Here we apply this art form to numerical models, and show how the transformation of a numerical model into an interactive painting can both provide insights and solve real-world problems. The cases used as examples include forensic reconstructions, dredging optimization, and barrier design. The system can be fed by any source of time-varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear), and San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method is suitable for different time and spatial scales. High resolution numerical models become interactive paintings by exchanging their velocity fields with a high resolution (>=1M cells) image-based flow visualization that runs in an HTML5-compatible web browser. The image-based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resultant image is executed on the graphics card using WebGL, allowing for 1M grid cells at 60 Hz performance on mediocre graphics cards. The software is provided as open source software. By using different sources for a drawing, one can gain insight into several aspects of the velocity fields. These aspects include not only the commonly represented magnitude and direction, but also divergence, topology and turbulence.
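
    The production implementation runs in WebGL; purely as a hedged illustration of the advection step at the heart of image-based flow visualization, the Python/NumPy sketch below performs one backward (semi-Lagrangian) lookup: every pixel takes the image value from the point upstream of it. The grid and flow field are toy data.

      import numpy as np

      def advect(image, u, v, dt=1.0):
          """One semi-Lagrangian step: sample the image upstream of each pixel."""
          h, w = image.shape
          ys, xs = np.mgrid[0:h, 0:w]
          src_x = np.clip(np.rint(xs - dt * u), 0, w - 1).astype(int)
          src_y = np.clip(np.rint(ys - dt * v), 0, h - 1).astype(int)
          return image[src_y, src_x]

      # Uniform rightward flow: the bright column spreads from x=0 to x=1
      # (x=0 keeps its value because lookups are clamped at the boundary).
      img = np.zeros((4, 4)); img[:, 0] = 1.0
      u = np.ones((4, 4)); v = np.zeros((4, 4))
      print(advect(img, u, v))

    The WebGL version additionally blends in the user's drawing and applies the uv + mask field each frame, which is omitted here.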

  20. Identifying the impact of G-quadruplexes on Affymetrix 3' arrays using cloud computing.

    PubMed

    Memon, Farhat N; Owen, Anne M; Sanchez-Graillet, Olivia; Upton, Graham J G; Harrison, Andrew P

    2010-01-15

    A tetramer quadruplex structure is formed by four parallel strands of DNA/RNA containing runs of guanine. These quadruplexes are able to form because guanine can Hoogsteen hydrogen bond to other guanines, and a tetrad of guanines can form a stable arrangement. Recently we have discovered that probes on Affymetrix GeneChips that contain runs of guanine do not measure gene expression reliably. We associate this finding with the likelihood that quadruplexes are forming on the surface of GeneChips. In order to cope with the rapidly expanding size of GeneChip array datasets in the public domain, we are exploring the use of cloud computing to replicate our experiments on 3' arrays to look at the effect of the location of G-spots (runs of guanines). Cloud computing is a recently introduced high-performance solution that takes advantage of the computational infrastructure of large organisations such as Amazon and Google. We expect that cloud computing will become widely adopted because it enables bioinformaticians to avoid capital expenditure on expensive computing resources and to only pay a cloud computing provider for what is used. Moreover, as well as financial efficiency, cloud computing is an ecologically-friendly technology; it enables efficient data-sharing, and we expect it to be faster for development purposes. Here we propose the advantageous use of cloud computing to perform a large data-mining analysis of public domain 3' arrays.
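
    Locating the G-spots themselves is a simple pattern-matching task; a hedged Python sketch (the probe sequence and the minimum run length of four are illustrative, not taken from the paper) could be:

      import re

      def g_spots(seq, min_run=4):
          """Return (start, length) of each run of >= min_run guanines."""
          pattern = "G{%d,}" % min_run
          return [(m.start(), len(m.group())) for m in re.finditer(pattern, seq)]

      probe = "ATGGGGCTTGGGGAGGGGTTGGGGA"   # hypothetical 25-mer probe
      print(g_spots(probe))                 # [(2, 4), (9, 4), (14, 4), (20, 4)]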

  1. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    PubMed

    Katz, Jonathan E

    2017-01-01

    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstalling is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up, and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer, with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  2. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.

    1996-01-01

    A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
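
    As a hedged sketch of the distance computation at the core of such surveillance (the patent's serial-correlation removal and probability ratio test are omitted), the NumPy fragment below scores new observations of correlated parameters against training data; all numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      # Training data: rows = observations of two correlated parameters.
      train = rng.multivariate_normal([10.0, 5.0], [[1.0, 0.8], [0.8, 1.0]], size=500)

      mean = train.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

      def mahalanobis(x):
          d = x - mean
          return float(np.sqrt(d @ cov_inv @ d))

      print(mahalanobis(np.array([10.2, 5.1])))  # small: consistent with training
      print(mahalanobis(np.array([12.0, 2.0])))  # large: candidate alarm condition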

  3. PyBetVH: A Python tool for probabilistic volcanic hazard assessment and for generation of Bayesian hazard curves and maps

    NASA Astrophysics Data System (ADS)

    Tonini, Roberto; Sandri, Laura; Anne Thompson, Mary

    2015-06-01

    PyBetVH is a completely new, free, open-source and cross-platform software implementation of the Bayesian Event Tree for Volcanic Hazard (BET_VH), a tool for estimating the probability of any magmatic hazardous phenomenon occurring in a selected time frame, accounting for all the uncertainties. New capabilities of this implementation include the ability to calculate hazard curves which describe the distribution of the exceedance probability as a function of intensity (e.g., tephra load) on a grid of points covering the target area. The computed hazard curves are (i) absolute (accounting for the probability of eruption in a given time frame, and for all the possible vent locations and eruptive sizes) and (ii) Bayesian (computed at different percentiles, in order to quantify the epistemic uncertainty). Such curves allow representation of the full information contained in the probabilistic volcanic hazard assessment (PVHA) and are well suited to become a main input to quantitative risk analyses. PyBetVH allows for interactive visualization of both the computed hazard curves, and the corresponding Bayesian hazard/probability maps. PyBetVH is designed to minimize the efforts of end users, making PVHA results accessible to people who may be less experienced in probabilistic methodologies, e.g. decision makers. The broad compatibility of Python language has also allowed PyBetVH to be installed on the VHub cyber-infrastructure, where it can be run online or downloaded at no cost. PyBetVH can be used to assess any type of magmatic hazard from any volcano. Here we illustrate how to perform a PVHA through PyBetVH using the example of analyzing tephra fallout from the Okataina Volcanic Centre (OVC), New Zealand, and highlight the range of outputs that the tool can generate.
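
    For readers unfamiliar with hazard curves, here is a minimal sketch of how an exceedance-probability curve can be assembled from Monte Carlo scenario outputs; all numbers are hypothetical, and PyBetVH additionally computes Bayesian percentiles of such curves to quantify epistemic uncertainty, which this sketch omits.

      import numpy as np

      rng = np.random.default_rng(1)
      # Hypothetical tephra loads (kPa) at one grid point, one value per
      # sampled eruption scenario.
      loads = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

      thresholds = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
      p_eruption = 0.05   # probability of an eruption in the time frame

      # Absolute hazard curve: P(load >= threshold) within the time frame.
      exceedance = p_eruption * (loads[:, None] >= thresholds).mean(axis=0)
      for t, p in zip(thresholds, exceedance):
          print("P(load >= %4.1f kPa) = %.4f" % (t, p))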

  4. JTMIX - CRYOGENIC MIXED FLUID JOULE-THOMSON ANALYSIS PROGRAM

    NASA Technical Reports Server (NTRS)

    Jones, J. A.

    1994-01-01

    JTMIX was written to allow the prediction of both ideal and realistic properties of mixed gases in the 65-80K temperature range. It allows mixed gas J-T analysis for any fluid combination of neon, nitrogen, various hydrocarbons, argon, oxygen, carbon monoxide, carbon dioxide, and hydrogen sulfide. When used in conjunction with the NIST computer program DDMIX, JTMIX has accurately predicted order-of-magnitude increases in J-T cooling capacities when various hydrocarbons are added to nitrogen, and it predicts nitrogen normal boiling point depressions to as low as 60K when neon is added. JTMIX searches for heat exchanger "pinch points" that can result from insolubility of various components in each other. These points result in numerical solutions that cannot exist. The length of the heat exchanger is searched for such points and, if they exist, the user is warned and the temperatures and heat exchanger effectiveness are corrected to provide a real solution. JTMIX gives very good correlation (within data accuracy) to mixed gas data published by the USSR and data taken by APD for the U.S. Naval Weapons Lab. Data taken at JPL also confirms JTMIX for all cases tested. JTMIX is written in Turbo C for IBM PC compatible computers running MS-DOS. The National Institute of Standards and Technology's (NIST, Gaithersburg, MD, 301-975-2208) computer code DDMIX is required to provide mixed-fluid enthalpy data which is input into JTMIX. The standard distribution medium for this program is a 5.25 inch 360K MS-DOS format diskette. JTMIX was developed in 1991 and is a copyrighted work with all copyright vested in NASA.
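
    The pinch-point check described above can be pictured with a toy calculation. The sketch below uses synthetic enthalpy-temperature curves and is not JTMIX's actual numerics: it marches along a counterflow heat exchanger in equal enthalpy steps and requires the hot stream to stay warmer than the cold stream everywhere.

    ```python
    # Toy pinch-point scan on synthetic curves (not JTMIX's numerics).
    import numpy as np

    h = np.linspace(0.0, 1.0, 201)                     # normalized enthalpy
    T_cold = 65.0 + 15.0 * h                           # K, linear cold curve
    T_hot = 68.0 + 15.0 * h - 14.0 * h * (1.0 - h)     # K, sags mid-exchanger

    dT = T_hot - T_cold                                # approach temperature
    if (dT <= 0.0).any():
        i = int(np.argmin(dT))
        print(f"pinch at h={h[i]:.2f}, dT={dT[i]:.2f} K: correct effectiveness")
    else:
        print(f"no pinch; minimum approach {dT.min():.2f} K")
    ```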

  5. SIP: A Web-Based Astronomical Image Processing Program

    NASA Astrophysics Data System (ADS)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web. An instructor can put images on a web server for students to load and analyze on their own personal computer. Or, the instructor can inform the students of images to load from any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user. Basic tools are available for gathering data from an image which can be used for performing simple differential photometry, or astrometry. Therefore, students can learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the most common denominator image file, the FITS format.
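
    As an illustration of the box statistics SIP gathers for simple differential photometry (SIP itself is a Java program; this NumPy sketch with synthetic data is not its code):

    ```python
    # Box statistics and a crude aperture sum minus sky background.
    import numpy as np

    def box_stats(image, x0, y0, size):
        box = image[y0:y0 + size, x0:x0 + size]
        return box.sum(), box.mean(), box.std()

    rng = np.random.default_rng(2)
    image = rng.normal(100.0, 5.0, size=(256, 256))   # flat synthetic sky
    image[120:125, 120:125] += 400.0                  # synthetic star

    star_sum, _, _ = box_stats(image, 118, 118, 9)    # 9x9 box over the star
    _, sky_mean, _ = box_stats(image, 10, 10, 9)      # 9x9 box of blank sky
    net_counts = star_sum - sky_mean * 81             # subtract sky, 81 pixels
    print(f"net source counts: {net_counts:.0f}")     # ~ 25 * 400 = 10000
    ```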

  6. IgRepertoireConstructor: a novel algorithm for antibody repertoire construction and immunoproteogenomics analysis.

    PubMed

    Safonova, Yana; Bonissone, Stefano; Kurpilyansky, Eugene; Starostina, Ekaterina; Lapidus, Alla; Stinson, Jeremy; DePalatis, Laura; Sandoval, Wendy; Lill, Jennie; Pevzner, Pavel A

    2015-06-15

    The analysis of concentrations of circulating antibodies in serum (the antibody repertoire) is a fundamental, yet poorly studied, problem in immunoinformatics. The two current approaches to the analysis of antibody repertoires [next-generation sequencing (NGS) and mass spectrometry (MS)] present difficult computational challenges, since antibodies are not directly encoded in the germline but are extensively diversified by somatic recombination and hypermutations. Therefore, the protein database required for the interpretation of spectra from circulating antibodies is custom for each individual. Although such a database can be constructed via NGS, the reads generated by NGS are error-prone, and even a single nucleotide error precludes identification of a peptide by standard proteomics tools. Here, we present the IgRepertoireConstructor algorithm, which performs error correction of immunosequencing reads and uses mass spectra to validate the constructed antibody repertoires. IgRepertoireConstructor is open source and freely available as a C++ and Python program running on all Unix-compatible platforms. The source code is available from http://bioinf.spbau.ru/igtools. Contact: ppevzner@ucsd.edu. Supplementary data are available at Bioinformatics online.

  7. MODFLOW-2000, the U.S. Geological Survey modular ground-water model -- Documentation of MOD-PREDICT for predictions, prediction sensitivity analysis, and evaluation of uncertainty

    USGS Publications Warehouse

    Tonkin, M.J.; Hill, Mary C.; Doherty, John

    2003-01-01

    This document describes the MOD-PREDICT program, which helps evaluate user-defined sets of observations, prior information, and predictions, using the ground-water model MODFLOW-2000. MOD-PREDICT takes advantage of the existing Observation and Sensitivity Processes (Hill and others, 2000) by initiating runs of MODFLOW-2000 and using the output files produced. The names and formats of the MODFLOW-2000 input files are unchanged, such that full backward compatibility is maintained. A new name file and input files are required for MOD-PREDICT. The performance of MOD-PREDICT has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program using the email address available at the web address below. Updates might occasionally be made to this document, to the MOD-PREDICT program, and to MODFLOW-2000. Users can check for updates on the Internet at URL http://water.usgs.gov/software/ground water.html/.

  8. DOE's nation-wide system for access control can solve problems for the federal government

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callahan, S.; Tomes, D.; Davis, G.

    1996-07-01

    The U.S. Department of Energy's (DOE's) ongoing efforts to improve its physical and personnel security systems while reducing its costs provide a model for federal government visitor processing. Through the careful use of standardized badges, computer databases, and networks of automated access control systems, the DOE is increasing the security associated with travel throughout the DOE complex and, at the same time, eliminating paperwork, special badging, and visitor delays. The DOE is also improving badge accountability, personnel identification assurance, and access authorization timeliness and accuracy. Like the federal government, the DOE has dozens of geographically dispersed locations run by many different contractors operating a wide range of security systems. The DOE has overcome these obstacles by providing data format standards, a complex-wide virtual network for security, the adoption of a standard high security system, and an open-systems-compatible link for any automated access control system. If the location's level of security requires it, positive visitor identification is accomplished by personal identification number (PIN) and/or by biometrics. At sites with automated access control systems, this positive identification is integrated into the portals.

  9. Investigation of Techniques for Simulating Communications and Tracking Subsystems on Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Deacetis, Louis A.

    1991-01-01

    The need to reduce the costs of Space Station Freedom has resulted in a major redesign and downsizing of the Station in general, and its Communications and Tracking (C&T) components in particular. Earlier models and simulations of the C&T Space-to-Ground Subsystem (SGS) in particular are no longer valid. There thus exists a general need for updated, high fidelity simulations of C&T subsystems. This project explored simulation techniques and methods that might be used in developing new simulations of C&T subsystems, including the SGS. Three requirements were placed on the simulations to be developed: (1) they run on IBM PC/XT/AT compatible computers; (2) they be written in Ada as much as possible; and (3) since control and monitoring of the C&T subsystems will involve communication via a MIL-STD-1553B serial bus, that the possibility of commanding the simulator and monitoring its sensors via that bus be included in the design of the simulator. The result of the project is a prototype of a simulation of the Assembly/Contingency Transponder of the SGS, written in Ada, which can be controlled from another PC via a MIL-STD-1553B bus.

  10. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires k² − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and by using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with a good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456
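
    The van Herk/Gil-Werman recurrence underlying the architecture is compact enough to state directly. The following pure-Python reference sketch (an illustration of the algorithm, not the paper's FPGA design) computes a 1-D running max with roughly three comparisons per sample, independent of kernel size.

    ```python
    # 1-D van Herk/Gil-Werman running max via block prefix/suffix maxima.
    import numpy as np

    def hgw_running_max(a, k):
        n = len(a)
        s = np.empty(n)  # prefix max, restarting at each block of width k
        p = np.empty(n)  # suffix max, restarting at each block boundary
        for j in range(n):
            s[j] = a[j] if j % k == 0 else max(s[j - 1], a[j])
        for j in range(n - 1, -1, -1):
            p[j] = a[j] if (j % k == k - 1 or j == n - 1) else max(p[j + 1], a[j])
        # A window [i, i+k-1] spans at most two blocks, so its max is the
        # suffix max of the first block joined with the prefix max of the second.
        return np.array([max(p[i], s[i + k - 1]) for i in range(n - k + 1)])

    a = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3])
    print(hgw_running_max(a, 3))   # [4. 4. 5. 9. 9. 9. 6. 6.]
    ```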

  11. Energy Frontier Research With ATLAS: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, John; Black, Kevin; Ahlen, Steve

    2016-06-14

    The Boston University (BU) group is playing key roles across the ATLAS experiment: in detector operations, the online trigger, the upgrade, computing, and physics analysis. Our team has been critical to the maintenance and operations of the muon system since its installation. During Run 1 we led the muon trigger group and that responsibility continues into Run 2. BU maintains and operates the ATLAS Northeast Tier 2 computing center. We are actively engaged in the analysis of ATLAS data from Run 1 and Run 2. Physics analyses we have contributed to include Standard Model measurements (W and Z cross sections, tt̄ differential cross sections, WWW* production), evidence for the Higgs decaying to τ⁺τ⁻, and searches for new phenomena (technicolor, Z' and W', vector-like quarks, dark matter).

  12. Automatic Data Filter Customization Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire dataset and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
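
    The threshold-evolution idea can be miniaturized to two dimensions. The sketch below is a toy reconstruction under stated assumptions (Gaussian stand-ins for good and bad soundings, truncation selection, Gaussian mutation), not the JPL production code.

    ```python
    # Evolve a (low, high) threshold pair per dimension; anything outside
    # any pair is filtered out before the expensive retrieval runs.
    import numpy as np

    rng = np.random.default_rng(3)
    DIM, POP, GENS = 2, 60, 80
    good = rng.normal(0.0, 1.0, size=(300, DIM))   # soundings worth running
    bad = rng.normal(3.0, 1.0, size=(300, DIM))    # soundings that would fail

    def fitness(ind):
        lo, hi = ind[:, 0], ind[:, 1]
        keep = lambda x: ((x >= lo) & (x <= hi)).all(axis=1)
        # Reward rejecting bad runs, penalize rejecting good ones.
        return int((~keep(bad)).sum()) - int((~keep(good)).sum())

    pop = np.sort(rng.uniform(-5.0, 5.0, size=(POP, DIM, 2)), axis=2)
    for _ in range(GENS):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-POP // 2:]]    # keep the best half
        children = np.sort(parents + rng.normal(0.0, 0.2, parents.shape), axis=2)
        pop = np.concatenate([parents, children])
    best = max(pop, key=fitness)
    print("thresholds:\n", np.round(best, 2), "\nfitness:", fitness(best))
    ```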

  13. Open-source meteor detection software for low-cost single-board computers

    NASA Astrophysics Data System (ADS)

    Vida, D.; Zubović, D.; Šegon, D.; Gural, P.; Cupec, R.

    2016-01-01

    This work aims to overcome the current price threshold of meteor stations which can sometimes deter meteor enthusiasts from owning one. In recent years small card-sized computers became widely available and are used for numerous applications. To utilize such computers for meteor work, software which can run on them is needed. In this paper we present a detailed description of newly-developed open-source software for fireball and meteor detection optimized for running on low-cost single board computers. Furthermore, an update on the development of automated open-source software which will handle video capture, fireball and meteor detection, astrometry and photometry is given.

  14. How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing

    NASA Astrophysics Data System (ADS)

    Decyk, V. K.; Dauger, D. E.

    We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  15. A robust fingerprint matching algorithm based on compatibility of star structures

    NASA Astrophysics Data System (ADS)

    Cao, Jia; Feng, Jufu

    2009-10-01

    In fingerprint verification and identification systems, most minutiae-based matching algorithms suffer from the problems of non-linear distortion and missing or fake minutiae. Local structures such as triangles or k-nearest structures are widely used to reduce the impact of non-linear distortion, but they suffer from missing and fake minutiae. In our proposed method, a star structure is used to represent the local structure. A star structure contains a variable number of minutiae and is thus more robust to missing and fake minutiae. Our method consists of four steps: 1) Constructing star structures at the minutia level. 2) Computing a similarity score for each structure pair and eliminating impostor matched pairs with low scores. As it is generally assumed that there is only linear distortion in a local area, the similarity is defined by rotation and shifting. 3) Voting for the remaining matched pairs according to the compatibility between them, and eliminating impostor matched pairs which gain few votes. The concept of compatibility was first introduced by Yansong Feng [4]; the original definition is based only on triangles. We define compatibility for star structures to suit our proposed algorithm. 4) Computing the matching score, based on the number of matched structures and their voting scores. The score also reflects the fact that minutiae matching in denser areas should score higher. Experiments on FVC 2004 show both the effectiveness and efficiency of our method.

  16. SharP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkata, Manjunath Gorentla; Aderholdt, William F

    The pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in system architecture is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such a system typically has a high-performing network and a compute accelerator. This system architecture is effective not only for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. SharP is a programming abstraction that addresses this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data-structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.

  17. User's guide to the Octopus computer network (the SHOC manual)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, C.; Thompson, D.; Whitten, G.

    1977-07-18

    This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; "quick" methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 16 figures, 7 tables.

  18. User's guide to the Octopus computer network (the SHOC manual)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, C.; Thompson, D.; Whitten, G.

    1976-10-07

    This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; "quick" methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 8 figures, 4 tables.

  19. User's guide to the Octopus computer network (the SHOC manual)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, C.; Thompson, D.; Whitten, G.

    1975-06-02

    This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; "quick" methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers and a broad array of peripheral equipment, from any of 800 remote terminals. Octopus will soon include the Laboratory's STAR-100 computers. 9 figures, 5 tables. (auth)

  20. Software Reviews.

    ERIC Educational Resources Information Center

    Kimball, Jeffrey P.; And Others

    1987-01-01

    Describes a variety of computer software. The packages reviewed include a variety of simulations, a spreadsheet, a printer driver, and an alternative operating system for IBM PCs and compatibles. (BSR)

  1. 76 FR 4456 - Privacy Act of 1974; Report of Modified or Altered System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ..., computer systems analysis and computer programming services. The contractors promptly return data entry... Use Disclosures of Data in the System The Privacy Act allows us to disclose information without an...) for which the information was collected. Any such compatible use of data is known as a "routine use"...

  2. 76 FR 4435 - Privacy Act of 1974; Report of Modified or Altered System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... entry, computer systems analysis and computer programming services. The contractors promptly return data.... Proposed Routine Use Disclosures of Data in the System This System of Records contains information such as... compatible use of data is known as a "routine use". The routine uses proposed for this System are...

  3. The Operating System Jungle: Finding a Common Path Keeps Getting More Difficult.

    ERIC Educational Resources Information Center

    Pournelle, Jerry

    1984-01-01

    Describes the computer field before the advent of CP/M (Control Program for Microcomputers), an operating system which facilitated compatibility between different computers. CP/M's functions and flaws and the advent of Apple DOS and UCSD Pascal, two additional widely used operating systems, and the significance of their development are also…

  4. Math Activities Using LogoWriter--Patterns and Designs.

    ERIC Educational Resources Information Center

    Flewelling, Gary

    This book is one in a series of teacher resource books developed to: (1) rescue students from the clutches of computers that drill and control; and (2) supply teachers with computer activities compatible with a mathematics program that emphasizes investigation, problem solving, creativity, and hypothesis making and testing. This is not a book…

  5. Math Activities Using LogoWriter--Numbers & Operations.

    ERIC Educational Resources Information Center

    Flewelling, Gary

    This book is one in a series of teacher resource books developed to: (1) rescue students from the clutches of computers that drill and control; and (2) supply teachers with computer activities compatible with a mathematics program that emphasizes investigation, problem solving, creativity, and hypothesis making and testing. This is not a book…

  6. Massively parallel quantum computer simulator

    NASA Astrophysics Data System (ADS)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray X1E, an SGI Altix 3700, and clusters of PCs running Windows XP. We study the performance of the software by simulating quantum computers containing up to 36 qubits, using up to 4096 processors and up to 1 TB of memory. Our results demonstrate that the simulator exhibits nearly ideal scaling as a function of the number of processors and suggest that the simulation software described in this paper may also serve as a benchmark for testing high-end parallel computers.
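
    The kernel such a simulator parallelizes is small. This single-node NumPy sketch (an illustration, not the paper's MPI code) applies a one-qubit gate to an n-qubit state vector by exposing that qubit as its own axis.

    ```python
    # Apply a one-qubit gate to qubit t of an n-qubit register.
    import numpy as np

    def apply_1q(state, gate, t, n):
        psi = state.reshape(2 ** t, 2, 2 ** (n - t - 1))  # qubit t = middle axis
        return np.einsum("ab,ibj->iaj", gate, psi).reshape(-1)

    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # Hadamard gate
    n = 3
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                        # |000>
    for t in range(n):
        state = apply_1q(state, H, t, n)  # uniform superposition over 8 states
    print(np.round(state.real, 3))        # eight amplitudes of 1/sqrt(8) ~ 0.354
    ```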

  7. JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.

    PubMed

    Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J

    2010-04-01

    The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.

  8. Management Sciences Division Annual Report (10th)

    DTIC Science & Technology

    1993-01-01

    of the Weapon System Management Information System (WSMIS). The Aircraft Sustainability Model (ASM) is the computational technique employed by... provisioning. We enhanced the capabilities of RBIRD by using the Aircraft Sustainability Model (ASM) for the spares calculation. ASM offers many... ASM for several years to compute spares for war. It is also fully compatible with the Air Force's peacetime spares computation system (D041). This

  9. x-y-recording in transmission electron microscopy. A versatile and inexpensive interface to personal computers with application to stereology.

    PubMed

    Rickmann, M; Siklós, L; Joó, F; Wolff, J R

    1990-09-01

    An interface for IBM XT/AT-compatible computers is described which has been designed to read the actual specimen stage position of electron microscopes. The complete system consists of (i) optical incremental encoders attached to the x- and y-stage drivers of the microscope, (ii) two keypads for operator input, (iii) an interface card fitted to the bus of the personal computer, (iv) a standard configuration IBM XT (or compatible) personal computer optionally equipped with a (v) HP Graphic Language controllable colour plotter. The small size of the encoders and their connection to the stage drivers by simple ribbed belts allows an easy adaptation of the system to most electron microscopes. Operation of the interface card itself is supported by any high-level language available for personal computers. By the modular concept of these languages, the system can be customized to various applications, and no computer expertise is needed for actual operation. The present configuration offers an inexpensive attachment, which covers a wide range of applications from a simple notebook to high-resolution (200-nm) mapping of tissue. Since section coordinates can be processed in real-time, stereological estimations can be derived directly "on microscope". This is exemplified by an application in which particle numbers were determined by the disector method.

  10. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  11. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  12. ATLAS@Home: Harnessing Volunteer Computing for HEP

    NASA Astrophysics Data System (ADS)

    Adam-Bourdarios, C.; Cameron, D.; Filipčič, A.; Lancon, E.; Wu, W.; ATLAS Collaboration

    2015-12-01

    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.

  13. Understanding the Performance and Potential of Cloud Computing for Scientific Applications

    DOE PAGES

    Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin; ...

    2015-02-19

    Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources; however, not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in context to price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.

  14. Understanding the Performance and Potential of Cloud Computing for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadooghi, Iman; Martin, Jesus Hernandez; Li, Tonglin

    Commercial clouds bring a great opportunity to the scientific computing area. Scientific applications usually require significant resources; however, not all scientists have access to sufficient high-end computing systems, many of which can be found in the Top500 list. Cloud computing has gained the attention of scientists as a competitive resource to run HPC applications at a potentially lower cost. But as a different infrastructure, it is unclear whether clouds are capable of running scientific applications with a reasonable performance per money spent. This work studies the performance of public clouds and places this performance in context to price. We evaluate the raw performance of different services of the AWS cloud in terms of the basic resources, such as compute, memory, network and I/O. We also evaluate the performance of scientific applications running in the cloud. This paper aims to assess the ability of the cloud to perform well, as well as to evaluate the cost of the cloud running scientific applications. We developed a full set of metrics and conducted a comprehensive performance evaluation over the Amazon cloud. We evaluated EC2, S3, EBS and DynamoDB among the many Amazon AWS services. We evaluated the memory sub-system performance with CacheBench, the network performance with iperf, processor and network performance with the HPL benchmark application, and shared storage with NFS and PVFS in addition to S3. We also evaluated a real scientific computing application through the Swift parallel scripting system at scale. Armed with both detailed benchmarks to gauge expected performance and a detailed monetary cost analysis, we expect this paper will be a recipe cookbook for scientists to help them decide where to deploy and run their scientific applications between public clouds, private clouds, or hybrid clouds.

  15. ScreenRecorder: A Utility for Creating Screenshot Video Using Only Original Equipment Manufacturer (OEM) Software on Microsoft Windows Systems

    DTIC Science & Technology

    2015-01-01

    class within Microsoft Visual Studio. It has been tested on and is compatible with Microsoft Vista, 7, and 8 and Visual Studio Express 2008... the ScreenRecorder utility assumes a basic understanding of compiling and running C++ code within Microsoft Visual Studio. This report does not... of Microsoft Visual Studio, the ScreenRecorder utility was developed as a C++ class that can be compiled as a library (static or dynamic) to be

  16. Optical multilayers with an amorphous fluoropolymer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, R.; Loomis, G.E.; Lindsey, E.F.

    1994-07-01

    Multilayered coatings were made by physical vapor deposition (PVD) of a perfluorinated amorphous polymer, Teflon AF2400, together with other optical materials. A high reflector at 1064 nm was made with ZnS and AF2400. An all-organic 1064-nm reflector was made from AF2400 and polyethylene. Oxide (HfO₂, SiO₂) compatibility was also tested. Each multilayer system adhered to itself. The multilayers were influenced by coating stress and unintentional temperature rises during PVD deposition.

  17. Automatic EMC test system

    NASA Astrophysics Data System (ADS)

    Kasai, R.; Abe, T.; Sano, T.

    Automated electromagnetic compatibility (EMC) tests for spacecraft hardware are described. EMC tests are divided into three categories: compensating measurement and calibration errors, comparison of test results with specification, and fine-frequency searching using predictive interference analysis. The automated system features an RF receiver and transmitter, a control system, and antennas. Trials are run with conducted and radiated emissions and conducted and radiated susceptibility over a frequency range of 0.1-40 GHz with narrow, broad and random broad band noise. The system meets military specifications 1541, 461, and 462.

  18. RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices

    PubMed Central

    Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B.

    2018-01-01

    Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence are still lacking the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily take advantage of the heterogeneous computing resources on mobile devices without the need of any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on the CPU only versus using heterogeneous computing resources. Our results show that GPUs on the phones are capable of offering substantial performance gains in matrix multiplication on mobile devices. Therefore, models that involve multiplication of large matrices can run much faster (approx. 3 times faster in our experiments) due to GPU support. PMID:29629431

  19. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  20. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    NASA Astrophysics Data System (ADS)

    Adam, C.; Barberis, D.; Crépé-Renaudin, S.; De, K.; Fassi, F.; Stradling, A.; Svatos, M.; Vartapetian, A.; Wolters, H.

    2017-10-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a person of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD's former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates communication between the ADC experts team and the other ADC shifters. These include the Distributed Analysis Support Team (DAST), which is the first point of contact for addressing all distributed analysis questions, and the ATLAS Distributed Computing Shifters (ADCoS), which check and report problems in central services, sites, Tier-0 export, data transfers and production tasks. Finally, the CRC looks at the level of ADC activities on a weekly or monthly timescale to ensure that ADC resources are used efficiently.

  1. RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices.

    PubMed

    Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B

    2017-06-01

    Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence are still lacking the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leveraged the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily take advantage of the heterogeneous computing resources on mobile devices without the need of any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on the CPU only versus using heterogeneous computing resources. Our results show that GPUs on the phones are capable of offering substantial performance gains in matrix multiplication on mobile devices. Therefore, models that involve multiplication of large matrices can run much faster (approx. 3 times faster in our experiments) due to GPU support.

  2. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    NASA Technical Reports Server (NTRS)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools that are used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is called open source, meaning that anyone can edit the source code to make modifications and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format that is being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are being loaded and values are being calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to render a computer's graphics; however, in recent years, GPUs have been used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they would require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly perform more complex computations.

  3. Surveillance of industrial processes with correlated parameters

    DOEpatents

    White, A.M.; Gross, K.C.; Kubic, W.L.; Wigeland, R.A.

    1996-12-17

    A system and method for surveillance of an industrial process are disclosed. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions. 10 figs.

  4. TM digital image products for applications. [computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Barker, J. L.; Gunther, F. J.; Abrams, R. B.; Ball, D.

    1984-01-01

    The image characteristics of digital data generated by the LANDSAT 4 thematic mapper (TM) are discussed. Digital data from the TM resides in tape files at various stages of image processing. Within each image data file, the image lines are blocked by a factor of either 5 for a computer-compatible tape (CCT-BT) or 4 for a CCT-AT and CCT-PT; each product has a different image-file format. Nominal geometric corrections which provide proper geodetic relationships between different parts of the image are available only for the CCT-PT. It is concluded that detector 3 of band 5 on the TM does not respond; this channel of data needs replacement. The empty-bin phenomenon in CCT-AT images results from integer truncation in mixed-mode arithmetic operations.

  5. Flood-plain delineation for Occoquan River, Wolf Run, Sandy Run, Elk Horn Run, Giles Run, Kanes Creek, Racoon Creek, and Thompson Creek, Fairfax County, Virginia

    USGS Publications Warehouse

    Soule, Pat LeRoy

    1978-01-01

    Water-surface profiles of the 25-, 50-, and 100-year recurrence interval discharges have been computed for all streams and reaches of channels in Fairfax County, Virginia, having a drainage area greater than 1 square mile except for Dogue Creek, Little Hunting Creek, and that portion of Cameron Run above Lake Barcroft. Maps having a 2-foot contour interval and a horizontal scale of 1 inch equals 100 feet were used for base on which flood boundaries were delineated for 25-, 50-, and 100-year floods to be expected in each basin under ultimate development conditions. This report is one of a series and presents a discussion of techniques employed in computing discharges and profiles as well as the flood profiles and maps on which flood boundaries have been delineated for the Occoquan River and its tributaries within Fairfax County and those streams on Mason Neck within Fairfax County tributary to the Potomac River. (Woodard-USGS)
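
    For readers outside hydrology, the recurrence intervals quoted above translate into horizon probabilities by standard arithmetic (a background fact, not taken from the report): a T-year flood is equaled or exceeded with probability 1/T in any one year, so over n years the chance of at least one exceedance is 1 − (1 − 1/T)ⁿ.

    ```python
    # Probability of at least one T-year flood over a 30-year horizon.
    for T in (25, 50, 100):
        p30 = 1.0 - (1.0 - 1.0 / T) ** 30
        print(f"{T}-year flood over a 30-year horizon: {p30:.0%}")
    # 25-year: ~71%, 50-year: ~45%, 100-year: ~26%
    ```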

  6. ACON: a multipurpose production controller for plasma physics codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snell, C.

    1983-01-01

    ACON is a BCON controller designed to run large production codes on the CTSS Cray-1 or the LTSS 7600 computers. ACON can also be operated interactively, with input from the user's terminal. The controller can run one code or a sequence of up to ten codes during the same job. Options are available to get and save Mass storage files, to perform Historian file updating operations, to compile and load source files, and to send out print and film files. Special features include ability to retry after Mass failures, backup options for saving files, startup messages for the various codes, and ability to reserve specified amounts of computer time after successive code runs. ACON's flexibility and power make it useful for running a number of different production codes.

  7. LANDSAT-D accelerated payload correction subsystem output computer compatible tape format

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The NASA GSFC LANDSAT-D Ground Segment (GS) is developing an Accelerated Payload Correction Subsystem (APCS) to provide Thematic Mapper (TM) image correction data to be used outside the GS. This correction data is computed from a subset of the TM Payload Correction Data (PCD), which is downlinked from the spacecraft in a 32 Kbps data stream, and mirror scan correction data (MSCD), which is extracted from the wideband video data. This correction data is generated in the GS Thematic Mapper Mission Management Facility (MMF-T) and is recorded on a 9-track, 1600 bit-per-inch computer compatible tape (CCT). This CCT is known as an APCS Output CCT (AOT). The AOT follows standardized conventions with respect to data formats, record construction, and record identification. Applicable documents are delineated; common conventions which are used in further defining the structure, format, and content of the AOT are defined; and the structure and content of the AOT are described.

  8. Calculation of the flow field including boundary layer effects for supersonic mixed compression inlets at angles of attack

    NASA Technical Reports Server (NTRS)

    Vadyak, J.; Hoffman, J. D.

    1982-01-01

    The flow field in supersonic mixed compression aircraft inlets at angle of attack is calculated. A zonal modeling technique is employed to obtain the solution which divides the flow field into different computational regions. The computational regions consist of a supersonic core flow, boundary layer flows adjacent to both the forebody/centerbody and cowl contours, and flow in the shock wave boundary layer interaction regions. The zonal modeling analysis is described and some computational results are presented. The governing equations for the supersonic core flow form a hyperbolic system of partial differential equations. The equations for the characteristic surfaces and the compatibility equations applicable along these surfaces are derived. The characteristic surfaces are the stream surfaces, which are surfaces composed of streamlines, and the wave surfaces, which are surfaces tangent to a Mach conoid. The compatibility equations are expressed as directional derivatives along streamlines and bicharacteristics, which are the lines of tangency between a wave surface and a Mach conoid.

  9. Monitoring Error Rates In Illumina Sequencing.

    PubMed

    Manley, Leigh J; Ma, Duanduan; Levine, Stuart S

    2016-12-01

    Guaranteeing high-quality next-generation sequencing data in a rapidly changing environment is an ongoing challenge. The introduction of the Illumina NextSeq 500 and the deprecation of specific metrics from Illumina's Sequencing Analysis Viewer (SAV; Illumina, San Diego, CA, USA) have made it more difficult to determine directly the baseline error rate of sequencing runs. To improve our ability to measure base quality, we have created an open-source tool to construct the Percent Perfect Reads (PPR) plot, previously provided by the Illumina sequencers. The PPR program is compatible with HiSeq 2000/2500, MiSeq, and NextSeq 500 instruments and provides an alternative to Illumina's quality value (Q) scores for determining run quality. Whereas Q scores are representative of run quality, they are often overestimated and are sourced from different look-up tables for each platform. The PPR's unique capabilities as a cross-instrument comparison device, as a troubleshooting tool, and as a tool for monitoring instrument performance can provide an increase in clarity over SAV metrics that is often crucial for maintaining instrument health. These capabilities are highlighted.
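
    The statistic is straightforward to compute once reads are aligned to a control reference. The sketch below is a schematic reconstruction on synthetic single-character base arrays; the published tool's exact definition and alignment handling may differ.

    ```python
    # Percent of reads with no mismatches through each sequencing cycle.
    import numpy as np

    def ppr_curve(reads, ref):
        # reads: (n_reads, n_cycles) base characters; ref: (n_cycles,) reference
        mismatch = reads != ref
        perfect_prefix = ~np.cumsum(mismatch, axis=1).astype(bool)
        return 100.0 * perfect_prefix.mean(axis=0)  # % perfect through cycle

    rng = np.random.default_rng(4)
    ref = rng.choice(list("ACGT"), size=100)
    reads = np.tile(ref, (1000, 1))
    reads[rng.random(reads.shape) < 0.002] = "N"    # 0.2% per-cycle error rate
    print(ppr_curve(reads, ref)[[24, 49, 99]])      # PPR at cycles 25, 50, 100
    ```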

  10. An efficient, modular and simple tape archiving solution for LHC Run-3

    NASA Astrophysics Data System (ADS)

    Murray, S.; Bahyl, V.; Cancio, G.; Cano, E.; Kotlyar, V.; Kruse, D. F.; Leduc, J.

    2017-10-01

    The IT Storage group at CERN develops the software responsible for archiving to tape the custodial copy of the physics data generated by the LHC experiments. Physics run 3 will start in 2021 and will introduce two major challenges for which the tape archive software must be evolved. Firstly the software will need to make more efficient use of tape drives in order to sustain the predicted data rate of 150 petabytes per year as opposed to the current 50 petabytes per year. Secondly the software will need to be seamlessly integrated with EOS, which has become the de facto disk storage system provided by the IT Storage group for physics data. The tape storage software for LHC physics run 3 is code named CTA (the CERN Tape Archive). This paper describes how CTA will introduce a pre-emptive drive scheduler to use tape drives more efficiently, will encapsulate all tape software into a single module that will sit behind one or more EOS systems, and will be simpler by dropping support for obsolete backwards compatibility.

  11. QuTiP 2: A Python framework for the dynamics of open quantum systems

    NASA Astrophysics Data System (ADS)

    Johansson, J. R.; Nation, P. D.; Nori, Franco

    2013-04-01

    We present version 2 of QuTiP, the Quantum Toolbox in Python. Compared to the preceding version [J.R. Johansson, P.D. Nation, F. Nori, Comput. Phys. Commun. 183 (2012) 1760], we have introduced numerous new features, enhanced performance, and made changes in the Application Programming Interface (API) for improved functionality and consistency within the package, as well as increased compatibility with existing conventions used in other scientific software packages for Python. The most significant new features include efficient solvers for arbitrary time-dependent Hamiltonians and collapse operators, support for the Floquet formalism, and new solvers for Bloch-Redfield and Floquet-Markov master equations. Here we introduce these new features, demonstrate their use, and give a summary of the important backward-incompatible API changes introduced in this version. Catalog identifier: AEMB_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMB_v2_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3. No. of lines in distributed program, including test data, etc.: 33625. No. of bytes in distributed program, including test data, etc.: 410064. Distribution format: tar.gz. Programming language: Python. Computer: i386, x86-64. Operating system: Linux, Mac OSX. RAM: 2+ Gigabytes. Classification: 7. External routines: NumPy, SciPy, Matplotlib, Cython. Catalog identifier of previous version: AEMB_v1_0. Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 1760. Does the new version supersede the previous version?: Yes. Nature of problem: Dynamics of open quantum systems. Solution method: Numerical solutions to Lindblad, Floquet-Markov, and Bloch-Redfield master equations, as well as the Monte Carlo wave function method. Reasons for new version: numerous new features, enhanced performance, and API changes for improved functionality, consistency, and compatibility with conventions used in other scientific software packages for Python, as summarized above. Restrictions: Problems must meet the criteria for using the master equation in Lindblad, Floquet-Markov, or Bloch-Redfield form. Running time: A few seconds up to several tens of hours, depending on the size of the underlying Hilbert space.
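
    For example, the time-dependent solver interface added in version 2 accepts a Hamiltonian as a list of [operator, coefficient-string] terms. A minimal sketch of a driven, damped qubit (all parameter values here are arbitrary illustrations):

      import numpy as np
      from qutip import basis, sigmax, sigmaz, sigmam, mesolve

      # H(t) = (Delta/2) sigma_z + (A/2) cos(w t) sigma_x
      H = [0.5 * sigmaz(), [0.25 * sigmax(), 'cos(w*t)']]
      psi0 = basis(2, 0)                  # start in the ground state
      tlist = np.linspace(0.0, 50.0, 500)
      c_ops = [np.sqrt(0.05) * sigmam()]  # collapse operator: relaxation
      result = mesolve(H, psi0, tlist, c_ops, [sigmaz()], args={'w': 1.0})
      print(result.expect[0][:5])         # <sigma_z>(t) at the first few times

    The string coefficient is compiled rather than interpreted, which is one of the mechanisms behind the solver speedups the authors describe.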

  12. CMGTooL user's manual

    USGS Publications Warehouse

    Xu, Jingping; Lightsom, Fran; Noble, Marlene A.; Denham, Charles

    2002-01-01

    During the past several years, the sediment transport group in the Coastal and Marine Geology Program (CMGP) of the U. S. Geological Survey has made major revisions to its methodology of processing, analyzing, and maintaining the variety of oceanographic time-series data. First, CMGP completed the transition of its oceanographic time-series database to a self-documenting NetCDF (Rew et al., 1997) data format. Second, CMGP’s oceanographic data variety and complexity have been greatly expanded from traditional 2-dimensional, single-point time-series measurements (e.g., electromagnetic current meters, transmissometers) to more advanced 3-dimensional and profiling time-series measurements, due to many new acquisitions of modern instruments such as the Acoustic Doppler Current Profiler (RDI, 1996), Acoustic Doppler Velocimeter, Pulse-Coherent Acoustic Doppler Profiler (SonTek, 2001), and Acoustic Backscatter Sensor (Aquatec). In order to accommodate the NetCDF format of data from the new instruments, a software package for processing, analyzing, and visualizing time-series oceanographic data was developed. It is named CMGTooL. The CMGTooL package contains two basic components: a user-friendly GUI for NetCDF file analysis, processing and manipulation; and a data analyzing program library. Most of the routines in the library are stand-alone programs suitable for batch processing. CMGTooL is written in the MATLAB computing language (The Mathworks, 1997); therefore, users must have MATLAB installed on their computer in order to use this software package. In addition, MATLAB’s Signal Processing Toolbox is also required by some CMGTooL routines. Like most MATLAB programs, all CMGTooL codes are compatible with different computing platforms including PC, MAC, and UNIX machines (note: CMGTooL has been tested on platforms that run MATLAB 5.2 (Release 10) or lower versions; some of the commands related to MAC may not be compatible with later releases of MATLAB). The GUI and some of the library routines call low-level NetCDF file I/O, variable and attribute functions. These NetCDF-exclusive functions are supported by a MATLAB toolbox named NetCDF, created by Dr. Charles Denham. This toolbox has to be installed in order to use the CMGTooL GUI. The CMGTooL GUI calls several routines that were initially developed by others. The authors would like to acknowledge the following scientists for their ideas and codes: Dr. Rich Signell (USGS), Dr. Chris Sherwood (USGS), and Dr. Bob Beardsley (WHOI). Many special terms that carry special meanings in either MATLAB or the NetCDF Toolbox are used in this manual. Users are encouraged to read the documents of MATLAB and NetCDF for references.

  13. Time of Day Management for Satellite Communications

    DTIC Science & Technology

    2002-12-01

    with results showing the time management of the satellite oscillators and the cesium-beam clocks. A Windows-compatible computer program written to facilitate the clock management will also be described.

  14. CSDMS2.0: Computational Infrastructure for Community Surface Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Syvitski, J. P.; Hutton, E.; Peckham, S. D.; Overeem, I.; Kettner, A.

    2012-12-01

    The Community Surface Dynamic Modeling System (CSDMS) is an NSF-supported, international and community-driven program that seeks to transform the science and practice of earth-surface dynamics modeling. CSDMS integrates a diverse community of more than 850 geoscientists representing 360 international institutions (academic, government, industry) from 60 countries and is supported by a CSDMS Interagency Committee (22 Federal agencies) and a CSDMS Industrial Consortium (18 companies). CSDMS presently distributes more than 200 open-source models and modeling tools, provides access to high-performance computing clusters in support of developing and running models, and offers a suite of products for education and knowledge transfer. CSDMS software architecture employs frameworks and services that convert stand-alone models into flexible "plug-and-play" components to be assembled into larger applications. CSDMS2.0 will support model applications within a web browser, on a wider variety of computational platforms, and on other high performance computing clusters to ensure robustness and sustainability of the framework. Conversion of stand-alone models into "plug-and-play" components will employ automated wrapping tools. Methods for quantifying model uncertainty are being adapted as part of the modeling framework. Benchmarking data is being incorporated into the CSDMS modeling framework to support model inter-comparison. Finally, a robust mechanism for ingesting and utilizing semantic mediation databases is being developed within the Modeling Framework. Six new community initiatives are being pursued: 1) an earth-ecosystem modeling initiative to capture ecosystem dynamics and ensuing interactions with landscapes, 2) a geodynamics initiative to investigate the interplay among climate, geomorphology, and tectonic processes, 3) an Anthropocene modeling initiative, to incorporate mechanistic models of human influences, 4) a coastal vulnerability modeling initiative, with emphasis on deltas and their multiple threats and stressors, 5) a continental margin modeling initiative, to capture extreme oceanic and atmospheric events generating turbidity currents in the Gulf of Mexico, and 6) a CZO Focus Research Group, to develop compatibility between CSDMS architecture and protocols and Critical Zone Observatory-developed models and data.
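
    The "plug-and-play" component idea can be illustrated with a minimal contract in the spirit of the CSDMS Basic Model Interface; the method names and coupling loop below are simplified illustrations, not CSDMS code:

      class Component:
          # Minimal plug-and-play contract (illustrative sketch).
          def initialize(self, config_file): ...
          def update(self): ...              # advance the model one time step
          def finalize(self): ...
          def get_value(self, name): ...     # fetch an exchange item (array)
          def set_value(self, name, value): ...

      def couple(producer, consumer, out_name, in_name, n_steps):
          # A framework can wire any two conforming components together
          # without knowing anything about their internals.
          producer.initialize("producer.cfg")
          consumer.initialize("consumer.cfg")
          for _ in range(n_steps):
              producer.update()
              consumer.set_value(in_name, producer.get_value(out_name))
              consumer.update()
          producer.finalize()
          consumer.finalize()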

  15. Xyce parallel electronic simulator : users' guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; (2) improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.

  16. Experimental Realization of High-Efficiency Counterfactual Computation.

    PubMed

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-21

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  17. Experimental Realization of High-Efficiency Counterfactual Computation

    NASA Astrophysics Data System (ADS)

    Kong, Fei; Ju, Chenyong; Huang, Pu; Wang, Pengfei; Kong, Xi; Shi, Fazhan; Jiang, Liang; Du, Jiangfeng

    2015-08-01

    Counterfactual computation (CFC) exemplifies the fascinating quantum process by which the result of a computation may be learned without actually running the computer. In previous experimental studies, the counterfactual efficiency is limited to below 50%. Here we report an experimental realization of the generalized CFC protocol, in which the counterfactual efficiency can break the 50% limit and even approach unity in principle. The experiment is performed with the spins of a negatively charged nitrogen-vacancy color center in diamond. Taking advantage of the quantum Zeno effect, the computer can remain in the not-running subspace due to the frequent projection by the environment, while the computation result can be revealed by final detection. A counterfactual efficiency of up to 85% has been demonstrated in our experiment, which opens the possibility of many exciting applications of CFC, such as high-efficiency quantum integration and imaging.

  18. Running of scalar spectral index in multi-field inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Jinn-Ouk, E-mail: jinn-ouk.gong@apctp.org

    We compute the running of the scalar spectral index in general multi-field slow-roll inflation. By incorporating explicit momentum dependence at the moment of horizon crossing, we can find the running straightforwardly. At the same time, we can distinguish the contributions from the quasi de Sitter background and the super-horizon evolution of the field fluctuations.
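
    In standard notation (not reproduced from the paper), the index and its running are successive logarithmic derivatives of the curvature power spectrum:

      \[
      n_s(k) - 1 \equiv \frac{d \ln \mathcal{P}_\zeta}{d \ln k}, \qquad
      \alpha_s \equiv \frac{d n_s}{d \ln k}, \qquad
      \mathcal{P}_\zeta(k) = \mathcal{P}_\zeta(k_*)
      \left( \frac{k}{k_*} \right)^{n_s - 1 + \frac{1}{2} \alpha_s \ln (k/k_*)},
      \]

    so keeping the explicit momentum dependence at horizon crossing, as the abstract describes, is what makes the second derivative straightforward to extract.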

  19. Beauty and the beast: Some perspectives on efficient model analysis, surrogate models, and the future of modeling

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.

    2015-12-01

    For many environmental systems, model runtimes have remained very long as more capable computers have been used to add more processes and finer time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by the number of model runs, divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal the consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing these challenges include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related to the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters of interest. How does the information obtained from the local methods typical of (2) compare with that from the globally averaged methods typical of (3) for typical systems? The discussion will use examples of the response of the Greenland glacier to global warming and surface and groundwater modeling.

  20. Applied Computational Fluid Dynamics in Support of Aircraft/Store Compatibility and Weapons Integration -2007 Edition

    DTIC Science & Technology

    2007-06-01

    Fragments recovered from the report excerpt: "CFD in the AFSEO production environment ... include the requirement for rapid ..."; "The SDB, designated GBU-39/B, contains a 250-pound warhead, measures 6 feet ..."; "... Rocket Assisted Take Off (RATO) Separation." ITEA Conference, Apr 2006. Figure captions: Figure 1, GBU-39/B small diameter bomb computational model; Figure 3, B-52/MOP; Figure 4, MOP.

  1. Constructivist-Compatible Beliefs and Practices among U.S. Teachers. Teaching, Learning, and Computing: 1998 National Survey Report #4.

    ERIC Educational Resources Information Center

    Ravitz, Jason L.; Becker, Henry Jay; Wong, YanTien

    This report, the fourth in a series from the spring 1998 national survey, "Teaching, Learning, and Computing," examines teachers' survey responses that describe the frequency with which their teaching practice involves those five types of activities and the frequency with which their practice involves more traditional transmission and…

  2. ROMI-RIP: Rough Mill RIP-first simulator user's guide

    Treesearch

    R. Edward Thomas

    1995-01-01

    The ROugh Mill RIP-first simulator (ROMI-RIP) is a computer software package for IBM compatible personal computers that simulates current industrial practices for gang-ripping lumber. This guide shows the user how to set up simulations of current or proposed mill practices and examine their results. ROMI-RIP accepts cutting bills with up to 300 different part...

  3. Automated Estimation Of Software-Development Costs

    NASA Technical Reports Server (NTRS)

    Roush, George B.; Reini, William

    1993-01-01

    COSTMODL is automated software development-estimation tool. Yields significant reduction in risk of cost overruns and failed projects. Accepts description of software product developed and computes estimates of effort required to produce it, calendar schedule required, and distribution of effort and staffing as function of defined set of development life-cycle phases. Written for IBM PC(R)-compatible computers.

  4. Comparison of two computer codes for crack growth analysis: NASCRAC Versus NASA/FLAGRO

    NASA Technical Reports Server (NTRS)

    Stallworth, R.; Meyers, C. A.; Stinson, H. C.

    1989-01-01

    Results are presented from a comparison study of two computer codes for crack growth analysis, NASCRAC and NASA/FLAGRO. The two codes gave compatibly conservative results when their part-through-crack solutions were compared against experimental test data. For the through-crack-at-a-lug solution, the codes correlated well, with NASA/FLAGRO giving the more conservative results.

  5. Generalized environmental control and life support system computer program (G189A) configuration control, phase 2

    NASA Technical Reports Server (NTRS)

    Mcenulty, R. E.

    1977-01-01

    The G189A simulation of the Shuttle Orbiter ECLSS was upgraded. All simulation library versions and simulation models were converted from the EXEC2 to the EXEC8 computer system, and a new program, G189PL, was added to the combination master program library. The program permits the post-plotting of up to 100 frames of plot data over any time interval of a G189 simulation run. The overlay structure of the G189A simulations was restructured to conserve computer core and minimize run time.

  6. INHYD: Computer code for intraply hybrid composite design. A users manual

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Sinclair, J. H.

    1983-01-01

    A computer program (INHYD) was developed for intraply hybrid composite design. A user's manual for INHYD is presented. INHYD embodies several composite micromechanics theories, intraply hybrid composite theories, and an integrated hygrothermomechanical theory. It can be run in both interactive and batch modes and has considerable flexibility and capability, which the user can exercise through several options. These options are demonstrated through appropriate INHYD runs in the manual.

  7. Topology Optimization for Reducing Additive Manufacturing Processing Distortions

    DTIC Science & Technology

    2017-12-01

    features that curl or warp under thermal load and are subsequently struck by the recoater blade/roller. Support structures act to wick heat away and ... was run for 150 iterations. The material properties for all examples were Young's modulus E = 1 GPa, Poisson's ratio ν = 0.25, and thermal expansion ... the element-birth model is significantly more computationally expensive for a full optimization run. Consider the computational complexity of a

  8. MindModeling@Home . . . and Anywhere Else You Have Idle Processors

    DTIC Science & Technology

    2009-12-01

    was SETI@Home. It was established in 1999 for the purpose of demonstrating the utility of “distributed grid computing” by providing a mechanism for...the public imagination, and SETI@Home remains the longest running and one of the most popular volunteer computing projects in the world. This...pursuits. Most of them, including SETI@Home, run on a software architecture called the Berkeley Open Infrastructure for Network Computing (BOINC). Some of

  9. Robust Optimization Design for Turbine Blade-Tip Radial Running Clearance using Hierarchically Response Surface Method

    NASA Astrophysics Data System (ADS)

    Zhiying, Chen; Ping, Zhou

    2017-11-01

    To balance computational precision against efficiency in the robust optimization of complex mechanical assembly relationships such as turbine blade-tip radial running clearance, a hierarchical response surface robust optimization algorithm is proposed. The distributed collaborative response surface method is used to generate an assembly-system-level approximation model relating the overall parameters to blade-tip clearance; a set of samples of design parameters and of the objective response mean and/or standard deviation is then generated using this system approximation model and design-of-experiment methods. Finally, a new response surface approximation model is constructed from those samples and used for the robust optimization process. The analysis results demonstrate that the proposed method can dramatically reduce computational cost while preserving computational precision. The presented research offers an effective way to design turbine blade-tip radial running clearance robustly.
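
    A minimal sketch of the two-stage idea, with stand-in functions and numbers that are purely illustrative (the paper's actual models are not reproduced here): fit cheap response surfaces to sampled mean and standard deviation, then run the robust optimization on the surrogate.

      import numpy as np

      rng = np.random.default_rng(0)

      def clearance(x, xi):
          # Stand-in for the expensive assembly simulation: clearance as a
          # function of a design variable x and a random parameter xi.
          return (x - 1.5) ** 2 + 0.3 * xi * x + 0.2

      # Stage 1: design of experiments over x; sample xi to get mean/std
      xs = np.linspace(0.0, 3.0, 15)
      mean = [clearance(x, rng.normal(size=200)).mean() for x in xs]
      std = [clearance(x, rng.normal(size=200)).std() for x in xs]

      # Stage 2: quadratic response surfaces for mean and std, then
      # minimize the robust objective mean + 3*std on the cheap surrogate
      pm, ps = np.polyfit(xs, mean, 2), np.polyfit(xs, std, 2)
      grid = np.linspace(0.0, 3.0, 1000)
      robust = np.polyval(pm, grid) + 3.0 * np.polyval(ps, grid)
      print("robust optimum near x =", grid[np.argmin(robust)])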

  10. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems that require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend toward the exascale and resource limitations recede, the efficiency of large-scale simulations becomes the focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.

  11. DualSPHysics: A numerical tool to simulate real breakwaters

    NASA Astrophysics Data System (ADS)

    Zhang, Feng; Crespo, Alejandro; Altomare, Corrado; Domínguez, José; Marzeddu, Andrea; Shang, Shao-ping; Gómez-Gesteira, Moncho

    2018-02-01

    The open-source code DualSPHysics is used in this work to compute the wave run-up on an existing dike on the Chinese coast using realistic dimensions, bathymetry and wave conditions. The GPU computing power of DualSPHysics allows simulating real engineering problems that involve complex geometries with high resolution in a reasonable computational time. The code is first validated by comparing the numerical free-surface elevation, the wave orbital velocities and the time series of the run-up with physical data in a wave flume. These experiments include a smooth dike and an armored dike with two layers of cubic blocks. After validation, the code is applied to a real case to obtain the wave run-up under different incident wave conditions. In order to simulate the real open sea, the spurious reflections from the wavemaker are removed by using an active wave absorption technique.

  12. Prediction of sound radiated from different practical jet engine inlets

    NASA Technical Reports Server (NTRS)

    Zinn, B. T.; Meyer, W. L.

    1980-01-01

    Existing computer codes for calculating the far field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The codes were refined and expanded so that they are now computationally more efficient by a factor of about three and are capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated; these data are required as input for the programs that calculate the sound fields. The new geometry-generating program considerably reduces the time required to generate the input data, previously one of the most time-consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented, and comparisons of run times and accuracy are made between the old and upgraded codes. The overall accuracy of the computations is determined by comparing the results with simple source solutions.

  13. Shor's factoring algorithm and modern cryptography. An illustration of the capabilities inherent in quantum computers

    NASA Astrophysics Data System (ADS)

    Gerjuoy, Edward

    2005-06-01

    The security of messages encoded via the widely used RSA public key encryption system rests on the enormous computational effort required to find the prime factors of a large number N using classical (conventional) computers. In 1994 Peter Shor showed that for sufficiently large N, a quantum computer could perform the factoring with much less computational effort. This paper endeavors to explain, in a fashion comprehensible to the nonexpert, the RSA encryption protocol; the various quantum computer manipulations constituting the Shor algorithm; how the Shor algorithm performs the factoring; and the precise sense in which a quantum computer employing Shor's algorithm can be said to accomplish the factoring of very large numbers with less computational effort than a classical computer. It is made apparent that factoring N generally requires many successive runs of the algorithm. Our analysis reveals that the probability of achieving a successful factorization on a single run is about twice as large as commonly quoted in the literature.
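
    The classical half of the algorithm is compact: given the order r of a random base a modulo N (the quantity the quantum computer provides), the factors usually follow from a gcd. In the toy sketch below, a brute-force loop stands in for the quantum order-finding step:

      from math import gcd
      from random import randrange

      def order(a, N):
          # Brute-force stand-in for quantum order finding: smallest r
          # with a**r = 1 (mod N); only valid when gcd(a, N) == 1.
          r, x = 1, a % N
          while x != 1:
              x = (x * a) % N
              r += 1
          return r

      def shor_classical(N, tries=50):
          for _ in range(tries):
              a = randrange(2, N)
              g = gcd(a, N)
              if g > 1:
                  return g              # lucky guess already shares a factor
              r = order(a, N)
              if r % 2:
                  continue              # need an even order
              y = pow(a, r // 2, N)
              if y == N - 1:
                  continue              # a**(r/2) = -1 (mod N): trivial case
              return gcd(y - 1, N)      # otherwise this factor is nontrivial

      print(shor_classical(15))         # prints 3 or 5

    The retry loop mirrors the abstract's point that factoring N generally requires several runs, since odd orders and the -1 case force another draw of a.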

  14. Programming the social computer.

    PubMed

    Robertson, David; Giunchiglia, Fausto

    2013-03-28

    The aim of 'programming the global computer' was identified by Milner and others as one of the grand challenges of computing research. At the time this phrase was coined, it was natural to assume that this objective might be achieved primarily through extending programming and specification languages. The Internet, however, has brought with it a different style of computation that (although harnessing variants of traditional programming languages) operates in a style different to those with which we are familiar. The 'computer' on which we are running these computations is a social computer in the sense that many of the elementary functions of the computations it runs are performed by humans, and successful execution of a program often depends on properties of the human society over which the program operates. These sorts of programs are not programmed in a traditional way and may have to be understood in a way that is different from the traditional view of programming. This shift in perspective raises new challenges for the science of the Web and for computing in general.

  15. Glenn Heat Transfer Simulation and Solver Graphical User Interface: Development and Testing

    NASA Technical Reports Server (NTRS)

    Kardamis, Joseph R.

    2004-01-01

    In the Turbine Branch of the Turbomachinery and Propulsion Systems Division, researching and developing efficient turbine aerothermodynamics technologies is the main objective. Creating effective turbines for jet engines is a process which, if based purely on physical experimental testing, would be extremely expensive. It is for this reason, and also for the reasons of speed and ease, that the Turbine Branch spends a large amount of effort working with simulations of turbines. Specifically, they focus their work on two main fields: Computational Fluid Dynamics (CFD) and experimental data analysis. The experimental field involves comparing experimental results to simulated results, whereas the CFD field involves running these simulations. The simulations are applied to aerodynamics and heat transfer cases, for both steady and unsteady flow conditions. By and large this work is applied to the domain of flow and heat transfer in axial turbines. The main application used to run these heat flow simulations is GlennHT. This program, recently rewritten in FORTRAN 90, allows the user to input a job file which specifies all the necessary parameters needed to simulate flow through a user-defined grid. There are several other executables used as well, ranging in application from converting grid files to and from particular formats, to merging blocks in a connectivity file, to converting connectivity files to a GlennHT-compatible format. All of these executables are run from the command line in a terminal; some of them have interactive prompts where the user must specify the files to be manipulated after the program starts, while others take all of their parameters from the command line. With this amount of variation comes a good deal of commands and formats to memorize, which can cause slower and less efficient work, as users may forget how to execute a certain program, or not remember the pathnames of the files they wish to use. Two years ago, steps were made to expedite this process with a graphical user interface (GUI) that combines the functionality of all the executables along with adding some new functionality, such as residuals graphing and boundary conditions creation. Upon my beginning here at Glenn, many parts of the GUI, which was developed in Java, were nonfunctional. There were also issues with cross-platforming, as systems in the branch were transitioning from Silicon Graphics (SGI) machines to Linux machines. My goals this summer are to finish the parts of the GUI that are not yet completed, fix parts that did not work correctly, expand the functionality to include other useful features, such as grid surface highlighting, and make the system compatible with both Linux and SGI. I will also be heavily testing the system and providing sufficient documentation on how to use the GUI, as no such documentation existed previously.

  16. Computer program for the IBM personal computer which searches for approximate matches to short oligonucleotide sequences in long target DNA sequences.

    PubMed Central

    Myers, E W; Mount, D W

    1986-01-01

    We describe a program which may be used to find approximate matches to a short predefined DNA sequence in a larger target DNA sequence. The program predicts the usefulness of specific DNA probes and sequencing primers and finds nearly identical sequences that might represent the same regulatory signal. The program is written in the C programming language and will run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The program has been integrated into an existing software package for the IBM personal computer (see article by Mount and Conrad, this volume). Some examples of its use are given. PMID:3753785
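
    The core of such a search can be sketched with the classic dynamic program for approximate matching (shown here as an illustration; the published program's exact algorithm and options are not reproduced): report every position in the target where the probe matches with at most k differences.

      def approx_matches(probe, target, k):
          # Column j holds edit distances of probe prefixes against the
          # best substring of target ending at position j (top row is 0,
          # so a match may begin anywhere in the target).
          m = len(probe)
          prev = list(range(m + 1))
          hits = []
          for j, t in enumerate(target):
              curr = [0]
              for i in range(1, m + 1):
                  cost = 0 if probe[i - 1] == t else 1
                  curr.append(min(prev[i - 1] + cost,  # match / substitution
                                  prev[i] + 1,         # insertion
                                  curr[i - 1] + 1))    # deletion
              if curr[m] <= k:
                  hits.append((j, curr[m]))            # (end position, differences)
              prev = curr
          return hits

      # Find a BamHI-like site (GGATCC) with at most one difference:
      print(approx_matches("GGATCC", "TTGGATTCCAA", 1))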

  17. AGIS: Evolution of Distributed Computing information system for ATLAS

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabytes scale data operations. It has been evolved after the first period of LHC data taking (Run-1) in order to cope with new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  18. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    PubMed

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
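
    The client-driven pattern is easy to picture as a polling loop. The endpoints and JSON fields below are hypothetical stand-ins for illustration only, not JobCenter's actual API (JobCenter itself is written in Java):

      import json, subprocess, time, urllib.request

      SERVER = "http://server.example/api"   # hypothetical endpoint

      def poll_forever(worker_id, job_types):
          # The worker always initiates contact, so it can sit behind a
          # firewall or in the cloud; the server never needs to reach it.
          while True:
              req = urllib.request.Request(
                  SERVER + "/next-job",
                  data=json.dumps({"worker": worker_id,
                                   "types": job_types}).encode(),
                  headers={"Content-Type": "application/json"})
              with urllib.request.urlopen(req) as resp:
                  job = json.load(resp)
              if not job:                    # no work: back off, then retry
                  time.sleep(30)
                  continue
              rc = subprocess.call(job["command"], shell=True)
              report = json.dumps({"job": job["id"], "ok": rc == 0}).encode()
              urllib.request.urlopen(SERVER + "/report", data=report)

    Because each worker pulls only jobs whose types it advertises, load balancing falls out of the protocol, which matches the design described above.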

  19. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    ERIC Educational Resources Information Center

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  20. Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits

    NASA Technical Reports Server (NTRS)

    Driscoll, James F.; Feikema, Douglas A.

    2003-01-01

    This work has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. The Markstein number (Ma) was measured for an inwardly-propagating flame (IPF) that is negatively stretched under microgravity conditions. Computations also were performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame (OPF). The computed profiles of the various species within the flame suggest reasons: computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. To explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.

  1. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    NASA Astrophysics Data System (ADS)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  2. Scalable load balancing for massively parallel distributed Monte Carlo particle transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, M. J.; Brantley, P. S.; Joy, K. I.

    2013-07-01

    In order to run computer simulations efficiently on massively parallel computers with hundreds of thousands or millions of processors, care must be taken that the calculation is load balanced across the processors. Examining the workload of every processor leads to an unscalable algorithm, with run time at least as large as O(N), where N is the number of processors. We present a scalable load balancing algorithm, with run time O(log(N)), that involves iterated processor-pair-wise balancing steps, ultimately leading to a globally balanced workload. We demonstrate scalability of the algorithm up to 2 million processors on the Sequoia supercomputer at Lawrence Livermore National Laboratory.
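
    One plausible reading of "iterated processor-pair-wise balancing" is a hypercube-style exchange, simulated below as a toy (the published algorithm's details differ): in round k, ranks differing only in bit k split their combined work evenly, so after log2(N) rounds every rank holds the global average.

      import numpy as np

      def balance(work):
          work = np.asarray(work, dtype=float)
          n = len(work)
          d = n.bit_length() - 1
          assert 1 << d == n, "power-of-two rank count for this sketch"
          for k in range(d):                           # O(log N) rounds
              partner = np.arange(n) ^ (1 << k)        # flip bit k of each rank
              work = 0.5 * (work + work[partner])      # pairwise even split
          return work

      w = np.random.default_rng(1).integers(0, 100, size=8)
      print(w, "->", balance(w))   # all entries equal the mean after 3 rounds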

  3. Performance of a supercharged direct-injection stratified-charge rotary combustion engine

    NASA Technical Reports Server (NTRS)

    Bartrand, Timothy A.; Willis, Edward A.

    1990-01-01

    A zero-dimensional thermodynamic performance computer model for direct-injection stratified-charge rotary combustion engines was modified and run for a single rotor supercharged engine. Operating conditions for the computer runs were a single boost pressure and a matrix of speeds, loads and engine materials. A representative engine map is presented showing the predicted range of efficient operation. After discussion of the engine map, a number of engine features are analyzed individually. These features are: heat transfer and the influence insulating materials have on engine performance and exhaust energy; intake manifold pressure oscillations and interactions with the combustion chamber; and performance losses and seal friction. Finally, code running times and convergence data are presented.

  4. Decrease in Ground-Run Distance of Small Airplanes by Applying Electrically-Driven Wheels

    NASA Astrophysics Data System (ADS)

    Kobayashi, Hiroshi; Nishizawa, Akira

    A new takeoff method for small airplanes was proposed. The ground-roll performance of an airplane driven by electrically powered wheels was studied experimentally and computationally. The experiments verified that the ground-run distance was halved by combining the powered wheels with the propeller, with no increase in energy consumption during the ground roll. The computational analysis showed that the ground-run distance of the wheel-driven aircraft was independent of motor power once the motor's capability exceeded the friction limit between the tires and the ground. Furthermore, the distance was minimized when the angle of attack was set so that the wing generated negative lift.
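
    Both findings follow from the schematic ground-run integral (standard notation, not taken from the paper):

      \[
      s = \int_0^{V_{\mathrm{LOF}}} \frac{m \, V \, dV}{T(V) + F_w(V) - D(V) - R(V)},
      \qquad F_w \le \mu \big( W - L(V) \big),
      \]

    where T is propeller thrust, F_w the wheel traction force, D drag, R rolling resistance, and V_LOF the lift-off speed. Once the motors can deliver the friction-limited force, extra motor power cannot raise F_w, so s stops decreasing; negative lift L enlarges W - L and with it the traction bound, which is why the distance is minimized at that angle-of-attack setting.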

  5. Cobalt complexes as internal standards for capillary zone electrophoresis-mass spectrometry studies in biological inorganic chemistry.

    PubMed

    Holtkamp, Hannah U; Morrow, Stuart J; Kubanik, Mario; Hartinger, Christian G

    2017-07-01

    Run-by-run variations are very common in capillary electrophoretic (CE) separations and cause imprecision in both the migration times and the peak areas. This makes peak and kinetic trend identification difficult and error prone. With the aim to identify suitable standards for CE separations which are compatible with the common detectors UV, ESI-MS, and ICP-MS, the Co(III) complexes [Co(en)3]Cl3, [Co(acac)3] and K[Co(EDTA)] were evaluated as internal standards in the reaction of the anticancer drug cisplatin and guanosine 5'-monophosphate as an example of a classical biological inorganic chemistry experiment. These Co(III) chelate complexes were considered for their stability, accessibility, and the low detection limit for Co in ICP-MS. Furthermore, the Co(III) complexes are positively and negatively charged as well as neutral, allowing the detection in different areas of the electropherograms. The background electrolytes were chosen to cover a wide pH range. The compatibility to the separation conditions was dependent on the ligands attached to the Co(III) centers, with only the acetylacetonato (acac) complex being applicable in the pH range 2.8-9.0. Furthermore, because of being charge neutral, this compound could be used as an electroosmotic flow (EOF) marker. In general, employing Co complexes resulted in improved data sets, particularly with regard to the migration times and peak areas, which resulted, for example, in higher linear ranges for the quantification of cisplatin.

  6. Radiative corrections to the solar lepton mixing sum rule

    NASA Astrophysics Data System (ADS)

    Zhang, Jue; Zhou, Shun

    2016-08-01

    The simple correlation among three lepton flavor mixing angles (θ12, θ13, θ23) and the leptonic Dirac CP-violating phase δ is conventionally called a sum rule of lepton flavor mixing, which may be derived from a class of neutrino mass models with flavor symmetries. In this paper, we consider the solar lepton mixing sum rule θ12 ≈ θ12^ν + θ13 cos δ, where θ12^ν stems from a constant mixing pattern in the neutrino sector and takes the value θ12^ν = 45° for the bi-maximal mixing (BM), θ12^ν = tan^(-1)(1/√2) ≈ 35.3° for the tri-bimaximal mixing (TBM), or θ12^ν = tan^(-1)(1/φ) ≈ 31.7°, with φ = (1+√5)/2 the golden ratio, for the golden-ratio mixing (GR), and investigate the renormalization-group (RG) running effects on lepton flavor mixing parameters when this sum rule is assumed at a superhigh-energy scale. For illustration, we work within the framework of the minimal supersymmetric standard model (MSSM), and implement the Bayesian approach to explore the posterior distribution of δ at the low-energy scale, which becomes quite broad when the RG running effects are significant. Moreover, we also discuss the compatibility of the above three mixing scenarios with current neutrino oscillation data, and observe that radiative corrections can increase such a compatibility for the BM scenario, resulting in a weaker preference for the TBM and GR ones.

  7. Learning graph matching.

    PubMed

    Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J

    2009-06-01

    As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
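
    Schematically, with X an assignment matrix between the nodes of the two graphs, the objective described above reads:

      \[
      \max_{X} \; \sum_{i, i'} c_{i i'} X_{i i'}
      \; + \; \sum_{i, i', j, j'} d_{i i' j j'} X_{i i'} X_{j j'}
      \quad \text{s.t. } X \in \{0, 1\}^{n \times n'},\;
      X \mathbf{1} \le \mathbf{1},\; X^{\mathsf{T}} \mathbf{1} \le \mathbf{1},
      \]

    where c encodes node compatibility and d edge compatibility; learning graph matching amounts to parameterizing c and d and fitting them so that the maximizer reproduces the human-provided example matches.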

  8. Corra: Computational framework and tools for LC-MS discovery and targeted mass spectrometry-based proteomics

    PubMed Central

    Brusniak, Mi-Youn; Bodenmiller, Bernd; Campbell, David; Cooke, Kelly; Eddes, James; Garbutt, Andrew; Lau, Hollis; Letarte, Simon; Mueller, Lukas N; Sharma, Vagisha; Vitek, Olga; Zhang, Ning; Aebersold, Ruedi; Watts, Julian D

    2008-01-01

    Background: Quantitative proteomics holds great promise for identifying proteins that are differentially abundant between populations representing different physiological or disease states. A range of computational tools is now available for both isotopically labeled and label-free liquid chromatography mass spectrometry (LC-MS) based quantitative proteomics. However, they are generally not comparable to each other in terms of functionality, user interfaces, information input/output, and do not readily facilitate appropriate statistical data analysis. These limitations, along with the array of choices, present a daunting prospect for biologists, and other researchers not trained in bioinformatics, who wish to use LC-MS-based quantitative proteomics. Results: We have developed Corra, a computational framework and tools for discovery-based LC-MS proteomics. Corra extends and adapts existing algorithms used for LC-MS-based proteomics, and statistical algorithms, originally developed for microarray data analyses, appropriate for LC-MS data analysis. Corra also adapts software engineering technologies (e.g. Google Web Toolkit, distributed processing) so that computationally intense data processing and statistical analyses can run on a remote server, while the user controls and manages the process from their own computer via a simple web interface. Corra also allows the user to output significantly differentially abundant LC-MS-detected peptide features in a form compatible with subsequent sequence identification via tandem mass spectrometry (MS/MS). We present two case studies to illustrate the application of Corra to commonly performed LC-MS-based biological workflows: a pilot biomarker discovery study of glycoproteins isolated from human plasma samples relevant to type 2 diabetes, and a study in yeast to identify in vivo targets of the protein kinase Ark1 via phosphopeptide profiling. Conclusion: The Corra computational framework leverages computational innovation to enable biologists or other researchers to process, analyze and visualize LC-MS data with what would otherwise be a complex and not user-friendly suite of tools. Corra enables appropriate statistical analyses, with controlled false-discovery rates, ultimately to inform subsequent targeted identification of differentially abundant peptides by MS/MS. For the user not trained in bioinformatics, Corra represents a complete, customizable, free and open source computational platform enabling LC-MS-based proteomic workflows, and as such, addresses an unmet need in the LC-MS proteomics field. PMID:19087345

  9. Earth observation image data format

    NASA Technical Reports Server (NTRS)

    Sos, J. Y.

    1976-01-01

    A flexible format for computer compatible tape (CCT) containing multispectral earth observation sensor data is described. The driving functions comprising the data format requirements are summarized, and general data format guidelines are discussed.

  10. Multiple running speed signals in medial entorhinal cortex

    PubMed Central

    Hinman, James R.; Brandon, Mark P.; Climer, Jason R.; Chapman, G. William; Hasselmo, Michael E.

    2016-01-01

    Grid cells in medial entorhinal cortex (MEC) can be modeled using oscillatory interference or attractor dynamic mechanisms that perform path integration, a computation requiring information about running direction and speed. The two classes of computational models often use either an oscillatory frequency or a firing rate that increases as a function of running speed. Yet it is currently not known whether these are two manifestations of the same speed signal or dissociable signals with potentially different anatomical substrates. We examined coding of running speed in MEC and identified these two speed signals to be independent of each other within individual neurons. The medial septum (MS) is strongly linked to locomotor behavior and removal of MS input resulted in strengthening of the firing rate speed signal, while decreasing the strength of the oscillatory speed signal. Thus two speed signals are present in MEC that are differentially affected by disrupted MS input. PMID:27427460

  11. Running SINDA '85/FLUINT interactive on the VAX

    NASA Technical Reports Server (NTRS)

    Simmonds, Boris

    1992-01-01

    Computer software tools for engineering are typically run in three modes: Batch, Demand, and Interactive. The first two are the most popular in the SINDA world. The third one is not so popular, probably due to users' lack of access to the command procedure files for running SINDA '85, or lack of familiarity with the SINDA '85 execution process (pre-processor, processor, compilation, linking, execution, and all of the file assignments, creations, deletions, and de-assignments). Interactive is the mode that makes thermal analysis with SINDA '85 a real-time design tool. This paper explains a command procedure sufficient to run SINDA '85 on the VAX in an interactive mode, requiring only minimal modifications to an existing demand command procedure. To exercise the procedure, a sample problem is presented that exemplifies the mode, plus additional programming capabilities available in SINDA '85. Following the same guidelines, the process can be extended to other computer platforms on which SINDA '85 resides.

  12. Multi-GPGPU Tsunami simulation at Toyama-bay

    NASA Astrophysics Data System (ADS)

    Furuyama, Shoichi; Ueda, Yuki

    2017-07-01

    Accelerated multi-General Purpose Graphics Processing Unit (GPGPU) calculation of tsunami run-up was achieved over a wide area (the whole of Toyama Bay, Japan) through faster computation techniques. Toyama Bay has active faults on the seabed, so a large earthquake could generate tsunami waves; predicting the tsunami run-up area is therefore important for reducing the damage to residents from such a disaster. The simulation is a very hard task, however, because of limited computer resources. A resolution on the order of several meters is required for run-up simulation, because artificial structures on the ground such as roads, buildings, and houses are very small, while at the same time a huge simulation area is required. In the Toyama Bay case the area is 42 km × 15 km; with 5 m × 5 m computational cells, over 26,000,000 cells are generated, and a normal desktop CPU took about 10 hours for the calculation. Reducing this calculation time is an important problem for an immediate tsunami run-up prediction system, which in turn would help protect residents of the coastal region. This study reduced the calculation time by using a multi-GPGPU system equipped with six NVIDIA TESLA K20x cards, with InfiniBand connections between computer nodes via the MVAPICH library. As a result, a 5.16-times speedup was achieved on six GPUs relative to the one-GPU case, corresponding to 86% parallel efficiency relative to linear speedup.
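
    The quoted efficiency is simply the speedup divided by the number of GPUs:

      \[
      E = \frac{S}{N_{\mathrm{GPU}}} = \frac{5.16}{6} \approx 0.86 .
      \]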

  13. Demonstration and Methodology of Structural Monitoring of Stringer Runs out Composite Areas by Embedded Optical Fiber Sensors and Connectors Integrated during Production in a Composite Plant.

    PubMed

    Miguel Giraldo, Carlos; Zúñiga Sagredo, Juan; Sánchez Gómez, José; Corredera, Pedro

    2017-07-21

    Embedding optical fiber sensors into composite structures for Structural Health Monitoring purposes is not just one of the most attractive solutions contributing to smart structures, but also the optimum integration approach that ensures maximum protection and integrity of the fibers. Nevertheless, this intended integration level still remains an industrial challenge, since today there is no mature integration process in composite plants matching all necessary requirements. This article describes the process developed to integrate optical fiber sensors in the production cycle of a test specimen. The sensors, Bragg gratings, were integrated into the laminate during automatic tape lay-up and also by a secondary bonding process, both in the Airbus Composite Plant. The test specimen, completely representative of the root joint of the lower wing cover of a real aircraft, is comprised of a structural skin panel with the associated stringer run out. The ingress-egress was achieved through the precise design and integration of miniaturized optical connectors compatible with the manufacturing conditions and operational test requirements. After production, the specimen was trimmed, assembled and bolted to metallic plates to represent the real triform and buttstrap, and eventually installed into the structural test rig. The interrogation of the sensors proves the effectiveness of the integration process; the analysis of the strain results demonstrates the good correlation between fiber sensors and electrical gauges in those locations where they are installed nearby, and the curvature and load transfer analysis in the bolted stringer run out area enables demonstration of the consistency of the fiber sensor measurements. In conclusion, this work presents strong evidence of the performance of embedded optical sensors for structural health monitoring purposes, where in addition and most importantly, the fibers were integrated in a real production environment and the ingress-egress issue was solved by the design and integration of miniaturized connectors compatible with the manufacturing and structural test phases.

  14. Demonstration and Methodology of Structural Monitoring of Stringer Runs out Composite Areas by Embedded Optical Fiber Sensors and Connectors Integrated during Production in a Composite Plant

    PubMed Central

    Miguel Giraldo, Carlos; Zúñiga Sagredo, Juan; Sánchez Gómez, José; Corredera, Pedro

    2017-01-01

    Embedding optical fiber sensors into composite structures for Structural Health Monitoring purposes is not just one of the most attractive solutions contributing to smart structures; it is also the integration approach that ensures maximum protection and integrity of the fibers. Nevertheless, this level of integration remains an industrial challenge, since today there is no mature integration process in composite plants that meets all the necessary requirements. This article describes the process developed to integrate optical fiber sensors into the production cycle of a test specimen. The sensors, Bragg gratings, were integrated into the laminate during automatic tape lay-up and also by a secondary bonding process, both in the Airbus composite plant. The test specimen, fully representative of the root joint of the lower wing cover of a real aircraft, comprises a structural skin panel with the associated stringer run-out. Ingress and egress were achieved through the precise design and integration of miniaturized optical connectors compatible with the manufacturing conditions and operational test requirements. After production, the specimen was trimmed, assembled, and bolted to metallic plates representing the real triform and buttstrap, and finally installed in the structural test rig. Interrogation of the sensors proves the effectiveness of the integration process; the analysis of the strain results demonstrates good correlation between the fiber sensors and the electrical gauges installed nearby, and the curvature and load-transfer analysis in the bolted stringer run-out area demonstrates the consistency of the fiber-sensor measurements. In conclusion, this work presents strong evidence of the performance of embedded optical sensors for structural health monitoring, where, in addition and most importantly, the fibers were integrated in a real production environment and the ingress-egress issue was solved by the design and integration of miniaturized connectors compatible with the manufacturing and structural test phases. PMID:28754009

  15. An analysis of running skyline load path.

    Treesearch

    Ward W. Carson; Charles N. Mann

    1971-01-01

    This paper is intended for those who wish to prepare an algorithm to determine the load path of a running skyline. The mathematics of a simplified approach to this running skyline design problem are presented. The approach employs assumptions which reduce the complexity of the problem to the point where it can be solved on desk-top computers of limited capacities. The...

  16. Job Priorities on Peregrine | High-Performance Computing | NREL

    Science.gov Websites

    ... allocation when run with qos=high. Requesting a Node Reservation: if you are doing work that requires real... ... scheduler more efficiently plan resources for larger jobs. When projects reach their allocation limit, jobs associated with those projects will run at very low priority, which will ensure that these jobs run only when ...

  17. Running High-Throughput Jobs on Peregrine | High-Performance Computing |

    Science.gov Websites

    ... unique name (using "name=") and use the task name to create a unique output file name. For runs on ... and how many tasks to give to each worker at a time using the NITRO_COORD_OPTIONS environment variable. ... Finally, you start Nitro by executing launch_nitro.sh. Sample Nitro job script: to run a job using the ...

  18. AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.

    PubMed

    Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld

    2016-08-01

    There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge, owing to source-code changes over time and to dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, built on Docker technology. Algorithms packaged with AlgoRun can be executed through a user-friendly interface directly from a web browser, or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes its entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory of existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org, and the source code, under the GPL license, is available at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
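
    To make the RESTful usage pattern concrete: once an AlgoRun-packaged container is running locally (for example, via docker run with a published port), its algorithm can be invoked over HTTP. A sketch using Python's requests library; the port and the "/v1/run" endpoint path are illustrative assumptions, so consult the package's own documentation for the real interface:

        import requests

        # Hypothetical endpoint of a locally running AlgoRun container.
        resp = requests.post("http://localhost:8765/v1/run",
                             data={"input": "ACGTACGT"})   # algorithm-specific payload
        resp.raise_for_status()
        print(resp.text)                                   # algorithm output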

  19. Performance of a Block Structured, Hierarchical Adaptive MeshRefinement Code on the 64k Node IBM BlueGene/L Computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.

    2005-04-25

    We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.

  20. ASDIR-II. Volume I. User Manual

    DTIC Science & Technology

    1975-12-01

    ... normally the most significant part of the overall aircraft IR signature. The radiance is directly dependent upon the geometric view factors, a set ... factors as punched card output in a view factor computer run. For the view factor computer run, IB49 through 53 and all IDS input A, from IDS-2 to IDS-6, ... may be excluded from the input string if the program execution is requested to stop after punching the view factors. Inputs required for punching ...

  1. Feasibility of Virtual Machine and Cloud Computing Technologies for High Performance Computing

    DTIC Science & Technology

    2014-05-01

    ... [Red] Hat Enterprise Linux; SaaS, software as a service; VM, virtual machine; vNUMA, virtual non-uniform memory access; WRF, weather research and forecasting ... previously mentioned in Chapter I, Section B1 of this paper, which is used to run the weather research and forecasting (WRF) model in their experiments ... against a VMware virtualization solution of WRF. The experiment consisted of running WRF in a standard configuration between the D-VTM and VMware while ...

  2. The Air Force Geophysics Laboratory Standalone Data Acquisition System: A Functional Description.

    DTIC Science & Technology

    1980-10-09

    the board are a buffer for the RUN/HALT front panel switch and a retriggerable oneshot multivibrator. This latter circuit senses the SRUN pulse train...recording on the data tapes, and providing the master timing source for data acquisition. An Electronic Research Company (ERC) model 2446 digital...the computer is fed to a retriggerable oneshot multivibrator on the board. (SRUN consists of a pulse train that is present when the computer is running

  3. Improved Algorithms Speed It Up for Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazi, A

    2005-09-20

    Huge computers, huge codes, complex problems to solve. The longer it takes to run a code, the more it costs. One way to speed things up and save time and money is through hardware improvements--faster processors, different system designs, bigger computers. But another side of supercomputing can reap savings in time and speed: software improvements to make codes--particularly the mathematical algorithms that form them--run faster and more efficiently. Speed up math? Is that really possible? According to Livermore physicist Eugene Brooks, the answer is a resounding yes. ''Sure, you get great speed-ups by improving hardware,'' says Brooks, the deputy leader for Computational Physics in N Division, which is part of Livermore's Physics and Advanced Technologies (PAT) Directorate. ''But the real bonus comes on the software side, where improvements in software can lead to orders of magnitude improvement in run times.'' Brooks knows whereof he speaks. Working with Laboratory physicist Abraham Szoeke and others, he has been instrumental in devising ways to shrink the running time of what has, historically, been a tough computational nut to crack: radiation transport codes based on the statistical or Monte Carlo method of calculation. And Brooks is not the only one. Others around the Laboratory, including physicists Andrew Williamson, Randolph Hood, and Jeff Grossman, have come up with innovative ways to speed up Monte Carlo calculations using pure mathematics.

  4. SARANA: language, compiler and run-time system support for spatially aware and resource-aware mobile computing.

    PubMed

    Hari, Pradip; Ko, Kevin; Koukoumidis, Emmanouil; Kremer, Ulrich; Martonosi, Margaret; Ottoni, Desiree; Peh, Li-Shiuan; Zhang, Pei

    2008-10-28

    Increasingly, spatial awareness plays a central role in many distributed and mobile computing applications. Spatially aware applications rely on information about the geographical position of compute devices and their supported services in order to support novel functionality. While many spatial application drivers already exist in mobile and distributed computing, very little systems research has explored how best to program these applications, to express their spatial and temporal constraints, and to allow efficient implementations on highly dynamic real-world platforms. This paper proposes the SARANA system architecture, which includes language and run-time system support for spatially aware and resource-aware applications. SARANA allows users to express spatial regions of interest, as well as trade-offs between quality of result (QoR), latency, and cost. The goal is to produce applications that use resources efficiently and that can be run on diverse resource-constrained platforms ranging from laptops to personal digital assistants and smart phones. SARANA's run-time system manages QoR and cost trade-offs dynamically by tracking resource availability and locations, brokering usage/pricing agreements and migrating programs to nodes accordingly. A resource cost model permeates the SARANA system layers, permitting users to express their resource needs and QoR expectations in units that make sense to them. Although we are still early in the system's development, initial versions have been demonstrated on a nine-node system prototype.

  5. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation

    PubMed Central

    Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho

    2014-01-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph-analytic system running on top of Spark that handles graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
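
    For readers unfamiliar with MLEM, the update rule being parallelized here is compact enough to sketch directly. Below is a plain NumPy toy version of the iteration; it is not the authors' Spark/GraphX implementation, and the matrix sizes and Poisson toy data are illustrative only:

        import numpy as np

        def mlem(A, y, n_iter=50, eps=1e-12):
            """MLEM update: x <- x * A^T(y / Ax) / A^T 1, elementwise."""
            x = np.ones(A.shape[1])            # uniform initial estimate
            sens = A.T @ np.ones(A.shape[0])   # sensitivity image, A^T 1
            for _ in range(n_iter):
                proj = A @ x                   # forward projection of estimate
                x *= (A.T @ (y / np.maximum(proj, eps))) / np.maximum(sens, eps)
            return x

        # Toy problem: random system matrix, known activity, Poisson data.
        rng = np.random.default_rng(0)
        A = rng.random((200, 50))
        x_true = rng.random(50)
        y = rng.poisson(A @ x_true).astype(float)
        x_est = mlem(A, y)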

  6. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation.

    PubMed

    Lee, Jae H; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho

    2014-11-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph-analytic system running on top of Spark that handles graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting.

  7. Foreign Language Teaching and the Computer.

    ERIC Educational Resources Information Center

    Garrett, Nina; Hart, Robert S.

    1988-01-01

    A review of the APPLE MACINTOSH-compatible software "Conjugate! Spanish," intended to drill Spanish verb forms, points out its strengths (error feedback, user manual, user interface, and feature control) and its weaknesses (pedagogical approach). (CB)

  8. Software on diffractive optics and computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Doskolovich, Leonid L.; Golub, Michael A.; Kazanskiy, Nikolay L.; Khramov, Alexander G.; Pavelyev, Vladimir S.; Seraphimovich, P. G.; Soifer, Victor A.; Volotovskiy, S. G.

    1995-01-01

    The 'Quick-DOE' software for IBM PC-compatible computers is aimed at calculating the masks of diffractive optical elements (DOEs) and computer-generated holograms, at computer simulation of DOEs, and at executing a number of auxiliary functions. In particular, the auxiliary functions include file-format conversions, on-screen visualization of a mask from a file, fast Fourier transforms, and the arrangement and preparation of composite images for output on a photoplotter. The software is intended for use by opticians, DOE designers, and programmers developing programs for DOE computation.

  9. The engineering design integration (EDIN) system. [digital computer program complex

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  10. COED Transactions, Vol. XI, No. 2, February 1979. A Student Designed Microcomputer Based Data Acquisition System.

    ERIC Educational Resources Information Center

    Mitchell, Eugene E., Ed.

    In the context of an instrumentation course, four ocean engineering students set out to design and construct a microcomputer-based data acquisition system that would be compatible with the University's CYBER host computer. The project included hardware design in the areas of sampling, analog-to-digital conversion, and timing coordination. It also…

  11. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...

  12. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...

  13. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...

  14. JPRS Report Science & Technology Europe.

    DTIC Science & Technology

    1992-10-22

    ... Potatoes for More Sugar [Frankfurt/Main FRANKFURTER ALLGEMEINE, 12 Aug 92] 26 ... COMPUTERS: French Devise Operating System for Parallel, Failure-Tolerant and Real-Time Systems [Munich COMPUTERWOCHE, 5 Jun 92] 27 ... Germany Markets External Mass Memory for IBM-Compatible Parallel Interfaces ... Infrared Detection System [Thierry Lucas; Paris L’USINE NOUVELLE TECHNOLOGIES, 16 Jul 92] 28 ... Streamlined ACE Fighter Airplane Approved [Paris AFP ...

  15. Linear and passive silicon diodes, isolators, and logic gates

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Yuan

    2013-12-01

    Silicon photonic integrated devices and circuits offer a promising means to revolutionize information processing and computing technologies. One important reason is that these devices are compatible with the conventional complementary metal oxide semiconductor (CMOS) processing technology that dominates the current microelectronics industry. Yet the dream of building optical computers cannot be realized without breakthroughs in several key elements, including optical diodes, isolators, and logic gates with low power consumption, high signal contrast, and large bandwidth. Photonic crystals have great power to mold the flow of light at the micrometer/nanometer scale and are a promising platform for optical integration. In this paper we present our recent efforts in the design, fabrication, and characterization of ultracompact, linear, passive on-chip optical diodes, isolators, and logic gates based on silicon two-dimensional photonic crystal slabs. Both simulation and experimental results show the high performance of these newly designed devices. These linear and passive silicon devices have the unique properties of small footprint, low power requirements, large bandwidth, fast response speed, and ease of fabrication, and they are compatible with CMOS technology. Further improving their performance would open a road toward photonic logic and optical computing and help to construct nanophotonic on-chip processor architectures for future optical computers.

  16. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer; (b) software libraries or utilities required to execute a program are not available on the user's computer; (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer; (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program; and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
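
    To make the calling pattern concrete, the sketch below shows how a client might invoke a SOAP/WSDL web service of the kind described, using the third-party Python library zeep. The WSDL URL and the LatLonToUTM operation name are hypothetical placeholders, not real SCEC/CME endpoints:

        from zeep import Client   # third-party SOAP client library

        # Hypothetical coordinate-conversion service; URL and operation
        # name are illustrative assumptions only.
        client = Client("http://example.org/services/CoordConversion?wsdl")

        # Simple scalar inputs and outputs suit the data-oriented Web
        # Service model well, as the abstract notes.
        result = client.service.LatLonToUTM(latitude=34.05, longitude=-118.25)
        print(result)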

  17. Using Avizo Software on the Peregrine System | High-Performance Computing |

    Science.gov Websites

    ... [Avizo can] be run remotely from the Peregrine visualization node. First, launch a TurboVNC remote desktop. Then, from a terminal in that remote desktop: % module load avizo % vglrun avizo ... Running Locally: Avizo can ...

  18. A Computing Infrastructure for Supporting Climate Studies

    NASA Astrophysics Data System (ADS)

    Yang, C.; Bambacus, M.; Freeman, S. M.; Huang, Q.; Li, J.; Sun, M.; Xu, C.; Wojcik, G. S.; Cahalan, R. F.; NASA Climate @ Home Project Team

    2011-12-01

    Climate change is one of the major challenges facing us on planet Earth in the 21st century. Scientists build many models to simulate the past and predict climate change over the next decades or century. Most of the models run at low resolution, with some targeting high resolution in support of practical climate-change preparedness. To calibrate and validate the models, millions of model runs are needed to find the best simulation and configuration. This paper introduces the NASA effort on the Climate@Home project to build a supercomputer based on advanced computing technologies, such as cloud computing, grid computing, and others. The Climate@Home computing infrastructure includes several aspects: 1) a cloud computing platform is utilized to manage potential spikes in access to the centralized components, such as the grid computing server for dispatching model runs and collecting results; 2) a grid computing engine is developed based on MapReduce to dispatch models and model configurations and to collect simulation results and contribution statistics; 3) a portal serves as the entry point for the project, providing management, sharing, and data exploration for end users; 4) scientists can access customized tools to configure model runs and visualize model results; and 5) the public can follow Twitter and Facebook to get the latest news about the project. This paper will introduce the latest progress of the project and demonstrate the operational system during the AGU Fall Meeting. It will also discuss how this technology can become a trailblazer for other climate studies and relevant sciences, and share how the challenges in computation and software integration were solved.

  19. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
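
    A minimal sketch of the recipe, assuming the running Green-Kubo integrals from the independent trajectories are already computed. This illustrates the described scheme rather than reproducing the authors' code; the double-exponential form below and the 1/sigma^2 weighting that scipy's curve_fit applies stand in for the paper's exact weighting function:

        import numpy as np
        from scipy.optimize import curve_fit

        def double_exp(t, A, alpha, tau1, tau2):
            # Rises to a plateau A as t -> infinity.
            return A * (alpha * (1 - np.exp(-t / tau1))
                        + (1 - alpha) * (1 - np.exp(-t / tau2)))

        def fit_viscosity(t, eta_runs):
            """t: times; eta_runs: (n_trajectories, n_times) running GK integrals."""
            mean = eta_runs.mean(axis=0)
            std = eta_runs.std(axis=0, ddof=1)
            std[std <= 0] = std[std > 0].min()   # avoid zero weights at t = 0
            p0 = (mean[-1], 0.5, t[-1] / 20, t[-1] / 2)
            popt, _ = curve_fit(double_exp, t, mean, sigma=std, p0=p0, maxfev=20000)
            return popt[0]                       # plateau value = viscosity estimate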

  20. Darkessence

    NASA Astrophysics Data System (ADS)

    Gu, Je-An

    2014-01-01

    Darkessence, the dark source of anti-gravity and that of attractive gravity, serves as the largest testing ground of the interplay between quantum matter and classical gravity. We expect it to shed light on the conflict between quantum physics and gravity, the most important puzzle in fundamental physics in the 21st century. In this paper we attempt to reveal the guidelines hinted by darkessence for clarifying or even resolving the conflict. To this aim, we question (1) the compatibility of the renormalization-group (RG) running with the energy conservation, (2) the effectiveness of an effective action in quantum field theory for describing the gravitation of quantum matter, and (3) the way quantum vacuum energy gravitates. These doubts illustrate the conflict and suggest several guidelines on the resolution: the preservation of the energy conservation and the equivalence principle (or its variant) under RG running, and a natural relief of the vacuum energy catastrophe.

  1. ROBucket: A low cost operant chamber based on the Arduino microcontroller.

    PubMed

    Devarakonda, Kavya; Nguyen, Katrina P; Kravitz, Alexxai V

    2016-06-01

    The operant conditioning chamber is a cornerstone of animal behavioral research. Operant boxes are used to assess learning and motivational behavior in animals, particularly for food and drug reinforcers. However, commercial operant chambers cost several thousands of dollars. We have constructed the Rodent Operant Bucket (ROBucket), an inexpensive and easily assembled open-source operant chamber based on the Arduino microcontroller platform, which can be used to train mice to respond for sucrose solution or other liquid reinforcers. The apparatus contains two nose pokes, a drinking well, and a solenoid-controlled liquid delivery system. ROBucket can run fixed ratio and progressive ratio training schedules, and can be programmed to run more complicated behavioral paradigms. Additional features such as motion sensing and video tracking can be added to the operant chamber through the array of widely available Arduino-compatible sensors. The design files and programming code are open source and available online for others to use.
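
    The firmware itself is Arduino code, but the scheduling logic the abstract mentions is easy to state in a few lines. A Python sketch of fixed-ratio and progressive-ratio response requirements (the arithmetic step rule is an illustrative assumption; progressive-ratio variants also use exponential escalation):

        def fixed_ratio(n):
            """FR-n: reward after every n responses."""
            while True:
                yield n

        def progressive_ratio(step=2):
            """PR: the requirement grows each trial, e.g. 1, 3, 5, ... for step=2."""
            requirement = 1
            while True:
                yield requirement
                requirement += step

        schedule = progressive_ratio()
        for trial in range(5):
            print(f"trial {trial}: {next(schedule)} pokes required for reward")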

  2. Shared address collectives using counter mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blocksome, Michael; Dozsa, Gabor; Gooding, Thomas M

    A shared address space on a compute node stores data received from a network and data to transmit to the network. The shared address space includes an application buffer that can be directly operated upon by a plurality of processes, for instance, running on different cores on the compute node. A shared counter is used for one or more of signaling arrival of the data across the plurality of processes running on the compute node, signaling completion of an operation performed by one or more of the plurality of processes, obtaining reservation slots by one or more of the plurality of processes, or combinations thereof.
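
    A toy sketch of the counter idea, using Python multiprocessing rather than the compute-node hardware the record describes: processes that share state signal completion by atomically bumping a shared counter, and a waiter knows every arrival has been signaled once the counter reaches the expected count.

        from multiprocessing import Process, Value

        def worker(counter):
            # ... operate directly on the shared application buffer here ...
            with counter.get_lock():      # atomic increment
                counter.value += 1        # signal arrival/completion

        if __name__ == "__main__":
            counter = Value("i", 0)       # shared counter, initialized to 0
            procs = [Process(target=worker, args=(counter,)) for _ in range(4)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            assert counter.value == 4     # all four completions signaled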

  3. Crew appliance computer program manual, volume 1

    NASA Technical Reports Server (NTRS)

    Russell, D. J.

    1975-01-01

    Trade studies of numerous appliance concepts for advanced spacecraft galley, personal hygiene, housekeeping, and other areas were made to determine which best satisfy the space shuttle orbiter and modular space station mission requirements. Analytical models of selected appliance concepts not currently included in the G-189A Generalized Environmental/Thermal Control and Life Support Systems (ETCLSS) Computer Program subroutine library were developed. The new appliance subroutines are given along with complete analytical model descriptions, solution methods, user's input instructions, and validation run results. The appliance components modeled were integrated with G-189A ETCLSS models for shuttle orbiter and modular space station, and results from computer runs of these systems are presented.

  4. ProjectQ Software Framework

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Haener, Thomas; Troyer, Matthias

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
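
    ProjectQ's Python-embedded syntax makes the compile-and-run loop concrete. A minimal example, assuming the package is installed (by default, MainEngine targets ProjectQ's built-in simulator rather than hardware):

        from projectq import MainEngine
        from projectq.ops import H, CNOT, Measure, All

        eng = MainEngine()                 # default backend: built-in simulator
        q = eng.allocate_qureg(2)

        H | q[0]                           # Hadamard on qubit 0
        CNOT | (q[0], q[1])                # entangle: Bell state
        All(Measure) | q

        eng.flush()                        # send the circuit to the backend
        print([int(qb) for qb in q])       # [0, 0] or [1, 1], at random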

  5. PHREEQCI; a graphical user interface for the geochemical computer program PHREEQC

    USGS Publications Warehouse

    Charlton, Scott R.; Macklin, Clifford L.; Parkhurst, David L.

    1997-01-01

    PhreeqcI is a Windows-based graphical user interface for the geochemical computer program PHREEQC. PhreeqcI provides the capability to generate and edit input data files, run simulations, and view text files containing simulation results, all within the framework of a single interface. PHREEQC is a multipurpose geochemical program that can perform speciation, inverse, reaction-path, and 1D advective reaction-transport modeling. Interactive access to all of the capabilities of PHREEQC is available with PhreeqcI. The interface is written in Visual Basic and runs on personal computers under the Windows 3.1, Windows 95, and Windows NT operating systems.
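
    PhreeqcI itself is a GUI, but the underlying batch PHREEQC program is commonly driven from scripts. A minimal sketch, assuming a command-line phreeqc build on the PATH that takes input-file, output-file, and database arguments in that order (the file names are placeholders; check your installation's conventions):

        import subprocess
        import textwrap

        # A tiny PHREEQC input deck: speciate a dilute NaCl solution.
        input_deck = textwrap.dedent("""\
            SOLUTION 1
                temp   25
                pH     7.0
                Na     1.0
                Cl     1.0  charge
            END
            """)

        with open("example.pqi", "w") as f:
            f.write(input_deck)

        subprocess.run(["phreeqc", "example.pqi", "example.pqo", "phreeqc.dat"],
                       check=True)

        with open("example.pqo") as f:
            print(f.read()[:500])          # peek at the start of the results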

  6. Local rollback for fault-tolerance in parallel computing systems

    DOEpatents

    Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY

    2012-01-24

    A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.

  7. EMC analysis of MOS-1

    NASA Astrophysics Data System (ADS)

    Ishizawa, Y.; Abe, K.; Shirako, G.; Takai, T.; Kato, H.

    The electromagnetic compatibility (EMC) control method, system EMC analysis method, and system test method that have been applied to test the components of the MOS-1 satellite are described. The merits and demerits of the problem-solving, specification, and system approaches to EMC control are summarized, along with the data requirements of SEMCAP (specification and electromagnetic compatibility analysis program), the computer program used to verify the EMI safety margin of the components. Examples of EMC design are mentioned, and the EMC design process and the selection method for EMC critical points are shown along with sample EMC test results.

  8. [Groupamatic 360 C1 and automated blood donor processing in a transfusion center].

    PubMed

    Guimbretiere, J; Toscer, M; Harousseau, H

    1978-03-01

    Automation of the donor management flow path is controlled by: --a 3-slip "port-a-punch" card, --the Groupamatic unit, with results sorted out on punched paper tape, --the management computer, off-line connected to the Groupamatic. Data tracking at blood collection time is done by punching a card, with the donor card used as a master card. The Groupamatic performs: --a standard blood grouping, with one run for registered donors and two runs for new donors, --a phenotyping with two runs, --a screening for irregular antibodies. The management computer checks the correlation between the data of the two runs, or the data of a single run and that of the previous file. It updates the data resident in the central file and prints out: --the controls of the different blood groups for the red cell panel, --the listing of error messages, --the listing of emergency call-ups, --the listing of collected blood units when they arrive at the blood center, with quantitative and qualitative information such as number of blood units collected, donor addresses, etc., --statistics, --donor cards, --diplomas.

  9. Abusive User Policy | High-Performance Computing | NREL

    Science.gov Websites

    ... below. First Incident: the user's ability to run new jobs or store new data will be suspended temporarily. ... [Once the user has] acknowledged and participated in a remedy, the ability to run new jobs or store new data will be restored. Second Incident: suspend running new jobs or storing new data. Terminate jobs if necessary. The system and ...

  10. PNNL streamlines energy-guzzling computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, Mary T.; Marquez, Andres

    In a room the size of a garage, two rows of six-foot-tall racks holding supercomputer hard drives sit back-to-back. Thin tubes and wires snake off the hard drives, slithering into the corners. Stepping between the rows, a rush of heat whips around you -- the air from fans blowing off processing heat. But walk farther in, between the next racks of hard drives, and the temperature drops noticeably. These drives are being cooled by a non-conducting liquid that runs right over the hardworking processors. The liquid carries the heat away in tubes, saving the air a few degrees. This is the Energy Smart Data Center at Pacific Northwest National Laboratory. The bigger, faster, and meatier supercomputers get, the more energy they consume. PNNL's Andres Marquez has developed this test bed to learn how to train the behemoths in energy efficiency. The work will help supercomputers perform better as well. Processors have to keep cool or suffer from "thermal throttling," says Marquez. "That's the performance threshold where the computer is too hot to run well. That threshold is an industry secret." The center at EMSL, DOE's national scientific user facility at PNNL, harbors several ways of experimenting with energy usage. For example, the room's air conditioning is isolated from the rest of EMSL -- pipes running beneath the floor carry temperature-controlled water through heat exchangers to cooling towers outside. "We can test whether it's more energy efficient to cool directly on the processing chips or out in the water tower," says Marquez. The hard drives feed energy and temperature data to a network server running specially designed software that controls and monitors the data center. To test the center's limits, the team runs the processors flat out -- not only on carefully controlled test programs in the Energy Smart computers, but also on real world software from other EMSL research, such as regional weather forecasting models. Marquez's group is also developing "power aware computing", where the computer programs themselves perform calculations more energy efficiently. Maybe once computers get smart about energy, they'll have tips for their users.

  11. A Menu-Driven Interface to Unix-Based Resources

    PubMed Central

    Evans, Elizabeth A.

    1989-01-01

    Unix has often been overlooked in the past as a viable operating system for anyone other than computer scientists. Its terseness, the non-mnemonic nature of its commands, and the lack of user-friendly software to run under it are but a few of the user-related reasons that have been cited. It is, nevertheless, the operating system of choice in many cases. This paper describes a menu-driven interface to Unix that provides friendlier access to the software resources available on computers running under Unix.

  12. The Impact of Typhoons on the Ocean in the Pacific (ITOP) Field and Data Management Support

    DTIC Science & Technology

    2011-12-16

    ... in October of 2009 to develop effective sampling strategies for 2010. EOL/Computing Data and Software Facility (CDS) supported the ITOP Dry Run ... measurement strategies necessitated a dry run experiment in October of 2009 to develop effective sampling strategies for 2010. EOL/Computing Data and ... contains products from 21 September through 31 October 2009. The catalog remains accessible at EOL at the above-mentioned URI. The products listed by ...

  13. Building Computer-Based Experiments in Psychology without Programming Skills.

    PubMed

    Ruisoto, Pablo; Bellido, Alberto; Ruiz, Javier; Juanes, Juan A

    2016-06-01

    Research in psychology usually requires building and running experiments. Although this task has traditionally required scripting, recent computer tools based on graphical interfaces offer new opportunities in this field for researchers without programming skills. The purpose of this study is to illustrate and provide a comparative overview of two of the main free, open-source "point and click" software packages for building and running experiments in psychology: PsychoPy and OpenSesame. Recommendations for their potential use are further discussed.
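
    Both packages also expose Python underneath their graphical builders. For a sense of what the generated experiments amount to, here is a minimal PsychoPy trial (a sketch; the window size and stimulus text are arbitrary) that shows a stimulus and records one keypress with its reaction time:

        from psychopy import visual, core, event

        win = visual.Window(size=(800, 600), color="grey")
        stim = visual.TextStim(win, text="Press any key", color="white")

        stim.draw()
        win.flip()                        # stimulus onset
        clock = core.Clock()              # start timing at onset
        keys = event.waitKeys(timeStamped=clock)
        print("response, RT:", keys[0])   # e.g. ['space', 0.42]

        win.close()
        core.quit()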

  14. Interoperability...NMCI and Beyond

    DTIC Science & Technology

    2001-05-31

    ... wireless. “On The Road”: pagers, cell phones, palm-size PDAs, two-way pagers, hand-held computing devices, laptop computers, two-way radios ... “...combat capability”... [chart residue: hourly-cost comparison on a $0-$25 axis -- Electric Power $0.20, NMCI Seat $1.38, First Run Movie $4.00, Cell Phone Air Time $11.00, Fed. Civilian Salary (mean) $23.80; off the scale: F/A-18 Flying Hour $1,134.00] ... DSN

  15. A Rutherford Scattering Simulation with Microcomputer Graphics.

    ERIC Educational Resources Information Center

    Calle, Carlos I.; Wright, Lavonia F.

    1989-01-01

    Lists a program for a simulation of Rutherford's gold foil experiment in BASIC for both Apple II and IBM-compatible computers. Compares Rutherford's model of the atom with Thomson's plum pudding model of the atom. (MVL)

  16. The kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats: calibration to the indirect immunofluorescence assay and computerized standardization of results through normalization to control values.

    PubMed Central

    Barlough, J E; Jacobson, R H; Downing, D R; Lynch, T J; Scott, F W

    1987-01-01

    The computer-assisted, kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats was calibrated to the conventional indirect immunofluorescence assay by linear regression analysis and computerized interpolation (generation of "immunofluorescence assay-equivalent" titers). Procedures were developed for normalization and standardization of kinetics-based enzyme-linked immunosorbent assay results through incorporation of five different control sera of predetermined ("expected") titer in daily runs. When used with such sera and with computer assistance, the kinetics-based enzyme-linked immunosorbent assay minimized both within-run and between-run variability while allowing also for efficient data reduction and statistical analysis and reporting of results. PMID:3032390
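
    A sketch of the two ideas at work here, in Python rather than the authors' software: calibrating kinetic ELISA rates to "immunofluorescence assay-equivalent" titers by linear regression on log titer, and normalizing each run against control sera of known expected titer. All numbers below are hypothetical:

        import numpy as np

        # Hypothetical calibration data: kinetic ELISA rates vs. log10 IFA titers.
        rates = np.array([0.12, 0.25, 0.41, 0.58, 0.77])
        log_titers = np.array([1.0, 1.6, 2.2, 2.8, 3.4])
        slope, intercept = np.polyfit(rates, log_titers, 1)

        def ifa_equivalent_titer(rate):
            """Interpolate an IFA-equivalent titer from a kinetic rate."""
            return 10 ** (slope * rate + intercept)

        def normalize(sample_rates, control_observed, control_expected):
            """Scale a run's rates by the expected/observed control ratio."""
            factor = control_expected / control_observed
            return [r * factor for r in sample_rates]

        print(round(ifa_equivalent_titer(0.50)))   # interpolated titer for one sample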

  17. The kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats: calibration to the indirect immunofluorescence assay and computerized standardization of results through normalization to control values.

    PubMed

    Barlough, J E; Jacobson, R H; Downing, D R; Lynch, T J; Scott, F W

    1987-01-01

    The computer-assisted, kinetics-based enzyme-linked immunosorbent assay for coronavirus antibodies in cats was calibrated to the conventional indirect immunofluorescence assay by linear regression analysis and computerized interpolation (generation of "immunofluorescence assay-equivalent" titers). Procedures were developed for normalization and standardization of kinetics-based enzyme-linked immunosorbent assay results through incorporation of five different control sera of predetermined ("expected") titer in daily runs. When used with such sera and with computer assistance, the kinetics-based enzyme-linked immunosorbent assay minimized both within-run and between-run variability while allowing also for efficient data reduction and statistical analysis and reporting of results.

  18. International Aerospace and Ground Conference on Lightning and Static Electricity (15th) Held in Atlantic City, New Jersey on October 6 - 8, 1992. Addendum

    DTIC Science & Technology

    1992-11-01

    November 1992. 1992 INTERNATIONAL AEROSPACE AND GROUND CONFERENCE ON LIGHTNING AND STATIC ELECTRICITY - ADDENDUM ... October 6-8, 1992 ... Program and the Federal Aviation Administration Technical Center, ACD-230 ... The NICG ... area]. The program runs well on an IBM PC or compatible 386 with a math co-processor 387 chip and a VGA monitor. For this study, streamers were added ...

  19. Generically Used Expert Scheduling System (GUESS): User's Guide Version 1.0

    NASA Technical Reports Server (NTRS)

    Liebowitz, Jay; Krishnamurthy, Vijaya; Rodens, Ira

    1996-01-01

    This user's guide contains instructions explaining how best to operate the program GUESS, a generic expert scheduling system. GUESS incorporates several important features for a generic scheduler, including automatic scheduling routines to generate a 'first' schedule for the user, a user interface that includes Gantt charts and enables the human scheduler to manipulate schedules manually, diagnostic report generators, and a variety of scheduling techniques. The current version of GUESS runs on an IBM PC or compatible in the Windows 3.1 or Windows 95 environment.

  20. Basic EMC (Electromagnetic Compatibility) Technology Advancement for C3 Systems, Macromodeling of Digital Circuits. Volume 2B.

    DTIC Science & Technology

    1984-04-01

    ... dependent source to multiply a ramp by a unit sinusoid. Using this method, data could be extracted from a single run to show the magnitude of the ... INPUT VOLTAGE SOURCES VAC 1 0 0V VA1 2 0 0V VA2 3 0 0V VA3 4 0 PULSE(0V 3V 0NSD 2NSR 2NSF 198NSPW) VB0 5 0 0V VB1 6 0 0V VB2 7 0 0V VB3 8 0 PULSE ...
