Another Program For Generating Interactive Graphics
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
VAX/Ultrix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. When used throughout company for wide range of applications, makes both application program and computer seem transparent, with noticeable improvements in learning curve. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running A/UX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS, and IBM RT/PC's and PS/2 computers running AIX, and HP 9000 S...
Program For Generating Interactive Displays
NASA Technical Reports Server (NTRS)
Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl;
1991-01-01
Sun/Unix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. TAE Plus viewed as productivity tool for application developers and application end users, who benefit from resultant consistent and well-designed user interface sheltering them from intricacies of computer. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running A/UX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS, and IBM RT/PC and PS/2 computers running AIX, and HP 9000 S...
WinHPC System Programming | High-Performance Computing | NREL
WinHPC System Programming: learn how to build and run an MPI (Message Passing Interface) application, including where the MPI header (mpi.h) and library (msmpi.lib) are located. To build from the command line, run... Start > Intel Software Development Tools > Intel C++ Compiler Professional... > C++ Build Environment for applications running...
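The snippet above is truncated, so for orientation only, here is a minimal MPI program in C of the kind such build instructions target. It assumes nothing beyond a standard MPI installation (on WinHPC the header is mpi.h and the import library msmpi.lib, per the snippet); the exact compile command depends on the local toolchain.

```c
/* hello_mpi.c - minimal MPI program; build details depend on the local toolchain. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```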
Analyzing Spacecraft Telecommunication Systems
NASA Technical Reports Server (NTRS)
Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric
2004-01-01
Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
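The abstract names an API but does not list its calls, so the sketch below only illustrates the usage pattern it describes: drive the link analysis from a simulation loop, change a parameter mid-run, and read a result at every time step. All mmtat_* names, the context structure, and the stub link-margin formula are hypothetical stand-ins, not MMTAT's real interface.

```c
/* Illustration of the usage pattern described in the abstract: drive a link
 * analysis from a simulation loop, change a parameter mid-run, and read results
 * at each time step.  The mmtat_* names and stub bodies are hypothetical,
 * NOT the real MMTAT API. */
#include <stdio.h>

typedef struct { double tx_power_w; } mmtat_ctx;            /* hypothetical state */

static void mmtat_set_param(mmtat_ctx *c, double watts) { c->tx_power_w = watts; }

/* hypothetical stand-in for a link-margin computation */
static double mmtat_link_margin(const mmtat_ctx *c, double t_s)
{
    return 3.0 + 0.5 * c->tx_power_w - 0.0005 * t_s;
}

int main(void)
{
    mmtat_ctx ctx = { .tx_power_w = 10.0 };
    for (double t = 0.0; t <= 3600.0; t += 600.0) {
        if (t == 1800.0)
            mmtat_set_param(&ctx, 15.0);                    /* change a parameter mid-run */
        printf("t=%6.0f s  margin=%5.2f dB\n", t, mmtat_link_margin(&ctx, t));
    }
    return 0;
}
```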
NASA Technical Reports Server (NTRS)
Yang, Guowei; Pasareanu, Corina S.; Khurshid, Sarfraz
2012-01-01
This paper introduces memoized symbolic execution (Memoise), a novel approach for more efficient application of forward symbolic execution, which is a well-studied technique for systematic exploration of program behaviors based on bounded execution paths. Our key insight is that application of symbolic execution often requires several successive runs of the technique on largely similar underlying problems, e.g., running it once to check a program to find a bug, fixing the bug, and running it again to check the modified program. Memoise introduces a trie-based data structure that stores the key elements of a run of symbolic execution. Maintenance of the trie during successive runs allows re-use of previously computed results of symbolic execution without the need for re-computing them as is traditionally done. Experiments using our prototype embodiment of Memoise show the benefits it holds in various standard scenarios of using symbolic execution, e.g., with iterative deepening of exploration depth, to perform regression analysis, or to enhance coverage.
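The abstract describes a trie that stores key elements of a symbolic-execution run for reuse; the C sketch below illustrates only the general idea of caching the branch choices taken along explored paths so a later run can skip already-explored prefixes. The structure and field names are assumptions for illustration, not Memoise's actual data structure.

```c
/* Toy trie recording sequences of branch choices (0 = false, 1 = true) taken
 * along explored paths, so a later run can skip already-explored prefixes.
 * An illustration of the idea, not the Memoise implementation. */
#include <stdlib.h>

typedef struct trie_node {
    struct trie_node *child[2];  /* child[0]: false branch, child[1]: true branch */
    int fully_explored;          /* subtree below this node needs no re-exploration */
} trie_node;

static trie_node *node_new(void)
{
    return calloc(1, sizeof(trie_node));
}

/* Record one path (array of 0/1 branch choices) into the trie. */
static void trie_insert(trie_node *root, const int *choices, int len)
{
    trie_node *n = root;
    for (int i = 0; i < len; i++) {
        if (!n->child[choices[i]])
            n->child[choices[i]] = node_new();
        n = n->child[choices[i]];
    }
    n->fully_explored = 1;
}

/* A re-run can test whether a prefix of choices was already fully explored. */
static int trie_explored(const trie_node *root, const int *choices, int len)
{
    const trie_node *n = root;
    for (int i = 0; i < len && n; i++)
        n = n->child[choices[i]];
    return n && n->fully_explored;
}

int main(void)
{
    trie_node *root = node_new();
    int path[] = { 1, 0, 1 };
    trie_insert(root, path, 3);
    return trie_explored(root, path, 3) ? 0 : 1;
}
```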
BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.
Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel
2015-06-02
Bioinformaticians face a range of difficulties in getting locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates the access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered with Bioinformatics open web services can be accessed from virtually any programming language through web services, or using standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing demand applications directly from their machines.
Creating Web-Based Scientific Applications Using Java Servlets
NASA Technical Reports Server (NTRS)
Palmer, Grant; Arnold, James O. (Technical Monitor)
2001-01-01
There are many advantages to developing web-based scientific applications. Any number of people can access the application concurrently. The application can be accessed from a remote location. The application becomes essentially platform-independent because it can be run from any machine that has internet access and can run a web browser. Maintenance and upgrades to the application are simplified since only one copy of the application exists in a centralized location. This paper details the creation of web-based applications using Java servlets. Java is a powerful, versatile programming language that is well suited to developing web-based programs. A Java servlet provides the interface between the central server and the remote client machines. The servlet accepts input data from the client, runs the application on the server, and sends the output back to the client machine. The type of servlet that supports the HTTP protocol will be discussed in depth. Among the topics the paper will discuss are how to write an http servlet, how the servlet can run applications written in Java and other languages, and how to set up a Java web server. The entire process will be demonstrated by building a web-based application to compute stagnation point heat transfer.
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.
1995-01-01
The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader which was originally developed for workstations. This paper provides a high level description of dld, and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it elaborates how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.
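MPIRUN's own interface is not given in the abstract; the sketch below uses only standard MPI calls to show the pattern multi-binary (MPMD) applications commonly follow once launched together: each binary splits MPI_COMM_WORLD by an application color so intra-code communication stays separate from cross-code coupling. Passing the color on the command line is an assumption made for the example.

```c
/* MPMD coupling sketch: different binaries launched into one MPI job carve out
 * per-application communicators.  The "color" (0 for the flow code, 1 for the
 * structures code, say) is assumed to arrive on the command line. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    int color = (argc > 1) ? atoi(argv[1]) : 0;   /* which application am I? */
    MPI_Comm app_comm;                            /* communicator for my own binary */
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &app_comm);

    int app_rank, app_size;
    MPI_Comm_rank(app_comm, &app_rank);
    MPI_Comm_size(app_comm, &app_size);
    printf("world rank %d is rank %d of %d in application %d\n",
           world_rank, app_rank, app_size, color);

    /* cross-code exchange would still use MPI_COMM_WORLD (or an inter-communicator) */
    MPI_Comm_free(&app_comm);
    MPI_Finalize();
    return 0;
}
```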
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata, Manjunath Gorentla; Aderholdt, William F
Pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend of system architecture in extreme-scale systems is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, the system typically has a high-performing network and a compute accelerator. This system architecture is not only effective for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layer have yet to evolve to support either hierarchical-heterogeneous memory systems or the convergence. This software provides a programming abstraction to address the problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.
Web Program for Development of GUIs for Cluster Computers
NASA Technical Reports Server (NTRS)
Czikmantory, Akos; Cwik, Thomas; Klimeck, Gerhard; Hua, Hook; Oyafuso, Fabiano; Vinyard, Edward
2003-01-01
WIGLAF (a Web Interface Generator and Legacy Application Facade) is a computer program that provides a Web-based, distributed, graphical-user-interface (GUI) framework that can be adapted to any of a broad range of application programs, written in any programming language, that are executed remotely on any cluster computer system. WIGLAF enables the rapid development of a GUI for controlling and monitoring a specific application program running on the cluster and for transferring data to and from the application program. The only prerequisite for the execution of WIGLAF is a Web-browser program on a user's personal computer connected with the cluster via the Internet. WIGLAF has a client/server architecture: The server component is executed on the cluster system, where it controls the application program and serves data to the client component. The client component is an applet that runs in the Web browser. WIGLAF utilizes the Extensible Markup Language to hold all data associated with the application software, Java to enable platform-independent execution on the cluster system and the display of a GUI generator through the browser, and the Java Remote Method Invocation software package to provide simple, effective client/server networking.
The Bittersweet Task of Running a Grant Program
ERIC Educational Resources Information Center
Markin, Karen M.
2013-01-01
Running a grant program for the first time can feel overwhelming. The work is time-consuming, requires attention to many details, and is accompanied by pressure from applicants who are desperate for money and prompt decisions. This article presents a list of all of the factors educators have to consider. From establishing a timeline and drafting…
Willy, Richard W
2018-01-01
Running-related injuries are common and are associated with a high rate of reoccurrence. Biomechanics and errors in applied training loads are often cited as causes of running-related injuries. Clinicians and runners are beginning to utilize wearable technologies to quantify biomechanics and training loads with the hope of reducing the incidence of running-related injuries. Wearable devices can objectively assess biomechanics and training loads in runners, yet guidelines for their use by clinicians and runners are not currently available. This article outlines several applications for the use of wearable devices in the prevention and rehabilitation of running-related injuries. Applications for monitoring of training loads, running biomechanics, running epidemiology, return to running programs and gait retraining are discussed. Best-practices for choosing and use of wearables are described to provide guidelines for clinicians and runners. Finally, future applications are outlined for this rapidly developing field. Copyright © 2017 Elsevier Ltd. All rights reserved.
Library-Specific Microcomputer Software.
ERIC Educational Resources Information Center
Levert, Virginia M.
1985-01-01
Discusses number and type of microcomputer software programs useful to libraries and types of hardware on which they run, as identified by Nolan Information Management Services. Highlights include general application programs, applications designed to support library technical processes, producers of library software, and choosing among options.…
The X-ray system of crystallographic programs for any computer having a PIDGIN FORTRAN compiler
NASA Technical Reports Server (NTRS)
Stewart, J. M.; Kruger, G. J.; Ammon, H. L.; Dickinson, C.; Hall, S. R.
1972-01-01
A manual is presented for the use of a library of crystallographic programs. This library, called the X-ray system, is designed to carry out the calculations required to solve the structure of crystals by diffraction techniques. It has been implemented at the University of Maryland on the Univac 1108. It has, however, been developed and run on a variety of machines under various operating systems. It is considered to be an essentially machine independent library of applications programs. The report includes definition of crystallographic computing terms, program descriptions, with some text to show their application to specific crystal problems, detailed card input descriptions, mass storage file structure and some example run streams.
Transportable Applications Environment Plus, Version 5.1
NASA Technical Reports Server (NTRS)
1994-01-01
Transportable Applications Environment Plus (TAE+) computer program providing integrated, portable programming environment for developing and running application programs based on interactive windows, text, and graphical objects. Enables both programmers and nonprogrammers to construct own custom application interfaces easily and to move interfaces and application programs to different computers. Used to define corporate user interface, with noticeable improvements in application developer's and end user's learning curves. Main components are: WorkBench, What You See Is What You Get (WYSIWYG) software tool for design and layout of user interface; and WPT (Window Programming Tools) Package, set of callable subroutines controlling user interface of application program. WorkBench and WPT's written in C++, and remaining code written in C.
DOT National Transportation Integrated Search
1994-10-01
THE RUN-OFF-ROAD COLLISION AVOIDANCE USING IVHS COUNTERMEASURES PROGRAM IS TO ADDRESS THE SINGLE VEHICLE CRASH PROBLEM THROUGH APPLICATION OF TECHNOLOGY TO PREVENT AND/OR REDUCE THE SEVERITY OF THESE CRASHES.
Hari, Pradip; Ko, Kevin; Koukoumidis, Emmanouil; Kremer, Ulrich; Martonosi, Margaret; Ottoni, Desiree; Peh, Li-Shiuan; Zhang, Pei
2008-10-28
Increasingly, spatial awareness plays a central role in many distributed and mobile computing applications. Spatially aware applications rely on information about the geographical position of compute devices and their supported services in order to support novel functionality. While many spatial application drivers already exist in mobile and distributed computing, very little systems research has explored how best to program these applications, to express their spatial and temporal constraints, and to allow efficient implementations on highly dynamic real-world platforms. This paper proposes the SARANA system architecture, which includes language and run-time system support for spatially aware and resource-aware applications. SARANA allows users to express spatial regions of interest, as well as trade-offs between quality of result (QoR), latency and cost. The goal is to produce applications that use resources efficiently and that can be run on diverse resource-constrained platforms ranging from laptops to personal digital assistants and to smart phones. SARANA's run-time system manages QoR and cost trade-offs dynamically by tracking resource availability and locations, brokering usage/pricing agreements and migrating programs to nodes accordingly. A resource cost model permeates the SARANA system layers, permitting users to express their resource needs and QoR expectations in units that make sense to them. Although we are still early in the system development, initial versions have been demonstrated on a nine-node system prototype.
Fenix, A Fault Tolerant Programming Framework for MPI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamel, Marc; Teranihi, Keita; Valenzuela, Eric
2016-10-05
Fenix provides APIs to allow the users to add fault tolerance capability to MPI-based parallel programs in a transparent manner. Fenix-enabled programs can run through process failures during program execution using a pool of spare processes accommodated by Fenix.
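Fenix's actual API is not reproduced here. As background, the sketch below shows the plain-MPI building block that fault-tolerance layers of this kind rely on: switching the communicator's error handler from the default abort behavior to MPI_ERRORS_RETURN, so that a failure surfaces as an error code the application (or a library such as Fenix) can react to.

```c
/* Plain-MPI illustration of observing a communication error instead of aborting.
 * Background for the kind of recovery Fenix automates; this is not Fenix code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* By default MPI aborts the job on error; ask for error codes instead. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int rank, token = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int rc = MPI_Bcast(&token, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        /* A fault-tolerance layer would now repair the communicator, e.g. by
         * drafting spare processes, and roll back to a saved state. */
        fprintf(stderr, "rank %d: broadcast failed, recovery needed\n", rank);
    }

    MPI_Finalize();
    return 0;
}
```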
NASA Technical Reports Server (NTRS)
1972-01-01
The IDAPS (Image Data Processing System) is a user-oriented, computer-based, language and control system, which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.
DOT National Transportation Integrated Search
1995-09-05
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program is to address the single vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. : This report documents the RORSIM comput...
DOT National Transportation Integrated Search
1995-08-01
INTELLIGENT VEHICLE INITIATIVE OR IVI : THE RUN-OFF-ROAD COLLISION AVOIDANCE USING IVHS COUNTERMEASURES PROGRAM IS TO ADDRESS THE SINGLE VEHICLE CRASH PROBLEM THROUGH APPLICATION OF TECHNOLOGY TO PREVENT AND/OR REDUCE THE SEVERITY OF THESE CRASHES. :...
Run-Off-Road Collision Avoidance Countermeasures Using IVHS Countermeasures: Task 3, Volume 1
DOT National Transportation Integrated Search
1995-08-23
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program is to address the single vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report describes the findings of the...
Run-Off-Road Collision Avoidance Countermeasures Using IVHS Countermeasures Task 3 - Volume 2
DOT National Transportation Integrated Search
1995-08-23
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program is to address the single vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. : This report describes the findings of t...
Multitasking kernel for the C and Fortran programming languages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, E.D. III
1984-09-01
A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.
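The kernel's own calls are not named in the abstract, and it predates modern threading standards; as a rough portable analogue only, the C sketch below spawns and joins a set of tasks with POSIX threads to illustrate the kind of multitasking interface being described. The kernel's real API differs.

```c
/* POSIX-threads analogue of spawning and joining multiple tasks in C.
 * Shown only as a portable illustration; the Unix multitasking kernel in the
 * abstract exposes its own (different) interface. */
#include <pthread.h>
#include <stdio.h>

#define NTASKS 4

static void *task(void *arg)
{
    long id = (long)arg;
    printf("task %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTASKS];
    for (long i = 0; i < NTASKS; i++)
        pthread_create(&tid[i], NULL, task, (void *)i);   /* spawn tasks */
    for (int i = 0; i < NTASKS; i++)
        pthread_join(tid[i], NULL);                       /* wait for completion */
    return 0;
}
```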
DOT National Transportation Integrated Search
1994-10-28
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program is to address the single vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report describes and documents the a...
DOT National Transportation Integrated Search
1994-10-01
THE RUN-OFF-ROAD COLLISION AVOIDANCE USING IVHS COUNTERMEASURES PROGRAM IS TO ADDRESS THE SINGLE VEHICLE CRASH PROBLEM THROUGH APPLICATION OF TECHNOLOGY TO PREVENT AND/OR REDUCE THE SEVERITY OF THESE CRASHES. : THIS REPORT DESCRIBES AND DOCUMENTS ...
DOT National Transportation Integrated Search
1995-06-01
THE RUN-OFF-ROAD COLLISION AVOIDANCE USING IVHS COUNTERMEASURES PROGRAM IS TO ADDRESS THE SINGLE VEHICLE CRASH PROBLEM THROUGH APPLICATION OF TECHNOLOGY TO PREVENT AND/OR REDUCE THE SEVERITY OF THESE CRASHES. : THIS REPORT DESCRIBES AND DOCUMENTS ...
DOT National Transportation Integrated Search
1995-09-01
THE RUN-OFF-ROAD COLLISION AVOIDANCE USING IVHS COUNTERMEASURES PROGRAM IS TO ADDRESS THE SINGLE VEHICLE CRASH PROBLEM THROUGH APPLICATION OF TECHNOLOGY TO PREVENT AND/OR REDUCE THE SEVERITY OF THESE CRASHES. : THIS REPORT DOCUMENTS THE RORSIM COM...
DOT National Transportation Integrated Search
1994-10-28
The Run-Off-Road Collision Avoidance Using IVHS Countermeasures program is to address the single vehicle crash problem through application of technology to prevent and/or reduce the severity of these crashes. This report contains a summary of data us...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-19
... with the collection results from a program change to run this one-time college savings account...; Application for Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) College Savings... pairing federally supported college savings accounts with GEAR UP activities as part of an overall college...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dritz, K.W.; Boyle, J.M.
This paper addresses the problem of measuring and analyzing the performance of fine-grained parallel programs running on shared-memory multiprocessors. Such programs use locking (either directly in the application program, or indirectly in a subroutine library or the operating system) to serialize accesses to global variables. Given sufficiently high rates of locking, the chief factor preventing linear speedup (besides lack of adequate inherent parallelism in the application) is lock contention - the blocking of processes that are trying to acquire a lock currently held by another process. We show how a high-resolution, low-overhead clock may be used to measure both lock contention and lack of parallel work. Several ways of presenting the results are covered, culminating in a method for calculating, in a single multiprocessing run, both the speedup actually achieved and the speedup lost to contention for each lock and to lack of parallel work. The speedup losses are reported in the same units, "processor-equivalents," as the speedup achieved. Both are obtained without having to perform the usual one-process comparison run. We chronicle also a variety of experiments motivated by actual results obtained with our measurement method. The insights into program performance that we gained from these experiments helped us to refine the parts of our programs concerned with communication and synchronization. Ultimately these improvements reduced lock contention to a negligible amount and yielded nearly linear speedup in applications not limited by lack of parallel work. We describe two generally applicable strategies ("code motion out of critical regions" and "critical-region fissioning") for reducing lock contention and one ("lock/variable fusion") applicable only on certain architectures.
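The paper's instrumentation is not shown in the abstract; the sketch below only illustrates the basic measurement it describes, timing how long a thread blocks while acquiring a lock and accumulating that wait per lock. clock_gettime is used here as a modern stand-in for the high-resolution clock the authors had available.

```c
/* Measuring time lost waiting on a lock with a high-resolution clock.
 * Illustrative only; the paper's own instrumentation and clock differ. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static double wait_seconds = 0.0;      /* accumulated contention for this lock */
static long shared_counter = 0;

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

static void locked_update(void)
{
    double t0 = now();
    pthread_mutex_lock(&lock);         /* time spent blocked here is contention */
    wait_seconds += now() - t0;        /* safe: updated inside the critical region */
    shared_counter++;
    pthread_mutex_unlock(&lock);
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        locked_update();
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter=%ld, total lock wait=%.3f s\n", shared_counter, wait_seconds);
    return 0;
}
```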
NASA Technical Reports Server (NTRS)
Chawner, David M.; Gomez, Ray J.
2010-01-01
In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools that are used for running these simulations and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is called open source, meaning that anyone can edit the source code to make modifications and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format that is being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are loaded and values are calculated from them. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to render graphics for computers; however, in recent years, GPUs are being used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they would require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly perform more complex computations.
NASA Technical Reports Server (NTRS)
Hockney, George; Lee, Seungwon
2008-01-01
A computer program known as PyPele, originally written as a Python-language extension module of a C++ language program, has been rewritten in pure Python language. The original version of PyPele dispatches and coordinates parallel-processing tasks on cluster computers and provides a conceptual framework for spacecraft-mission-design and -analysis software tools to run in an embarrassingly parallel mode. The original version of PyPele uses SSH (Secure Shell, a set of standards and an associated network protocol for establishing a secure channel between a local and a remote computer) to coordinate parallel processing. Instead of SSH, the present Python version of PyPele uses Message Passing Interface (MPI) [an unofficial de-facto standard language-independent application programming interface for message passing on a parallel computer] while keeping the same user interface. The use of MPI instead of SSH and the preservation of the original PyPele user interface make it possible for parallel application programs written previously for the original version of PyPele to run on MPI-based cluster computers. As a result, engineers using the previously written application programs can take advantage of embarrassing parallelism without the need to rewrite those programs.
Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui
2012-01-01
Background The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications. PMID:22998945
Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui.
Newton, Richard; Deonarine, Andrew; Wernisch, Lorenz
2012-09-24
The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications.
Nadkarni, P M; Miller, P L
1991-01-01
A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations.
The engineering design integration (EDIN) system. [digital computer program complex
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.
1974-01-01
A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.
Program Processes Thermocouple Readings
NASA Technical Reports Server (NTRS)
Quave, Christine A.; Nail, William, III
1995-01-01
Digital Signal Processor for Thermocouples (DART) computer program implements precise and fast method of converting voltage to temperature for large-temperature-range thermocouple applications. Written using LabVIEW software. DART available only as object code for use on Macintosh II FX or higher-series computers running System 7.0 or later and IBM PC-series and compatible computers running Microsoft Windows 3.1. Macintosh version of DART (SSC-00032) requires LabVIEW 2.2.1 or 3.0 for execution. IBM PC version (SSC-00031) requires LabVIEW 3.0 for Windows 3.1. LabVIEW software product of National Instruments and not included with program.
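DART's conversion routine is not published in the abstract; the sketch below shows the standard approach such programs use, evaluating an inverse polynomial in the measured thermocouple voltage to obtain temperature. The coefficients are placeholders only: a real application substitutes the published NIST coefficients for the thermocouple type and voltage range in use.

```c
/* Voltage-to-temperature conversion by polynomial evaluation (Horner's rule).
 * The coefficients below are placeholders; real applications use the published
 * NIST inverse-polynomial coefficients for the specific thermocouple type and range. */
#include <stdio.h>

static double poly_eval(const double *c, int n, double x)
{
    double y = 0.0;
    for (int i = n - 1; i >= 0; i--)   /* Horner's rule: ((c[n-1]x + c[n-2])x + ...) */
        y = y * x + c[i];
    return y;
}

int main(void)
{
    /* placeholder coefficients: T(v) = c0 + c1*v + c2*v^2, v in millivolts */
    const double coeff[] = { 0.0, 25.0, -0.5 };
    double v_mv = 4.096;               /* example measured voltage in millivolts */
    printf("T = %.2f degC\n", poly_eval(coeff, 3, v_mv));
    return 0;
}
```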
Checkpointing Shared Memory Programs at the Application-level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Schulz, M; Szwed, P
2004-09-08
Trends in high-performance computing are making it necessary for long-running applications to tolerate hardware faults. The most commonly used approach is checkpoint and restart (CPR): the state of the computation is saved periodically on disk, and when a failure occurs, the computation is restarted from the last saved state. At present, it is the responsibility of the programmer to instrument applications for CPR. Our group is investigating the use of compiler technology to instrument codes to make them self-checkpointing and self-restarting, thereby providing an automatic solution to the problem of making long-running scientific applications resilient to hardware faults. Our previous work focused on message-passing programs. In this paper, we describe such a system for shared-memory programs running on symmetric multiprocessors. The system has two components: (i) a pre-compiler for source-to-source modification of applications, and (ii) a runtime system that implements a protocol for coordinating CPR among the threads of the parallel application. For the sake of concreteness, we focus on a non-trivial subset of OpenMP that includes barriers and locks. One of the advantages of this approach is that the ability to tolerate faults becomes embedded within the application itself, so applications become self-checkpointing and self-restarting on any platform. We demonstrate this by showing that our transformed benchmarks can checkpoint and restart on three different platforms (Windows/x86, Linux/x86, and Tru64/Alpha). Our experiments show that the overhead introduced by this approach is usually quite small; they also suggest ways in which the current implementation can be tuned to reduce overheads further.
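The coordination protocol itself is not given in the abstract; the sketch below shows only the rough shape of an application-level checkpoint in an OpenMP program, where threads reach a point at which the shared state is quiescent, one thread writes it to disk, and execution continues. The real system's handling of barriers and locks is considerably more involved.

```c
/* Rough shape of an application-level checkpoint in an OpenMP loop:
 * synchronize, let one thread persist the shared state, continue.
 * An illustration, not the paper's protocol. */
#include <omp.h>
#include <stdio.h>

#define N 1000000
static double state[N];

static void checkpoint(int step)
{
    char name[64];
    snprintf(name, sizeof name, "ckpt_%d.bin", step);
    FILE *f = fopen(name, "wb");
    if (f) { fwrite(state, sizeof(double), N, f); fclose(f); }
}

int main(void)
{
    for (int step = 0; step < 100; step++) {
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            state[i] += 1.0;                  /* the "computation" */

        if (step % 10 == 0) {
            /* the implicit barrier at the end of the parallel for means the
             * shared state is consistent here; a single thread writes it out */
            checkpoint(step);
        }
    }
    return 0;
}
```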
MONO FOR CROSS-PLATFORM CONTROL SYSTEM ENVIRONMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Hiroshi; Timossi, Chris
2006-10-19
Mono is an independent implementation of the .NET Framework by Novell that runs on multiple operating systems (including Windows, Linux and Macintosh) and allows any .NET compatible application to run unmodified. For instance Mono can run programs with graphical user interfaces (GUI) developed with the C# language on Windows with Visual Studio (a full port of WinForm for Mono is in progress). We present the results of tests we performed to evaluate the portability of our controls system .NET applications from MS Windows to Linux.
Network support for system initiated checkpoints
Chen, Dong; Heidelberger, Philip
2013-01-29
A system, method and computer program product for supporting system initiated checkpoints in parallel computing systems. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity.
TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (HP9000 SERIES 300/400 VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is expected to be available on media suitable for seven different machine platforms: 1) DEC VAX computers running VMS (TK50 cartridge in VAX BACKUP format), 2) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and 7) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2.
TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is expected to be available on media suitable for seven different machine platforms: 1) DEC VAX computers running VMS (TK50 cartridge in VAX BACKUP format), 2) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and 7) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2.
Optimizing Mars Airplane Trajectory with the Application Navigation System
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Riley, Derek
2004-01-01
Planning complex missions requires a number of programs to be executed in concert. The Application Navigation System (ANS), developed in the NAS Division, can execute many interdependent programs in a distributed environment. We show that the ANS simplifies user effort and reduces time in optimization of the trajectory of a Martian airplane. We use a software package, Cart3D, to evaluate trajectories and a shortest path algorithm to determine the optimal trajectory. ANS employs the GridScape to represent the dynamic state of the available computer resources. Then, ANS uses a scheduler to dynamically assign ready tasks to machine resources and the GridScape for tracking available resources and forecasting completion time of running tasks. We demonstrate the system's capability to schedule and run the trajectory optimization application with efficiency exceeding 60% on 64 processors.
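Neither the trajectory graph nor the exact shortest-path variant is described, so the sketch below shows a generic Dijkstra search over a small matrix of evaluated segment costs, the kind of optimization step the abstract refers to; the node count and cost values are made-up placeholders.

```c
/* Generic Dijkstra shortest path over a cost matrix, as a stand-in for selecting
 * an optimal trajectory through evaluated segments.  Costs here are placeholders. */
#include <stdio.h>

#define N 5
#define INF 1e30

int main(void)
{
    /* cost[i][j]: cost of flying segment i -> j (INF = no segment evaluated) */
    double cost[N][N] = {
        {   0, 2.0, 9.0, INF, INF },
        { INF,   0, 4.0, 7.0, INF },
        { INF, INF,   0, 1.5, 6.0 },
        { INF, INF, INF,   0, 3.0 },
        { INF, INF, INF, INF,   0 },
    };
    double dist[N];
    int done[N] = {0};
    for (int i = 0; i < N; i++) dist[i] = INF;
    dist[0] = 0.0;                                /* start node */

    for (int iter = 0; iter < N; iter++) {
        int u = -1;
        for (int i = 0; i < N; i++)               /* pick closest unsettled node */
            if (!done[i] && (u < 0 || dist[i] < dist[u])) u = i;
        if (u < 0 || dist[u] >= INF) break;
        done[u] = 1;
        for (int v = 0; v < N; v++)               /* relax outgoing edges */
            if (dist[u] + cost[u][v] < dist[v]) dist[v] = dist[u] + cost[u][v];
    }
    printf("optimal trajectory cost to goal: %.2f\n", dist[N - 1]);
    return 0;
}
```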
NASA Technical Reports Server (NTRS)
1995-01-01
Through the Earth Observation Commercial Applications Program (EOCAP) at Stennis Space Center, Applied Analysis, Inc. developed a new tool for analyzing remotely sensed data. The Applied Analysis Spectral Analytical Process (AASAP) detects or classifies objects smaller than a pixel and removes the background. This significantly enhances the discrimination among surface features in imagery. ERDAS, Inc. offers the system as a modular addition to its ERDAS IMAGINE software package for remote sensing applications. EOCAP is a government/industry cooperative program designed to encourage commercial applications of remote sensing. Projects can run three years or more and funding is shared by NASA and the private sector participant. Through the Earth Observation Commercial Applications Program (EOCAP), Ocean and Coastal Environmental Sensing (OCENS) developed SeaStation for marine users. SeaStation is a low-cost, portable, shipboard satellite groundstation integrated with vessel catch and product monitoring software. Linked to the Global Positioning System, SeaStation provides real time relationships between vessel position and data such as sea surface temperature, weather conditions and ice edge location. This allows the user to increase fishing productivity and improve vessel safety. EOCAP is a government/industry cooperative program designed to encourage commercial applications of remote sensing. Projects can run three years or more and funding is shared by NASA and the private sector participant.
A Linguistic Model in Component Oriented Programming
NASA Astrophysics Data System (ADS)
Crăciunean, Daniel Cristian; Crăciunean, Vasile
2016-12-01
It is a fact that component-oriented programming, when well organized, can bring a large increase in efficiency in the development of large software systems. This paper proposes a model for building software systems by assembling components that can operate independently of each other. The model is based on a computing environment that runs parallel and distributed applications. This paper introduces concepts such as the abstract aggregation scheme and the aggregation application. Basically, an aggregation application is an application that is obtained by combining corresponding components. In our model an aggregation application is a word in a language.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjusic, Tommy; Kartsaklis, Christos
Application analysis is facilitated through a number of program profiling tools. The tools vary in their complexity, ease of deployment, design, and profiling detail. Specifically, understanding, analyzing, and optimizing are of particular importance for scientific applications where minor changes in code paths and data-structure layout can have profound effects. Understanding how intricate data-structures are accessed and how a given memory system responds is a complex task. In this paper we describe a trace profiling tool, Glprof, specifically aimed to lessen the burden of the programmer to pin-point heavily involved data-structures during an application's run-time, and understand data-structure run-time usage. Moreover, we showcase the tool's modularity using additional cache simulation components. We elaborate on the tool's design and features. Finally we demonstrate the application of our tool in the context of SPEC benchmarks using the Glprof profiler and two concurrently running cache simulators, PPC440 and AMD Interlagos.
Nadkarni, P. M.; Miller, P. L.
1991-01-01
A parallel program for inter-database sequence comparison was developed on the Intel Hypercube using two models of parallel programming. One version was built using machine-specific Hypercube parallel programming commands. The other version was built using Linda, a machine-independent parallel programming language. The two versions of the program provide a case study comparing these two approaches to parallelization in an important biological application area. Benchmark tests with both programs gave comparable results with a small number of processors. As the number of processors was increased, the Linda version was somewhat less efficient. The Linda version was also run without change on Network Linda, a virtual parallel machine running on a network of desktop workstations. PMID:1807632
ERIC Educational Resources Information Center
Schultz, Gary D.
The design and operation of a time-sharing monitor are described. It runs under OS/360 MVT that supports multiple application program interaction with operators of CRT (cathode ray tube) display stations and of a teletype. Key design features discussed include: 1) an interface allowing application programs to be coded in either PL/I or assembler…
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (HP9000 SERIES 700/800 VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
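Since the entry above describes the WPT calling pattern only in prose, a minimal sketch may help: an application loads a WorkBench-generated resource file through the runtime services and then reacts to events from named interaction objects. Every name below (wpt_init, wpt_load_panel, wpt_next_event, instrument.res) is a hypothetical stand-in, not the actual WPT interface, and the stub bodies exist only so the sketch is self-contained; the same pattern applies to the other platform versions listed in the entries that follow.

    // Illustrative sketch only: hypothetical stand-ins for the WPT runtime calls
    // described above, not the real WPT API. The real library would render
    // X/Motif interaction objects from the resource file and block for input.
    #include <cstdio>
    #include <string>

    struct WptEvent {              // one interaction-object event (button press, ...)
      std::string object_name;
      std::string value;
    };

    bool wpt_init(int, char**) { return true; }
    void* wpt_load_panel(const char*, const char*) { return nullptr; }

    static int g_calls = 0;
    bool wpt_next_event(WptEvent* ev) {        // stub: fakes two user actions
      ++g_calls;
      if (g_calls == 1) { *ev = {"temperature_dial", "72"}; return true; }
      if (g_calls == 2) { *ev = {"quit_button", ""};        return true; }
      return false;
    }

    int main(int argc, char** argv) {
      if (!wpt_init(argc, argv)) return 1;

      // Colors, fonts, and layout live in the resource file produced by the
      // WorkBench, so the interface can change without recompiling this program.
      void* panel = wpt_load_panel("instrument.res", "main_panel");
      (void)panel;

      // Event loop: the application reacts only to named interaction objects.
      WptEvent ev;
      while (wpt_next_event(&ev)) {
        if (ev.object_name == "quit_button") break;
        std::printf("%s -> %s\n", ev.object_name.c_str(), ev.value.c_str());
      }
      return 0;
    }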
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (IBM RS/6000 VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SUN4 VERSION WITH MOTIF)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SILICON GRAPHICS VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (SUN4 VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
TAE+ 5.2 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.2 (DEC RISC ULTRIX VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. User interface interactive objects include data-driven graphical objects such as dials, thermometers, and strip charts as well as menubars, option menus, file selection items, message items, push buttons, and color loggers. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, C++, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides a means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System and the Open Software Foundation's Motif. The HP 9000 Series 700/800 version of TAE 5.2 requires Version 11 Release 5 of the X Window System. All other machine versions of TAE 5.2 require Version 11, Release 4 of the X Window System. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. 
Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus was developed in 1989 and version 5.2 was released in 1993. TAE Plus 5.2 is available on media suitable for five different machine platforms: (1) IBM RS/6000 series workstations running AIX (.25 inch tape cartridge in UNIX tar format), (2) DEC RISC workstations running ULTRIX (TK50 cartridge in UNIX tar format), (3) HP9000 Series 700/800 computers running HP-UX 9.x and X11/R5 (HP 4mm DDS DAT tape cartridge in UNIX tar format), (4) Sun4 (SPARC) series computers running SunOS (.25 inch tape cartridge in UNIX tar format), and (5) SGI Indigo computers running IRIX (.25 inch IRIS tape cartridge in UNIX tar format). Please contact COSMIC to obtain detailed information about the supported operating system and OSF/Motif releases required for each of these machine versions. An optional Motif Object Code License is available for the Sun4 version of TAE Plus 5.2. Version 5.1 of TAE Plus remains available for DEC VAX computers running VMS, HP9000 Series 300/400 computers running HP-UX, and HP 9000 Series 700/800 computers running HP-UX 8.x and X11/R4. Please contact COSMIC for details on these versions of TAE Plus.
NASA Technical Reports Server (NTRS)
Tamkin, Glenn S. (Inventor); Duffy, Daniel Q. (Inventor); Schnase, John L. (Inventor)
2016-01-01
A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.
Implementing embedded artificial intelligence rules within algorithmic programming languages
NASA Technical Reports Server (NTRS)
Feyock, Stefan
1988-01-01
Most integrations of artificial intelligence (AI) capabilities with non-AI (usually FORTRAN-based) application programs require the application program to run as a subprogram or, at best, as a coroutine, of the AI system. In many cases this organization is unacceptable; instead, the requirement is for an AI facility that runs in embedded mode, i.e., one that is called as a subprogram by the application program. The design and implementation of a Prolog-based AI capability that can be invoked in embedded mode are described. The significance of this system is twofold. First, the provision of Prolog-based symbol-manipulation and deduction facilities makes a powerful symbolic reasoning mechanism available to application programs written in non-AI languages. Second, the power of Prolog's deductive and non-procedural descriptive capabilities, which allow the user to describe the problem to be solved rather than the solution, is to a large extent vitiated by the absence of the standard control structures provided by other languages; embedding invocations of Prolog rule bases in programs written in non-AI languages makes it possible to put Prolog calls inside DO loops and similar control constructs. The resulting merger of non-AI and AI languages yields a symbiotic system in which the advantages of both programming systems are retained and their deficiencies largely remedied.
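A minimal sketch of the embedded-mode organization described above may be useful. The engine entry points prolog_consult and prolog_query are hypothetical stand-ins with stub bodies (so the example runs), not the interface of the system described; the point is only that the conventional program owns the control flow and calls the rule base from inside an ordinary loop.

    // Hypothetical embedded-engine entry points with stub bodies; illustrative only.
    #include <cstdio>
    #include <string>

    bool prolog_consult(const std::string&) { return true; }   // load a rule base

    bool prolog_query(const std::string& goal, std::string* binding) {
      // Stub "rule base": classify a reading mentioned in the goal string.
      *binding = (goal.find("(97") != std::string::npos) ? "nominal" : "anomalous";
      return true;
    }

    int main() {
      if (!prolog_consult("diagnosis.pl")) return 1;   // hypothetical rule-base file

      const double readings[] = {97.0, 312.5, 97.0};

      // Embedded mode: the application calls the AI facility from inside an
      // ordinary loop, rather than running as a subprogram of the AI system.
      for (double r : readings) {
        std::string verdict;
        const std::string goal = "classify(" + std::to_string(r) + ", Verdict)";
        if (prolog_query(goal, &verdict))
          std::printf("reading %.1f -> %s\n", r, verdict.c_str());
      }
      return 0;
    }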
The NASA/IPAC Teacher Archive Research Program (NITARP): Lessons Learned
NASA Astrophysics Data System (ADS)
Rebull, Luisa M.; Gorjian, Varoujan; Squires, Gordon K.
2017-01-01
NITARP, the NASA/IPAC Teacher Archive Research Program, gets teachers involved in authentic astronomical research. We partner small groups of educators with a professional astronomer mentor for a year-long original research project. The teams echo the entire research process, from writing a proposal, to doing the research, to presenting the results at an American Astronomical Society (AAS) meeting. The program runs from January through January. Applications are available annually in May and are due in September. The educators’ experiences color their teaching for years to come, influencing hundreds of students per teacher. In support of other teams planning programs similar to NITARP, in this poster we present our top lessons learned from running NITARP for more than 10 years. Support is provided for NITARP by the NASA ADP program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorn, David L.
The code is written in Basic and runs under the freely available Just BASIC environment (justbasic.com). It drives a set of stepper motors to mechanize the operation of pipetting radioactive solutions within a hot cell, and it communicates via serial port with the C4 stepper controller sold by Arrick (see http://www.arrickrobotics.com/c4md2.html). It is intended to operate stand-alone: the Just BASIC application is downloaded onto a PC, the PC runs the Pipettor program, and operating instructions are included as comments within the source code.
Multiple elastic scattering of electrons in condensed matter
NASA Astrophysics Data System (ADS)
Jablonski, A.
2017-01-01
Since the 1940s, much attention has been devoted to the problem of accurate theoretical description of electron transport in condensed matter. The needed information for describing different aspects of the electron transport is the angular distribution of electron directions after multiple elastic collisions. This distribution can be expanded into a series of Legendre polynomials with coefficients A_l. In the present work, a database of these coefficients for all elements up to uranium (Z=92) and a dense grid of electron energies varying from 50 to 5000 eV has been created. The database makes possible the following applications: (i) accurate interpolation of coefficients A_l for any element and any energy from the above range, (ii) fast calculations of the differential and total elastic-scattering cross sections, (iii) determination of the angular distribution of directions after multiple collisions, (iv) calculations of the probability of elastic backscattering from solids, and (v) calculations of the calibration curves for determination of the inelastic mean free paths of electrons. The last two applications provide data with comparable accuracy to Monte Carlo simulations, yet the running time is decreased by several orders of magnitude. All of the above applications are implemented in the Fortran program MULTI_SCATT. Numerous illustrative runs of this program are described. Despite a relatively large volume of the database of coefficients A_l, the program MULTI_SCATT can be readily run on personal computers.
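For reference, the expansion mentioned in the abstract is commonly written as follows; the exact normalization convention adopted for the tabulated coefficients A_l may differ from this sketch.

    f(\theta) = \sum_{l=0}^{\infty} \frac{2l+1}{4\pi}\, A_l\, P_l(\cos\theta),
    \qquad
    A_l = 2\pi \int_0^{\pi} f(\theta)\, P_l(\cos\theta)\, \sin\theta \, d\theta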
An enhanced Ada run-time system for real-time embedded processors
NASA Technical Reports Server (NTRS)
Sims, J. T.
1991-01-01
An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.
The Error Reporting in the ATLAS TDAQ System
NASA Astrophysics Data System (ADS)
Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos
2015-05-01
The ATLAS Error Reporting provides a service that allows experts and shift crew to track and address errors relating to the data-taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where it can be intercepted in real time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware that can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each of them exploits advanced features of a given language to simplify end-user program writing. For example, since C++ lacks built-in support for declaring rich exception hierarchies concisely, a number of macros have been designed to generate hierarchies of C++ exception classes at compile time. Using this approach, a software developer can write a single line of code to generate boilerplate code for a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue. When a corresponding error occurs at run time, the program just needs to create an instance of that class, passing relevant values to one of the available class constructors, and send this instance to ERS. This paper presents the original design solutions exploited for the ERS implementation and describes how it was used during the first ATLAS run period. The cross-system error reporting standardization introduced by ERS was one of the key points for the successful implementation of automated mechanisms for online error recovery.
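To make the compile-time class-generation idea concrete, here is a simplified stand-in macro. It is not the real ERS macro set or its signature; it only illustrates how a single declaration line can expand into a complete exception class that carries a typed attribute and folds it into the message.

    // Illustrative only: a simplified stand-in for the kind of macro the abstract
    // describes; the real ERS macros and their signatures are not reproduced here.
    #include <iostream>
    #include <sstream>
    #include <stdexcept>
    #include <string>

    // Hypothetical macro: declares an exception class derived from a chosen base,
    // carrying one typed attribute that is folded into the what() message.
    #define DECLARE_ISSUE(ClassName, BaseClass, AttrType, attr_name)              \
      class ClassName : public BaseClass {                                         \
      public:                                                                      \
        ClassName(const std::string& msg, AttrType attr_name##_value)             \
            : BaseClass(format(msg, attr_name##_value)),                          \
              attr_name##_(attr_name##_value) {}                                  \
        AttrType attr_name() const { return attr_name##_; }                       \
      private:                                                                     \
        static std::string format(const std::string& msg, AttrType v) {           \
          std::ostringstream os;                                                   \
          os << msg << " (" #attr_name " = " << v << ")";                          \
          return os.str();                                                         \
        }                                                                          \
        AttrType attr_name##_;                                                     \
      };

    // One line of user code yields a fully declared exception class.
    DECLARE_ISSUE(FileReadError, std::runtime_error, int, error_code)

    int main() {
      try {
        throw FileReadError("cannot read configuration file", 13);
      } catch (const FileReadError& issue) {
        // In ERS the instance would be handed to the reporting service; printing
        // it here simply stands in for that step.
        std::cerr << issue.what() << std::endl;
      }
      return 0;
    }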
IMGui-A Desktop GUI Application for Isolation with Migration Analyses.
Knoblauch, Jared; Sethuraman, Arun; Hey, Jody
2017-02-01
The Isolation with Migration (IM) programs (e.g., IMa2) have been utilized extensively by evolutionary biologists for model-based inference of demographic parameters including effective population sizes, migration rates, and divergence times. Here, we describe a graphical user interface for the latest IM program. IMGui provides a comprehensive set of tools for performing demographic analyses, tracking progress of runs, and visualizing results. Developed using node.js and the Electron framework, IMGui is an application that runs on any desktop operating system, and is available for download at https://github.com/jaredgk/IMgui-electron-packages.
NASTRAN data deck generation on the PC
NASA Technical Reports Server (NTRS)
Guyan, R. J.
1986-01-01
Using two commercial programs, an application was developed to aid in generating a run-ready NASTRAN data deck on the PC. Macros are used to access relevant reference material and card files while editing the deck. The application can be easily customized to suit individual or group needs.
Practical Application of Fundamental Concepts in Exercise Physiology
ERIC Educational Resources Information Center
Ramsbottom R.; Kinch, R. F. T.; Morris, M. G.; Dennis, A. M.
2007-01-01
The collection of primary data in laboratory classes enhances undergraduate practical and critical thinking skills. The present article describes the use of a lecture program, running in parallel with a series of linked practical classes, that emphasizes classical or standard concepts in exercise physiology. The academic and practical program ran…
75 FR 54587 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-08
... volunteer organizations to plan, develop, maintain and manage, where appropriate, trails and campground... will be unable to recruit and/or screen volunteer applicants or administer/run volunteer programs that...
Sakhteman, Amirhossein; Zare, Bijan
2016-01-01
An interactive application, Modelface, is presented for the Modeller software, based on the Windows platform. The application is able to run all steps of homology modeling, including PDB-to-FASTA generation, running Clustal, model building, and loop refinement. Other modules of Modeller, including energy calculation, energy minimization, and the ability to make single-point mutations in PDB structures, are also implemented inside Modelface. The API is a simple batch-based application with a negligible memory footprint and is free of charge for academic use. The application is also able to repair missing atom types in PDB structures, making it suitable for many molecular modeling studies such as docking and molecular dynamics simulation. Some successful instances of modeling studies using Modelface are also reported. PMID:28243276
NASA Technical Reports Server (NTRS)
Mcdill, Paul L.
1986-01-01
A test program, utilizing a large scale model, was run in the NASA Lewis Research Center 10- by 10-ft wind tunnel to examine the influence on performance of design parameters of turboprop S-duct inlet/diffuser systems. The parametric test program investigated inlet lip thickness, inlet/diffuser cross-sectional geometry, throat design Mach number, and shaft fairing shape. The test program was run at angles of attack to 15 deg and tunnel Mach numbers to 0.35. Results of the program indicate that current design techniques can be used to design inlet/diffuser systems with acceptable total pressure recovery, but several of the design parameters, notably lip thickness (contraction ratio) and shaft fairing cross section, must be optimized to prevent excessive distortion at the compressor face.
The Automated Instrumentation and Monitoring System (AIMS) reference manual
NASA Technical Reports Server (NTRS)
Yan, Jerry; Hontalas, Philip; Listgarten, Sherry
1993-01-01
Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor,' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor, which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit, which reconstructs program execution from the trace file; and a trace post-processor, which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be graphically displayed on workstations (e.g., Sun SPARC and SGI) supporting the X Window System (in particular, X11R5 and Motif 1.1.3).
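As a conceptual illustration of what the instrumentor does, an instrumented routine might look like the sketch below: trace records bracket the communication call so the post-processor can later subtract the monitoring overhead. The hook name aims_log_event and the event codes are hypothetical stand-ins, not the real AIMS interface.

    // Conceptual sketch only: aims_log_event and the event identifiers are
    // hypothetical stand-ins for the run-time monitoring library the abstract
    // describes, not the actual AIMS interface.
    #include <chrono>
    #include <cstddef>
    #include <cstdio>

    enum EventKind { SEND_BEGIN = 0, SEND_END = 1 };

    // Appends a timestamped record to the trace (here simply printed) so the
    // animation/analysis tools can later reconstruct the program's execution.
    static void aims_log_event(EventKind kind, int node, std::size_t bytes) {
      const auto t = std::chrono::steady_clock::now().time_since_epoch().count();
      std::printf("%lld kind=%d node=%d bytes=%zu\n",
                  static_cast<long long>(t), static_cast<int>(kind), node, bytes);
    }

    // A user routine after instrumentation: event records bracket the original
    // communication call so the post-processor can subtract monitoring overhead.
    void send_block(int dest, const double* buf, std::size_t n) {
      (void)buf;                               // payload unused in this sketch
      aims_log_event(SEND_BEGIN, dest, n * sizeof(double));
      /* original message-passing call (e.g., an NX send) would appear here */
      aims_log_event(SEND_END, dest, n * sizeof(double));
    }

    int main() {
      double data[16] = {};
      send_block(1, data, 16);
      return 0;
    }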
From Operating-System Correctness to Pervasively Verified Applications
NASA Astrophysics Data System (ADS)
Daum, Matthias; Schirmer, Norbert W.; Schmidt, Mareike
Though program verification is known and has been used for decades, the verification of a complete computer system still remains a grand challenge. Part of this challenge is the interaction of application programs with the operating system, which is usually entrusted with retrieving input data from and transferring output data to peripheral devices. In this scenario, the correct operation of the applications inherently relies on operating-system correctness. Based on the formal correctness of our real-time operating system Olos, this paper describes an approach to pervasively verify applications running on top of the operating system.
2018-04-20
An MRAP armored vehicle goes through a training run on the Shuttle Landing Facility to support NASA's Commercial Crew Program at the agency's Kennedy Space Center in Florida. The 45,000-pound mine-resistant ambush protected vehicle, or MRAP, was originally designed for military applications. The MRAP offers a mobile bunker for astronauts and ground crews in the unlikely event they have to get away from the launch pad quickly in an emergency.
2018-04-20
Two MRAP armored vehicles go through a training run on the Shuttle Landing Facility to support NASA's Commercial Crew Program at the agency's Kennedy Space Center in Florida. The 45,000-pound mine-resistant ambush protected vehicles, or MRAPs, were originally designed for military applications. The MRAP offers a mobile bunker for astronauts and ground crews in the unlikely event they have to get away from the launch pad quickly in an emergency.
TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (DEC VAX ULTRIX VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.
TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (SUN3 VERSION)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.
TAE+ 5.1 - TRANSPORTABLE APPLICATIONS ENVIRONMENT PLUS, VERSION 5.1 (SUN3 VERSION WITH MOTIF)
NASA Technical Reports Server (NTRS)
TAE SUPPORT OFFICE
1994-01-01
TAE (Transportable Applications Environment) Plus is an integrated, portable environment for developing and running interactive window, text, and graphical object-based application systems. The program allows both programmers and non-programmers to easily construct their own custom application interface and to move that interface and application to different machine environments. TAE Plus makes both the application and the machine environment transparent, with noticeable improvements in the learning curve. The main components of TAE Plus are as follows: (1) the WorkBench, a What You See Is What You Get (WYSIWYG) tool for the design and layout of a user interface; (2) the Window Programming Tools Package (WPT), a set of callable subroutines that control an application's user interface; and (3) TAE Command Language (TCL), an easy-to-learn command language that provides an easy way to develop an executable application prototype with a run-time interpreted language. The WorkBench tool allows the application developer to interactively construct the layout of an application's display screen by manipulating a set of interaction objects including input items such as buttons, icons, and scrolling text lists. Data-driven graphical objects such as dials, thermometers, and strip charts are also included. TAE Plus updates the strip chart as the data values change. The WorkBench user specifies the windows and interaction objects that will make up the user interface, then specifies the sequence of the user interface dialogue. The description of the designed user interface is then saved into resource files. For those who desire to develop the designed user interface into an operational application, the WorkBench tool also generates source code (C, Ada, and TCL) which fully controls the application's user interface through function calls to the WPTs. The WPTs are the runtime services used by application programs to display and control the user interfaces. Since the WPTs access the workbench-generated resource files during each execution, details such as color, font, location, and object type remain independent from the application code, allowing changes to the user interface without recompiling and relinking. The Silicon Graphics version of TAE Plus now has a font caching scheme and a color caching scheme to make color allocation more efficient. In addition to WPTs, TAE Plus can control interaction of objects from the interpreted TAE Command Language. TCL provides an extremely powerful means for the more experienced developer to quickly prototype an application's use of TAE Plus interaction objects and add programming logic without the overhead of compiling or linking. TAE Plus requires MIT's X Window System, Version 11 Release 4, and the Open Software Foundation's Motif Toolkit 1.1 or 1.1.1. The Workbench and WPTs are written in C++ and the remaining code is written in C. TAE Plus is available by license for an unlimited time period. The licensed program product includes the TAE Plus source code and one set of supporting documentation. Additional documentation may be purchased separately at the price indicated below. The amount of disk space required to load the TAE Plus tar format tape is between 35Mb and 67Mb depending on the machine version. The recommended minimum memory is 12Mb. Each TAE Plus platform delivery tape includes pre-built libraries and executable binary code for that particular machine, as well as source code, so users do not have to do an installation. 
Users wishing to recompile the source will need both a C compiler and either GNU's C++ Version 1.39 or later, or a C++ compiler based on AT&T 2.0 cfront. TAE Plus comes with InterViews and idraw, two software packages developed by Stanford University and integrated in TAE Plus. TAE Plus was developed in 1989 and version 5.1 was released in 1991. TAE Plus is currently available on media suitable for eight different machine platforms: 1) DEC VAX computers running VMS 5.3 or higher (TK50 cartridge in VAX BACKUP format), 2) DEC VAXstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 3) DEC RISC workstations running ULTRIX 4.1 or later (TK50 cartridge in UNIX tar format), 4) HP9000 Series 300/400 computers running HP-UX 8.0 (.25 inch HP-preformatted tape cartridge in UNIX tar format), 5) HP9000 Series 700 computers running HP-UX 8.05 (HP 4mm DDS DAT tape cartridge in UNIX tar format), 6) Sun3 series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), 7) Sun4 (SPARC) series computers running SunOS 4.1.1 (.25 inch tape cartridge in UNIX tar format), and 8) SGI Indigo computers running IRIX 4.0.1 and IRIX/Motif 1.0.1 (.25 inch IRIS tape cartridge in UNIX tar format). An optional Motif Object Code License is available for either Sun version. TAE is a trademark of the National Aeronautics and Space Administration. X Window System is a trademark of the Massachusetts Institute of Technology. Motif is a trademark of the Open Software Foundation. DEC, VAX, VMS, TK50 and ULTRIX are trademarks of Digital Equipment Corporation. HP9000 and HP-UX are trademarks of Hewlett-Packard Co. Sun3, Sun4, SunOS, and SPARC are trademarks of Sun Microsystems, Inc. SGI and IRIS are registered trademarks of Silicon Graphics, Inc.
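The separation the WPTs provide, with presentation details such as color, font, and object type kept in resource files that are read at run time so the interface can change without recompiling the application, can be illustrated with a small hypothetical sketch. This is not TAE's actual resource format or API; the file layout and function names below are invented for illustration, and the sketch is in Python rather than C or Ada.

```python
# Hypothetical sketch of a resource-driven user interface: presentation
# attributes live in a JSON "resource file" that is read at run time,
# while the application code only refers to items by name and supplies
# callbacks.  Editing the resource file changes the look of the panel
# without touching or recompiling the application logic.
import json

RESOURCE_FILE = """
{
  "panel": "main",
  "items": [
    {"name": "start_button", "type": "button", "label": "Start", "color": "green"},
    {"name": "status_text",  "type": "text",   "label": "Idle",  "font": "fixed"}
  ]
}
"""

def load_panel(resource_text, callbacks):
    """Build a panel description from the resource file and bind callbacks."""
    panel = json.loads(resource_text)
    for item in panel["items"]:
        item["callback"] = callbacks.get(item["name"])
    return panel

def activate(panel, name):
    """Simulate the user activating an interaction object."""
    for item in panel["items"]:
        if item["name"] == name and item["callback"]:
            item["callback"](item)

if __name__ == "__main__":
    def on_start(item):
        print(f"'{item['label']}' pressed (rendered in {item.get('color', 'default')})")

    panel = load_panel(RESOURCE_FILE, {"start_button": on_start})
    activate(panel, "start_button")
```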
45 CFR 305.32 - Requirements applicable to calculations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (CHILD SUPPORT ENFORCEMENT PROGRAM), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND... Federal fiscal year runs from October 1st of one year through September 30th of the following year. (b...
Eternal Sunshine of the Spotless Machine: Protecting Privacy with Ephemeral Channels
Dunn, Alan M.; Lee, Michael Z.; Jana, Suman; Kim, Sangman; Silberstein, Mark; Xu, Yuanzhong; Shmatikov, Vitaly; Witchel, Emmett
2014-01-01
Modern systems keep long memories. As we show in this paper, an adversary who gains access to a Linux system, even one that implements secure deallocation, can recover the contents of applications’ windows, audio buffers, and data remaining in device drivers—long after the applications have terminated. We design and implement Lacuna, a system that allows users to run programs in “private sessions.” After the session is over, all memories of its execution are erased. The key abstraction in Lacuna is an ephemeral channel, which allows the protected program to talk to peripheral devices while making it possible to delete the memories of this communication from the host. Lacuna can run unmodified applications that use graphics, sound, USB input devices, and the network, with only 20 percentage points of additional CPU utilization. PMID:24755709
Providing Assistive Technology Applications as a Service Through Cloud Computing.
Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio
2015-01-01
Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, at an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.
Virtual Frame Buffer Interface Program
NASA Technical Reports Server (NTRS)
Wolfe, Thomas L.
1990-01-01
Virtual Frame Buffer Interface program makes all frame buffers appear as generic frame buffer with specified set of characteristics, allowing programmers to write codes that run unmodified on all supported hardware. Converts generic commands to actual device commands. Consists of definition of capabilities and FORTRAN subroutines called by application programs. Developed in FORTRAN 77 for DEC VAX 11/780 or DEC VAX 11/750 computer under VMS 4.X.
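The pattern described above, in which application code draws through a generic frame-buffer interface and a per-device backend converts the generic calls into device commands, can be sketched as follows. This is a minimal illustration in Python rather than the program's FORTRAN 77, and the class and method names are invented.

```python
# Minimal sketch of a virtual frame buffer: application code draws through
# a generic interface, and each backend translates the generic calls into
# its own "device commands" (here just strings, standing in for hardware I/O).
from abc import ABC, abstractmethod

class FrameBuffer(ABC):
    @abstractmethod
    def set_pixel(self, x, y, color): ...

    def draw_hline(self, x0, x1, y, color):
        # Generic command built from the primitive the device must supply.
        for x in range(x0, x1 + 1):
            self.set_pixel(x, y, color)

class DeviceA(FrameBuffer):
    def set_pixel(self, x, y, color):
        print(f"DEVA PIX {x},{y} {color}")

class DeviceB(FrameBuffer):
    def set_pixel(self, x, y, color):
        print(f"devb: plot({x};{y};{color})")

def application(fb: FrameBuffer):
    # Application code is written once against the generic interface.
    fb.draw_hline(0, 3, 10, "white")

if __name__ == "__main__":
    for device in (DeviceA(), DeviceB()):
        application(device)
```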
A performance comparison of the Cray-2 and the Cray X-MP
NASA Technical Reports Server (NTRS)
Schmickley, Ronald; Bailey, David H.
1986-01-01
A suite of thirteen large Fortran benchmark codes was run on Cray-2 and Cray X-MP supercomputers. These codes were a mix of compute-intensive scientific application programs (mostly Computational Fluid Dynamics) and some special vectorized computation exercise programs. For the general class of programs tested on the Cray-2, most of which were not specially tuned for speed, the floating point operation rates varied under a variety of system load configurations from 40 percent up to 125 percent of X-MP performance rates. It is concluded that the Cray-2, in the original system configuration studied (without memory pseudo-banking), will run untuned Fortran code, on average, at about 70 percent of X-MP speeds.
Program Aids Visualization Of Data
NASA Technical Reports Server (NTRS)
Truong, L. V.
1995-01-01
Living Color Frame System (LCFS) computer program developed to solve some problems that arise in connection with generation of real-time graphical displays of numerical data and of statuses of systems. Need for program like LCFS arises because computer graphics often applied for better understanding and interpretation of data under observation and these graphics become more complicated when animation required during run time. Eliminates need for custom graphical-display software for application programs. Written in Turbo C++.
NASA Technical Reports Server (NTRS)
Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam;
2009-01-01
The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.
Johnsen Lind, Andreas; Helge Johnsen, Bjorn; Hill, Labarron K; Sollers Iii, John J; Thayer, Julian F
2011-01-01
The aim of the present manuscript is to present a user-friendly and flexible platform for transforming Kubios HRV output files to the .xls file format used by MS Excel. The program utilizes either native or bundled Java and is platform-independent and mobile. This means that it can run without being installed on a computer. It also has an option for continuous transfer of data, meaning that it can run in the background while Kubios produces output files. The program checks for changes in the file structure and automatically updates the .xls output file.
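A minimal sketch of the same pattern, polling an output directory in the background and appending newly found results to an Excel workbook, is shown below in Python rather than Java. The input file layout assumed here (whitespace-separated columns) and the openpyxl dependency are assumptions for illustration only.

```python
# Sketch: watch a directory for new analysis output files and convert each
# one to rows of an Excel workbook.  Assumes whitespace-separated columns
# in the input files; requires the openpyxl package.
import time
from pathlib import Path
from openpyxl import Workbook

def to_number(field):
    try:
        return float(field)
    except ValueError:
        return field

def convert(watch_dir="hrv_output", target="hrv_results.xlsx", poll_seconds=5.0):
    Path(watch_dir).mkdir(exist_ok=True)   # make sure the watched directory exists
    seen = set()
    wb = Workbook()
    ws = wb.active
    while True:
        for path in sorted(Path(watch_dir).glob("*.txt")):
            if path in seen:
                continue
            seen.add(path)
            for line in path.read_text().splitlines():
                fields = line.split()
                if fields:
                    ws.append([to_number(f) for f in fields])
            wb.save(target)
            print(f"converted {path.name}")
        time.sleep(poll_seconds)  # keep polling in the background

if __name__ == "__main__":
    convert()
```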
An improved viscous characteristics analysis program
NASA Technical Reports Server (NTRS)
Jenkins, R. V.
1978-01-01
An improved two dimensional characteristics analysis program is presented. The program is built upon the foundation of a FORTRAN program entitled Analysis of Supersonic Combustion Flow Fields With Embedded Subsonic Regions. The major improvements are described and a listing of the new program is provided. The subroutines and their functions are given as well as the input required for the program. Several applications of the program to real problems are qualitatively described. Three runs obtained in the investigation of a real problem are presented to provide insight for the input and output of the program.
Models@Home: distributed computing in bioinformatics using a screensaver based approach.
Krieger, Elmar; Vriend, Gert
2002-02-01
Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while the time is lacking to parallelize them. Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs. Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux and Windows based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
2001-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
Method for resource control in parallel environments using program organization and run-time support
NASA Technical Reports Server (NTRS)
Ekanadham, Kattamuri (Inventor); Moreira, Jose Eduardo (Inventor); Naik, Vijay Krishnarao (Inventor)
1999-01-01
A system and method for dynamic scheduling and allocation of resources to parallel applications during the course of their execution. By establishing well-defined interactions between an executing job and the parallel system, the system and method support dynamic reconfiguration of processor partitions, dynamic distribution and redistribution of data, communication among cooperating applications, and various other monitoring actions. The interactions occur only at specific points in the execution of the program where the aforementioned operations can be performed efficiently.
A Comparison of Three Programming Models for Adaptive Applications
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak; Kwak, Dochan (Technical Monitor)
2000-01-01
We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the fact of using explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program orchestration level, which often leads to performance gains. However, it may also suffer from poor spatial locality of physically distributed shared data on large numbers of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates a more balanced result for our application.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-28
...) matching grant for the 2011 grant cycle (the 2011 grant cycle runs January 1, 2011, through December 31.... Based on the findings of this assessment, for the 2011 grant cycle, the LITC Program Office is... currently receiving a grant for the 2010 grant cycle, or (2) organizations servicing the following counties...
Bringing Interactivity to the Web: The JAVA Solution.
ERIC Educational Resources Information Center
Knee, Richard H.; Cafolla, Ralph
Java is an object-oriented programming language of the Internet. Its popularity lies in its ability to create interactive Web sites across platforms. The most common Java programs are applications and applets, which adhere to a set of conventions that let them run within a Java-compatible browser. Java is becoming an essential subject matter and…
40 CFR 86.1920 - What in-use testing information must I report to EPA?
Code of Federal Regulations, 2010 CFR
2010-07-01
... type or application (such as delivery, line haul, or dump truck). Also, identify the type of trailer... (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES (CONTINUED) Manufacturer-Run In-Use Testing Program for Heavy-Duty Diesel Engines § 86.1920 What in-use...
Evaluating SPLASH-2 Applications Using MapReduce
NASA Astrophysics Data System (ADS)
Zhu, Shengkai; Xiao, Zhiwei; Chen, Haibo; Chen, Rong; Zhang, Weihua; Zang, Binyu
MapReduce has been prevalent for running data-parallel applications. By hiding non-functional concerns such as parallelism, fault tolerance, and load balancing from programmers, MapReduce significantly simplifies the programming of large clusters. Because of these features, researchers have also explored the use of MapReduce in other application domains, such as machine learning, textual retrieval, and statistical translation, among others.
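For readers unfamiliar with the model, the map, shuffle, and reduce flow that such frameworks hide behind two user-supplied functions can be sketched in a few lines. This is an in-memory illustration only; a real MapReduce runtime adds the distribution, fault tolerance, and load balancing mentioned above.

```python
# Minimal in-memory illustration of the MapReduce programming model:
# the user supplies only map() and reduce() functions; the framework
# handles grouping (the "shuffle") between the two phases.
from collections import defaultdict

def map_fn(document):
    # Emit (key, value) pairs: here, one (word, 1) pair per word.
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(key, values):
    return key, sum(values)

def map_reduce(inputs, map_fn, reduce_fn):
    groups = defaultdict(list)
    for item in inputs:                    # map phase
        for key, value in map_fn(item):
            groups[key].append(value)      # shuffle: group values by key
    return [reduce_fn(k, vs) for k, vs in groups.items()]  # reduce phase

if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    print(sorted(map_reduce(docs, map_fn, reduce_fn)))
    # [('brown', 1), ('dog', 1), ('fox', 2), ('lazy', 1), ('quick', 1), ('the', 3)]
```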
NASA Technical Reports Server (NTRS)
Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.
2012-01-01
This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
Windows Program For Driving The TDU-850 Printer
NASA Technical Reports Server (NTRS)
Parrish, Brett T.
1995-01-01
Program provides WYSIWYG compatibility between video display and printout. PDW is Microsoft Windows printer-driver computer program for use with Raytheon TDU-850 printer. Provides previously unavailable linkage between printer and IBM PC-compatible computers running Microsoft Windows. Enhances capabilities of Raytheon TDU-850 hardcopier by emulating all textual and graphical features normally supported by laser/ink-jet printers and makes printer compatible with any Microsoft Windows application. Also provides capabilities not found in laser/ink-jet printer drivers by providing certain Windows applications with ability to render high quality, true gray-scale photographic hardcopy on TDU-850. Written in C language.
It's Money! Real-World Grant Experience through a Student-Run, Peer-Reviewed Program
Dumanis, Sonya B.; Ullrich, Lauren; Washington, Patricia M.; Forcelli, Patrick A.
2013-01-01
Grantsmanship is an integral component of surviving and thriving in academic science, especially in the current funding climate. Therefore, any additional opportunities to write, read, and review grants during graduate school may have lasting benefits on one's career. We present here our experience with a small, student-run grant program at Georgetown University Medical Center. Founded in 2010, this program has several goals: 1) to give graduate students an opportunity to conduct small, independent research projects; 2) to encourage graduate students to write grants early and often; and 3) to give graduate students an opportunity to review grants. In the 3 yr since the program's start, 28 applications have been submitted, 13 of which were funded for a total of $40,000. From funded grants, students have produced abstracts and manuscripts, generated data to support subsequent grant proposals, and made new professional contacts with collaborators. Above and beyond financial support, this program provided both applicants and reviewers an opportunity to improve their writing skills, professional development, and understanding of the grants process, as reflected in the outcome measures presented. With a small commitment of time and funding, other institutions could implement a program like this to the benefit of their graduate students. PMID:24006391
Portable Medical Laboratory Applications Software
Silbert, Jerome A.
1983-01-01
Portability implies that a program can be run on a variety of computers with minimal software revision. The advantages of portability are outlined and design considerations for portable laboratory software are discussed. Specific approaches for achieving this goal are presented.
NASA Technical Reports Server (NTRS)
Bown, Rodney L. (Editor)
1986-01-01
Topics discussed include: test and verification; environment issues; distributed Ada issues; life cycle issues; Ada in Europe; management/training issues; common Ada interface set; and run time issues.
BASIC Data Manipulation And Display System (BDMADS)
NASA Technical Reports Server (NTRS)
Szuch, J. R.
1983-01-01
BDMADS, a BASIC Data Manipulation and Display System, is a collection of software programs that run on an Apple II Plus personal computer. BDMADS provides a user-friendly environment for the engineer in which to perform scientific data processing. The computer programs and their use are described. Jet engine performance calculations are used to illustrate the use of BDMADS. Source listings of the BDMADS programs are provided and should permit users to customize the programs for their particular applications.
The Long-Run Impact of Cash Transfers to Poor Families†
Aizer, Anna; Eli, Shari; Ferrie, Joseph; Lleras-Muney, Adriana
2017-01-01
We estimate the long-run impact of cash transfers to poor families on children’s longevity, educational attainment, nutritional status, and income in adulthood. To do so, we collected individual-level administrative records of applicants to the Mothers’ Pension program—the first government-sponsored welfare program in the United States (1911–1935)—and matched them to census, WWII, and death records. Male children of accepted applicants lived one year longer than those of rejected mothers. They also obtained one-third more years of schooling, were less likely to be underweight, and had higher income in adulthood than children of rejected mothers. PMID:28713169
NASA Astrophysics Data System (ADS)
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
2003-12-01
Modern laptop and personal computers can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
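The client side of this model amounts to packaging the input parameters, sending them to the server, and unpacking the returned result. A minimal sketch follows; the endpoint URL and the JSON parameter names are hypothetical, and JSON over HTTP is used here in place of the SOAP/WSDL machinery described above.

```python
# Sketch of a client invoking a server-hosted computation (for example, a
# coordinate-conversion service).  The URL and parameter names are
# hypothetical; a real SOAP service would wrap the payload in an XML
# envelope described by its WSDL instead of plain JSON.
import json
import urllib.request

def call_service(lat, lon, url="http://services.example.org/utm-convert"):
    payload = json.dumps({"latitude": lat, "longitude": lon}).encode("utf-8")
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    # Requires network access to the (hypothetical) service endpoint.
    result = call_service(34.05, -118.25)
    print(result.get("easting"), result.get("northing"), result.get("zone"))
```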
General-Purpose Ada Software Packages
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.
1991-01-01
Collection of subprograms brings to Ada many features from other programming languages. All generic packages designed to be easily instantiated for types declared in user's facility. Most packages have widespread applicability, although some oriented for avionics applications. All designed to facilitate writing new software in Ada. Written on IBM/AT personal computer running under PC DOS, v.3.1.
A C++ Thread Package for Concurrent and Parallel Programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jie Chen; William Watson
1999-11-01
Recently, thread libraries have become a common entity on various operating systems such as Unix, Windows NT and VxWorks. Those thread libraries offer significant performance enhancement by allowing applications to use multiple threads running either concurrently or in parallel on multiprocessors. However, the incompatibilities between native libraries introduce challenges for those who wish to develop portable applications.
2018-04-20
Following a training run on the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida, MRAP back doors are opened showing seating in the armored vehicle. The 45,000-pound mine-resistant ambush protected vehicle, or MRAP, was originally designed for military applications, but will support the agency's Commercial Crew Program at the spaceport. The MRAP offers a mobile bunker for astronauts and ground crews in the unlikely event they have to get away from the launch pad quickly in an emergency.
Benchmarks for target tracking
NASA Astrophysics Data System (ADS)
Dunham, Darin T.; West, Philip D.
2011-09-01
The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.
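The role of Monte Carlo runs in a benchmark, repeating the same scenario with randomized inputs and summarizing how the algorithm's error varies, can be illustrated with a small harness. This is a generic sketch, not any of the benchmarks discussed above; the toy "tracker" is simply a running-average estimator.

```python
# Generic Monte Carlo benchmarking harness: run the algorithm under test
# over many randomized trials of the same scenario and summarize the
# spread of its error.  The "tracker" below is a toy running-average
# estimator standing in for a real tracking algorithm.
import random
import statistics

def run_trial(truth=10.0, noise_sigma=1.0, n_measurements=50, seed=None):
    rng = random.Random(seed)
    estimate = 0.0
    for k in range(1, n_measurements + 1):
        z = truth + rng.gauss(0.0, noise_sigma)   # noisy measurement
        estimate += (z - estimate) / k            # running-average "tracker"
    return abs(estimate - truth)                  # final absolute error

def benchmark(n_runs=1000):
    errors = [run_trial(seed=i) for i in range(n_runs)]
    return statistics.mean(errors), statistics.stdev(errors), max(errors)

if __name__ == "__main__":
    mean_err, std_err, worst = benchmark()
    print(f"mean error {mean_err:.3f}, std {std_err:.3f}, worst case {worst:.3f}")
```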
Kluitenberg, Bas; van Middelkoop, Marienke; Diercks, Ron L; Hartgens, Fred; Verhagen, Evert; Smits, Dirk-Wouter; Buist, Ida; van der Worp, Henk
2013-07-26
Running is associated with desirable lifestyle changes. Therefore several initiatives have been undertaken to promote running. Exact data on the health effects of participating in a short-term running promotion program, however, are scarce. One important reason for dropout from a running program is a running-related injury (RRI). The incidence of RRIs is high, especially in novice runners. Several studies have examined potential risk factors for RRIs; however, because these studies are often underpowered, it is not yet possible to reveal the complex mechanism leading to an RRI. The primary objectives are to determine short- and long-term health effects of a nationwide "Start to Run" program and to identify determinants for RRIs in novice runners. Secondary objectives include examining reasons and determinants for dropout, medical consumption, and the economic consequences of RRIs as a result of a running promotion program. The NLstart2run study is a multi-center prospective cohort study with a follow-up at 6, 12, 24 and 52 weeks. All participants who sign up for the Start to Run program in 2013, which is offered by the Dutch Athletics Federation, will be asked to participate in the study. During the running program, a digital running log will be completed by the participants every week to record exposure and running-related pain. After the running program, the log will be completed every second week. An RRI is defined as any musculoskeletal ailment of the lower extremity or back that the participant attributes to running and that hampers running ability for at least one week. The NLstart2run study will provide insight into the short- and long-term health effects of a short-term running promotion program. Reasons and determinants for dropout from a running promotion program will be examined as well. The study will result in several leads for future RRI prevention and thereby minimize dropout due to injury. This information may increase the effectiveness of future running promotion programs and will thereby contribute positively to public health. The Netherlands National Trial Register NTR3676. The NTR is part of the WHO Primary Registries.
TCP/IP Interface for the Satellite Orbit Analysis Program (SOAP)
NASA Technical Reports Server (NTRS)
Carnright, Robert; Stodden, David; Coggi, John
2009-01-01
The Transmission Control Protocol/Internet Protocol (TCP/IP) interface for the Satellite Orbit Analysis Program (SOAP) provides the means for the software to establish real-time interfaces with other software. Such interfaces can operate between two programs, either on the same computer or on different computers joined by a network. The SOAP TCP/IP module employs a client/server interface where SOAP is the server and other applications can be clients. Real-time interfaces between software offer a number of advantages over embedding all of the common functionality within a single program. One advantage is that they allow the computational labor to be divided between the processors or computers running the separate applications. Second, each program can contribute its own domain of expertise, which other programs can then use.
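The client/server pattern the module implements, with SOAP acting as a server that other applications query in real time, reduces to a small sketch using plain sockets. The one-line text message format below is purely illustrative and is not SOAP's actual protocol.

```python
# Minimal TCP client/server exchange of the kind used for a real-time
# interface between two running programs.  The one-line text message
# format here is purely illustrative.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5577

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode().strip()
            # A real server would dispatch on the request and compute a result.
            conn.sendall(f"ACK {request}\n".encode())

def client(message="GET_STATE sat42"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall((message + "\n").encode())
        return cli.recv(1024).decode().strip()

if __name__ == "__main__":
    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)              # crude startup wait, fine for a sketch
    print(client())              # prints: ACK GET_STATE sat42
```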
Augmenting Research, Education, and Outreach with Client-Side Web Programming.
Abriata, Luciano A; Rodrigues, João P G L M; Salathé, Marcel; Patiny, Luc
2018-05-01
The evolution of computing and web technologies over the past decade has enabled the development of fully fledged scientific applications that run directly on web browsers. Powered by JavaScript, the lingua franca of web programming, these 'web apps' are starting to revolutionize and democratize scientific research, education, and outreach. Copyright © 2017 Elsevier Ltd. All rights reserved.
Children's Fitness. Managing a Running Program.
ERIC Educational Resources Information Center
Hinkle, J. Scott; Tuckman, Bruce W.
1987-01-01
A running program to increase the cardiovascular fitness levels of fourth-, fifth-, and sixth-grade children is described. Discussed are the running environment, implementation of a running program, feedback, and reinforcement. (MT)
Virtualizing access to scientific applications with the Application Hosting Environment
NASA Astrophysics Data System (ADS)
Zasada, S. J.; Coveney, P. V.
2009-12-01
The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion. Program summary: Program title: Application Hosting Environment 2.0 Catalogue identifier: AEEJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public Licence, Version 2 No. of lines in distributed program, including test data, etc.: not applicable No. of bytes in distributed program, including test data, etc.: 1 685 603 766 Distribution format: tar.gz Programming language: Perl (server), Java (Client) Computer: x86 Operating system: Linux (Server), Linux/Windows/MacOS (Client) RAM: 134 217 728 (server), 67 108 864 (client) bytes Classification: 6.5 External routines: VirtualBox (server), Java (client) Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid. Solution method: To address the above problem, we have developed AHE, a lightweight interface, designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications. Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide. Running time: Not applicable References: J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf. P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.
Kopelman, Naama M; Mayzel, Jonathan; Jakobsson, Mattias; Rosenberg, Noah A; Mayrose, Itay
2015-09-01
The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present Clumpak (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, Clumpak identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software Clumpp. Next, Clumpak identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in Clumpp and simplifying the comparison of clustering results across different K values. Clumpak incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology. © 2015 John Wiley & Sons Ltd.
EPA's Office of Pesticide Programs is soliciting applications for a cooperative agreement to run the National Pesticide Information Center (NPIC), which provides the public with objective, science-based information on pesticide-related subjects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kight, H R
1979-11-01
Computerized methods of monitoring process functions and alarming off-standard conditions were implemented and demonstrated during the FY 1979 Uranium Run. In addition, prototype applications of instruments for the purpose of tamper indication and surveillance were tested.
Architecture-Adaptive Computing Environment: A Tool for Teaching Parallel Programming
NASA Technical Reports Server (NTRS)
Dorband, John E.; Aburdene, Maurice F.
2002-01-01
Recently, networked and cluster computation have become very popular. This paper is an introduction to a new C-based parallel language for architecture-adaptive programming, aCe C. The primary purpose of aCe (Architecture-adaptive Computing Environment) is to encourage programmers to implement applications on parallel architectures by providing them the assurance that future architectures will be able to run their applications with a minimum of modification. A secondary purpose is to encourage computer architects to develop new types of architectures by providing an easily implemented software development environment and a library of test applications. This new language should be an ideal tool to teach parallel programming. In this paper, we will focus on some fundamental features of aCe C.
Estimating aquifer transmissivity from specific capacity using MATLAB.
McLin, Stephen G
2005-01-01
Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage.
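One common way to recover transmissivity from specific capacity is to iterate the Cooper-Jacob drawdown relation; a simplified sketch is shown below. It omits the partial-penetration and well-efficiency corrections that the published program applies, and the default storage coefficient, well radius, and pumping time are placeholder values.

```python
# Sketch: estimate transmissivity T [m^2/day] from specific capacity Q/s
# by fixed-point iteration of the Cooper-Jacob equation
#     s = (2.3 Q / (4 pi T)) * log10(2.25 T t / (r_w^2 S)),
# which rearranges to  T = (Q/s) * (2.3 / (4 pi)) * log10(2.25 T t / (r_w^2 S)).
# Partial-penetration and well-efficiency corrections are omitted here.
import math

def transmissivity_from_specific_capacity(
    specific_capacity,      # Q/s, m^2/day (discharge per unit drawdown)
    t_days=1.0,             # pumping duration, days          (placeholder)
    r_well=0.1,             # effective well radius, m        (placeholder)
    storage_coeff=1e-4,     # storativity S, dimensionless    (placeholder)
    tol=1e-8, max_iter=100,
):
    T = specific_capacity   # initial guess: T is of the order of Q/s
    for _ in range(max_iter):
        arg = 2.25 * T * t_days / (r_well ** 2 * storage_coeff)
        T_new = specific_capacity * (2.3 / (4.0 * math.pi)) * math.log10(arg)
        if abs(T_new - T) < tol * max(T, 1.0):
            return T_new
        T = T_new
    return T

if __name__ == "__main__":
    print(f"T ~ {transmissivity_from_specific_capacity(120.0):.1f} m^2/day")
```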
Program CONTRAST--A general program for the analysis of several survival or recovery rate estimates
Hines, J.E.; Sauer, J.R.
1989-01-01
This manual describes the use of program CONTRAST, which implements a generalized procedure for the comparison of several rate estimates. This method can be used to test both simple and composite hypotheses about rate estimates, and we discuss its application to multiple comparisons of survival rate estimates. Several examples of the use of program CONTRAST are presented. Program CONTRAST will run on IBM-compatible computers, and requires estimates of the rates to be tested, along with associated variance and covariance estimates.
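A generalized comparison of this kind can be implemented as a chi-square test on linear contrasts of the estimates, given their variance-covariance matrix. The sketch below shows that computation in Python with NumPy/SciPy rather than as the original program; it assumes the contrast matrix has full row rank, and the example estimates and variances are invented.

```python
# Sketch of a generalized chi-square test for contrasts of rate estimates:
# test H0: C @ theta = 0 using  X^2 = (C theta)' (C Sigma C')^{-1} (C theta),
# with degrees of freedom equal to the number of independent contrasts.
import numpy as np
from scipy.stats import chi2

def contrast_test(theta, sigma, C):
    theta, sigma, C = np.asarray(theta), np.asarray(sigma), np.asarray(C)
    d = C @ theta                          # contrast values
    V = C @ sigma @ C.T                    # their covariance
    stat = float(d @ np.linalg.solve(V, d))
    df = C.shape[0]                        # assumes full row rank
    return stat, df, float(chi2.sf(stat, df))

if __name__ == "__main__":
    # Three hypothetical survival-rate estimates with independent variances.
    theta = [0.62, 0.55, 0.48]
    sigma = np.diag([0.0025, 0.0030, 0.0028])
    C = np.array([[1.0, -1.0, 0.0],        # rate 1 vs rate 2
                  [0.0, 1.0, -1.0]])       # rate 2 vs rate 3
    stat, df, p = contrast_test(theta, sigma, C)
    print(f"chi-square = {stat:.2f}, df = {df}, p = {p:.3f}")
```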
Hybrid cryptosystem for image file using elgamal and double playfair cipher algorithm
NASA Astrophysics Data System (ADS)
Hardi, S. M.; Tarigan, J. T.; Safrina, N.
2018-03-01
In this paper, we present an implementation of image file encryption using hybrid cryptography. We chose the ElGamal algorithm to perform asymmetric encryption and Double Playfair for the symmetric encryption. Our objective is to show that these algorithms are capable of encrypting an image file with an acceptable running time and encrypted file size while maintaining the level of security. The application was built using the C# programming language and runs as a stand-alone desktop application under the Windows operating system. Our test shows that the system is capable of encrypting an image with a resolution of 500×500 to a size of 976 kilobytes with an acceptable running time.
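The hybrid pattern itself, with an asymmetric cipher protecting a randomly generated session key and a symmetric cipher doing the bulk encryption, can be sketched compactly. The sketch below uses textbook ElGamal with deliberately tiny, insecure parameters and a keyed XOR stream as a stand-in for the Double Playfair step, so it illustrates the structure of such a system rather than reproducing the authors' C# implementation.

```python
# Hybrid encryption sketch: ElGamal (asymmetric) wraps a random session key,
# and a keyed stream cipher (stand-in for Double Playfair) encrypts the data.
# Toy parameters only: NOT secure, for structural illustration.
import hashlib
import secrets

P = 0xFFFFFFFFFFFFFFC5          # 2^64 - 59, a small prime used as a toy modulus
G = 5                           # toy base

def elgamal_keypair():
    x = secrets.randbelow(P - 2) + 1          # private key
    return x, pow(G, x, P)                    # (private, public)

def elgamal_encrypt(m, y):
    k = secrets.randbelow(P - 2) + 1
    return pow(G, k, P), (m * pow(y, k, P)) % P

def elgamal_decrypt(c1, c2, x):
    return (c2 * pow(c1, P - 1 - x, P)) % P

def keystream_xor(data: bytes, key_int: int) -> bytes:
    # Simple keyed stream (SHA-256 in counter mode) standing in for Double Playfair.
    out, counter = bytearray(), 0
    key = key_int.to_bytes(16, "big")
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, out))

if __name__ == "__main__":
    private, public = elgamal_keypair()
    session_key = secrets.randbelow(P - 2) + 1          # must be < P in this toy scheme
    c1, c2 = elgamal_encrypt(session_key, public)       # asymmetric key wrap
    ciphertext = keystream_xor(b"image bytes would go here", session_key)

    recovered_key = elgamal_decrypt(c1, c2, private)    # unwrap on receipt
    plaintext = keystream_xor(ciphertext, recovered_key)
    print(plaintext)
```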
Non-volatile memory for checkpoint storage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumrich, Matthias A.; Chen, Dong; Cipolla, Thomas M.
A system, method and computer program product for supporting system initiated checkpoints in high performance parallel computing systems and storing of checkpoint data to a non-volatile memory storage device. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. In one embodiment, the non-volatile memory is a pluggable flash memory card.
Lambert W function for applications in physics
NASA Astrophysics Data System (ADS)
Veberič, Darko
2012-12-01
The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion. Program summary: Program title: LambertW Catalogue identifier: AENC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 1335 No. of bytes in distributed program, including test data, etc.: 25 283 Distribution format: tar.gz Programming language: C++ (with suitable wrappers it can be called from C, Fortran etc.), the supplied command-line utility is suitable for other scripting languages like sh, csh, awk, perl etc. Computer: All systems with a C++ compiler. Operating system: All Unix flavors, Windows. It might work with others. RAM: Small memory footprint, less than 1 MB Classification: 1.1, 4.7, 11.3, 11.9. Nature of problem: Find fast and accurate numerical implementation for the Lambert W function. Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued logarithm recursion. Additional comments: Distribution file contains the command-line utility lambert-w. Doxygen comments, included in the source files. Makefile. Running time: The tests provided take only a few seconds to run.
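The numerical core of such an implementation, an initial approximation followed by Halley's iteration on f(w) = w*exp(w) - x, is short enough to sketch here. This covers the principal branch only and is written in Python rather than the distributed C++; the starting guesses follow the general approach the summary describes.

```python
# Sketch of W_0(x), the principal branch of the Lambert W function, using a
# branch-point / logarithmic initial guess followed by Halley's iteration on
# f(w) = w*exp(w) - x, with f' = e^w (w+1) and f'' = e^w (w+2).
import math

def lambert_w0(x, tol=1e-12, max_iter=50):
    if x < -1.0 / math.e:
        raise ValueError("W_0 is real only for x >= -1/e")
    # Initial approximation.
    if x > math.e:
        w = math.log(x) - math.log(math.log(x))      # asymptotic guess for large x
    elif x >= 0.0:
        w = math.log1p(x)                            # mild guess near the origin
    else:
        p = math.sqrt(2.0 * (math.e * x + 1.0))      # branch-point expansion
        w = -1.0 + p - p * p / 3.0
    # Halley's iteration: w <- w - 2 f f' / (2 f'^2 - f f'').
    for _ in range(max_iter):
        ew = math.exp(w)
        f = w * ew - x
        fp = ew * (w + 1.0)
        step = 2.0 * f * fp / (2.0 * fp * fp - f * ew * (w + 2.0))
        w -= step
        if abs(step) < tol * (1.0 + abs(w)):
            break
    return w

if __name__ == "__main__":
    for x in (-0.3, 0.0, 1.0, 10.0):
        w = lambert_w0(x)
        print(f"W({x}) = {w:.12f}, check w*e^w = {w * math.exp(w):.12f}")
```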
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kopp, H.J.; Mortensen, G.A.
1978-04-01
Approximately 60% of the full CDC 6600/7600 Datatran 2.0 capability was made operational on IBM 360/370 equipment. Sufficient capability was made operational to demonstrate adequate performance for modular program linking applications. Also demonstrated were the basic capabilities and performance required to support moderate-sized data base applications and moderately active scratch input/output applications. Approximately one to two calendar years are required to develop DATATRAN 2.0 capabilities fully for the entire spectrum of applications proposed. Included in the next stage of conversion should be syntax checking and syntax conversion features that would foster greater FORTRAN compatibility between IBM and CDC developed modules. The batch portion of the JOSHUA Modular System, which was developed by Savannah River Laboratory to run on an IBM computer, was examined for the feasibility of conversion to run on a Control Data Corporation (CDC) computer. Portions of the JOSHUA Precompiler were changed so as to be operable on the CDC computer. The Data Manager and Batch Monitor were also examined for conversion feasibility, but no changes were made in them. It appears to be feasible to convert the batch portion of the JOSHUA Modular System to run on a CDC computer with an estimated additional two to three man-years of effort. 9 tables.
A Running Start: Resource Guide for Youth Running Programs
ERIC Educational Resources Information Center
Jenny, Seth; Becker, Andrew; Armstrong, Tess
2016-01-01
The lack of physical activity is an epidemic problem among American youth today. In order to combat this, many schools are incorporating youth running programs as a part of their comprehensive school physical activity programs. These youth running programs are being implemented before or after school, at school during recess at the elementary…
MaMR: High-performance MapReduce programming model for material cloud applications
NASA Astrophysics Data System (ADS)
Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng
2017-02-01
With increasing data sizes in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related data, and the processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently on a hybrid shared-memory BSP model. An optimized data sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework deliver effective performance improvements compared to previous work.
SU-E-J-114: Web-Browser Medical Physics Applications Using HTML5 and Javascript.
Bakhtiari, M
2012-06-01
Since 2010, HTML5 has received a great deal of attention. Application developers and browser makers fully embrace and support the web of the future, and consumers have started to embrace HTML5 as more users understand its benefits and potential. Modern browsers such as Firefox, Google Chrome, and Safari offer better and more robust support for HTML5, CSS3, and JavaScript. The idea is to introduce HTML5 to the medical physics community for open-source software development. The benefit of using HTML5 is the development of portable software systems. HTML5, CSS, and JavaScript were used to develop several applications for quality assurance in radiation therapy. The canvas element of HTML5 was used for handling and displaying the images, and JavaScript was used to manipulate the data. Sample applications were developed to: (1) analyze the flatness and symmetry of radiotherapy fields in a web browser, (2) analyze Dynalog files from Varian machines, (3) visualize animated dynamic MLC files, (4) run Monte Carlo simulations, and (5) perform interactive image manipulation. The programs showed great performance and speed in uploading the data and displaying the results. The flatness and symmetry program and the Dynalog file analyzer ran in a fraction of a second. One reason for this performance is that JavaScript is a lower-level programming language than most scientific programming packages such as Matlab; another is that JavaScript runs locally on client-side computers rather than on web servers. HTML5 and JavaScript can be used to develop useful applications that can be run online or offline on different modern web browsers. The development platform itself can be one of the modern web browsers, which are mostly open source (such as Firefox). © 2012 American Association of Physicists in Medicine.
Reasons and predictors of discontinuation of running after a running program for novice runners.
Fokkema, Tryntsje; Hartgens, Fred; Kluitenberg, Bas; Verhagen, Evert; Backx, Frank J G; van der Worp, Henk; Bierma-Zeinstra, Sita M A; Koes, Bart W; van Middelkoop, Marienke
2018-06-18
To determine the proportion of participants of a running program for novice runners that discontinued running and to investigate the main reasons for discontinuation and the characteristics associated with it. Prospective cohort study. The study included 774 participants of Start to Run, a 6-week running program for novice runners. Before the start of the program, participants filled in a baseline questionnaire to collect information on demographics, physical activity and perceived health. The 26-week follow-up questionnaire was used to obtain information on the continuation of running (yes/no) and the main reasons for discontinuation. To determine predictors for discontinuation of running, multivariable logistic regression was performed. Within 26 weeks after the start of the 6-week running program, 29.5% of the novice runners (n=225) had stopped running. The main reason for discontinuation was a running-related injury (n=108, 48%). Being female (OR 1.74; 95% CI 1.13-2.68), being unsure about the continuation of running after the program (OR 2.06; 95% CI 1.31-3.24) and (almost) no alcohol use (OR 1.62; 95% CI 1.11-2.37) were associated with a higher chance of discontinuation of running. Previous running experience less than one year previously (OR 0.46; 95% CI 0.26-0.83) and a higher score on the RAND-36 subscale physical functioning (OR 0.98; 95% CI 0.96-0.99) were associated with a lower chance of discontinuation. In this group of novice runners, almost one-third stopped running within six months. A running-related injury was the main reason to stop running. Women with low perceived physical functioning and without running experience were prone to discontinue running. Copyright © 2018. Published by Elsevier Ltd.
YAMM - YET ANOTHER MENU MANAGER
NASA Technical Reports Server (NTRS)
Mazer, A. S.
1994-01-01
One of the most time-consuming yet necessary tasks of writing any piece of interactive software is the development of a user interface. Yet Another Menu Manager, YAMM, is an application-independent menuing package, designed to remove much of the difficulty and save much of the time inherent in the implementation of the front ends for large packages. Written in C for UNIX-based operating systems, YAMM provides a complete menuing front end for a wide variety of applications, with provisions for terminal independence, user-specific configurations, and dynamic creation of menu trees. Applications running under the menu package consist of two parts: a description of the menu configuration and the body of application code. The menu configuration is used at runtime to define the menu structure and any non-standard keyboard mappings and terminal capabilities. Menu definitions define specific menus within the menu tree. The names used in a definition may be either a reference to an application function or the name of another menu defined within the menu configuration. Application parameters are entered using data entry screens which allow for required and optional parameters, tables, and legal-value lists. Both automatic and application-specific error checking are available. Help is available for both menu operation and specific applications. The YAMM program was written in C for execution on a Sun Microsystems workstation running SunOS, based on the Berkeley (4.2bsd) version of UNIX. During development, YAMM has been used on both 68020 and SPARC architectures, running SunOS versions 3.5 and 4.0. YAMM should be portable to most other UNIX-based systems. It has a central memory requirement of approximately 232K bytes. The standard distribution medium for this program is one .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. YAMM was developed in 1988 and last updated in 1990. YAMM is a copyrighted work with all copyright vested in NASA.
NASA Astrophysics Data System (ADS)
Coudarcher, Rémi; Duculty, Florent; Serot, Jocelyn; Jurie, Frédéric; Derutin, Jean-Pierre; Dhome, Michel
2005-12-01
SKiPPER is a SKeleton-based Parallel Programming EnviRonment that has been under development since 1996 at the LASMEA Laboratory, Blaise Pascal University, France. The main goal of the project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. This paper deals with the special features embedded in the latest version of the project: algorithmic skeleton nesting capabilities and a fully dynamic operating model. Through the case study of a complete and realistic image processing application, which highlights the requirement for skeleton nesting, we present the operating model of this feature. The work described here is one of the few reported experiments showing the application of skeleton nesting facilities for the parallelisation of a realistic application, especially in the area of image processing. The image processing application we have chosen is an appearance-based 3D face-tracking algorithm.
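The essence of algorithmic skeleton nesting, building a parallel program by composing reusable patterns such as farms and pipelines and placing one skeleton inside another, can be conveyed with ordinary higher-order functions. The sketch below is illustrative only and does not reproduce SKiPPER's actual skeleton set or its dynamic operating model.

```python
# Illustrative algorithmic skeletons as higher-order functions.  A "farm"
# applies a worker to every item of a list, a "pipeline" chains stages,
# and skeletons can be nested: here, a farm whose worker is a pipeline.
from concurrent.futures import ThreadPoolExecutor

def farm(worker, n_workers=4):
    # A real skeleton framework would distribute this across processors;
    # local threads stand in for that here.
    def run(items):
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            return list(pool.map(worker, items))
    return run

def pipeline(*stages):
    def run(item):
        for stage in stages:
            item = stage(item)
        return item
    return run

# Two toy image-processing stages operating on single pixel values.
def threshold(pixel):
    return 1 if pixel > 128 else 0

def invert(pixel):
    return 1 - pixel

if __name__ == "__main__":
    # Nesting: a farm whose worker is a two-stage pipeline.
    process_frame = farm(pipeline(threshold, invert))
    frame = [0, 64, 200, 255, 100]
    print(process_frame(frame))   # [1, 1, 0, 0, 1]
```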
Kathy Dale
2005-01-01
Since 1998, Audubon's Christmas Bird Count (CBC) has been supported by an Internet-based data entry application that was initially designed to accommodate the traditional paper-based methods of this long-running bird monitoring program. The first efforts to computerize the data and the entry procedures have informed a planned strategy to revise the current...
Simulating Behavioural Interviews Using Synchronous Communication Software: Elluminate Live
ERIC Educational Resources Information Center
Sponza, Maria
2011-01-01
This practice application brief reviews the preparation, implementation, and evaluation of running behavioural interviews online. In a collaborative program between the School of Information Technology and the Careers and Employment service at Deakin University in Australia, students demonstrated their ability to articulate their generic…
Web Services Provide Access to SCEC Scientific Research Application Software
NASA Astrophysics Data System (ADS)
Gupta, N.; Gupta, V.; Okaya, D.; Kamb, L.; Maechling, P.
2003-12-01
Web services offer scientific communities a new paradigm for sharing research codes and communicating results. While there are formal technical definitions of what constitutes a web service, for a user community such as the Southern California Earthquake Center (SCEC), we may conceptually consider a web service to be functionality provided on-demand by an application which is run on a remote computer located elsewhere on the Internet. The value of a web service is that it can (1) run a scientific code without the user needing to install and learn the intricacies of running the code; (2) provide the technical framework which allows a user's computer to talk to the remote computer which performs the service; (3) provide the computational resources to run the code; and (4) bundle several analysis steps and provide the end results in digital or (post-processed) graphical form. Within an NSF-sponsored ITR project coordinated by SCEC, we are constructing web services using architectural protocols and programming languages (e.g., Java). However, because the SCEC community has a rich pool of scientific research software (written in traditional languages such as C and FORTRAN), we also emphasize making existing scientific codes available by constructing web service frameworks which wrap around and directly run these codes. In doing so we attempt to broaden community usage of these codes. Web service wrapping of a scientific code can be done using a "web servlet" construction or by using a SOAP/WSDL-based framework. This latter approach is widely adopted in IT circles although it is subject to rapid evolution. Our wrapping framework attempts to "honor" the original codes with as little modification as is possible. For versatility we identify three methods of user access: (A) a web-based GUI (written in HTML and/or Java applets); (B) a Linux/OSX/UNIX command line "initiator" utility (shell-scriptable); and (C) direct access from within any Java application (and with the correct API interface from within C++ and/or C/Fortran). This poster presentation will provide descriptions of the following selected web services and their origin as scientific application codes: 3D community velocity models for Southern California, geocoordinate conversions (latitude/longitude to UTM), execution of GMT graphical scripts, data format conversions (Gocad to Matlab format), and implementation of Seismic Hazard Analysis application programs that calculate hazard curve and hazard map data sets.
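To illustrate the idea of wrapping an existing command-line scientific code so that a service layer can run it on a user's behalf, the sketch below simply invokes a legacy executable and relays its output. The program name and arguments are hypothetical; the actual SCEC services use servlet or SOAP/WSDL front ends rather than this toy relay.

    /* Conceptual sketch of "wrapping" an existing command-line scientific code
     * so a service layer can run it on behalf of a remote user.  The code name
     * ("velocity_model") and its arguments are hypothetical placeholders. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* The wrapper leaves the original code untouched and simply invokes it,
         * capturing its output for return to the caller. */
        FILE *p = popen("./velocity_model -lat 34.05 -lon -118.25", "r");
        if (!p) { perror("popen"); return EXIT_FAILURE; }

        char line[512];
        while (fgets(line, sizeof line, p))
            fputs(line, stdout);          /* relay results to the service layer */

        return pclose(p) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }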
Next generation simulation tools: the Systems Biology Workbench and BioSPICE integration.
Sauro, Herbert M; Hucka, Michael; Finney, Andrew; Wellock, Cameron; Bolouri, Hamid; Doyle, John; Kitano, Hiroaki
2003-01-01
Researchers in quantitative systems biology make use of a large number of different software packages for modelling, analysis, visualization, and general data manipulation. In this paper, we describe the Systems Biology Workbench (SBW), a software framework that allows heterogeneous application components--written in diverse programming languages and running on different platforms--to communicate and use each other's capabilities via a fast binary encoded-message system. Our goal was to create a simple, high-performance, open-source software infrastructure which is easy to implement and understand. SBW enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe in this paper the SBW architecture, a selection of current modules, including Jarnac, JDesigner, and SBWMeta-tool, and the close integration of SBW into BioSPICE, which enables both frameworks to share tools and complement and strengthen each other's capabilities.
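As a hedged illustration of a binary encoded message of the general kind such a framework might exchange between modules, the sketch below builds a length-prefixed buffer. The layout (4-byte length, 1-byte message type, payload) is invented for illustration and is not SBW's actual wire format.

    /* Minimal sketch of a length-prefixed binary message.  The layout shown
     * here is purely illustrative and does not reproduce SBW's protocol. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    static size_t encode_call(uint8_t *buf, uint8_t msg_type,
                              const char *payload) {
        uint32_t len = (uint32_t)strlen(payload);
        memcpy(buf, &len, 4);          /* message length            */
        buf[4] = msg_type;             /* e.g. 1 = method call      */
        memcpy(buf + 5, payload, len); /* encoded arguments         */
        return 5 + len;
    }

    int main(void) {
        uint8_t buf[256];
        size_t n = encode_call(buf, 1, "simulate(model=42)");
        printf("encoded %zu bytes\n", n);
        return 0;
    }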
Magic cards: a new augmented-reality approach.
Demuynck, Olivier; Menendez, José Manuel
2013-01-01
Augmented reality (AR) commonly uses markers for detection and tracking. Such multimedia applications associate each marker with a virtual 3D model stored in the memory of the camera-equipped device running the application. Users of such applications are limited in their interactions because creating new content requires knowing how to design and program 3D objects, which generally prevents them from developing their own entertainment AR applications. The Magic Cards application solves this problem by offering an easy way to create and manage an unlimited number of virtual objects that are encoded on special markers.
ARCGRAPH SYSTEM - AMES RESEARCH GRAPHICS SYSTEM
NASA Technical Reports Server (NTRS)
Hibbard, E. A.
1994-01-01
Ames Research Graphics System, ARCGRAPH, is a collection of libraries and utilities which assist researchers in generating, manipulating, and visualizing graphical data. In addition, ARCGRAPH defines a metafile format that contains device independent graphical data. This file format is used with various computer graphics manipulation and animation packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). In its full configuration, the ARCGRAPH system consists of a two stage pipeline which may be used to output graphical primitives. Stage one is associated with the graphical primitives (i.e. moves, draws, color, etc.) along with the creation and manipulation of the metafiles. Five distinct data filters make up stage one. They are: 1) PLO which handles all 2D vector primitives, 2) POL which handles all 3D polygonal primitives, 3) RAS which handles all 2D raster primitives, 4) VEC which handles all 3D vector primitives, and 5) PO2 which handles all 2D polygonal primitives. Stage two is associated with the process of displaying graphical primitives on a device. To generate the various graphical primitives, create and reprocess ARCGRAPH metafiles, and access the device drivers in the VDI (Video Device Interface) library, users link their applications to ARCGRAPH's GRAFIX library routines. Both FORTRAN and C language versions of the GRAFIX and VDI libraries exist for enhanced portability within these respective programming environments. The ARCGRAPH libraries were developed on a VAX running VMS. Minor documented modification of various routines, however, allows the system to run on the following computers: Cray X-MP running COS (no C version); Cray 2 running UNICOS; DEC VAX running BSD 4.3 UNIX, or Ultrix; SGI IRIS Turbo running GL2-W3.5 and GL2-W3.6; Convex C1 running UNIX; Amdahl 5840 running UTS; Alliant FX8 running UNIX; Sun 3/160 running UNIX (no native device driver); Stellar GS1000 running Stellex (no native device driver); and an SGI IRIS 4D running IRIX (no native device driver). Currently with version 7.0 of ARCGRAPH, the VDI library supports the following output devices: A VT100 terminal with a RETRO-GRAPHICS board installed, a VT240 using the Tektronix 4010 emulation capability, an SGI IRIS turbo using the native GL2 library, a Tektronix 4010, a Tektronix 4105, and the Tektronix 4014. ARCGRAPH version 7.0 was developed in 1988.
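The sketch below illustrates stage one of such a two-stage pipeline: an application records device-independent primitives (color, move, draw) into a metafile that a separate display stage could replay on any device. The opcodes and file layout are invented for illustration and are not the GRAFIX API or the ARCGRAPH metafile format.

    /* Illustrative sketch of a stage-one graphics filter: primitives are
     * written to a device-independent metafile for later display. */
    #include <stdio.h>

    enum { OP_COLOR = 1, OP_MOVE = 2, OP_DRAW = 3 };

    static void emit(FILE *mf, int op, float a, float b) {
        fprintf(mf, "%d %g %g\n", op, a, b);   /* one primitive per record */
    }

    int main(void) {
        FILE *mf = fopen("plot.meta", "w");
        if (!mf) return 1;
        emit(mf, OP_COLOR, 2, 0);        /* select pen/color index 2       */
        emit(mf, OP_MOVE,  0.0f, 0.0f);  /* move to the origin             */
        emit(mf, OP_DRAW,  1.0f, 0.5f);  /* draw a vector to (1.0, 0.5)    */
        fclose(mf);
        return 0;
    }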
Mobile environment for an emission spectrometer
NASA Astrophysics Data System (ADS)
Radziak, Kamil; Litwin, Dariusz; Galas, Jacek; Tyburska-Staniewska, Anna; Ramsza, Andrzej
2017-08-01
The paper describes a mobile application to be used in a chemical analytical laboratory. The program, running under the Android operating system, allows previewing measurements recorded by the emission spectrometer. Another part of the application monitors operational and configuration parameters of the device in real time. The first part of this paper includes an overview of atomic spectrometry. The second part contains a description of the application and its potential directions for further development.
GASPRNG: GPU accelerated scalable parallel random number generator library
NASA Astrophysics Data System (ADS)
Gao, Shuang; Peterson, Gregory D.
2013-04-01
Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
Catalogue identifier: AEOI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: UTK license.
No. of lines in distributed program, including test data, etc.: 167900
No. of bytes in distributed program, including test data, etc.: 1422058
Distribution format: tar.gz
Programming language: C and CUDA.
Computer: Any PC or workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070).
Operating system: Linux with CUDA version 4.0 or later. Should also run on MacOS, Windows, or UNIX.
Has the code been vectorized or parallelized?: Yes. Parallelized using MPI directives.
RAM: 512 MB to 732 MB (main memory on host CPU, depending on the data type of random numbers) / 512 MB (GPU global memory)
Classification: 4.13, 6.5.
Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs).
Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
Running time: The tests provided take a few minutes to run.
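The usage pattern described above, in which each process or GPU context initializes its own independent stream and then draws numbers from it, is sketched below. The function names and the stand-in generator are placeholders, not the actual SPRNG or GASPRNG API.

    /* Hedged sketch of the stream-per-process usage pattern.  The names
     * (prng_init_stream, prng_next) and the simple 64-bit generator are
     * placeholders standing in for the real library. */
    #include <stdio.h>

    typedef struct { unsigned long long state; } prng_stream;

    static prng_stream prng_init_stream(int stream_id, int nstreams, int seed) {
        prng_stream s = { (unsigned long long)seed * 2654435761ULL
                          + (unsigned long long)stream_id * 40503ULL + 1ULL };
        (void)nstreams;                       /* unused in this toy version */
        return s;
    }

    static double prng_next(prng_stream *s) {
        s->state = s->state * 6364136223846793005ULL + 1442695040888963407ULL;
        return (s->state >> 11) * (1.0 / 9007199254740992.0);  /* [0,1) */
    }

    int main(void) {
        prng_stream s = prng_init_stream(/*stream_id=*/0, /*nstreams=*/4, 1234);
        for (int i = 0; i < 3; ++i)
            printf("%f\n", prng_next(&s));
        return 0;
    }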
1988-09-01
software programs capable of being used on a microcomputer will be considered for analysis. No software intended for use on a miniframe or mainframe... Dial-A-Log consists of a program written in a computer language called L-10 that is run on a DEC-20 miniframe. The combination of the specific... proliferation of software dealing with microcomputers. Instead, they were geared more towards managing the use of miniframe or mainframe computer
Static analysis techniques for semiautomatic synthesis of message passing software skeletons
Sottile, Matthew; Dagit, Jason; Zhang, Deli; ...
2015-06-29
The design of high-performance computing architectures demands performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. The process of performance analysis and benchmarking an application can be done in several ways with varying degrees of fidelity. One of the most cost-effective ways is to do a coarse-grained study of large-scale parallel applications through the use of program skeletons. The concept of a “program skeleton” that we discuss in this article is an abstracted program that is derived from a larger program where source code that is determined to be irrelevant is removed for the purposes of the skeleton. In this work, we develop a semiautomatic approach for extracting program skeletons based on compiler program analysis. Finally, we demonstrate correctness of our skeleton extraction process by comparing details from communication traces, as well as show the performance speedup of using skeletons by running simulations in the SST/macro simulator.
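A hand-written illustration of the program-skeleton idea is given below: the communication structure of a two-rank MPI exchange is preserved while the expensive local computation is elided. This is only a conceptual sketch, not output of the semiautomatic extraction approach described above.

    /* Skeleton sketch: communication kept, computation removed, message
     * sizes preserved so the traffic pattern matches the original code. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double buf[1024] = {0};          /* payload size kept from the original */
        /* --- the original code would compute buf here; the skeleton skips it --- */

        if (rank == 0)
            MPI_Send(buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        if (rank == 1) printf("skeleton exchange complete\n");
        MPI_Finalize();
        return 0;
    }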
Experiments with microcomputer-based artificial intelligence environments
Summers, E.G.; MacDonald, R.A.
1988-01-01
The U.S. Geological Survey (USGS) has been experimenting with the use of relatively inexpensive microcomputers as artificial intelligence (AI) development environments. Several AI languages are available that perform fairly well on desk-top personal computers, as are low-to-medium cost expert system packages. Although performance of these systems is respectable, their speed and capacity limitations are questionable for serious earth science applications foreseen by the USGS. The most capable artificial intelligence applications currently are concentrated on what is known as the "artificial intelligence computer," and include Xerox D-series, Tektronix 4400 series, Symbolics 3600, VAX, LMI, and Texas Instruments Explorer. The artificial intelligence computer runs expert system shells and Lisp, Prolog, and Smalltalk programming languages. However, these AI environments are expensive. Recently, inexpensive 32-bit hardware has become available for the IBM/AT microcomputer. USGS has acquired and recently completed Beta-testing of the Gold Hill Systems 80386 Hummingboard, which runs Common Lisp on an IBM/AT microcomputer. Hummingboard appears to have the potential to overcome many of the speed/capacity limitations observed with AI-applications on standard personal computers. USGS is a Beta-test site for the Gold Hill Systems GoldWorks expert system. GoldWorks combines some high-end expert system shell capabilities in a medium-cost package. This shell is developed in Common Lisp, runs on the 80386 Hummingboard, and provides some expert system features formerly available only on AI-computers including frame and rule-based reasoning, on-line tutorial, multiple inheritance, and object-programming. © 1988 International Association for Mathematical Geology.
ERIC Educational Resources Information Center
Smith, Karl
2014-01-01
Since 1990, high school students in Washington have had the choice of earning college credit through the Running Start program. Running Start is a dual enrollment and dual credit program that allows eleventh and twelfth grade high school students to take college courses at any of Washington's 34 community and technical colleges, Central Washington…
NASA Technical Reports Server (NTRS)
Meyer, Donald; Uchenik, Igor
2007-01-01
The PPC750 Performance Monitor (Perfmon) is a computer program that helps the user to assess the performance characteristics of application programs running under the Wind River VxWorks real-time operating system on a PPC750 computer. Perfmon generates a user-friendly interface and collects performance data by use of performance registers provided by the PPC750 architecture. It processes and presents run-time statistics on a per-task basis over a repeating time interval (typically, several seconds or minutes) specified by the user. When the Perfmon software module is loaded with the user's software modules, it is available for use through Perfmon commands, without any modification of the user's code and at negligible performance penalty. Per-task run-time performance data made available by Perfmon include percentage time, number of instructions executed per unit time, dispatch ratio, stack high water mark, and level-1 instruction and data cache miss rates. The performance data are written to a file specified by the user or to the serial port of the computer.
Bredeweg, Steef W; Zijlstra, Sjouke; Buist, Ida
2010-09-01
Distance running is a popular recreational exercise. It is a beneficial activity for health and well-being. However, running may also cause injuries, especially of the lower extremities. In the literature there is no agreement on which intrinsic and extrinsic factors cause running-related injuries (RRIs). In theory, most RRIs are elicited by training errors, that is, too much, too soon. In a preconditioning program runners can adapt more gradually to the high mechanical loads of running and will be less susceptible to RRIs. In this study the effectiveness of a 4-week preconditioning program on the incidence of RRIs in novice runners prior to a training program will be studied. The GRONORUN 2 (Groningen Novice Running) study is a two-arm randomized controlled trial studying the effect of a 4-week preconditioning (PRECON) program in a group of novice runners. All participants wanted to train for the recreational Groningen 4-Mile running event. The PRECON group started a 4-week preconditioning program with walking and hopping exercises 4 weeks before the start of the training program. The control (CON) and PRECON group started a frequently used 9-week training program in preparation for the Groningen 4-Mile running event. During the follow-up period participants registered their running exposure, other sporting activities and running-related injuries in an Internet-based running log. The primary outcome measure was the number of RRIs. RRI was defined as a musculoskeletal ailment or complaint of the lower extremities or back causing a restriction on running for at least three training sessions. The GRONORUN 2 study will add important information to the existing running science. The concept of preconditioning is easy to implement in existing training programs and will hopefully prevent RRIs especially in novice runners. The Netherlands National Trial Register NTR1906. The NTR is part of the WHO Primary Registries.
uPy: a ubiquitous computer graphics Python API with Biological Modeling Applications
Autin, L.; Johnson, G.; Hake, J.; Olson, A.; Sanner, M.
2015-01-01
In this paper we describe uPy, an extension module for the Python programming language that provides a uniform abstraction of the APIs of several 3D computer graphics programs called hosts, including: Blender, Maya, Cinema4D, and DejaVu. A plugin written with uPy is a unique piece of code that will run in all uPy-supported hosts. We demonstrate the creation of complex plug-ins for molecular/cellular modeling and visualization and discuss how uPy can more generally simplify programming for many types of projects (not solely science applications) intended for multi-host distribution. uPy is available at http://upy.scripps.edu PMID:24806987
An Integrated Nurse Practitioner-Run Subspecialty Referral Program for Incontinent Children.
Jarczyk, Kimberly S; Pieper, Pam; Brodie, Lori; Ezzell, Kelly; D'Alessandro, Tina
Evidence suggests that urinary and fecal incontinence and abnormal voiding and defecation dynamics are different manifestations of the same syndrome. This article reports the success of an innovative program for care of children with incontinence and dysfunctional elimination. This program is innovative because it is the first to combine subspecialty services (urology, gastroenterology, and psychiatry) in a single point of care for this population and the first reported independent nurse practitioner-run specialty referral practice in a free-standing pediatric ambulatory subspecialty setting. Currently, services for affected children are siloed in the aforementioned subspecialties, fragmenting care. Retrospective data on finances, patient satisfaction, and the patient referral base were compiled to assess this program. Analysis indicates that this model is fiscally sound, has similar or higher patient satisfaction scores when measured against physician-run subspecialty clinics, and has an extensive geographic referral base in the absence of marketing. This model has potential transformative significance: (a) the impact of children achieving continence cannot be underestimated, (b) configuration of services that cross traditional subspecialty boundaries may have broader application to other populations, and (c) demonstration of effectiveness of non-physician provider reconfiguration of health care delivery in subspecialty practice may extend to the care of other populations. Copyright © 2017 National Association of Pediatric Nurse Practitioners. Published by Elsevier Inc. All rights reserved.
Klumpner, Thomas T; Lange, Elizabeth M S; Ahmed, Heena S; Fitzgerald, Paul C; Wong, Cynthia A; Toledo, Paloma
2016-11-01
Programmed intermittent bolus injection of epidural anesthetic solution results in decreased anesthetic consumption and better patient satisfaction compared with continuous infusion, presumably by better spread of the anesthetic solution in the epidural space. It is not known whether the delivery speed of the bolus injection influences analgesia outcomes. The objective of this in vitro study was to determine the pressure generated by a programmed intermittent bolus pump at 4 infusion delivery speeds through open-ended, single-orifice and closed-end, multiorifice epidural catheters. This was an in vitro observational study; no clinical setting or patients were involved. A CADD-Solis Pain Management System v3.0 with Programmed Intermittent Bolus Model 2110 was connected via a 3-way adapter to an epidural catheter and a digital pressure transducer. Pressures generated by delivery speeds of 100, 175, 300, and 400 mL/h of saline solution were tested with 4 epidural catheters (2 single orifice and 2 multiorifice). These runs were replicated on 5 pumps. Analysis of variance was used to compare the mean peak pressures of each delivery speed within each catheter group (single orifice and multiorifice). Thirty runs at each delivery speed were performed with each type of catheter for a total of 240 experimental runs. Peak pressure increased with increasing delivery speeds in both catheter groups (P<.001). Peak pressures were higher with the multiorifice catheter compared with the single-orifice catheter at all delivery speeds (P<.001, for all). Using a pump designed for programmed intermittent infusion boluses, the delivery speed of saline solution through epidural catheters was directly related to the peak pressures. Future work should evaluate whether differences in the delivery speed of anesthetic solution into the epidural space correlate with differences in the duration and quality of analgesia during programmed intermittent epidural bolus delivery. Copyright © 2016 Elsevier Inc. All rights reserved.
MAX UnMix: A web application for unmixing magnetic coercivity distributions
NASA Astrophysics Data System (ADS)
Maxbauer, Daniel P.; Feinberg, Joshua M.; Fox, David L.
2016-10-01
It is common in the fields of rock and environmental magnetism to unmix magnetic mineral components using statistical methods that decompose various types of magnetization curves (e.g., acquisition, demagnetization, or backfield). A number of programs have been developed over the past decade that are frequently used by the rock magnetic community; however, many of these programs are either outdated or have obstacles inhibiting their usability. MAX UnMix is a web application (available online at http://www.irm.umn.edu/maxunmix), built using the shiny package for RStudio, that can be used for unmixing coercivity distributions derived from magnetization curves. Here, we describe in detail the statistical model underpinning the MAX UnMix web application and discuss the program's functionality. MAX UnMix is an improvement over previous unmixing programs in that it is designed to be user friendly, runs as an independent website, and is platform independent.
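The general idea behind coercivity unmixing is to model the derivative of a magnetization curve as a sum of component distributions in log-field space. The sketch below evaluates such a mixture using plain Gaussian components purely for illustration; it does not reproduce the specific component shape or fitting procedure used by MAX UnMix.

    /* Illustrative mixture evaluation in log-field space.  Component shapes
     * and values are placeholders, not MAX UnMix's statistical model. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { double contribution, mean_logB, dispersion; } component;

    static double mixture(double logB, const component *c, int n) {
        double y = 0.0;
        for (int i = 0; i < n; ++i) {
            double z = (logB - c[i].mean_logB) / c[i].dispersion;
            y += c[i].contribution * exp(-0.5 * z * z);
        }
        return y;
    }

    int main(void) {
        /* Two hypothetical components, e.g. a soft and a harder phase. */
        component c[2] = { { 1.0, 1.2, 0.30 }, { 0.4, 2.0, 0.25 } };
        for (double logB = 0.5; logB <= 2.5; logB += 0.5)
            printf("log10(B)=%.1f  dM/dlogB=%.3f\n", logB, mixture(logB, c, 2));
        return 0;
    }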
40 CFR 86.1438 - Test run-EPA.
Code of Federal Regulations, 2010 CFR
2010-07-01
... recall purposes. For recall program testing, in-use vehicles will be set to manufacturer's specifications... five seconds in any one excursion, except during the allowable engine-off periods. The total duration...-duty trucks. For recall testing, a pass or fail determination is made for each applicable test mode...
Code of Federal Regulations, 2010 CFR
2010-10-01
... the site-specific application programs, run timers, read inputs, drive outputs, perform self... validation process is to determine “whether the correct product was built.” Verification means the process of... established at the start of that phase. The goal of the verification process is to determine “whether the...
10 CFR 2.1003 - Availability of material.
Code of Federal Regulations, 2011 CFR
2011-01-01
... months in advance of submitting its license application for a geologic repository, the NRC shall make... of privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer programs and codes, field notes, laboratory notes, maps, diagrams and photographs, which have been...
10 CFR 2.1003 - Availability of material.
Code of Federal Regulations, 2012 CFR
2012-01-01
... months in advance of submitting its license application for a geologic repository, the NRC shall make... of privilege in § 2.1006, graphic-oriented documentary material that includes raw data, computer runs, computer programs and codes, field notes, laboratory notes, maps, diagrams and photographs, which have been...
Perl-speaks-NONMEM (PsN)--a Perl module for NONMEM related programming.
Lindbom, Lars; Ribbing, Jakob; Jonsson, E Niclas
2004-08-01
The NONMEM program is the most widely used nonlinear regression software in population pharmacokinetic/pharmacodynamic (PK/PD) analyses. In this article we describe a programming library, Perl-speaks-NONMEM (PsN), intended for programmers who aim to use the computational capability of NONMEM in external applications. The library is object oriented and written in the programming language Perl. The classes of the library are built around NONMEM's data, model and output files. The specification of the NONMEM model is easily set or changed through the model and data file classes, while the output from a model fit is accessed through the output file class. The classes have methods that help the programmer perform common repetitive tasks, e.g. summarising the output from a NONMEM run, setting the initial estimates of a model based on a previous run or truncating values over a certain threshold in the data file. PsN creates a basis for the development of high-level software using NONMEM as the regression tool.
CheD: chemical database compilation tool, Internet server, and client for SQL servers.
Trepalin, S V; Yarkov, A V
2001-01-01
An efficient program for the storage, retrieval, and processing of chemical information, which runs on a personal computer, is presented. The program can work either as a stand-alone application or in conjunction with a specifically written Web server application or with some standard SQL servers, e.g., Oracle, Interbase, and MS SQL. New types of data fields are introduced, e.g., arrays for spectral information storage, HTML and database links, and user-defined functions. CheD has an open architecture; thus, custom data types, controls, and services may be added. A WWW server application for chemical data retrieval features an easy and user-friendly installation on Windows NT or 95 platforms.
Is DOD on the Right Path to Financial Auditability?
2012-03-22
and DOD decision-making. Moreover, most of the 10 ERPs run the same software applications (i.e. SAP [Systems Applications and Programs] or PeopleSoft)... Financial Readiness; GFEBS; DEAMS; Navy ERP... To make every dollar count, the Department of Defense (DOD) must be able to account for every dollar
Regulatory frameworks for mobile medical applications.
Censi, Federica; Mattei, Eugenio; Triventi, Michele; Calcagnini, Giovanni
2015-05-01
A mobile application (app) is a software program that runs on mobile communication devices such as smartphones. Mobile medical apps have gained popularity and wide diffusion, but the regulatory framework that applies to them has raised discussion and concerns. In principle, a mobile app can be developed and uploaded easily by any person or entity. Thus, if an app can affect the health of its users, it is mandatory to identify the regulatory framework and prescriptions that apply to it.
2010-01-01
interface, another providing the application logic (a program used to manipulate the data), and a server running Microsoft SQL Server or Oracle RDBMS... database and application server options (e.g., Oracle, MySQL [open source], or CGI with PHP/Perl [open source])... are used throughout DoD and serve a variety of functions. While DoD has a codified and institutionalized process for the development of a common set
Scalable PGAS Metadata Management on Extreme Scale Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP
Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large transpetascale systems, a key component to their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
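One strategy for keeping such metadata sub-linear is sketched below: with a regular block distribution, the owning rank and local offset of any global array index can be computed arithmetically, so no per-rank directory needs to be stored. Strategies that support irregular layouts trade this O(1) footprint for lookup tables. The parameters are illustrative only.

    /* Sketch of arithmetic (directory-free) metadata for a block-distributed
     * global array; not the code of any particular PGAS runtime. */
    #include <stdio.h>

    typedef struct { long global_len; int nranks; } block_dist;

    static int owner(const block_dist *d, long i) {
        long block = (d->global_len + d->nranks - 1) / d->nranks;
        return (int)(i / block);
    }
    static long local_offset(const block_dist *d, long i) {
        long block = (d->global_len + d->nranks - 1) / d->nranks;
        return i % block;
    }

    int main(void) {
        block_dist d = { 1000000, 4096 };   /* 1M elements over 4096 ranks */
        long i = 777777;
        printf("global index %ld -> rank %d, local offset %ld\n",
               i, owner(&d, i), local_offset(&d, i));
        return 0;
    }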
Programming distributed medical applications with XWCH2.
Ben Belgacem, Mohamed; Niinimaki, Marko; Abdennadher, Nabil
2010-01-01
Many medical applications utilise distributed/parallel computing in order to cope with the demands of large data sets or high computing power requirements. In this paper, we present a new version of the XtremWeb-CH (XWCH) platform, and demonstrate two medical applications that run on XWCH. The platform is versatile in that it supports direct communication between tasks. When tasks cannot communicate directly, warehouses are used as intermediary nodes between "producer" and "consumer" tasks. New features have been developed to provide improved support for writing powerful distributed applications using an easy API.
ERIC Educational Resources Information Center
Emery, Jill
2010-01-01
In August 2010, "Wired" magazine declared, "The Web is Dead. Long Live the Internet." Citing the rise of iPad and smartphone sales and the rapid explosion of application-based software to run various programs on multiple computing devices--but especially mobile computing devices--people spend more hours than ever connected to or "on" the Internet…
A Performance Support Tool for Cisco Training Program Managers
ERIC Educational Resources Information Center
Benson, Angela D.; Bothra, Jashoda; Sharma, Priya
2004-01-01
Performance support systems can play an important role in corporations by managing and allowing distribution of information more easily. These systems run the gamut from simple paper job aids to sophisticated computer- and web-based software applications that support the entire corporate supply chain. According to Gery (1991), a performance…
Stone, Brandon L; Heishman, Aaron D; Campbell, Jay A
2017-07-31
The purpose of this study was to compare the effects of an experimental versus traditional military run training on 2-mile run ability in Army Reserve Officer Training Corps (ROTC) cadets. Fifty college-aged cadets were randomly placed into two groups and trained for four weeks with either an experimental running program (EXP, n=22) comprising RPE intensity-specific, energy-system-based intervals or with a traditional military running program (TRA, n=28), utilizing a crossover study design. A 2-mile run assessment was performed just prior to the start, at the end of the first 4 weeks, and again after the second 4 weeks of training following crossover. The EXP program significantly decreased 2-mile run times (961.3s ± 155.8s to 943.4 ± 140.2s, P=0.012, baseline to post 1) while the TRA group experienced a significant increase in run times (901.0 ± 79.2s vs. 913.9 ± 82.9s) over the same training period. There was a moderate effect size (d = 0.61, P=0.07) for the experimental run program to "reverse" the adverse effects of the traditional program within the 4-week training period (post 1 to post 2) following treatment crossover. Thus, for short-term training of military personnel, an RPE intensity-specific running program comprising aerobic and anaerobic system development can enhance 2-mile run performance beyond that of a traditional program while reducing training volume (60 min per session vs. 43.2 min per session, respectively). Future research should extend the training period to determine the efficacy of this training approach for long-term improvement of aerobic capacity and possible reduction of musculoskeletal injury.
LISP as an Environment for Software Design: Powerful and Perspicuous
Blum, Robert L.; Walker, Michael G.
1986-01-01
The LISP language provides a useful set of features for prototyping knowledge-intensive, clinical applications software that is not found in most other programming environments. Medical computer programs that need large medical knowledge bases, such as programs for diagnosis, therapeutic consultation, education, simulation, and peer review, are hard to design, evolve continually, and often require major revisions. They necessitate an efficient and flexible program development environment. The LISP language and programming environments built around it are well suited for program prototyping. The lingua franca of artificial intelligence researchers, LISP facilitates building complex systems because it is simple yet powerful. Because of its simplicity, LISP programs can read, execute, modify and even compose other LISP programs at run time. Hence, it has been easy for system developers to create programming tools that greatly speed the program development process, and that may be easily extended by users. This has resulted in the creation of many useful graphical interfaces, editors, and debuggers, which facilitate the development of knowledge-intensive medical applications.
NASA Technical Reports Server (NTRS)
Hardwick, Charles
1991-01-01
Field studies were conducted by MCC to determine areas of research of mutual interest to MCC and JSC. NASA personnel from the Information Systems Directorate and research faculty from UHCL/RICIS visited MCC in Austin, Texas to examine tools and applications under development in the MCC Software Technology Program. MCC personnel presented workshops in hypermedia, design knowledge capture, and design recovery on site at JSC for ISD personnel. The following programs were installed on workstations in the Software Technology Lab, NASA/JSC: (1) GERM (Graphic Entity Relations Modeler); (2) gIBIS (Graphic Issues Based Information System); and (3) DESIRE (Design Recovery tool). These applications were made available to NASA for inspection and evaluation. Programs developed in the MCC Software Technology Program run on the SUN workstation. The programs do not require special configuration, but they will require larger than usual amounts of disk space and RAM to operate properly.
Space activity and programs at SOFRADIR
NASA Astrophysics Data System (ADS)
Bouakka-Manesse, A.; Jamin, N.; Delannoy, A.; Fieque, B.; Leroy, C.; Pidancier, P.; Vial, L.; Chorier, P.; Péré-Laperne, N.
2016-09-01
SOFRADIR is one of the leading companies involved in the development and manufacturing of infrared detectors for space applications. As a matter of fact, SOFRADIR is involved in many space programs from visible up to VLWIR spectral ranges. These programs concern operational missions for earth imagery, meteorology and also scientific missions for universe exploration. One of the latest space detectors available at SOFRADIR is a visible-SWIR detector named the Next Generation Panchromatic Detector (NGP), which is well adapted for hyperspectral, imagery, and spectroscopy applications. In parallel with this new space detector, numerous programs are currently running for different kinds of missions: meteorology (MTG), Copernicus with the Sentinel detectors series, Metop-SG system (3MI), Mars exploration (Mamiss, etc.). In this paper, we present the latest developments made for space activity and in particular the NGP detector. We will also present the space applications using this detector and show the appropriateness of its use in meeting space program specifications, as for example those of Sentinel-5.
VICAR - VIDEO IMAGE COMMUNICATION AND RETRIEVAL
NASA Technical Reports Server (NTRS)
Wall, R. J.
1994-01-01
VICAR (Video Image Communication and Retrieval) is a general purpose image processing software system that has been under continuous development since the late 1960's. Originally intended for data from the NASA Jet Propulsion Laboratory's unmanned planetary spacecraft, VICAR is now used for a variety of other applications including biomedical image processing, cartography, earth resources, and geological exploration. The development of this newest version of VICAR emphasized a standardized, easily-understood user interface, a shield between the user and the host operating system, and a comprehensive array of image processing capabilities. Structurally, VICAR can be divided into roughly two parts; a suite of applications programs and an executive which serves as the interfaces between the applications, the operating system, and the user. There are several hundred applications programs ranging in function from interactive image editing, data compression/decompression, and map projection, to blemish, noise, and artifact removal, mosaic generation, and pattern recognition and location. An information management system designed specifically for handling image related data can merge image data with other types of data files. The user accesses these programs through the VICAR executive, which consists of a supervisor and a run-time library. From the viewpoint of the user and the applications programs, the executive is an environment that is independent of the operating system. VICAR does not replace the host computer's operating system; instead, it overlays the host resources. The core of the executive is the VICAR Supervisor, which is based on NASA Goddard Space Flight Center's Transportable Applications Executive (TAE). Various modifications and extensions have been made to optimize TAE for image processing applications, resulting in a user friendly environment. The rest of the executive consists of the VICAR Run-Time Library, which provides a set of subroutines (image I/O, label I/O, parameter I/O, etc.) to facilitate image processing and provide the fastest I/O possible while maintaining a wide variety of capabilities. The run-time library also includes the Virtual Raster Display Interface (VRDI) which allows display oriented applications programs to be written for a variety of display devices using a set of common routines. (A display device can be any frame-buffer type device which is attached to the host computer and has memory planes for the display and manipulation of images. A display device may have any number of separate 8-bit image memory planes (IMPs), a graphics overlay plane, pseudo-color capabilities, hardware zoom and pan, and other features). The VRDI supports the following display devices: VICOM (Gould/Deanza) IP8500, RAMTEK RM-9465, ADAGE (Ikonas) IK3000 and the International Imaging Systems IVAS. VRDI's purpose is to provide a uniform operating environment not only for an application programmer, but for the user as well. The programmer is able to write programs without being concerned with the specifics of the device for which the application is intended. The VICAR Interactive Display Subsystem (VIDS) is a collection of utilities for easy interactive display and manipulation of images on a display device. VIDS has characteristics of both the executive and an application program, and offers a wide menu of image manipulation options. VIDS uses the VRDI to communicate with display devices. 
The first step in using VIDS to analyze and enhance an image (one simple example of VICAR's numerous capabilities) is to examine the histogram of the image. The histogram is a plot of frequency of occurrence for each pixel value (0 - 255) loaded in the image plane. If, for example, the histogram shows that there are no pixel values below 64 or above 192, the histogram can be "stretched" so that the value of 64 is mapped to zero and 192 is mapped to 255. Now the user can use the full dynamic range of the display device to display the data and better see its contents. Another example of a VIDS procedure is the JMOVIE command, which allows the user to run animations interactively on the display device. JMOVIE uses the concept of "frames", which are the individual frames which comprise the animation to be viewed. The user loads images into the frames after the size and number of frames has been selected. VICAR's source languages are primarily FORTRAN and C, with some VAX Assembler and array processor code. The VICAR run-time library is designed to work equally easily from either FORTRAN or C. The program was implemented on a DEC VAX series computer operating under VMS 4.7. The virtual memory required is 1.5MB. Approximately 180,000 blocks of storage are needed for the saveset. VICAR (version 2.3A/3G/13H) is a copyrighted work with all copyright vested in NASA and is available by license for a period of ten (10) years to approved licensees. This program was developed in 1989.
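A minimal sketch of the contrast stretch described above is shown below: pixel value 64 maps to 0 and 192 maps to 255, with intermediate values scaled linearly and the result clamped to the 8-bit range. This illustrates the operation only; it is not VICAR or VIDS code.

    /* Linear contrast stretch matching the histogram example above. */
    #include <stdio.h>

    static unsigned char stretch(unsigned char v, int lo, int hi) {
        int out = (int)((v - lo) * 255.0 / (hi - lo) + 0.5);
        if (out < 0)   out = 0;
        if (out > 255) out = 255;
        return (unsigned char)out;
    }

    int main(void) {
        unsigned char samples[] = { 50, 64, 128, 192, 210 };
        for (int i = 0; i < 5; ++i)
            printf("%3u -> %3u\n", samples[i], stretch(samples[i], 64, 192));
        return 0;
    }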
A Secure and Robust Approach to Software Tamper Resistance
NASA Astrophysics Data System (ADS)
Ghosh, Sudeep; Hiser, Jason D.; Davidson, Jack W.
Software tamper-resistance mechanisms have increasingly assumed significance as a technique to prevent unintended uses of software. Closely related to anti-tampering techniques are obfuscation techniques, which make code difficult to understand or analyze and therefore, challenging to modify meaningfully. This paper describes a secure and robust approach to software tamper resistance and obfuscation using process-level virtualization. The proposed techniques involve novel uses of software check summing guards and encryption to protect an application. In particular, a virtual machine (VM) is assembled with the application at software build time such that the application cannot run without the VM. The VM provides just-in-time decryption of the program and dynamism for the application's code. The application's code is used to protect the VM to ensure a level of circular protection. Finally, to prevent the attacker from obtaining an analyzable snapshot of the code, the VM periodically discards all decrypted code. We describe a prototype implementation of these techniques and evaluate the run-time performance of applications using our system. We also discuss how our system provides stronger protection against tampering attacks than previously described tamper-resistance approaches.
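The sketch below illustrates a checksumming guard of the general kind mentioned above: a region of the program (here, the bytes of one of its own functions) is summed at run time and compared against a value that would normally be recorded at build time. Reading code bytes through a function pointer is platform-dependent, real guard networks are far more elaborate, and the region length and expected value are placeholders.

    /* Minimal checksumming-guard sketch; not the authors' protection scheme. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t checksum(const uint8_t *p, size_t n) {
        uint32_t h = 2166136261u;               /* FNV-1a style rolling sum */
        for (size_t i = 0; i < n; ++i) { h ^= p[i]; h *= 16777619u; }
        return h;
    }

    static int secret_work(int x) { return x * 31 + 7; }

    int main(void) {
        /* Guard: hash the first 64 bytes of secret_work's code (platform-
         * dependent; shown for illustration only). */
        uint32_t observed = checksum((const uint8_t *)(void *)secret_work, 64);
        uint32_t expected = observed;   /* placeholder for a build-time value */
        if (observed != expected) {
            fprintf(stderr, "tamper detected\n");
            return 1;
        }
        printf("guard passed, result = %d\n", secret_work(5));
        return 0;
    }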
High-throughput sequence alignment using Graphics Processing Units
Schatz, Michael C; Trapnell, Cole; Delcher, Arthur L; Varshney, Amitabh
2007-01-01
Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU. PMID:18070356
1990-09-14
transmission of detected variations through sound lines of communication to centrally located standard Navy computers. These computers would be programmed to... have been programmed in C language. The program runs under the operating system OS9 on a VME-bus computer with a 68000 microprocessor. A number of full... present practice of "add-on" supervisory controls during ship design and construction, and "fix-it" R&D programs implemented after the ship is operational
Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo
2012-12-01
The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto graphics processing units (GPUs) has brought challenges to retaining the efficiency of this algorithm. In particular, straightforward implementation of the original 3D-DDA algorithm introduces substantial branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves performance of the C/S dose calculation programs running on GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) which are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42× to 2.67× faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
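The kind of transformation described above can be sketched as follows: the per-step axis selection of a 3D-DDA traversal, normally written with nested if/else statements, is expressed with comparison results used as 0/1 integers so that all threads execute the same instruction sequence. This is an illustration of the idea, not the authors' GPU kernel.

    /* Branch-reduced 3D-DDA step: comparisons become 0/1 masks used in
     * arithmetic, so every step executes the same instructions. */
    #include <stdio.h>

    typedef struct { int ix, iy, iz; double tx, ty, tz; } dda_state;

    static void dda_step(dda_state *s, int sx, int sy, int sz,
                         double dtx, double dty, double dtz) {
        /* 1 when the corresponding axis has the smallest t-to-next-boundary. */
        int cx = (s->tx <= s->ty) & (s->tx <= s->tz);
        int cy = (s->ty <  s->tx) & (s->ty <= s->tz);
        int cz = 1 - cx - cy;

        s->ix += cx * sx;   s->tx += cx * dtx;
        s->iy += cy * sy;   s->ty += cy * dty;
        s->iz += cz * sz;   s->tz += cz * dtz;
    }

    int main(void) {
        dda_state s = { 0, 0, 0, 0.3, 0.5, 0.9 };
        for (int k = 0; k < 4; ++k) {
            dda_step(&s, 1, 1, 1, 0.6, 1.0, 1.8);
            printf("voxel (%d,%d,%d)\n", s.ix, s.iy, s.iz);
        }
        return 0;
    }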
Considerations for initiating and progressing running programs in obese individuals.
Vincent, Heather K; Vincent, Kevin R
2013-06-01
Running has rapidly increased in popularity and elicits numerous health benefits, including weight loss. At present, no practical guidelines are available for obese persons who wish to start a running program. This article is a narrative review of the emerging evidence of the musculoskeletal factors to consider in obese patients who wish to initiate a running program and increase its intensity. Main program goals should include gradual weight loss, avoidance of injury, and enjoyment of the exercise. Pre-emptive strengthening exercises can improve the strength of the foot and ankle, hip abductor, quadriceps, and trunk to help support the joints bearing the loads before starting a running program. Depending on the presence of comorbid joint pain, nonimpact exercise or walking (on a flat surface, on an incline, and at high intensity) can be used to initiate the program. For progression to running, intensity or mileage increases should be slow and consistent to prevent musculoskeletal injury. A stepwise transition to running at a rate not exceeding 5%-10% of weekly mileage or duration is reasonable for this population. Intermittent walk-jog programs are also attractive for persons who are not able to sustain running for a long period. Musculoskeletal pain should neither carry over to the next day nor be increased the day after exercising. Rest days in between running sessions may help prevent overuse injury. Patients who have undergone bariatric surgery and are now lean can also run, but special foci such as hydration and energy replacement must be considered. In summary, obese persons can run for exercise, provided they follow conservative transitions and progression, schedule rest days, and heed onset of pain symptoms. Copyright © 2013 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
The evolution of the ISOLDE control system
NASA Astrophysics Data System (ADS)
Jonsson, O. C.; Catherall, R.; Deloose, I.; Evensen, A. H. M.; Gase, K.; Focker, G. J.; Fowler, A.; Kugler, E.; Lettry, J.; Olesen, G.; Ravn, H. L.; Drumm, P.
1996-04-01
The ISOLDE on-line mass separator facility has been operating on a Personal Computer-based control system since spring 1992. Front End Computers accessing the hardware are controlled from consoles running Microsoft Windows® through a Novell NetWare4® local area network. The control system is transparently integrated in the CERN-wide office network and makes heavy use of the CERN standard office application programs to control and to document the running of the ISOLDE isotope separators. This paper recalls the architecture of the control system, shows its recent developments and gives some examples of its graphical user interface.
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
A mathematical model of a high performance airplane capable of vertical attitude takeoff and landing (VATOL) was developed. An off line digital simulation program incorporating this model was developed to provide trim conditions and dynamic check runs for the piloted simulation studies and support dynamic analyses of proposed VATOL configuration and flight control concepts. Development details for the various simulation component models and the application of the off line simulation program, Vertical Attitude Take-Off and Landing Simulation (VATLAS), to develop a baseline control system for the Vought SF-121 VATOL airplane concept are described.
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj
2016-04-01
Presently, most existing software is desktop-based and designed to work on a single computer, which represents a major limitation in many ways, ranging from limited processing and storage capacity to restricted accessibility and availability. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid deployment model of public-private cloud running on two separate virtual machines (VMs). The first one (VM1) is running on Amazon web services (AWS) and the second one (VM2) is running on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. The cloud application presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaboration geospatial platform because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time and accessible from everywhere; it is scalable, works in a distributed computer environment, creates a real-time multiuser collaboration platform, uses interoperable programming-language code and components, and is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services are running on two VMs that are communicating over the internet, providing services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. The application is a state-of-the-art cloud geospatial collaboration platform. The presented solution is a prototype and can be used as a foundation for the development of any specialized cloud geospatial application. Further research will be focused on distributing the cloud application on additional VMs, and testing the scalability and availability of services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Anderson; David Culler; James Demmel
2000-02-16
The goal of the Castle project was to provide a parallel programming environment that enables the construction of high performance applications that run portably across many platforms. The authors' approach was to design and implement a multilayered architecture, with higher levels building on lower ones to ensure portability, but with care taken not to introduce abstractions that sacrifice performance.
User's Guide for a Modular Flutter Analysis Software System (Fast Version 1.0)
NASA Technical Reports Server (NTRS)
Desmarais, R. N.; Bennett, R. M.
1978-01-01
The use and operation of a group of computer programs to perform a flutter analysis of a single planar wing are described. This system of programs is called FAST for Flutter Analysis System, and consists of five programs. Each program performs certain portions of a flutter analysis and can be run sequentially as a job step or individually. FAST uses natural vibration modes as input data and performs a conventional V-g type of solution. The unsteady aerodynamics programs in FAST are based on the subsonic kernel function lifting-surface theory although other aerodynamic programs can be used. Application of the programs is illustrated by a sample case of a complete flutter calculation that exercises each program.
NASA Technical Reports Server (NTRS)
Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.
1993-01-01
Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.
Experiences with Cray multi-tasking
NASA Technical Reports Server (NTRS)
Miya, E. N.
1985-01-01
The issues involved in modifying an existing code for multitasking are explored. They include Cray extensions to FORTRAN, an examination of the application code under study, the design of workable modifications, specific code changes to the VAX and Cray versions, and performance and efficiency results. The finished product is a faster, fully synchronous, parallel version of the original program. A production program is partitioned by hand to run on two CPUs, with loop splitting used to multitask three key subroutines. Simply dividing a subroutine's data and control structure down the middle is not safe; such division produces results that are inconsistent with uniprocessor runs. The safest way to partition the code is to transfer one block of loops at a time and check the results of each on a test case. Other issues include debugging and performance: task startup and maintenance (e.g., synchronization) are potentially expensive.
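The partitioning described above was done in Cray FORTRAN with the vendor's multitasking extensions, which are not reproduced here. Purely as an illustration of the loop-splitting idea, the Python sketch below divides one loop's index range between two worker processes and checks the combined result against a uniprocessor run, mirroring the paper's advice to verify each transferred block of loops on a test case.

```python
# Illustrative only: a Python analogue of loop splitting, not the Cray
# FORTRAN multitasking extensions used in the paper.
from concurrent.futures import ProcessPoolExecutor
import math

def partial_sum(bounds):
    """Work for one 'task': evaluate part of the loop's index range."""
    lo, hi = bounds
    return sum(math.sin(i) * math.cos(i) for i in range(lo, hi))

def split_loop(n, workers=2):
    """Split the loop 0..n-1 into contiguous chunks, one per worker."""
    step = (n + workers - 1) // workers
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    serial = sum(math.sin(i) * math.cos(i) for i in range(n))
    parallel = split_loop(n, workers=2)
    # Consistency check against the uniprocessor run, as the paper advises.
    assert abs(serial - parallel) < 1e-6
    print(serial, parallel)
```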
Operating system for a real-time multiprocessor propulsion system simulator
NASA Technical Reports Server (NTRS)
Cole, G. L.
1984-01-01
The Real Time Multiprocessor Operating System (RTMPOS) was evaluated for its success in supporting the development and evaluation of experimental hardware and software systems for real-time interactive simulation of air-breathing propulsion systems. RTMPOS provides the user with a versatile, interactive means for loading, running, debugging, and obtaining results from a multiprocessor-based simulator. A front end processor (FEP) serves as the simulator controller and as the interface between the user and the simulator; these functions are facilitated by RTMPOS, which resides on the FEP. RTMPOS acts in conjunction with the FEP's manufacturer-supplied disk operating system, which provides typical utilities such as an assembler, linkage editor, text editor, file handling services, etc. Once a simulation is formulated, RTMPOS provides engineering-level, run-time operations such as loading, modifying, and specifying the computation flow of programs, simulator mode control, data handling, and run-time monitoring. Run-time monitoring is a powerful feature of RTMPOS that allows the user to record all actions taken during a simulation session and to receive advisories from the simulator via the FEP. RTMPOS is programmed mainly in PASCAL along with some assembly language routines, and the software is easily modified to be applicable to hardware from different manufacturers.
The NLstart2run study: Incidence and risk factors of running-related injuries in novice runners.
Kluitenberg, B; van Middelkoop, M; Smits, D W; Verhagen, E; Hartgens, F; Diercks, R; van der Worp, H
2015-10-01
Running is a popular form of physical activity despite the high incidence of running-related injuries (RRIs). Because of methodological issues, the etiology of RRIs remains unclear. Therefore, the purposes of this study were to assess the incidence of RRIs and to identify risk factors for RRIs in a large group of novice runners. In total, 1696 runners in a 6-week supervised "Start to Run" program were included in the NLstart2run study. All participants were aged between 18 and 65, completed a baseline questionnaire that covered potential risk factors, and completed at least one running diary. RRIs were registered during the program with a weekly running log. An RRI was defined as a musculoskeletal complaint of the lower extremity or back attributed to running and hampering running ability for three consecutive training sessions. During the running program, 10.9% of the runners sustained an RRI. The multivariable Cox regression analysis showed that higher age, higher BMI, previous musculoskeletal complaints not attributed to sports, and no previous running experience were related to RRI. These findings indicate that many novice runners participating in a short-term running program suffer from RRIs; the identified risk factors should therefore be considered for screening and prevention purposes. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
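The abstract reports a multivariable Cox regression; the sketch below shows how such a model can be fit with the lifelines package. The data frame and column names are invented stand-ins, not the NLstart2run data, and only two of the reported risk factors are included for brevity.

```python
# Hypothetical illustration with the lifelines package; the data and column
# names are invented, and only two of the study's risk factors are included.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "weeks_to_injury_or_censoring": [6, 3, 6, 2, 5, 6, 4, 6],
    "injured": [0, 1, 0, 1, 1, 0, 1, 0],
    "age": [24, 51, 38, 45, 29, 60, 33, 47],
    "bmi": [21.5, 27.8, 24.0, 29.1, 22.3, 26.4, 25.0, 23.2],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_injury_or_censoring", event_col="injured")
cph.print_summary()  # hazard ratios for age and BMI
```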
Reliability techniques in the petroleum industry
NASA Technical Reports Server (NTRS)
Williams, H. L.
1971-01-01
Quantitative reliability evaluation methods used in the Apollo Spacecraft Program are translated into petroleum industry requirements with emphasis on offsetting reliability demonstration costs and limited production runs. Described are the qualitative disciplines applicable, the definitions and criteria that accompany the disciplines, and the generic application of these disciplines to the chemical industry. The disciplines are then translated into proposed definitions and criteria for the industry, into a base-line reliability plan that includes these disciplines, and into application notes to aid in adapting the base-line plan to a specific operation.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 14376-001] Cave Run Energy...: July 21, 2013. d. Submitted By: Cave Run Energy, LLC. e. Name of Project: Cave Run Hydroelectric...: 18 CFR 5.3 of the Commission's regulations. h. Potential Applicant Contact: Mark Boumansour, Cave Run...
Control of the TSU 2-m automatic telescope
NASA Astrophysics Data System (ADS)
Eaton, Joel A.; Williamson, Michael H.
2004-09-01
Tennessee State University is operating a 2-m automatic telescope for high-dispersion spectroscopy. The alt-azimuth telescope is fiber-coupled to a conventional echelle spectrograph with two resolutions (R=30,000 and 70,000). We control this instrument with four computers running Linux and communicating over Ethernet through the UDP protocol. A computer physically located on the telescope handles the acquisition and tracking of stars. We avoid the need for real-time programming in this application by periodically latching the positions of the axes in a commercial motion controller and the time in a GPS receiver. A second (spectrograph) computer sets up the spectrograph and runs its CCD, a third (roof) computer controls the roll-off roof and front flap of the telescope enclosure, and the fourth (executive) computer makes decisions about which stars to observe and when to close the observatory for bad weather. The only human intervention in the telescope's operation involves changing the observing program, copying data back to TSU, and running quality-control checks on the data. It has been running reliably in this completely automatic, unattended mode for more than a year, with all day-to-day administration carried out over the Internet. To support automatic operation, we have written a number of useful tools to predict and analyze what the telescope does. These include a simulator that predicts roughly how the telescope will operate on a given night, a quality-control program that parses logfiles from the telescope and identifies problems, and a rescheduling program that calculates new priorities to keep the frequency of observation for the various stars roughly as desired. We have also set up a database to keep track of the tens of thousands of spectra we expect to get each year.
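The four control computers exchange messages over Ethernet using UDP. The message format used at TSU is not given in the abstract, so the Python sketch below only illustrates that style of loosely coupled, connectionless control traffic, with an invented command string and address.

```python
# Minimal UDP command exchange between two control computers.
# The address and message format ("OPEN ROOF") are invented for illustration.
import socket

ROOF_ADDR = ("192.168.0.12", 5005)   # hypothetical roof-computer address

def send_command(text, addr=ROOF_ADDR):
    """Fire-and-forget datagram, as in a loosely coupled control loop."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(text.encode("ascii"), addr)

def serve_once(port=5005):
    """Receive one datagram and act on it (placeholder for roof control)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        data, sender = sock.recvfrom(1024)
        print("from", sender, ":", data.decode("ascii"))

# Example: one machine runs serve_once(); another calls send_command("OPEN ROOF").
```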
TWOS - TIME WARP OPERATING SYSTEM, VERSION 2.5.1
NASA Technical Reports Server (NTRS)
Bellenot, S. F.
1994-01-01
The Time Warp Operating System (TWOS) is a special-purpose operating system designed to support parallel discrete-event simulation. TWOS is a complete implementation of the Time Warp mechanism, a distributed protocol for virtual time synchronization based on process rollback and message annihilation. Version 2.5.1 supports simulations and other computations using both virtual time and dynamic load balancing; it does not support general time-sharing or multi-process jobs using conventional message synchronization and communication. The program utilizes the underlying operating system's resources. TWOS runs a single simulation at a time, executing it concurrently on as many processors of a distributed system as are allocated. The simulation needs only to be decomposed into objects (logical processes) that interact through time-stamped messages. TWOS provides transparent synchronization. The user does not have to add any more special logic to aid in synchronization, nor give any synchronization advice, nor even understand much about how the Time Warp mechanism works. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface compatible with TWOS. This means that an application designer and programmer who wish to use TWOS can prototype code on TWSIM on a single processor and/or workstation before having to deal with the complexity of working on a distributed system. TWSIM also provides statistics about the application which may be helpful for determining the correctness of an application and for achieving good performance on TWOS. Version 2.5.1 has an updated interface that is not compatible with 2.0. The program's user manual assists the simulation programmer in the design, coding, and implementation of discrete-event simulations running on TWOS. The manual also includes a practical user's guide to the TWOS application benchmark, Colliding Pucks. TWOS supports simulations written in the C programming language. It is designed to run on the Sun3/Sun4 series computers and the BBN "Butterfly" GP-1000 computer. The standard distribution medium for this package is a .25 inch tape cartridge in TAR format. TWOS was developed in 1989 and updated in 1991. This program is a copyrighted work with all copyright vested in NASA. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
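TWOS itself is a distributed C system; the toy Python class below is not its interface, but it illustrates the core Time Warp idea of optimistic execution with state saving and rollback when a time-stamped message arrives in an object's past (anti-messages and re-execution of rolled-back events are omitted for brevity).

```python
# Toy illustration of Time Warp-style rollback; not the TWOS interface.
class LogicalProcess:
    def __init__(self):
        self.lvt = 0                 # local virtual time
        self.state = 0
        self.snapshots = [(0, 0)]    # (virtual_time, state) history

    def handle(self, timestamp, value):
        if timestamp < self.lvt:     # straggler message: roll back
            self.rollback(timestamp)
        self.state += value          # the "simulation" work
        self.lvt = timestamp
        self.snapshots.append((timestamp, self.state))

    def rollback(self, timestamp):
        # Discard every saved state at or beyond the straggler's timestamp.
        while self.snapshots and self.snapshots[-1][0] >= timestamp:
            self.snapshots.pop()
        self.lvt, self.state = self.snapshots[-1]

lp = LogicalProcess()
for ts, val in [(5, 1), (9, 2), (7, 4)]:   # the message at t=7 arrives late
    lp.handle(ts, val)
print(lp.lvt, lp.state)   # -> 7 5, after rolling back past t=9 and redoing t=7
```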
FORTRAN Programs for Aerodynamic Analyses on the Microvax/2000 CAD CAE Workstation
1988-09-01
file exists, you must compile the program by typing, FOR DUBLET [Return] The next step is to link the program by entering, LINK DUBLET [Return] The...files DUBLET.EXE and DUBLET.OBJ will now exist and you will be able to run the program. Running the Program To run the program, type DUBLET [Return...by entering 0.1 [Return] Now enter the number of intervals you desire the doublet distribution to have by entering 10 [Return] The screen should now
A Worked Example of an Application of the Saint Simulation Program.
1987-09-01
Approved for public release. This work is copyright. Apart from any fair dealing for the purpose of study, research...this network are 1. there are three types of incoming messages, 2. the rate of message generation is varied between two limits and is controlled by the...SAINT controlling program, and 3. the whole scenario is run for a fixed period of time (time limit). These features were included on the basis of
Preventing running injuries. Practical approach for family doctors.
Johnston, C. A. M.; Taunton, J. E.; Lloyd-Smith, D. R.; McKenzie, D. C.
2003-01-01
OBJECTIVE: To present a practical approach for preventing running injuries. QUALITY OF EVIDENCE: Much of the research on running injuries is in the form of expert opinion and comparison trials. Recent systematic reviews have summarized research in orthotics, stretching before running, and interventions to prevent soft tissue injuries. MAIN MESSAGE: The most common factors implicated in running injuries are errors in training methods, inappropriate training surfaces and running shoes, malalignment of the leg, and muscle weakness and inflexibility. Runners can reduce risk of injury by using established training programs that gradually increase distance or time of running and provide appropriate rest. Orthoses and heel lifts can correct malalignments of the leg. Running shoes appropriate for runners' foot types should be selected. Lower-extremity strength and flexibility programs should be added to training. Select appropriate surfaces for training and introduce changes gradually. CONCLUSION: Prevention addresses factors proven to cause running injuries. Unfortunately, injury is often the first sign of fault in running programs, so patients should be taught to recognize early symptoms of injury. PMID:14526862
Mentat/A: Medium grain parallel processing
NASA Technical Reports Server (NTRS)
Grimshaw, Andrew S.
1992-01-01
The objective of this project is to test the Algorithm to Architecture Mapping Model (ATAMM) firing rules using the Mentat run-time system and the Mentat Programming Language (MPL). A special version of Mentat, Mentat/A (Mentat/ATAMM), was constructed. This required three changes: (1) modifying the run-time system to control queue length and to inhibit actor firing until the required data tokens are available and space is available in the input queues of all direct descendent actors; (2) disallowing the specification of persistent object classes in the MPL; and (3) permitting only decision-free graphs in the MPL. We were successful in implementing the spirit of the plan, although some goals changed as we came to better understand the problem. We report on what we accomplished and the lessons we learned. The Mentat/A run-time system is discussed, and we briefly present the compiler. We present results for three applications and conclude with a summary and some observations. Appendix A contains a list of technical reports and published papers partially supported by the grant. Appendix B contains listings for the three applications.
Real-Time Imaging with a Pulsed Coherent CO2 Laser Radar
1997-01-01
30 joule) transmitted energy levels has just begun. The FLD program will conclude in 1997 with the demonstration of a full-up, real-time operating system. This...The master system and VMEbus controller is an off-the-shelf controller based on the Motorola 68040 processor running the VxWorks real-time operating system. Application
BRSCW Reference Set Application: Joe Buechler - Biosite Inc (2009) — EDRN Public Portal
Over 40 marker assays are available to run on the samples. These include markers such as Osteopontin, Mesothelin, Periostin, Endoglin, intestinal Fatty Acid Binding Protein, and FAS-Ligand, some of which have been previously described in the literature. Other proprietary markers are derived from internal discovery efforts and from collaborator programs.
NASA Technical Reports Server (NTRS)
1979-01-01
The objective of the current program was to modify a discrete vortex wake method to efficiently compute the aerodynamic forces and moments on high fineness ratio bodies (f approximately 10.0). The approach is to increase computational efficiency by structuring the program to take advantage of new computer vector software and by developing new algorithms where vector software cannot be used efficiently. An efficient program was written and substantial savings were achieved. Several test cases were run for fineness ratios up to f = 16.0 and angles of attack up to 50 degrees.
An Ada Linear-Algebra Software Package Modeled After HAL/S
NASA Technical Reports Server (NTRS)
Klumpp, Allan R.; Lawson, Charles L.
1990-01-01
New avionics software written more easily. Software package extends Ada programming language to include linear-algebra capabilities similar to those of HAL/S programming language. Designed for such avionics applications as Space Station flight software. In addition to built-in functions of HAL/S, package incorporates quaternion functions used in Space Shuttle and Galileo projects and routines from LINPACK solving systems of equations involving general square matrices. Contains two generic programs: one for floating-point computations and one for integer computations. Written on IBM/AT personal computer running under PC DOS, v.3.1.
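The package's interface is Ada and is not reproduced here; as a language-neutral illustration of the kind of quaternion routine it bundles for attitude work, the sketch below implements the standard Hamilton product for composing rotations.

```python
# Hamilton product of two quaternions (w, x, y, z); illustrative only,
# not the Ada package's actual interface.
def quat_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

identity = (1.0, 0.0, 0.0, 0.0)
q = (0.7071, 0.7071, 0.0, 0.0)        # roughly a 90-degree rotation about x
print(quat_mul(q, identity))          # composing with the identity returns q
```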
Programming PHREEQC calculations with C++ and Python a comparative study
Charlton, Scott R.; Parkhurst, David L.; Muller, Mike
2011-01-01
The new IPhreeqc module provides an application programming interface (API) to facilitate coupling of other codes with the U.S. Geological Survey geochemical model PHREEQC. Traditionally, loose coupling of PHREEQC with other applications required methods to create PHREEQC input files, start external PHREEQC processes, and process PHREEQC output files. IPhreeqc eliminates most of this effort by providing direct access to PHREEQC capabilities through a component object model (COM), a library, or a dynamically linked library (DLL). Input and calculations can be specified through internally programmed strings, and all data exchange between an application and the module can occur in computer memory. This study compares simulations programmed in C++ and Python that are tightly coupled with IPhreeqc modules to the traditional simulations that are loosely coupled to PHREEQC. The study compares performance, quantifies effort, and evaluates lines of code and the complexity of the design. The comparisons show that IPhreeqc offers a more powerful and simpler approach for incorporating PHREEQC calculations into transport models and other applications that need to perform PHREEQC calculations. The IPhreeqc module facilitates the design of coupled applications and significantly reduces run times. Even a moderate knowledge of one of the supported programming languages allows more efficient use of PHREEQC than the traditional loosely coupled approach.
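As an illustration of the tight-coupling pattern the paper advocates, where input strings and results stay in memory instead of passing through PHREEQC input and output files, the sketch below uses the phreeqpy wrapper around the IPhreeqc library. The method names (load_database, run_string, get_selected_output_array), the database path, and the SOLUTION block are assumptions to check against the installed IPhreeqc version.

```python
# Sketch of in-memory coupling via IPhreeqc using the phreeqpy wrapper;
# method names and file paths are assumptions to verify against your install.
from phreeqpy.iphreeqc.phreeqc_dll import IPhreeqc

phreeqc = IPhreeqc()                      # loads the IPhreeqc shared library
phreeqc.load_database("phreeqc.dat")      # path to a PHREEQC database file

input_string = """
SOLUTION 1
    temp  25.0
    pH    7.0
    Na    10.0
    Cl    10.0
SELECTED_OUTPUT
    -pH true
END
"""

phreeqc.run_string(input_string)          # no input/output files on disk
rows = phreeqc.get_selected_output_array()
print(rows)                               # header row plus one result row
```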
A description of the thruster attitude control simulation and its application to the HEAO-C study
NASA Technical Reports Server (NTRS)
Brandon, L. B.
1971-01-01
During the design and evaluation of a reaction control system (RCS), it is desirable to have a digital computer program simulating vehicle dynamics, disturbance torques, control torques, and RCS logic. The thruster attitude control simulation (TACS) is just such a computer program. The TACS is a relatively sophisticated digital computer program that includes all the major parameters involved in the attitude control of a vehicle using an RCS for control. It includes the effects of gravity gradient torques and HEAO-C aerodynamic torques so that realistic runs can be made in the areas of fuel consumption and engine actuation rates. Also, the program is general enough that any engine configuration and logic scheme can be implemented in a reasonable amount of time. The results of the application of the TACS in the HEAO-C study are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong
Methods and apparatus perform fault isolation in multiple-node computing systems using commutative error detection values (for example, checksums) to identify and to isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple-node computer system retrieves the commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple-node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node across different runs of the application program. Differences in the values indicate a possibly faulty node.
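A minimal sketch of the comparison step is shown below, assuming per-node commutative values have already been collected from two runs of the same reproducible program section. The checksum scheme and dictionaries are illustrative stand-ins, not the patented apparatus.

```python
# Compare commutative error-detection values (here, simple checksums)
# collected per node from two runs of the same reproducible section.
import zlib

def checksum(payloads):
    """Order-insensitive (commutative) combination of per-message CRCs."""
    total = 0
    for p in payloads:
        total ^= zlib.crc32(p)      # XOR is commutative, so message order
    return total                    # on the network does not matter

def suspect_nodes(run_a, run_b):
    """Nodes whose values differ between runs are candidates for faults."""
    return sorted(node for node in run_a if run_a[node] != run_b.get(node))

run1 = {0: checksum([b"x0", b"y0"]), 1: checksum([b"x1", b"y1"])}
run2 = {0: checksum([b"y0", b"x0"]), 1: checksum([b"x1", b"CORRUPT"])}
print(suspect_nodes(run1, run2))    # -> [1]
```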
NASA Astrophysics Data System (ADS)
Horvat, Vladimir
2009-06-01
ERCS08 is a program for computing the atomic electron removal cross sections. It is written in FORTRAN in order to make it more portable and easier to customize by a large community of physicists, but it also comes with a separate windows graphics user interface control application ERCS08w that makes it easy to quickly prepare the input file, run the program, as well as view and analyze the output. The calculations are based on the ECPSSR theory for direct (Coulomb) ionization and non-radiative electron capture. With versatility in mind, the program allows for selective inclusion or exclusion of individual contributions to the cross sections from effects such as projectile energy loss, Coulomb deflection of the projectile, perturbation of electron's stationary state (polarization and binding), as well as relativity. This makes it straightforward to assess the importance of each effect in a given collision regime. The control application also makes it easy to setup for calculations in inverse kinematics (i.e. ionization of projectile ions by target atoms or ions). Program summaryProgram title: ERCS08 Catalogue identifier: AECU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 12 832 No. of bytes in distributed program, including test data, etc.: 318 420 Distribution format: tar.gz Programming language: Once the input file is prepared (using a text editor or ERCS08w), all the calculations are done in FORTRAN using double precision. Computer: see "Operating system" below Operating system: The main program (ERCS08) can run on any computer equipped with a FORTRAN compiler. Its pre-compiled executable file (supplied) runs under DOS or Windows. The supplied graphics user interface control application (ERCS08w) requires a Windows operating system. ERCS08w is designed to be used along with a text editor. Any editor can be used, including the one that comes with the operating system (for example, Edit for DOS or Notepad for Windows). Classification: 16.7, 16.8 Nature of problem: ECPSSR has become a typical tag word for a theory that goes beyond the standard plane wave Born approximation (PWBA) in order to predict the cross sections for direct (Coulomb) ionization of atomic electrons by projectile ions, taking into account the energy loss (E) and Coulomb deflection (C) of the projectile, as well as the perturbed stationary state (PSS) and relativistic nature (R) of the target electron. Its treatment of non-radiative electron capture to the projectile goes beyond the Oppenheimer-Brinkman-Kramers approximation (OBK) to include the effects of C, PSS, and R. PSS is described in terms of increased target electron binding (B) due to the presence of the projectile in the vicinity of the target nucleus, and (for direct ionization only) polarization of the target electron cloud (P) while projectile is outside the electron's shell radius. Several modifications of the theory have been recently suggested or endorsed by one of its authors (Lapicki). These modifications are sometimes explicit in the tag word (for example, eCPSSR, eCUSR, ReCPSShsR, etc.) A cross section for the ionization of a target electron is assumed to equal the sum of the cross sections for direct ionization (DI) and electron capture (EC). 
Solution method: The calculations are based on the ECPSSR theory for direct (Coulomb) ionization and non-radiative electron capture. With versatility in mind, the program allows for selective inclusion or exclusion of individual contributions to the cross sections from effects such as projectile energy loss, Coulomb deflection of the projectile, perturbation of electron's stationary state (polarization and binding), as well as relativity. This makes it straightforward to assess the importance of each effect in a given collision regime. The control application also makes it easy to setup for calculations in inverse kinematics (i.e. ionization of projectile ions by target atoms or ions). Restrictions: The program is restricted to the ionization of K, L, and M electrons. The theory is non-relativistic, which effectively limits its applicability to projectile energies up to about 50 MeV/amu. However, the theory is extended to apply to relativistic light projectiles. Radiative electron capture is not taken into account, since its contribution is found to be negligible in the collision regimes covered by the ECPSSR theory. Unusual features: Windows graphics user interface along with a FORTRAN code for calculations, selective inclusion or exclusion of specific corrections, inclusion of the extension to relativistic light projectiles, inclusion of non-radiative electron capture. Running time: Running the program using the input data provided with the distribution only takes a few seconds.
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as of all the features of the Oracle Server. The server is an object-relational database management system (DBMS). With distributed processing, the work is split between the database server and the client application programs: the DBMS handles all the responsibilities of the server, while the workstations running the database application concentrate on the interpretation and display of data.
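Under this distributed-processing split, the client only formulates SQL and displays rows while the server executes the query. The sketch below uses the cx_Oracle client library; the credentials, connect string, and table are placeholders, not part of the described system.

```python
# Client-side sketch: the Oracle server executes the query; the client
# application only displays the rows. Connection details are placeholders.
import cx_Oracle

connection = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb1")
try:
    cursor = connection.cursor()
    cursor.execute("SELECT station_id, status FROM telemetry WHERE ROWNUM <= 10")
    for station_id, status in cursor:
        print(station_id, status)     # interpretation/display happens here
finally:
    connection.close()
```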
Helmhout, Pieter H; Diebal, Angela R; van der Kaaden, Lisanne; Harts, Chris C; Beutler, Anthony; Zimmermann, Wes O
2015-03-01
Previous studies have reported on the promising effects of changing running style in patients with chronic exertional compartment syndrome (CECS) using a 6-week training program aimed at adopting a forefoot strike technique. This study expands that work by comparing a 6-week in-house, center-based run training program with a less extensive, supervised, home-based run training program (50% home training). An alteration in running technique will lead to improvements in CECS complaints and running performance, with the less supervised program producing less dramatic results. Cohort study; Level of evidence, 3. Nineteen patients with CECS were prospectively enrolled. Postrunning intracompartmental pressure (ICP), run performance, and self-reported questionnaires were taken for all patients at baseline and after 6 weeks of running intervention. Questionnaires were also taken from 14 patients (7 center-based, 6 home-based) 4 months posttreatment. Significant improvement between preintervention and postintervention rates was found for running distance (43%), ICP values (36%), and scores on the questionnaires Single Assessment Numeric Evaluation (SANE; 36%), Lower Leg Outcome Survey (LLOS; 18%), and Patient Specific Complaints (PSC; 60%). The mean posttreatment score on the Global Rating of Change (GROC) was between +4 and +5 ("somewhat better" to "moderately better"). In 14 participants (74%), no elevation of pain was reported posttreatment, compared with 3 participants (16%) at baseline; in all these cases, the running test was aborted because of a lack of cardiorespiratory fitness. Self-reported scores continued to improve 4 months after the end of the intervention program, with mean improvement rates of 48% (SANE), 26% (LLOS), and 81% (PSC). The mean GROC score improved to +6 points ("a great deal better"). In 19 patients diagnosed with CECS, a 6-week forefoot running intervention performed in both a center-based and home-based training setting led to decreased postrunning lower leg ICP values, improved running performances, and self-assessed leg condition. The influence of training group was not statistically significant. Overall, this is a promising finding, taking into consideration the significantly reduced investments in time and resources needed for the home-based program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Qiang
At exascale, the challenge becomes to develop applications that run at scale and use exascale platforms reliably, efficiently, and flexibly. Workflows become much more complex because they must seamlessly integrate simulation and data analytics. They must include down-sampling, post-processing, feature extraction, and visualization. Power and data transfer limitations require these analysis tasks to be run in-situ or in-transit. We expect successful workflows will comprise multiple linked simulations along with tens of analysis routines. Users will have limited development time at scale and, therefore, must have rich tools to develop, debug, test, and deploy applications. At this scale, successful workflows will compose linked computations from an assortment of reliable, well-defined computation elements, ones that can come and go as required, based on the needs of the workflow over time. We propose a novel framework that utilizes both virtual machines (VMs) and software containers to create a workflow system that establishes a uniform build and execution environment (BEE) beyond the capabilities of current systems. In this environment, applications will run reliably and repeatably across heterogeneous hardware and software. Containers, both commercial (Docker and Rocket) and open-source (LXC and LXD), define a runtime that isolates all software dependencies from the machine operating system. Workflows may contain multiple containers that run different operating systems, different software, and even different versions of the same software. We will run containers in open-source virtual machines (KVM) and emulators (QEMU) so that workflows run on any machine entirely in user-space. On this platform of containers and virtual machines, we will deliver workflow software that provides services, including repeatable execution, provenance, checkpointing, and future proofing. We will capture provenance about how containers were launched and how they interact to annotate workflows for repeatable and partial re-execution. We will coordinate the physical snapshots of virtual machines with parallel programming constructs, such as barriers, to automate checkpoint and restart. We will also integrate with HPC-specific container runtimes to gain access to accelerators and other specialized hardware to preserve native performance. Containers will link development to continuous integration. When application developers check code in, it will automatically be tested on a suite of different software and hardware architectures.
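BEE's own interfaces are not given in the abstract. As a hedged illustration of the container half of the idea, the sketch below uses the Docker SDK for Python to run one workflow step inside an image that pins its own software stack, isolated from the host operating system; the image and command are examples only.

```python
# Illustrative use of the Docker SDK for Python to run one workflow step
# in an isolated, pinned software environment; image and command are examples.
import docker

client = docker.from_env()

# Each step of a workflow could be a container with its own dependencies.
output = client.containers.run(
    image="python:3.11-slim",
    command=["python", "-c", "print('analysis step done')"],
    remove=True,            # clean up the container after it exits
)
print(output.decode())
```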
Extension of the PC version of VEPFIT with input and output routines running under Windows
NASA Astrophysics Data System (ADS)
Schut, H.; van Veen, A.
1995-01-01
The fitting program VEPFIT has been extended with applications running under the Microsoft Windows environment that facilitate the input and output of the VEPFIT fitting module. We have exploited the Microsoft Windows graphical user interface by making use of dialog windows, scrollbars, command buttons, etc. The user communicates with the program simply by clicking and dragging with the mouse pointing device; keyboard actions are limited to a minimum. Upon changing one or more input parameters, the results of the modeling of the S-parameter and Ps fractions versus positron implantation energy are updated and displayed. This action can be considered the first step in the fitting procedure, upon which the user can decide to further adapt the input parameters or to forward these parameters as initial values to the fitting routine. The modeling step has proven helpful for designing positron beam experiments.
Erbes, Christopher R; Stinson, Rebecca; Kuhn, Eric; Polusny, Melissa; Urban, Jessica; Hoffman, Julia; Ruzek, Josef I; Stepnowsky, Carl; Thorp, Steven R
2014-11-01
Mobile health (mHealth) refers to the use of mobile technology (e.g., smartphones) and software (i.e., applications) to facilitate or enhance health care. Several mHealth programs act as either stand-alone aids for Veterans with post-traumatic stress disorder (PTSD) or adjuncts to conventional psychotherapy approaches. Veterans enrolled in a Veterans Affairs outpatient treatment program for PTSD (N = 188) completed anonymous questionnaires that assessed Veterans' access to mHealth-capable devices and their utilization of and interest in mHealth programs for PTSD. The majority of respondents (n = 142, 76%) reported having access to a cell phone or tablet capable of running applications, but only a small group (n = 18) reported use of existing mHealth programs for PTSD. Age significantly predicted ownership of mHealth devices, but not utilization or interest in mHealth applications among device owners. Around 56% to 76% of respondents with access indicated that they were interested in trying mHealth programs for such issues as anger management, sleep hygiene, and management of anxiety symptoms. Findings from this sample suggest that Veterans have adequate access to, and interest in, using mHealth applications to warrant continued development and evaluation of mobile applications for the treatment of PTSD and other mental health conditions. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
QRAP: A numerical code for projected (Q)uasiparticle (RA)ndom (P)hase approximation
NASA Astrophysics Data System (ADS)
Samana, A. R.; Krmpotić, F.; Bertulani, C. A.
2010-06-01
A computer code for the quasiparticle random phase approximation (QRPA) and projected quasiparticle random phase approximation (PQRPA) models of nuclear structure is explained in detail. The residual interaction is approximated by a simple δ-force. An important application of the code consists in evaluating nuclear matrix elements involved in neutrino-nucleus reactions. As an example, cross sections for 56Fe and 12C are calculated and the code output is explained. The application to other nuclei and the description of other nuclear and weak decay processes are also discussed. Program summary: Title of program: QRAP (Quasiparticle RAndom Phase approximation). Computers: The code has been created on a PC, but also runs on UNIX or LINUX machines. Operating systems: WINDOWS or UNIX. Program language used: Fortran-77. Memory required to execute with typical data: 16 Mbytes of RAM memory and 2 MB of hard disk space. No. of lines in distributed program, including test data, etc.: ~ 8000. No. of bytes in distributed program, including test data, etc.: ~ 256 kB. Distribution format: tar.gz. Nature of physical problem: The program calculates neutrino- and antineutrino-nucleus cross sections as a function of the incident neutrino energy, and muon capture rates, using the QRPA or PQRPA as nuclear structure models. Method of solution: The QRPA, or PQRPA, equations are solved in a self-consistent way for even-even nuclei. The nuclear matrix elements for the neutrino-nucleus interaction are treated as the inverse beta reaction of odd-odd nuclei as a function of the transferred momentum. Typical running time: ≈ 5 min on a 3 GHz processor for Data set 1.
Object-oriented Tools for Distributed Computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1993-01-01
Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.
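NetWorks! is a commercial product and its messaging API is not shown in the abstract. As a generic stand-in for remote interprogram connectivity across heterogeneous machines, the sketch below passes a length-prefixed JSON message over a TCP socket, something any platform with a sockets layer can do.

```python
# Generic length-prefixed JSON message over TCP; an illustration of remote
# interprogram messaging, not the NetWorks! API.
import json
import socket

def send_message(obj, host="127.0.0.1", port=9000):
    data = json.dumps(obj).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(len(data).to_bytes(4, "big") + data)

def receive_one_message(port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            size = int.from_bytes(conn.recv(4), "big")
            buf = b""
            while len(buf) < size:
                buf += conn.recv(size - len(buf))
            return json.loads(buf.decode("utf-8"))

# One program calls receive_one_message(); a peer (possibly on another
# platform) calls send_message({"command": "status", "args": []}).
```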
Real-Time MENTAT programming language and architecture
NASA Technical Reports Server (NTRS)
Grimshaw, Andrew S.; Silberman, Ami; Liu, Jane W. S.
1989-01-01
Real-time MENTAT, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as MENTAT. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The real-time MENTAT programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the real-time MENTAT system architecture and programming language constructs is provided.
An Exploratory Examination of Families Engaged in a Children's Adventure Running Program
ERIC Educational Resources Information Center
Isnor, Heather; Dawson, Kimberley A.
2017-01-01
The purpose of this study was to qualitatively explore the experiences of families who participated in an adventure running program (ARP) in Canada. Adventure running is a unique sport that combines navigation and running in a forested setting. Six parents (four males, two females) and five children (two females, three males) were interviewed.…
A Buyer Behaviour Framework for the Development and Design of Software Agents in E-Commerce.
ERIC Educational Resources Information Center
Sproule, Susan; Archer, Norm
2000-01-01
Software agents are computer programs that run in the background and perform tasks autonomously as delegated by the user. This paper blends models from marketing research and findings from the field of decision support systems to build a framework for the design of software agents to support e-commerce buying applications. (Contains 35…
User's guide to UGRS: the Ultimate Grading and Remanufacturing System (version 5.0).
John Moody; Charles J. Gatchell; Elizabeth S. Walker; Powsiri Klinkhachorn
1998-01-01
The Ultimate Grading and Remanufacturing System (UGRS) is the latest generation of advanced computer programs for lumber grading. It is designed to be a training and research tool that allows grading of lumber according to 1998 NHLA rules and remanufacturing for maximum dollar value. A 32-bit application that runs under all Microsoft Windows operating systems, UGRS...
DOE Office of Scientific and Technical Information (OSTI.GOV)
This software is a plug-in that interfaces between Phoenix Integration's Model Center and Base SAS 9.2 applications. The end use of the plug-in is to link input and output data residing in SAS tables or MS SQL to and from "legacy" software programs without recoding. The potential end users are those who need to run legacy code and want their data stored in a SQL database.
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2010-03-01
Scientific computing is the field of study concerned with constructing mathematical models, numerical solution techniques and with using computers to analyse and solve scientific and engineering problems. Model-Driven Development (MDD) has been proposed as a means to support the software development process through the use of a model-centric approach. This paper surveys the core MDD technology that was used to develop an application that allows computation of the RHEED intensities dynamically for a disordered surface. New version program summaryProgram title: RHEED1DProcess Catalogue identifier: ADUY_v4_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUY_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 31 971 No. of bytes in distributed program, including test data, etc.: 3 039 820 Distribution format: tar.gz Programming language: Embarcadero C++ Builder Computer: Intel Core Duo-based PC Operating system: Windows XP, Vista, 7 RAM: more than 1 GB Classification: 4.3, 7.2, 6.2, 8, 14 Catalogue identifier of previous version: ADUY_v3_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2394 Does the new version supersede the previous version?: No Nature of problem: An application that implements numerical simulations should be constructed according to the CSFAR rules: clear and well-documented, simple, fast, accurate, and robust. A clearly written, externally and internally documented program is much easier to understand and modify. A simple program is much less prone to error and is more easily modified than one that is complicated. Simplicity and clarity also help make the program flexible. Making the program fast has economic benefits. It also allows flexibility because some of the features that make a program efficient can be traded off for greater accuracy. Making the program fast also has the benefit of allowing longer calculations with better resolution. The compromise between speed and accuracy has always posted one of the most troublesome challenges for the programmer. Almost all advances in numerical analysis have come about trying to reach these twin goals. Change in the basic algorithms will give greater improvements in accuracy and speed than using special numerical tricks or changing programming language. A robust program works correctly over a broad spectrum of input data. Solution method: The computational model of the program is based on the use of a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential, which is periodic in the dimension perpendicular to the surface. In the case of a disordered surface we can use the proportional model of the scattering potential, in which the potential of a partially filled layer is taken to be the product of the coverage of this layer and the potential of a fully filled layer: U(θ,z)=∑ θ(t/τ)U(1,z), where U(1,z) stands for the potential for the full nth layer, and U(θ,z) the potential of the growing layer. Reasons for new version: Responding to the user feedback the RHEEDGr_09 program has been upgraded to a standard that allows carrying out computations of the RHEED intensities for a disordered surface. Also, functionality and documentation of the program have been improved. 
Summary of revisions:The logical structure of the Platform-Specific Model of the RHEEDGr_09 program has been modified according to the scheme showed in Fig. 1*. The class diagram in Fig. 1* is a static view of the main platform-specific elements of the RHEED1DProcess architecture. Fig. 2* provides a dynamic view by showing the creation and destruction simplistic sequence diagram for the process. Fig. 3* shows the RHEED1DProcess use case model. As can be seen in Figs. 2-3* the RHEED1DProcess has been designed as a slave process that runs as a separate thread inside each transaction generated by the master Growth09 program (see pii:S0010-4655(09)00386-5 A. Daniluk, Model-Driven Development for scientific computing. Computations of RHEED intensities for a disordered surface. Part II The RHEED1DProcess requires the user to provide the appropriate parameters for the crystal structure under investigation. These parameters are loaded from the parameters.ini file at run-time. Instructions on the preparation of the .ini files can be found in the new distribution. The RHEED1DProcess requires the user to provide the appropriate values of the layers of coverage profiles. The CoverageProfiles.dat file (generated by Growth09 master application) at run-time loads these values. The RHEED1DProcess enables carrying out one-dimensional dynamical calculations for the fcc lattice, with a two-atoms basis and fcc lattice, with one atom basis but yet the zeroth Fourier component of the scattering potential in the TRHEED1D::crystPotUg() function can be modified according to users' specific application requirements. * The figures mentioned can be downloaded, see "Supplementary material" below. Unusual features: The program is distributed in the form of main projects RHEED1DProcess.cbproj and Graph2D0x.cbproj with associated files, and should be compiled using Embarcadero RAD Studio 2010 along with Together visual-modelling platform. The program should be compiled with English/USA regional and language options. Additional comments: This version of the RHEED program is designed to run in conjunction with the GROWTH09 (ADVL_v3_0) program. It does not replace the previous, stand alone, RHEEDGR-09 (ADUY_v3_0) version. Running time: The typical running time is machine and user-parameters dependent. References:[1] OMG, Model Driven Architecture Guide Version 1.0.1, 2003.
Guidelines for developing vectorizable computer programs
NASA Technical Reports Server (NTRS)
Miner, E. W.
1982-01-01
Some fundamental principles for developing computer programs which are compatible with array-oriented computers are presented. The emphasis is on basic techniques for structuring computer codes which are applicable in FORTRAN and do not require a special programming language or exact a significant penalty on a scalar computer. Researchers who are using numerical techniques to solve problems in engineering can apply these basic principles and thus develop transportable computer programs (in FORTRAN) which contain much vectorizable code. The vector architecture of the ASC is discussed so that the requirements of array processing can be better appreciated. The "vectorization" of a finite-difference viscous shock-layer code is used as an example to illustrate the benefits and some of the difficulties involved. Increases in computing speed with vectorization are illustrated with results from the viscous shock-layer code and from a finite-element shock tube code. The applicability of these principles was substantiated through running programs on other computers with array-associated computing characteristics, such as the Hewlett-Packard (H-P) 1000-F.
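The paper's examples are FORTRAN restructured for vector hardware such as the ASC. The same structuring principle, expressing the computation over whole arrays rather than element-by-element loops, carries over to modern array libraries; the NumPy sketch below is an analogy under that assumption, not code from the report.

```python
# Scalar loop vs. array-oriented formulation of the same update;
# a NumPy analogy to the FORTRAN vectorization guidelines, not the paper's code.
import numpy as np

def update_scalar(u, dt, nu):
    out = u.copy()
    for i in range(1, len(u) - 1):                 # element-by-element loop
        out[i] = u[i] + dt * nu * (u[i - 1] - 2 * u[i] + u[i + 1])
    return out

def update_vectorized(u, dt, nu):
    out = u.copy()
    out[1:-1] = u[1:-1] + dt * nu * (u[:-2] - 2 * u[1:-1] + u[2:])
    return out                                     # whole-array expression

u = np.linspace(0.0, 1.0, 1001)
assert np.allclose(update_scalar(u, 1e-4, 0.5), update_vectorized(u, 1e-4, 0.5))
```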
LTCP 2D Graphical User Interface. Application Description and User's Guide
NASA Technical Reports Server (NTRS)
Ball, Robert; Navaz, Homayun K.
1996-01-01
A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) two-dimensional computational fluid dynamics code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables that commonly need changes. Finally, the system requirements and installation instructions are provided.
VIRTUAL FRAME BUFFER INTERFACE
NASA Technical Reports Server (NTRS)
Wolfe, T. L.
1994-01-01
Large image processing systems use multiple frame buffers with differing architectures and vendor supplied user interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. The Virtual Frame Buffer Interface program makes all frame buffers appear as a generic frame buffer with a specified set of characteristics, allowing programmers to write code which will run unmodified on all supported hardware. The Virtual Frame Buffer Interface converts generic commands to actual device commands. The virtual frame buffer consists of a definition of capabilities and FORTRAN subroutines that are called by application programs. The virtual frame buffer routines may be treated as subroutines, logical functions, or integer functions by the application program. Routines are included that allocate and manage hardware resources such as frame buffers, monitors, video switches, trackballs, tablets and joysticks; access image memory planes; and perform alphanumeric font or text generation. The subroutines for the various "real" frame buffers are in separate VAX/VMS shared libraries allowing modification, correction or enhancement of the virtual interface without affecting application programs. The Virtual Frame Buffer Interface program was developed in FORTRAN 77 for a DEC VAX 11/780 or a DEC VAX 11/750 under VMS 4.X. It supports ADAGE IK3000, DEANZA IP8500, Low Resolution RAMTEK 9460, and High Resolution RAMTEK 9460 Frame Buffers. It has a central memory requirement of approximately 150K. This program was developed in 1985.
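The actual interface is a set of FORTRAN subroutines backed by per-device shared libraries under VAX/VMS. The Python sketch below is only a conceptual illustration of the underlying pattern, a generic frame-buffer interface with device-specific back ends chosen at run time; the class and method names are invented.

```python
# Conceptual sketch of a virtual frame buffer: application code talks to a
# generic interface, and device-specific back ends translate the calls.
# Device names and methods are invented for illustration.
from abc import ABC, abstractmethod

class FrameBuffer(ABC):
    @abstractmethod
    def write_pixel(self, x, y, value): ...
    @abstractmethod
    def draw_text(self, x, y, text): ...

class RamtekBuffer(FrameBuffer):
    def write_pixel(self, x, y, value):
        print(f"RAMTEK: pixel ({x},{y}) <- {value}")   # device command here
    def draw_text(self, x, y, text):
        print(f"RAMTEK: text at ({x},{y}): {text}")

class DeanzaBuffer(FrameBuffer):
    def write_pixel(self, x, y, value):
        print(f"DEANZA: pixel ({x},{y}) <- {value}")
    def draw_text(self, x, y, text):
        print(f"DEANZA: text at ({x},{y}): {text}")

def annotate(fb: FrameBuffer):
    """Application code: unchanged no matter which device is attached."""
    fb.write_pixel(10, 20, 255)
    fb.draw_text(10, 30, "calibration frame")

annotate(RamtekBuffer())
annotate(DeanzaBuffer())
```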
Virtual network computing: cross-platform remote display and collaboration software.
Konerding, D E
1999-04-01
VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits it back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC servers can be configured to allow more than one client to connect at a time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.
A Tool to Simulate the Transmission, Reception, and Execution of Interactive TV Applications
Kulesza, Raoni; Rodrigues, Thiago; Machado, Felipe A. L.; Santos, Celso A. S.
2017-01-01
The emergence of Interactive Digital Television (iDTV) opened a set of technological possibilities that go beyond those offered by conventional TV. Among these opportunities we can highlight interactive content that runs together with the linear TV program (the television service in which the viewer watches a scheduled TV program at the particular time it is offered and on the particular channel it is presented on). However, developing interactive content for this new platform is not as straightforward as, for example, developing Internet applications. One option for making this development process easier and safer is to use an iDTV simulator. However, after investigating some of the existing iDTV simulation environments, we found a limitation: these simulators mainly present solutions focused on the TV receiver, whose interactive content must be loaded in advance by the programmer into a local repository (e.g., hard drive, USB). Therefore, in this paper we propose a tool named BiS (Broadcast iDTV content Simulator) that makes a broader solution for the simulation of interactive content possible. It allows simulating the transmission of interactive content along with the linear TV program, that is, simulating the broadcast of the content over the air to the receivers. To enable this, we defined a generic and easy-to-customize communication protocol and implemented it in the tool. The proposed environment differs from others because it allows simulating the reception of both linear content and interactive content while running Java applications to present that content. PMID:28280770
Brunet, Jennifer; Saunders, Stephanie; Gifford, Wendy; Thomas, Roanne; Hamilton, Ryan
2018-05-01
To generate insights into the personal meaning and value of a running/walking program for women after a diagnosis of breast cancer. After completing a 12-week running/walking program with a 5-km training goal, eight women were interviewed and seven participated in a focus group. The interviews and focus group were audio-recorded and transcribed verbatim. Data were thematically analyzed. Data portrayed the personal benefits and value of the clinic. Four themes were identified: (1) receiving practical information and addressing targeted concerns, (2) pushing personal limits, (3) enabling a committed mindset, and (4) seeing benefits and challenges of running/walking with a group. Findings provide initial understanding of how women experience a running/walking program after a diagnosis of breast cancer and what they find to be important about their experiences. The range of positive benefits experienced by women suggests a running/walking program can help fill a gap in care for women diagnosed with breast cancer, and thus be part of cancer rehabilitation. However, because some women felt isolated at times, future research should seek to examine how running/walking programs can be modified and tailored so that all women find it socially beneficial. Implications for Rehabilitation The diagnosis and treatment of breast cancer can result in side effects and increase the risk of long-term disability. Physical activity can help women manage the side effects and lessen the risk of long-term disability. In a relatively small sample, this study shows that participation in a running/walking program can be an important part of breast cancer recovery.
Segmentation, dynamic storage, and variable loading on CDC equipment
NASA Technical Reports Server (NTRS)
Tiffany, S. H.
1980-01-01
Techniques for varying the segmented load structure of a program and for varying the dynamic storage allocation, depending upon whether a batch type or interactive type run is desired, are explained and demonstrated. All changes are based on a single data input to the program. The techniques involve: code within the program to suppress scratch pad input/output (I/O) for a batch run or translate the in-core data storage area from blank common to the end-of-code+1 address of a particular segment for an interactive run; automatic editing of the segload directives prior to loading, based upon data input to the program, to vary the structure of the load for interactive and batch runs; and automatic editing of the load map to determine the initial addresses for in core data storage for an interactive run.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Timothy J.
2016-03-01
While benchmarking software is useful for testing the performance limits and stability of Argonne National Laboratory’s new Theta supercomputer, there is no substitute for running real applications to explore the system’s potential. The Argonne Leadership Computing Facility’s Theta Early Science Program, modeled after its highly successful code migration program for the Mira supercomputer, has one primary aim: to deliver science on day one. Here is a closer look at the type of science problems that will be getting early access to Theta, a next-generation machine being rolled out this year.
DSP code optimization based on cache
NASA Astrophysics Data System (ADS)
Xu, Chengfa; Li, Chengcheng; Tang, Bin
2013-03-01
A DSP program often runs less efficiently on the target board than in software simulation during development, mainly because of improper use and an incomplete understanding of the cache-based memory. Taking the TI TMS320C6455 DSP as an example, this paper analyzes its two-level internal cache and summarizes methods of code optimization. The processor achieves its best performance when these code optimization methods are used. Finally, a specific algorithm application in radar signal processing is presented. Experimental results show that these optimizations are effective.
Pongor, Lőrinc S; Vera, Roberto; Ligeti, Balázs
2014-01-01
Next generation sequencing (NGS) of metagenomic samples is becoming a standard approach to detect individual species or pathogenic strains of microorganisms. Computer programs used in the NGS community have to balance between speed and sensitivity and, as a result, species- or strain-level identification is often inaccurate and low-abundance pathogens can sometimes be missed. We have developed Taxoner, an open source taxon assignment pipeline that includes a fast aligner (e.g., Bowtie2) and a comprehensive DNA sequence database. We tested the program on simulated datasets as well as experimental data from Illumina, IonTorrent, and Roche 454 sequencing platforms. We found that Taxoner performs as well as, and often better than, BLAST, but requires two orders of magnitude less running time, meaning that it can be run on desktop or laptop computers. Taxoner is slower than approaches that use small marker databases but is more sensitive due to the comprehensive reference database. In addition, it can be easily tuned to specific applications using small tailored databases. When applied to metagenomic datasets, Taxoner can provide a functional summary of the genes mapped and can provide strain-level identification. Taxoner is written in C for Linux operating systems. The code and documentation are available for research applications at http://code.google.com/p/taxoner.
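For illustration, a minimal Python sketch of the align-then-tally pattern that an aligner-based taxon assigner follows; the Bowtie2 index name, file names, and the reference-to-taxon mapping are placeholders, and this is not Taxoner's actual code:

    import subprocess
    from collections import Counter

    def align(reads_fq, index="refdb", sam_out="aligned.sam"):
        # Basic Bowtie2 call: unpaired reads in, SAM out (index name is a placeholder).
        subprocess.run(["bowtie2", "-x", index, "-U", reads_fq, "-S", sam_out],
                       check=True)
        return sam_out

    def tally_taxa(sam_path, ref2taxon):
        # Count reads per taxon using a reference-name -> taxon lookup (placeholder).
        counts = Counter()
        with open(sam_path) as sam:
            for line in sam:
                if line.startswith("@"):      # skip SAM header lines
                    continue
                ref = line.split("\t")[2]     # reference the read aligned to
                if ref != "*":                # '*' marks an unmapped read
                    counts[ref2taxon.get(ref, "unclassified")] += 1
        return counts

    counts = tally_taxa(align("reads.fastq"), ref2taxon={"NC_000913.3": "E. coli"})
    print(counts.most_common(5))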
Program Helps Generate And Manage Graphics
NASA Technical Reports Server (NTRS)
Truong, L. V.
1994-01-01
Living Color Frame Maker (LCFM) computer program generates computer-graphics frames. Graphical frames saved as text files, in readable and disclosed format, easily retrieved and manipulated by user programs for wide range of real-time visual information applications. LCFM implemented in frame-based expert system for visual aids in management of systems. In monitoring, diagnosis, and/or control, diagrams of circuits or systems brought to "life" by use of designated video colors and intensities to symbolize status of hardware components (via real-time feedback from sensors). Status of systems can be displayed. Written in C++ using Borland C++ 2.0 compiler for IBM PC-series computers and compatible computers running MS-DOS.
LinAir: A multi-element discrete vortex Weissinger aerodynamic prediction method
NASA Technical Reports Server (NTRS)
Durston, Donald A.
1993-01-01
LinAir is a vortex lattice aerodynamic prediction method similar to Weissinger's extended lifting-line theory, except that the circulation around a wing is represented by discrete horseshoe vortices, not a continuous distribution of vorticity. The program calculates subsonic longitudinal and lateral/directional aerodynamic forces and moments for arbitrary aircraft geometries. It was originally written by Dr. Ilan Kroo of Stanford University, and subsequently modified by the author to simplify modeling of complex configurations. The Polhamus leading-edge suction analogy was added by the author to extend the range of applicability of LinAir to low aspect ratio (i.e., fighter-type) configurations. A brief discussion of the theory of LinAir is presented, and details on how to run the program are given along with some comparisons with experimental data to validate the code. Example input and output files are given in the appendices to aid in understanding the program and its use. This version of LinAir runs in the VAX/VMS, Cray UNICOS, and Silicon Graphics Iris workstation environments at the time of this writing.
Conducting Simulation Studies in the R Programming Environment.
Hallgren, Kevin A
2013-10-12
Simulation studies allow researchers to answer specific questions about data analysis, statistical power, and best-practices for obtaining accurate results in empirical research. Despite the benefits that simulation research can provide, many researchers are unfamiliar with available tools for conducting their own simulation studies. The use of simulation studies need not be restricted to researchers with advanced skills in statistics and computer programming, and such methods can be implemented by researchers with a variety of abilities and interests. The present paper provides an introduction to methods used for running simulation studies using the R statistical programming environment and is written for individuals with minimal experience running simulation studies or using R. The paper describes the rationale and benefits of using simulations and introduces R functions relevant for many simulation studies. Three examples illustrate different applications for simulation studies, including (a) the use of simulations to answer a novel question about statistical analysis, (b) the use of simulations to estimate statistical power, and (c) the use of simulations to obtain confidence intervals of parameter estimates through bootstrapping. Results and fully annotated syntax from these examples are provided.
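The paper's examples are written in R; as a rough illustration of one of them (bootstrapped confidence intervals for a parameter estimate), here is a minimal sketch in Python, with the statistic and the sample data chosen arbitrarily:

    import numpy as np

    def bootstrap_ci(sample, stat=np.mean, n_boot=10_000, alpha=0.05, seed=1):
        # Percentile bootstrap: resample with replacement, recompute the statistic.
        rng = np.random.default_rng(seed)
        sample = np.asarray(sample)
        boots = np.array([stat(rng.choice(sample, size=sample.size, replace=True))
                          for _ in range(n_boot)])
        return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    data = [1.2, 0.4, 2.3, 0.9, 5.1, 0.7, 1.8]   # arbitrary small, skewed sample
    print(bootstrap_ci(data))                     # e.g. 95% CI for the mean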
BioTapestry now provides a web application and improved drawing and layout tools
Paquette, Suzanne M.; Leinonen, Kalle; Longabaugh, William J.R.
2016-01-01
Gene regulatory networks (GRNs) control embryonic development, and to understand this process in depth, researchers need to have a detailed understanding of both the network architecture and its dynamic evolution over time and space. Interactive visualization tools better enable researchers to conceptualize, understand, and share GRN models. BioTapestry is an established application designed to fill this role, and recent enhancements released in Versions 6 and 7 have targeted two major facets of the program. First, we introduced significant improvements for network drawing and automatic layout that have now made it much easier for the user to create larger, more organized network drawings. Second, we revised the program architecture so it could continue to support the current Java desktop Editor program, while introducing a new BioTapestry GRN Viewer that runs as a JavaScript web application in a browser. We have deployed a number of GRN models using this new web application. These improvements will ensure that BioTapestry remains viable as a research tool in the face of the continuing evolution of web technologies, and as our understanding of GRN models grows. PMID:27134726
Building Automatic Grading Tools for Basic of Programming Lab in an Academic Institution
NASA Astrophysics Data System (ADS)
Harimurti, Rina; Iwan Nurhidayat, Andi; Asmunin
2018-04-01
The skill of computer programming is a core competency that must be mastered by students majoring in computer science. The best way to improve this skill is through the practice of writing many programs to solve various problems, from simple to complex. It takes hard work and a long time to check and evaluate the results of student labs one by one, especially when the number of students is large. Based on these constraints, we propose Automatic Grading Tools (AGT), an application that can evaluate and thoroughly check source code in C and C++. The application architecture consists of students, web-based applications, compilers, and operating systems. Automatic Grading Tools (AGT) is implemented with an MVC architecture using open source software such as the Laravel framework version 5.4, PostgreSQL 9.6, Bootstrap 3.3.7, and the jQuery library. Automatic Grading Tools has also been tested on real problems by submitting source code in C/C++ and then compiling it. The test results show that the AGT application runs well.
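As a rough, language-neutral illustration of the grading step described above (AGT itself is a Laravel/PHP web application), a minimal Python sketch that compiles a C submission, runs it on a test input, and compares its output to the expected answer; all file names and limits are hypothetical:

    import os
    import subprocess
    import tempfile

    def grade_c_submission(source_path, test_input, expected_output, timeout=5):
        # Compile the submission, run it on one test case, and compare outputs.
        with tempfile.TemporaryDirectory() as workdir:
            exe = os.path.join(workdir, "submission")
            build = subprocess.run(["gcc", source_path, "-o", exe],
                                   capture_output=True, text=True)
            if build.returncode != 0:
                return {"status": "compile_error", "log": build.stderr}
            try:
                run = subprocess.run([exe], input=test_input, capture_output=True,
                                     text=True, timeout=timeout)
            except subprocess.TimeoutExpired:
                return {"status": "time_limit_exceeded"}
            passed = run.stdout.strip() == expected_output.strip()
            return {"status": "accepted" if passed else "wrong_answer"}

    print(grade_c_submission("student42.c", test_input="2 3\n", expected_output="5\n"))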
Oak Ridge Institutional Cluster Autotune Test Drive Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jibonananda, Sanyal; New, Joshua Ryan
2014-02-01
The Oak Ridge Institutional Cluster (OIC) provides general purpose computational resources for the ORNL staff to run computation heavy jobs that are larger than desktop applications but do not quite require the scale and power of the Oak Ridge Leadership Computing Facility (OLCF). This report details the efforts made and conclusions derived in performing a short test drive of the cluster resources on Phase 5 of the OIC. EnergyPlus was used in the analysis as a candidate user program and the overall software environment was evaluated against anticipated challenges experienced with resources such as the shared-memory Nautilus (JICS) and Titan (OLCF). The OIC performed within reason and was found to be acceptable in the context of running EnergyPlus simulations. The number of cores per node and the availability of scratch space per node allow non-traditional desktop focused applications to leverage parallel ensemble execution. Although only individual runs of EnergyPlus were executed, the software environment on the OIC appeared suitable to run ensemble simulations with some modifications to the Autotune workflow. From a standpoint of general usability, the system supports common Linux libraries, compilers, standard job scheduling software (Torque/Moab), and the OpenMPI library (the only MPI library) for MPI communications. The file system is a Panasas file system which literature indicates to be an efficient file system.
Injuries in women associated with a periodized strength training and running program.
Reynolds, K L; Harman, E A; Worsham, R E; Sykes, M B; Frykman, P N; Backus, V L
2001-02-01
Forty-five women participated in a 24-week physical training program designed to improve lifting, load carriage, and running performance. Activities included weightlifting, running, backpacking, lift and carry drills, and sprint running. Physicians documented by passive surveillance all training-related injuries. Thirty-two women successfully completed the training program. Twenty-two women (48.9%) suffered at least 1 injury during training, but only 2 women had to drop out of the study because of injuries. The rate of injury associated with lost training time was 2.8 injuries per 1,000 training hours of exposure. Total clinic visits and days lost from training were 89 and 69, respectively. Most injuries were of the overuse type, involving the lower back, knees, and feet. Weightlifting accounted for the majority of lost training days. A combined strength training and running program resulted in significant performance gains in women. Only 2 out of 45 participants left the training program because of injuries.
MAP3D: a media processor approach for high-end 3D graphics
NASA Astrophysics Data System (ADS)
Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris
1999-12-01
Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with high performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to successfully find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications in the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high performance 3D graphics.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-25
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 13880-000] Cuffs Run Pumped..., Motions To Intervene, and Competing Applications On November 18, 2010, Cuffs Run Pumped Storage, LLC filed... to study the feasibility of the Cuffs Run Pumped Storage Project, located on Cuffs Run and the...
Scalable computing for evolutionary genomics.
Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert
2012-01-01
Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster, and pipeline, in a few steps. This allows researchers to scale-up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages, of interest to evolutionary biology, are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software. Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
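A minimal sketch of the "poor man's parallelization" pattern described above, i.e., running whole programs in parallel as separate processes; the command line is a placeholder, not a BioNode-specific tool:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Eight independent jobs, each a whole program run as its own OS process.
    jobs = [["some_tool", "--in", f"chunk_{i}.fa", "--out", f"result_{i}.txt"]
            for i in range(8)]

    def run_job(cmd):
        # No shared state and no special parallel design: just launch and wait.
        return subprocess.run(cmd, capture_output=True, text=True).returncode

    with ThreadPoolExecutor(max_workers=4) as pool:   # at most four jobs at a time
        exit_codes = list(pool.map(run_job, jobs))
    print(exit_codes)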
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landwehr, Joshua B.; Suetterlein, Joshua D.; Marquez, Andres
2016-05-16
Since 2012, the U.S. Department of Energy’s X-Stack program has been developing solutions including runtime systems, programming models, languages, compilers, and tools for the Exascale system software to address crucial performance and power requirements. Fine grain programming models and runtime systems show a great potential to efficiently utilize the underlying hardware. Thus, they are essential to many X-Stack efforts. An abundant amount of small tasks can better utilize the vast parallelism available on current and future machines. Moreover, finer tasks can recover faster and adapt better, due to a decrease in state and control. Nevertheless, current applications have been written to exploit old paradigms (such as Communicating Sequential Processes and Bulk Synchronous Parallel processing). To fully utilize the advantages of these new systems, applications need to be adapted to these new paradigms. As part of the applications’ porting process, in-depth characterization studies, focused on both application characteristics and runtime features, need to take place to fully understand the application performance bottlenecks and how to resolve them. This paper presents a characterization study for a novel high performance runtime system, called the Open Community Runtime, using key HPC kernels as its vehicle. This study has the following contributions: one of the first high performance, fine grain, distributed memory runtime systems implementing the OCR standard (version 0.99a); and a characterization study of key HPC kernels in terms of runtime primitives running on both intra- and inter-node environments. Running on a general purpose cluster, we have found up to a 1635x relative speed-up for a parallel tiled Cholesky kernel on 128 nodes with 16 cores each and a 1864x relative speed-up for a parallel tiled Smith-Waterman kernel on 128 nodes with 30 cores.
Level-2 Milestone 6007: Sierra Early Delivery System Deployed to Secret Restricted Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertsch, A. D.
This report documents the delivery and installation of Shark, a CORAL Sierra early delivery system deployed on the LLNL SRD network. Early ASC program users have run codes on the machine in support of application porting for the final Sierra system which will be deployed at LLNL in CY2018. In addition to the SRD resource, Shark, unclassified resources, Rzmanta and Ray, have been deployed on the LLNL Restricted Zone and Collaboration Zone networks in support of application readiness for the Sierra platform.
John Yarie
1983-01-01
The forest vegetation of 3,600,000 hectares in northeast interior Alaska was classified. A total of 365 plots located in a stratified random design were run through the ordination programs SIMORD and TWINSPAN. A total of 40 forest communities were described vegetatively and, to a limited extent, environmentally. The area covered by each community was similar, ranging...
Dynamic Test Generation for Large Binary Programs
2009-11-12
the fuzzing@whitestar.linuxbox.org mailing list, including Jared DeMott, Disco Jonny, and Ari Takanen, for discussions on fuzzing tradeoffs. Martin... as is the case for large applications where exercising all execution paths is virtually hopeless anyway. This point will be further discussed in... consumes trace files generated by iDNA and virtually re-executes the recorded runs. TruScan offers several features that substantially simplify symbolic
Mobility for GCSS-MC through virtual PCs
2017-06-01
their productivity. Mobile device access to GCSS-MC would allow Marines to access a required program for their mission using a form of computing... network throughput applications with a device running on various operating systems with limited computational ability. The use of VPCs leads to a... reduced need for network throughput and faster overall execution. Subject terms: GCSS-MC, enterprise resource planning, virtual personal computer
ERIC Educational Resources Information Center
Dalrymple, George F.
Described is the BRAILLEMBOSS, a braille page printer, which is useful as a short run braille producer and as an employment and education tool for the blind and deaf blind. Examples of applications are given, including its use by computer programers, students, taxpayer service representatives, and news broadcasters. The machine is, for blind…
2014-04-25
EA’s Java application programming interface (API), the team built a tool called OWL2EA that can ingest an OWL file and generate the corresponding UML... ObjectItemStructure specification shown in Figure 10. Running this script in the relational database server MySQL creates the physical schema that
Selecting a Benchmark Suite to Profile High-Performance Computing (HPC) Machines
2014-11-01
architectures. Machines now contain central processing units (CPUs), graphics processing units (GPUs), and many integrated core (MIC) architecture all... evaluate the feasibility and applicability of a new architecture just released to the market. Researchers are often unsure how available resources will... architectures. Having a suite of programs running on different architectures, such as GPUs, MICs, and CPUs, adds complexity and technical challenges
A Monte-Carlo maplet for the study of the optical properties of biological tissues
NASA Astrophysics Data System (ADS)
Yip, Man Ho; Carvalho, M. J.
2007-12-01
Monte-Carlo simulations are commonly used to study complex physical processes in various fields of physics. In this paper we present a Maple program intended for Monte-Carlo simulations of photon transport in biological tissues. The program has been designed so that the input data and output display can be handled by a maplet (an easy and user-friendly graphical interface), named the MonteCarloMaplet. A thorough explanation of the programming steps and how to use the maplet is given. Results obtained with the Maple program are compared with corresponding results available in the literature. Program summary: Program title: MonteCarloMaplet Catalogue identifier: ADZU_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3251 No. of bytes in distributed program, including test data, etc.: 296 465 Distribution format: tar.gz Programming language: Maple 10 Computer: Acer Aspire 5610 (any running Maple 10) Operating system: Windows XP professional (any running Maple 10) Classification: 3.1, 5 Nature of problem: Simulate the transport of radiation in biological tissues. Solution method: The Maple program follows the steps of the C program of L. Wang et al. [L. Wang, S.L. Jacques, L. Zheng, Computer Methods and Programs in Biomedicine 47 (1995) 131-146]; the Maple library routine for random number generation is used [Maple 10 User Manual, © Maplesoft, a division of Waterloo Maple Inc., 2005]. Restrictions: Running time increases rapidly with the number of photons used in the simulation. Unusual features: A maplet (graphical user interface) has been programmed for data input and output. Note that the Monte-Carlo simulation was programmed with Maple 10. If attempting to run the simulation with an earlier version of Maple, appropriate modifications (regarding typesetting fonts) are required, and once effected the worksheet runs without problem. However, some of the windows of the maplet may still appear distorted. Running time: Depends essentially on the number of photons used in the simulation. Elapsed times for particular runs are reported in the main text.
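For orientation, a minimal Python sketch of the photon random-walk core that such tissue Monte-Carlo codes implement (a 1-D simplification in the spirit of Wang et al.'s scheme, not the maplet itself):

    import math
    import random

    def simulate_photons(n_photons=10_000, mu_a=0.1, mu_s=10.0):
        # mu_a, mu_s: absorption and scattering coefficients (1/cm) of the tissue.
        mu_t = mu_a + mu_s
        absorbed = 0.0
        for _ in range(n_photons):
            z, w, cos_t = 0.0, 1.0, 1.0     # depth, photon weight, direction cosine
            while w > 1e-4 and z >= 0.0:    # stop when weight is tiny or photon escapes
                step = -math.log(1.0 - random.random()) / mu_t   # sampled free path
                z += step * cos_t
                w_abs = w * mu_a / mu_t     # fraction of the weight absorbed here
                absorbed += w_abs
                w -= w_abs
                cos_t = 2.0 * random.random() - 1.0  # isotropic rescattering; real codes
                                                     # use the Henyey-Greenstein phase function
        return absorbed / n_photons          # mean absorbed fraction per photon

    print(simulate_photons())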
Parallelization and checkpointing of GPU applications through program transformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solano-Quinde, Lizandro Damian
2012-01-01
GPUs have emerged as a powerful tool for accelerating general-purpose applications. The availability of programming languages that make writing general-purpose applications for GPUs tractable has consolidated GPUs as an alternative for accelerating general purpose applications. Among the areas that have benefited from GPU acceleration are: signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized for running in multi-GPU systems. Furthermore, multi-GPU systems help to solve the GPU memory limitation for applications with a large application memory footprint. Parallelizing single-GPU applications has been approached by libraries that distribute the workload at runtime; however, they impose execution overhead and are not portable. On the other hand, on traditional CPU systems, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at application level and does not have the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. Like any computing engine of today, reliability is also a concern in GPUs. GPUs are vulnerable to transient and permanent failures. Current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems presents new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed. The goal of this work is to exploit higher levels of parallelism and to develop support for application-level fault tolerance in applications using multiple GPUs. Our techniques reduce the burden of enhancing single-GPU applications to support these features. To achieve our goal, this work designs and implements a framework for enhancing a single-GPU OpenCL application through application transformation.
Parallel Signal Processing and System Simulation using aCe
NASA Technical Reports Server (NTRS)
Dorband, John E.; Aburdene, Maurice F.
2003-01-01
Recently, networked and cluster computation have become very popular for both signal processing and system simulation. The new aCe language is ideally suited for parallel signal processing applications and system simulation since it allows the programmer to explicitly express the computations that can be performed concurrently. In addition, this new C-based parallel language (aCe C) for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures by providing them with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we will focus on some fundamental features of aCe C and present a signal processing application (FFT).
HTSstation: A Web Application and Open-Access Libraries for High-Throughput Sequencing Data Analysis
David, Fabrice P. A.; Delafontaine, Julien; Carat, Solenne; Ross, Frederick J.; Lefebvre, Gregory; Jarosz, Yohan; Sinclair, Lucas; Noordermeer, Daan; Rougemont, Jacques; Leleu, Marion
2014-01-01
The HTSstation analysis portal is a suite of simple web forms coupled to modular analysis pipelines for various applications of High-Throughput Sequencing including ChIP-seq, RNA-seq, 4C-seq and re-sequencing. HTSstation offers biologists the possibility to rapidly investigate their HTS data using an intuitive web application with heuristically pre-defined parameters. A number of open-source software components have been implemented and can be used to build, configure and run HTS analysis pipelines reactively. Besides, our programming framework empowers developers with the possibility to design their own workflows and integrate additional third-party software. The HTSstation web application is accessible at http://htsstation.epfl.ch. PMID:24475057
Approaches in highly parameterized inversion - GENIE, a general model-independent TCP/IP run manager
Muffels, Christopher T.; Schreuder, Willem A.; Doherty, John E.; Karanovic, Marinko; Tonkin, Matthew J.; Hunt, Randall J.; Welter, David E.
2012-01-01
GENIE is a model-independent suite of programs that can be used to generally distribute, manage, and execute multiple model runs via the TCP/IP infrastructure. The suite consists of a file distribution interface, a run manager, a run executer, and a routine that can be compiled as part of a program and used to exchange model runs with the run manager. Because communication is via a standard protocol (TCP/IP), any computer connected to the Internet can serve in any of the capacities offered by this suite. Model independence is consistent with the existing template and instruction file protocols of the widely used PEST parameter estimation program. This report describes (1) the problem addressed; (2) the approach used by GENIE to queue, distribute, and retrieve model runs; and (3) user instructions, classes, and functions developed. It also includes (4) an example to illustrate the linking of GENIE with Parallel PEST using the interface routine.
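As a conceptual sketch only (not GENIE's protocol or code), the queue-distribute-retrieve idea can be illustrated with a small TCP manager/worker pair in Python; the model command and port number are placeholders:

    import queue
    import socket
    import subprocess

    def manager(port=5005, n_runs=10):
        # Queue run identifiers and hand one to each worker that connects.
        runs = queue.Queue()
        for i in range(n_runs):
            runs.put(str(i))
        with socket.create_server(("0.0.0.0", port)) as srv:
            while not runs.empty():           # one worker handled at a time, for simplicity
                conn, _ = srv.accept()
                with conn:
                    run_id = runs.get()
                    conn.sendall(run_id.encode())     # distribute the run
                    status = conn.recv(64).decode()   # retrieve the result status
                    print(f"run {run_id}: {status}")

    def worker(host="127.0.0.1", port=5005, model_cmd=("./model",)):
        # Ask the manager for a run, execute the model, report success or failure.
        with socket.create_connection((host, port)) as conn:
            run_id = conn.recv(64).decode()
            rc = subprocess.run([*model_cmd, run_id]).returncode
            conn.sendall(b"ok" if rc == 0 else b"failed")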
Progress in superconductivity: The Indian Scenario
NASA Technical Reports Server (NTRS)
Multani, Manu; Mishra, V. K.
1995-01-01
India has made rapid progress in the field of high temperature superconductivity, beginning at the time of publication of the Zeitschrift für Physik paper by Bednorz and Müller. Phase 1 of the program was conceived by the Department of Science & Technology of the Government of India. It consisted of 42 projects in the area of basic research, 23 projects in applications and 4 short-term demonstration studies. The second phase started in October 1991 and will run through March 1995. It consists of 50 basic research programs and 24 application programs. The total investment, mainly consisting of infrastructural development to supplement existing facilities and hiring younger people, has amounted to about Indian Rupees 40 crores, equivalent to about US$ 13 million. The expenditure for the period 1992-1997 shall be up to about Rs. 27 crores, equivalent to about US$ 9 million. The basic idea is to keep pace with developments around the world.
Processable Data Making in the Remote Server Sent by Android Phone as a GIS Data Collecting Tool
NASA Astrophysics Data System (ADS)
Karaagac, Abdullah; Bostancı, Bulent
2016-04-01
Mobile technologies are improving and getting cheaper every day. Not only have smartphones improved considerably, but new types of mobile applications and sensors now come with them. Maps and navigation applications are among the most popular of these application types. Most of these applications use location services, including GNSS, Wi-Fi, cellular data, and beacon services. Although the coordinate precision is not very high, it is adequate for many applications. Android is a mobile operating system based on the Linux kernel. It is compatible with various mobile devices such as smartphones, tablets, smart TVs, wearable technologies, etc. Android offers a large capability for application development using open source libraries and device sensors such as the gyroscope, GNSS, etc. Android Studio is the most popular integrated development environment (IDE) for Android devices, developed mainly by Google. It was announced on May 16, 2013, at the Google I/O conference. Android Studio is built upon the Gradle architecture, which is written in Java. SQLite is a relational database management system that is very commonly used on mobile devices. It is developed as a C programming library and is mostly used by embedding it into software or an application. It supports many operating systems, including Android. Remote servers can take several forms, from highly complex to simple. For this project we use an open source quad-core single-board computer named Raspberry Pi 2. This device includes a 900 MHz ARMv7-compatible quad-core CPU, a VideoCore IV GPU, and 1 GB of RAM. Although the Raspberry Pi 2's main operating system is Raspbian, we use Debian; both are Linux-based operating systems. The Raspberry Pi is compatible with many programming languages; however, some languages are optimized for this device: Python, Java, C, C++, Ruby, Perl, and Squeak Smalltalk. In this paper, a mobile application is developed to send coordinate and string data to a SQL database embedded in a remote server. The application runs on an Android mobile phone. The application gets the location information from GNSS and cellular data. The user enters the other information manually. This information is sent, at the click of a button, to the remote server, which runs SQLite. All of this information can be converted to other measurement types; for example, coordinates can be converted from WGS 84 to ITRF.
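As an illustration of the receiving end described above, a minimal Python sketch that stores an incoming coordinate record in SQLite on the server; the table layout, field names, and example values are hypothetical, and the paper's client is an Android application:

    import sqlite3

    def store_point(db_path, lat, lon, label):
        # Create the table on first use, then append one record sent by the phone.
        con = sqlite3.connect(db_path)
        con.execute("""CREATE TABLE IF NOT EXISTS points (
                           id INTEGER PRIMARY KEY AUTOINCREMENT,
                           lat REAL, lon REAL, label TEXT)""")
        con.execute("INSERT INTO points (lat, lon, label) VALUES (?, ?, ?)",
                    (lat, lon, label))
        con.commit()
        con.close()

    # Example record as it might arrive from the phone (WGS 84 coordinates).
    store_point("gis_points.db", 38.7569, 30.5387, "manhole cover")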
NASA Technical Reports Server (NTRS)
Stanfill, D. F.
1994-01-01
Pixel Pusher is a Macintosh application used for viewing and performing minor enhancements on imagery. It will read image files in JPL's two primary image formats- VICAR and PDS - as well as the Macintosh PICT format. VICAR (NPO-18076) handles an array of image processing capabilities which may be used for a variety of applications including biomedical image processing, cartography, earth resources, and geological exploration. Pixel Pusher can also import VICAR format color lookup tables for viewing images in pseudocolor (256 colors). This program currently supports only eight bit images but will work on monitors with any number of colors. Arbitrarily large image files may be viewed in a normal Macintosh window. Color and contrast enhancement can be performed with a graphical "stretch" editor (as in contrast stretch). In addition, VICAR images may be saved as Macintosh PICT files for exporting into other Macintosh programs, and individual pixels can be queried to determine their locations and actual data values. Pixel Pusher is written in Symantec's Think C and was developed for use on a Macintosh SE30, LC, or II series computer running System Software 6.0.3 or later and 32 bit QuickDraw. Pixel Pusher will only run on a Macintosh which supports color (whether a color monitor is being used or not). The standard distribution medium for this program is a set of three 3.5 inch Macintosh format diskettes. The program price includes documentation. Pixel Pusher was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Think C is a trademark of Symantec Corporation. Macintosh is a registered trademark of Apple Computer, Inc.
1980-04-01
Supply. g. Design and Construction History. Laurel Run Dam was constructed in 1594 by Martin Cawley, a contractor from Archbald. The construction was... PHASE I INSPECTION REPORT, NATIONAL DAM INSPECTION PROGRAM, LAUREL RUN DAM, PENNSYLVANIA GAS AND WATER COMPANY RESERVOIR AREA
Liew, Bernard X W; Morris, Susan; Keogh, Justin W L; Appleby, Brendyn; Netto, Kevin
2016-10-22
In recent years, athletes have ventured into ultra-endurance and adventure racing events, which test their ability to race, navigate, and survive. These events often require race participants to carry some form of load, to bear equipment for navigation and survival purposes. Previous studies have reported specific alterations in biomechanics when running with load, which potentially influence running performance and injury risk. We hypothesize that a biomechanically informed neuromuscular training program would optimize running mechanics during load carriage to a greater extent than a generic strength training program. This will be a two-group, parallel randomized controlled trial design, with single assessor blinding. Thirty healthy runners will be recruited to participate in a six-week neuromuscular training program. Participants will be randomized into either a generic training group or a biomechanically informed training group. Primary outcomes include self-determined running velocity with a 20% body weight load, jump power, hopping leg stiffness, knee extensor and triceps-surae strength. Secondary outcomes include running kinetics and kinematics. Assessments will occur at baseline and post-training. To our knowledge, no training programs are available that specifically target a runner's ability to carry load while running. This will provide sport scientists and coaches with a foundation to base their exercise prescription on. ANZCTR (ACTRN12616000023459) (14 Jan 2016).
On Why It Is Impossible to Prove that the BDX90 Dispatcher Implements a Time-sharing System
NASA Technical Reports Server (NTRS)
Boyer, R. S.; Moore, J. S.
1983-01-01
The Software Implemented Fault Tolerance (SIFT) system is written in PASCAL except for about a page of machine code. The SIFT system implements a small time sharing system in which PASCAL programs for separate application tasks are executed according to a schedule with real time constraints. The PASCAL language has no provision for handling the notion of an interrupt such as the B930 clock interrupt. The PASCAL language also lacks the notion of running a PASCAL subroutine for a given amount of time, suspending it, saving away the suspension, and later activating the suspension. Machine code was used to overcome these inadequacies of PASCAL. Code which handles clock interrupts and suspends processes is called a dispatcher. The time sharing/virtual machine idea is completely destroyed by the reconfiguration task. After termination of the reconfiguration task, the tasks run by the dispatcher have no relation to those run before reconfiguration. It is impossible to view the dispatcher as a time-sharing system implementing virtual BDX930s running concurrently when one process can wipe out the others.
CLIPS, AppleEvents, and AppleScript: Integrating CLIPS with commercial software
NASA Technical Reports Server (NTRS)
Compton, Michael M.; Wolfe, Shawn R.
1994-01-01
Many of today's intelligent systems comprise several modules, perhaps written in different tools and languages, that together help solve the user's problem. These systems often employ a knowledge-based component that is not accessed directly by the user, but instead operates 'in the background' offering assistance to the user as necessary. In these types of modular systems, an efficient, flexible, and easy-to-use mechanism for sharing data between programs is crucial. To help permit transparent integration of CLIPS with other Macintosh applications, the AI Research Branch at NASA Ames Research Center has extended CLIPS to allow it to communicate transparently with other applications through two popular data-sharing mechanisms provided by the Macintosh operating system: Apple Events (a 'high-level' event mechanism for program-to-program communication), and AppleScript, a recently-released scripting language for the Macintosh. This capability permits other applications (running on either the same or a remote machine) to send a command to CLIPS, which then responds as if the command were typed into the CLIPS dialog window. Any result returned by the command is then automatically returned to the program that sent it. Likewise, CLIPS can send several types of Apple Events directly to other local or remote applications. This CLIPS system has been successfully integrated with a variety of commercial applications, including data collection programs, electronic forms packages, DBMSs, and email programs. These mechanisms can permit transparent user access to the knowledge base from within a commercial application, and allow a single copy of the knowledge base to service multiple users in a networked environment.
NASA Astrophysics Data System (ADS)
Terrett, D. L.
The basis of this report is 2 days spent with an AVS expert from DEC's CERN project office attempting to convert an ADAM application into an AVS module. The experiment was successful in that we succeeded in running a KAPPA application (ADD) as a module in an AVS network without modifying the applications program code in any way. We took many short cuts and it became clear that doing the job properly would be a major exercise, but we learned enough to know that the job is feasible and gained a clear idea of what the final system would look like and what it would be capable of.
28 CFR 544.34 - Inmate running events.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Inmate running events. 544.34 Section 544... EDUCATION Inmate Recreation Programs § 544.34 Inmate running events. Running events will ordinarily not... available for all inmate running events. ...
28 CFR 544.34 - Inmate running events.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Inmate running events. 544.34 Section 544... EDUCATION Inmate Recreation Programs § 544.34 Inmate running events. Running events will ordinarily not... available for all inmate running events. ...
28 CFR 544.34 - Inmate running events.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Inmate running events. 544.34 Section 544... EDUCATION Inmate Recreation Programs § 544.34 Inmate running events. Running events will ordinarily not... available for all inmate running events. ...
28 CFR 544.34 - Inmate running events.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Inmate running events. 544.34 Section 544... EDUCATION Inmate Recreation Programs § 544.34 Inmate running events. Running events will ordinarily not... available for all inmate running events. ...
28 CFR 544.34 - Inmate running events.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Inmate running events. 544.34 Section 544... EDUCATION Inmate Recreation Programs § 544.34 Inmate running events. Running events will ordinarily not... available for all inmate running events. ...
The Grid[Way] Job Template Manager, a tool for parameter sweeping
NASA Astrophysics Data System (ADS)
Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.
2011-04-01
Parameter sweeping is a widely used algorithmic technique in computational science. It is specially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and job template automatic indexation. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and respective job statuses. Furthermore, it simplifies the porting of the target application to the grid reducing the required amount of time and effort. Program summary: Program title: Grid[Way] Job Template Manager (version 1.0) Catalogue identifier: AEIE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Apache license 2.0 No. of lines in distributed program, including test data, etc.: 3545 No. of bytes in distributed program, including test data, etc.: 126 879 Distribution format: tar.gz Programming language: Perl 5.8.5 and above Computer: Any (tested on PC x86 and x86_64) Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, centOS 5.4), Mac OS X (tested on Snow Leopard 10.6) RAM: 10 MB Classification: 6.5 External routines: The GridWay Metascheduler [1]. Nature of problem: To parameterize and manage an application running on a grid or cluster. Solution method: Generation of job templates as a cross product of the input parameter sets. Also management of the job template files including the job submission to the grid, control and information retrieval. Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wild-carding of parameters cannot be done in decreasing order. Job submission, control and information is delegated to the GridWay Metascheduler. Running time: From half a second in the simplest operation to a few minutes for thousands of exponential sampling parameters.
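The core of the approach, generating one job template per element of the cross product of the input parameter sets, can be sketched in a few lines; the Python below is illustrative only, and the template keywords are assumptions rather than GridWay's documented syntax:

    from itertools import product

    params = {                      # three independent parameter sets to sweep
        "beta": [0.1, 0.5, 1.0],
        "seed": [0, 1, 2],
        "model": ["A", "B"],
    }

    # One job template per element of the cross product (3 x 3 x 2 = 18 jobs).
    for i, values in enumerate(product(*params.values())):
        assignment = dict(zip(params.keys(), values))
        args = " ".join(f"--{k} {v}" for k, v in assignment.items())
        with open(f"job_{i:04d}.jt", "w") as jt:
            jt.write(f"EXECUTABLE = run_model\nARGUMENTS = {args}\n")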
The Student-Run Clinic: A New Opportunity for Psychiatric Education
ERIC Educational Resources Information Center
Schweitzer, Pernilla J.; Rice, Timothy R.
2012-01-01
Objective: Student-run clinics are increasingly common in medical schools across the United States and may provide new opportunities for psychiatric education. This study investigates the educational impact of a novel behavioral health program focused on depressive disorders at a student-run clinic. Method: The program was assessed through chart…
Compiling knowledge-based systems from KEE to Ada
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Bock, Conrad; Feldman, Roy
1990-01-01
The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBSs developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights on Ada as an artificial intelligence programming language, potential solutions of some of the engineering difficulties encountered in early work, and inspiration on future system development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
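As a rough illustration of the kind of check described (not the paper's exact statistic or its Mathematica programs), a Python sketch that compares observed failure counts with the expected counts implied by an assumed beta prior, via the beta-binomial marginal; the plant data and prior parameters are made up:

    import numpy as np
    from scipy.stats import betabinom   # requires SciPy >= 1.4

    def beta_prior_gof(x, n, a, b, edges=(0, 1, 2, 5, np.inf)):
        # Bin the observed counts and accumulate each unit's expected bin
        # probabilities under the beta-binomial marginal implied by the prior.
        edges = np.asarray(edges, dtype=float)
        observed = np.zeros(len(edges) - 1)
        expected = np.zeros(len(edges) - 1)
        for xi, ni in zip(x, n):
            k = np.arange(ni + 1)
            pmf = betabinom.pmf(k, ni, a, b)
            for j in range(len(edges) - 1):
                in_bin = (k >= edges[j]) & (k < edges[j + 1])
                expected[j] += pmf[in_bin].sum()
                if edges[j] <= xi < edges[j + 1]:
                    observed[j] += 1
        chi2 = np.sum((observed - expected) ** 2 / expected)
        return chi2, observed, expected   # refer chi2 to a chi-square distribution
                                          # (df depends on bins and fitted parameters)

    # Made-up failure counts and demands for 7 units, with a beta(0.5, 20) prior.
    print(beta_prior_gof(x=[0, 1, 0, 2, 0, 0, 3],
                         n=[50, 80, 40, 120, 60, 90, 200], a=0.5, b=20.0))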
AXAF user interfaces for heterogeneous analysis environments
NASA Technical Reports Server (NTRS)
Mandel, Eric; Roll, John; Ackerman, Mark S.
1992-01-01
The AXAF Science Center (ASC) will develop software to support all facets of data center activities and user research for the AXAF X-ray Observatory, scheduled for launch in 1999. The goal is to provide astronomers with the ability to utilize heterogeneous data analysis packages, that is, to allow astronomers to pick the best packages for doing their scientific analysis. For example, ASC software will be based on IRAF, but non-IRAF programs will be incorporated into the data system where appropriate. Additionally, it is desired to allow AXAF users to mix ASC software with their own local software. The need to support heterogeneous analysis environments is not special to the AXAF project, and therefore finding mechanisms for coordinating heterogeneous programs is an important problem for astronomical software today. The approach to solving this problem has been to develop two interfaces that allow the scientific user to run heterogeneous programs together. The first is an IRAF-compatible parameter interface that provides non-IRAF programs with IRAF's parameter handling capabilities. Included in the interface is an application programming interface to manipulate parameters from within programs, and also a set of host programs to manipulate parameters at the command line or from within scripts. The parameter interface has been implemented to support parameter storage formats other than IRAF parameter files, allowing one, for example, to access parameters that are stored in data bases. An X Windows graphical user interface called 'agcl' has been developed, layered on top of the IRAF-compatible parameter interface, that provides a standard graphical mechanism for interacting with IRAF and non-IRAF programs. Users can edit parameters and run programs for both non-IRAF programs and IRAF tasks. The agcl interface allows one to communicate with any command line environment in a transparent manner and without any changes to the original environment. For example, the authors routinely layer the GUI on top of IRAF, ksh, SMongo, and IDL. The agcl, based on the facilities of a system called Answer Garden, also has sophisticated support for examining documentation and help files, asking questions of experts, and developing a knowledge base of frequently required information. Thus, the GUI becomes a total environment for running programs, accessing information, examining documents, and finding human assistance. Because the agcl can communicate with any command-line environment, most projects can make use of it easily. New applications are continually being found for these interfaces. It is the authors' intention to evolve the GUI and its underlying parameter interface in response to these needs - from users as well as developers - throughout the astronomy community. This presentation describes the capabilities and technology of the above user interface mechanisms and tools. It also discusses the design philosophies guiding the work, as well as hopes for the future.
Linear combination reading program for capture gamma rays
Tanner, Allan B.
1971-01-01
This program computes a weighting function, Qj, which gives a scalar output value of unity when applied to the spectrum of a desired element and a minimum value (considering statistics) when applied to spectra of materials not containing the desired element. Intermediate values are obtained for materials containing the desired element, in proportion to the amount of the element they contain. The program is written in the BASIC language in a format specific to the Hewlett-Packard 2000A Time-Sharing System, and is an adaptation of an earlier program for linear combination reading for X-ray fluorescence analysis (Tanner and Brinkerhoff, 1971). Following the program is a sample run from a study of the application of the linear combination technique to capture-gamma-ray analysis for calcium (report in preparation).
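The weighting-function idea can be sketched as a small least-squares problem: choose channel weights Q that give 1 on the desired element's spectrum and approximately 0 on interfering spectra. The Python below is illustrative, not the original BASIC program; the toy spectra are invented:

    import numpy as np

    def linear_combination_weights(target, interferents):
        # Stack the spectra as rows; require 1 for the target and 0 for the rest,
        # then solve for the channel weights Q in the least-squares sense.
        A = np.vstack([target] + list(interferents))
        b = np.zeros(A.shape[0])
        b[0] = 1.0
        Q, *_ = np.linalg.lstsq(A, b, rcond=None)
        return Q

    # Toy 4-channel spectra (invented counts per channel).
    calcium  = np.array([5.0, 20.0, 3.0, 1.0])
    silicon  = np.array([8.0,  2.0, 6.0, 1.0])
    hydrogen = np.array([1.0,  1.0, 9.0, 4.0])
    Q = linear_combination_weights(calcium, [silicon, hydrogen])
    print(Q @ calcium, Q @ silicon, Q @ hydrogen)   # approx. 1, 0, 0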
Macniven, Rona; Plater, Suzanne; Canuto, Karla; Dickson, Michelle; Gwynn, Josephine; Bauman, Adrian; Richards, Justin
2018-02-19
Physical inactivity is a key health risk among Aboriginal and Torres Strait Islander (Indigenous) Australians. We examined perceptions of the Indigenous Marathon Program (IMP) in a remote Torres Strait island community. Semi-structured interviews with community and program stakeholders (n = 18; 14 Indigenous) examined barriers and enablers to running and the influence of the IMP on the community. A questionnaire asked 104 running event participants (n = 42 Indigenous) about their physical activity behaviours, running motivation and perceptions of program impact. Qualitative data were analysed using thematic content analysis, and quantitative data were analysed using descriptive statistics. Interviews revealed six main themes: community readiness, changing social norms to adopt healthy lifestyles, importance of social support, program appeal to hard-to-reach population groups, program sustainability and initiation of broader healthy lifestyle ripple effects beyond running. Barriers to running in the community were personal (cultural attitudes; shyness) and environmental (infrastructure; weather; dogs). Enablers reflected potential strategies to overcome described barriers. Indigenous questionnaire respondents were more likely to report being inspired to run by IMP runners than non-Indigenous respondents. Positive "ripple" effects of the IMP on running and broader health were described to have occurred through local role modelling of healthy lifestyles by IMP runners that reduced levels of "shame" and embarrassment, a common barrier to physical activity among Indigenous Australians. A high initial level of community readiness for behaviour change was also reported. SO WHAT?: Strategies to overcome this "shame" factor and community readiness measurement should be incorporated into the design of future Indigenous physical activity programs. © 2018 Australian Health Promotion Association.
The treatment of medial tibial stress syndrome in athletes; a randomized clinical trial
2012-01-01
Background The only three randomized trials on the treatment of MTSS were all performed in military populations. The treatment options investigated in this study were not previously examined in athletes. This study investigated whether the functional outcome of three common treatment options for medial tibial stress syndrome (MTSS) in athletes in a non-military setting was the same. Methods The study design was randomized and multi-centered. Physical therapists and sports physicians referred athletes with MTSS to the hospital for inclusion. 81 athletes were assessed for eligibility, of whom 74 were included and randomized to three treatment groups. Group one performed a graded running program, group two performed a graded running program with additional stretching and strengthening exercises for the calves, while group three performed a graded running program with an additional sports compression stocking. The primary outcome measure was time to complete a running program (able to run 18 minutes with high intensity) and the secondary outcome was general satisfaction with treatment. Results 74 athletes were randomized and included, of whom 14 (18.9%) did not complete the study due to a lack of progress. The data were analyzed on an intention-to-treat basis. Time to complete a running program and general satisfaction with the treatment were not significantly different between the three treatment groups. Conclusion This was the first randomized trial on the treatment of MTSS in athletes in a non-military setting. No differences were found between the groups for the time to complete a running program. Trial registration CCMO; NL23471.098.08 PMID:22464032
Programs To Optimize Spacecraft And Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Brauer, G. L.; Petersen, F. M.; Cornick, D.E.; Stevenson, R.; Olson, D. W.
1994-01-01
POST/6D POST is set of two computer programs providing ability to target and optimize trajectories of powered or unpowered spacecraft or aircraft operating at or near rotating planet. POST treats point-mass, three-degree-of-freedom case. 6D POST treats more-general rigid-body, six-degree-of-freedom (with point masses) case. Used to solve variety of performance, guidance, and flight-control problems for atmospheric and orbital vehicles. Applications include computation of performance or capability of vehicle in ascent, or orbit, and during entry into atmosphere, simulation and analysis of guidance and flight-control systems, dispersion-type analyses and analyses of loads, general-purpose six-degree-of-freedom simulation of controlled and uncontrolled vehicles, and validation of performance in six degrees of freedom. Written in FORTRAN 77 and C language. Two machine versions available: one for SUN-series computers running SunOS(TM) (LAR-14871) and one for Silicon Graphics IRIS computers running IRIX(TM) operating system (LAR-14869).
Durand, C.T.; Edwards, L.E.; Malinconico, M.L.; Powars, D.S.
2009-01-01
During 2005-2006, the International Continental Scientific Drilling Program and the U.S. Geological Survey drilled three continuous core holes into the Chesapeake Bay impact structure to a total depth of 1766.3 m. A collection of supplemental materials that presents a record of the core recovery and measurement data for the Eyreville cores is available on CD-ROM at the end of this volume and in the GSA Data Repository. The supplemental materials on the CD-ROM include digital photographs of each core box from the three core holes, tables of the three coring-run logs, as recorded on site, and a set of depth-conversion programs. In this chapter, the contents, purposes, and basic applications of the supplemental materials are briefly described. With this information, users can quickly decide if the materials will apply to their specific research needs. ?? 2009 The Geological Society of America.
A new version of a computer program for dynamical calculations of RHEED intensity oscillations
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej; Skrobas, Kazimierz
2006-01-01
We present a new version of the RHEED program which contains a graphical user interface enabling the use of the program in the graphical environment. The presented program also contains a graphical component which enables displaying program data at run-time through an easy-to-use graphical interface. New version program summary Title of program: RHEEDGr Catalogue identifier: ADWV Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADWV Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Catalogue identifier of previous version: ADUY Authors of the original program: A. Daniluk Does the new version supersede the original program: no Computer for which the new version is designed and others on which it has been tested: Pentium-based PC Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT Programming language used: Borland C++ Builder Memory required to execute with typical data: more than 1 MB Number of bits in a word: 64 bits Number of processors used: 1 Number of lines in distributed program, including test data, etc.: 5797 Number of bytes in distributed program, including test data, etc.: 588 121 Distribution format: tar.gz Nature of physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film. Method of solution: RHEED intensities are calculated within the framework of the general matrix formulation of Peng and Whelan [1] under the one-beam condition. Reasons for the new version: Responding to user feedback, we designed a graphical package that enables displaying program data at run-time through an easy-to-use graphical interface. Summary of revisions: In the present form the code is an object-oriented extension of the previous version [2]. Fig. 1 shows the static structure of classes and their possible relationships (i.e. inheritance, association, aggregation and dependency) in the code. The code has been modified and optimized to compile under the C++ Builder integrated development environment (IDE). A graphical user interface (GUI) for the program has been created. The application is a standard multiple document interface (MDI) project from Builder's object repository. The MDI application spawns child windows that reside within the client window; the main form contains the child objects. We have added an original graphical component [3] which has been tested successfully in the C++ Builder programming environment under the Microsoft Windows platform. Fig. 2 shows the internal structure of the component. This diagram is a graphic presentation of the static view which shows a collection of declarative model elements, such as classes, types, and their relationships. Each of the model elements shown in Fig. 2 is manifested by one header file, Graph2D.h, and one code file, Graph2D.cpp. Fig. 3 sets the stage by showing the package which supplies the C++ Builder elements used in the component. Installation instructions for the TGraph2D.bpk package can be found in the new distribution. The program has been constructed according to the systems development life cycle (SDLC) methodology [4]. Typical running time: The typical running time is machine and user-parameters dependent.
Unusual features of the program: The program is distributed in the form of a main project RHEEDGr.bpr with associated files, and should be compiled using Borland C++ Builder compilers version 5 or later.
Swan: A tool for porting CUDA programs to OpenCL
NASA Astrophysics Data System (ADS)
Harvey, M. J.; De Fabritiis, G.
2011-04-01
The use of modern, high-performance graphical processing units (GPUs) for acceleration of scientific computation has been widely reported. The majority of this work has used the CUDA programming model supported exclusively by GPUs manufactured by NVIDIA. An industry standardisation effort has recently produced the OpenCL specification for GPU programming. This offers the benefits of hardware-independence and reduced dependence on proprietary tool-chains. Here we describe a source-to-source translation tool, "Swan" for facilitating the conversion of an existing CUDA code to use the OpenCL model, as a means to aid programmers experienced with CUDA in evaluating OpenCL and alternative hardware. While the performance of equivalent OpenCL and CUDA code on fixed hardware should be comparable, we find that a real-world CUDA application ported to OpenCL exhibits an overall 50% increase in runtime, a reduction in performance attributable to the immaturity of contemporary compilers. The ported application is shown to have platform independence, running on both NVIDIA and AMD GPUs without modification. We conclude that OpenCL is a viable platform for developing portable GPU applications but that the more mature CUDA tools continue to provide best performance. Program summaryProgram title: Swan Catalogue identifier: AEIH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public License version 2 No. of lines in distributed program, including test data, etc.: 17 736 No. of bytes in distributed program, including test data, etc.: 131 177 Distribution format: tar.gz Programming language: C Computer: PC Operating system: Linux RAM: 256 Mbytes Classification: 6.5 External routines: NVIDIA CUDA, OpenCL Nature of problem: Graphical Processing Units (GPUs) from NVIDIA are preferentially programed with the proprietary CUDA programming toolkit. An alternative programming model promoted as an industry standard, OpenCL, provides similar capabilities to CUDA and is also supported on non-NVIDIA hardware (including multicore ×86 CPUs, AMD GPUs and IBM Cell processors). The adaptation of a program from CUDA to OpenCL is relatively straightforward but laborious. The Swan tool facilitates this conversion. Solution method:Swan performs a translation of CUDA kernel source code into an OpenCL equivalent. It also generates the C source code for entry point functions, simplifying kernel invocation from the host program. A concise host-side API abstracts the CUDA and OpenCL APIs. A program adapted to use Swan has no dependency on the CUDA compiler for the host-side program. The converted program may be built for either CUDA or OpenCL, with the selection made at compile time. Restrictions: No support for CUDA C++ features Running time: Nominal
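The flavor of such a source-to-source translation can be sketched with the usual token correspondences between the two programming models. The snippet below is purely illustrative and is not Swan's implementation; a real translator must also do things simple substitution cannot, such as adding address-space qualifiers to kernel pointer arguments and generating the host-side entry points.

    # Conceptual sketch of CUDA-to-OpenCL kernel rewriting, in the spirit of what a
    # tool like Swan automates; this is not Swan's code, only the common token
    # correspondences between the two programming models.
    import re

    CUDA_TO_OPENCL = [
        (r"\b__global__\b", "__kernel"),
        (r"\b__shared__\b", "__local"),
        (r"\b__syncthreads\(\)", "barrier(CLK_LOCAL_MEM_FENCE)"),
        (r"\bthreadIdx\.x\b", "get_local_id(0)"),
        (r"\bblockIdx\.x\b", "get_group_id(0)"),
        (r"\bblockDim\.x\b", "get_local_size(0)"),
    ]

    def translate_kernel(cuda_src):
        """Apply simple textual substitutions to a CUDA kernel body."""
        out = cuda_src
        for pattern, repl in CUDA_TO_OPENCL:
            out = re.sub(pattern, repl, out)
        return out

    cuda = "__global__ void scale(float *x, float a) { int i = blockIdx.x*blockDim.x + threadIdx.x; x[i] *= a; }"
    print(translate_kernel(cuda))
    # Note: the resulting OpenCL kernel still needs __global qualifiers on its
    # pointer arguments, which token substitution alone does not provide.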
Recent results of high p(T) physics at the CDF II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsuno, Soushi; /Okayama U.
2005-02-01
The Tevatron Run II program has been in progress since 2001. The CDF experiment has accumulated roughly five times as much data as did Run I, with much improved detectors. Preliminary results from the CDF experiment are presented. The authors focus on recent high p{sub T} physics results in the Tevatron Run II program.
How Much of a "Running Start" Do Dual Enrollment Programs Provide Students?
ERIC Educational Resources Information Center
Cowan, James; Goldhaber, Dan
2015-01-01
We study a popular dual enrollment program in Washington State, "Running Start" using a new administrative database that links high school and postsecondary data. Conditional on prior high school performance, we find that students participating in Running Start are more likely to attend any college but less likely to attend four-year…
Automatic Parallelization of Numerical Python Applications using the Global Arrays Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daily, Jeffrey A.; Lewis, Robert R.
2011-11-30
Global Arrays is a software system from Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate distributed dense arrays. The NumPy module is the de facto standard for numerical calculation in the Python programming language, a language whose use is growing rapidly in the scientific and engineering communities. NumPy provides a powerful N-dimensional array class as well as other scientific computing capabilities. However, like the majority of the core Python modules, NumPy is inherently serial. Using a combination of Global Arrays and NumPy, we have reimplemented NumPy as a distributed drop-in replacement called Global Arrays in NumPy (GAiN). Serial NumPy applications can become parallel, scalable GAiN applications with only minor source code changes. Scalability studies of several different GAiN applications will be presented showing the utility of developing serial NumPy codes which can later run on more capable clusters or supercomputers.
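The "drop-in replacement" idea can be sketched as follows: a serial NumPy kernel in which, for a parallel GAiN run, only the import line would change. The commented import path is an assumption for illustration; consult the GAiN documentation for the actual module name.

    # Sketch of the drop-in replacement idea described above.  The serial code
    # uses NumPy directly; a GAiN run would swap only the import line (the
    # commented path below is an assumption, not the documented GAiN module).
    import numpy as np
    # from ga import gain as np    # hypothetical GAiN import for a parallel run

    def jacobi_step(u):
        """One Jacobi relaxation sweep on a 2-D grid, written with whole-array ops."""
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        return new

    u = np.zeros((512, 512))
    u[0, :] = 100.0                  # fixed boundary condition
    for _ in range(100):
        u = jacobi_step(u)
    print(float(u[1, 1]))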
Design and implementation of a cloud based lithography illumination pupil processing application
NASA Astrophysics Data System (ADS)
Zhang, Youbao; Ma, Xinghua; Zhu, Jing; Zhang, Fang; Huang, Huijie
2017-02-01
Pupil parameters are important parameters to evaluate the quality of lithography illumination system. In this paper, a cloud based full-featured pupil processing application is implemented. A web browser is used for the UI (User Interface), the websocket protocol and JSON format are used for the communication between the client and the server, and the computing part is implemented in the server side, where the application integrated a variety of high quality professional libraries, such as image processing libraries libvips and ImageMagic, automatic reporting system latex, etc., to support the program. The cloud based framework takes advantage of server's superior computing power and rich software collections, and the program could run anywhere there is a modern browser due to its web UI design. Compared to the traditional way of software operation model: purchased, licensed, shipped, downloaded, installed, maintained, and upgraded, the new cloud based approach, which is no installation, easy to use and maintenance, opens up a new way. Cloud based application probably is the future of the software development.
NASA Astrophysics Data System (ADS)
Golonka, P.; Pierzchała, T.; Wąs, Z.
2004-02-01
Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Our test consists of two steps. Different Monte Carlo programs are run; events with decays of a chosen particle are searched, decay trees are analyzed and appropriate information is stored. Then, at the analysis step, a list of all found decay modes is defined and branching ratios are calculated for both runs. Histograms of all scalar Lorentz-invariant masses constructed from the decay products are plotted and compared for each decay mode found in both runs. For each plot a measure of the difference of the distributions is calculated and its maximal value over all histograms for each decay channel is printed in a summary table. As an example of MC-TESTER application, we include a test with the τ lepton decay Monte Carlo generators, TAUOLA and PYTHIA. The HEPEVT (or LUJETS) common block is used as exclusive source of information on the generated events. Program summaryTitle of the program:MC-TESTER, version 1.1 Catalogue identifier: ADSM Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSM Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: PC, two Intel Xeon 2.0 GHz processors, 512MB RAM Operating system: Linux Red Hat 6.1, 7.2, and also 8.0 Programming language used:C++, FORTRAN77: gcc 2.96 or 2.95.2 (also 3.2) compiler suite with g++ and g77 Size of the package: 7.3 MB directory including example programs (2 MB compressed distribution archive), without ROOT libraries (additional 43 MB). No. of bytes in distributed program, including test data, etc.: 2 024 425 Distribution format: tar gzip file Additional disk space required: Depends on the analyzed particle: 40 MB in the case of τ lepton decays (30 decay channels, 594 histograms, 82-pages booklet). Keywords: particle physics, decay simulation, Monte Carlo methods, invariant mass distributions, programs comparison Nature of the physical problem: The decays of individual particles are well defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable, for the development of new programs, to check correctness of the installations or for discussion of uncertainties. Method of solution: A typical HEP Monte Carlo program stores the generated events in the event records such as HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for the decays of the particle under study. The list of the found decay modes is successively incremented and histograms of all invariant masses which can be calculated from the momenta of the particle decay products are defined and filled. The outputs from the two runs of distinct programs can be later compared. A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted and parameter quantifying shape difference is calculated. Its maximum over every decay channel is printed in the summary table. Restrictions on the complexity of the problem: For a list of limitations see Section 6. Typical running time: Varies substantially with the analyzed decay particle. 
On a PC/Linux with 2.0 GHz processors MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; ? processing takes additionally 10 seconds. Generation step runs may be executed simultaneously on multi-processor machines. Accessibility: web page: http://cern.ch/Piotr.Golonka/MC/MC-TESTER e-mails: Piotr.Golonka@CERN.CH, T.Pierzchala@friend.phys.us.edu.pl, Zbigniew.Was@CERN.CH.
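The quantity MC-TESTER histograms, every scalar Lorentz-invariant mass that can be formed from the decay products, is easy to sketch outside the package. The snippet below is an independent illustration in Python, not MC-TESTER code.

    # Sketch (not MC-TESTER itself): compute every scalar Lorentz-invariant mass
    # that can be built from subsets of the decay products' four-momenta
    # (E, px, py, pz), the quantities MC-TESTER histograms and compares between runs.
    from itertools import combinations
    import math

    def invariant_masses(four_momenta):
        """Return {subset of particle indices: invariant mass} for subsets of size >= 2."""
        masses = {}
        n = len(four_momenta)
        for size in range(2, n + 1):
            for subset in combinations(range(n), size):
                E, px, py, pz = (sum(four_momenta[i][k] for i in subset) for k in range(4))
                m2 = E * E - px * px - py * py - pz * pz
                masses[subset] = math.sqrt(max(m2, 0.0))   # guard against rounding below zero
        return masses

    # Example: two decay products with momenta in GeV
    products = [(1.0, 0.0, 0.0, 0.99), (0.8, 0.0, 0.0, -0.8)]
    print(invariant_masses(products))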
Merlin - Massively parallel heterogeneous computing
NASA Technical Reports Server (NTRS)
Wittie, Larry; Maples, Creve
1989-01-01
Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.
An application of interactive graphics to neutron spectrometry
NASA Technical Reports Server (NTRS)
Binney, S. E.
1972-01-01
The use of interactive graphics is presented as an attractive method for performing multi-parameter data analysis of proton recoil distributions to determine neutron spectra. Interactive graphics allows the user to view results on-line as the program is running and to maintain maximum control over the path along which the calculation will proceed. Other advantages include less time to obtain results and freedom from handling paper tapes and IBM cards.
2018-04-20
Inside a Shuttle Landing Facility hangar at NASA's Kennedy Space Center in Florida, two MRAP armored vehicles are prepared for a training drive to support the agency's Commercial Crew Program. The 45,000-pound mine-resistant ambush protected vehicle, or MRAP, was originally designed for military applications. The MRAP offers a mobile bunker for astronauts and ground crews in the unlikely event they have to get away from the launch pad quickly in an emergency.
2018-04-20
Inside a Shuttle Landing Facility hangar at NASA's Kennedy Space Center in Florida, an MRAP armored vehicle is prepared for a training drive to support the agency's Commercial Crew Program. The 45,000-pound mine-resistant ambush protected vehicle, or MRAP, was originally designed for military applications. The MRAP offers a mobile bunker for astronauts and ground crews in the unlikely event they have to get away from the launch pad quickly in an emergency.
ERIC Educational Resources Information Center
Bettinger, Eric; Gurantz, Oded; Kawano, Laura; Sacerdote, Bruce
2016-01-01
We examine the impacts of being awarded a Cal Grant, among the most generous state merit aid programs. We exploit variation in eligibility rules using GPA and family income cutoffs that are ex ante unknown to applicants. Cal Grant eligibility increases degree completion by 2 to 5 percentage points in our reduced form estimates. Cal Grant also…
NBS computerized carpool matching system: users' guide. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilsinn, J.F.; Landau, S.
1974-12-01
The report includes flowcharts, input/output formats, and program listings for the programs, plus details of the manual process for coordinate coding. The matching program produces, for each person desiring it, a list of others residing within a pre-specified distance of him, and is thus applicable to a single work destination having primarily one work schedule. The system is currently operational on the National Bureau of Standards' UNIVAC 1108 computer and was run in March of 1974, producing lists for about 950 employees in less than four minutes computer time. Subsequent maintenance of the system will be carried out by the NBS Management and Organization Division. (GRA)
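The matching rule described above reduces to a radius query around each participant's home coordinates. The sketch below illustrates it in Python with made-up coordinates; the 1974 system itself ran as batch programs on the UNIVAC 1108.

    # Illustrative sketch of the matching rule described above: for each commuter,
    # list every other commuter whose home coordinates fall within a pre-specified
    # distance.  Names, coordinates, and units are arbitrary.
    import math

    def carpool_matches(coords, max_dist):
        """coords: {name: (x, y)}; returns {name: [names within max_dist]}."""
        matches = {}
        for a, (xa, ya) in coords.items():
            matches[a] = [b for b, (xb, yb) in coords.items()
                          if b != a and math.hypot(xa - xb, ya - yb) <= max_dist]
        return matches

    people = {"Ada": (0, 0), "Ben": (1, 1), "Cal": (10, 10)}
    print(carpool_matches(people, max_dist=2.0))   # Ada and Ben match; Cal does not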
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
Instructions for using Vertical Attitude Takeoff and Landing Aircraft Simulation (VATLAS), the digital simulation program for application to vertical attitude takeoff and landing (VATOL) aircraft developed for installation on the NASA Ames CDC 7600 computer system are described. The framework for VATLAS is the Off-Line Simulation (OLSIM) routine. The OLSIM routine provides a flexible framework and standardized modules which facilitate the development of off-line aircraft simulations. OLSIM runs under the control of VTOLTH, the main program, which calls the proper modules for executing user specified options. These options include trim, stability derivative calculation, time history generation, and various input-output options.
Denadai, Benedito S; Ortiz, Marcelo J; Greco, Camila C; de Mello, Marco T
2006-12-01
The objective of this study was to analyze the effect of two different high-intensity interval training (HIT) programs on selected aerobic physiological indices and 1500 and 5000 m running performance in well-trained runners. The following tests were completed (n=17): (i) incremental treadmill test to determine maximal oxygen uptake (VO2 max), running velocity associated with VO2 max (vVO2 max), and the velocity corresponding to 3.5 mmol/L of blood lactate concentration (vOBLA); (ii) submaximal constant-intensity test to determine running economy (RE); and (iii) 1500 and 5000 m time trials on a 400 m track. Runners were then randomized into 95% vVO2 max or 100% vVO2 max groups, and undertook a 4 week training program consisting of 2 HIT sessions (performed at 95% or 100% vVO2 max, respectively) and 4 submaximal run sessions per week. Runners were retested on all parameters at the completion of the training program. The VO2 max values were not different after training for both groups. There was a significant increase in post-training vVO2 max, RE, and 1500 m running performance in the 100% vVO2 max group. The vOBLA and 5000 m running performance were significantly higher after the training period for both groups. We conclude that vOBLA and 5000 m running performance can be significantly improved in well-trained runners using a 4 week training program consisting of 2 HIT sessions (performed at 95% or 100% vVO2 max) and 4 submaximal run sessions per week. However, the improvement in vVO2 max, RE, and 1500 m running performance seems to be dependent on the HIT program at 100% vVO2 max.
Web Platform for Sharing Modeling Software in the Field of Nonlinear Optics
NASA Astrophysics Data System (ADS)
Dubenskaya, Julia; Kryukov, Alexander; Demichev, Andrey
2018-02-01
We describe the prototype of a Web platform intended for sharing software programs for computer modeling in the rapidly developing field of the nonlinear optics phenomena. The suggested platform is built on the top of the HUBZero open-source middleware. In addition to the basic HUBZero installation we added to our platform the capability to run Docker containers via an external application server and to send calculation programs to those containers for execution. The presented web platform provides a wide range of features and might be of benefit to nonlinear optics researchers.
Application of TURBO-AE to Flutter Prediction: Aeroelastic Code Development
NASA Technical Reports Server (NTRS)
Hoyniak, Daniel; Simons, Todd A.; Stefko, George (Technical Monitor)
2001-01-01
The TURBO-AE program has been evaluated by comparing the obtained results to cascade rig data and to predictions made by various in-house programs. A high-speed fan cascade, a turbine cascade, and a fan geometry that showed flutter in torsion mode were analyzed. The steady predictions for the high-speed fan cascade showed the TURBO-AE predictions to match in-house codes. However, the predictions did not match the measured blade surface data. Other researchers have also reported similar disagreement with this data set. Unsteady runs for the fan configuration were not successful using TURBO-AE.
Adiabatic Wankel type rotary engine
NASA Technical Reports Server (NTRS)
Kamo, R.; Badgley, P.; Doup, D.
1988-01-01
This SBIR Phase program accomplished the objective of advancing the technology of the Wankel type rotary engine for aircraft applications through the use of adiabatic engine technology. Based on the results of this program, technology is in place to provide a rotor and side and intermediate housings with thermal barrier coatings. A detailed cycle analysis of the NASA 1007R Direct Injection Stratified Charge (DISC) rotary engine was performed which concluded that applying thermal barrier coatings to the rotor should be successful and that it was unlikely that the rotor housing could be successfully run with thermal barrier coatings as the thermal stresses were extensive.
Robotics Technology Crosscutting Program. Technology summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The Robotics Technology Development Program (RTDP) is a needs-driven effort. A lengthy series of presentations and discussions at DOE sites considered critical to DOE's Environmental Restoration and Waste Management (EM) Programs resulted in a clear understanding of needed robotics applications toward resolving definitive problems at the sites. A detailed analysis of the resulting robotics needs assessment revealed several common threads running through the sites: Tank Waste Retrieval (TWR), Contaminant Analysis Automation (CAA), Mixed Waste Operations (MWO), and Decontamination and Dismantlement (D and D). The RTDP Group also realized that some of the technology development in these four areas had common (Cross Cutting-CC) needs, for example, computer control and sensor interface protocols. Further, the OTD approach to the Research, Development, Demonstration, Testing, and Evaluation (RDDT and E) process urged an additional organizational breakdown between short-term (1--3 years) and long-term (3--5 years) efforts (Advanced Technology-AT). These factors led to the formation of the fifth application area for Crosscutting and Advanced Technology (CC and AT) development. The RTDP is thus organized around these application areas -- TWR, CAA, MWO, D and D, and CC and AT -- with the first four developing short-term applied robotics. An RTDP Five-Year Plan was developed for organizing the Program to meet the needs in these application areas.
"I Think I Can . . . Maybe I Can . . . I Can't": Social Work Women and Local Elected Office.
Meehan, Patrick
2018-04-01
If women are more interested in running for office, it should be observable in MSW students. Not only are the majority of students women, but they have experienced a dramatic change in political fortunes within the last year. However, the 2016 election may be leading women to doubt their qualifications to run. Using survey data from 545 MSW students and 200 law students, this study considers how interested women are in running for office and what barriers they perceive to doing so. Results suggest that women in MSW programs were significantly more interested in running for local office (city council, school board, county commission) than women in law school. At the same time, women in MSW programs were significantly more likely to doubt their qualifications to run for local office, which significantly decreased their interest in running. Content analysis revealed that women felt this way because they did not believe they had the knowledge and experience to run for local office. These results suggest that field placements in political offices might be a way to provide women in MSW programs with knowledge and experience that increases their sense of qualification to run for local office.
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed in generating wind-optimal routes for air traffic at the national or global level. They are: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing those same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); these approaches are compared with a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization using various numbers of CPUs ranging from 80 to 10,240 units are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers for potential computational enhancement through parallel processing on the computer clusters. This study also re-implements the trajectory optimization algorithm for further reduction of computational time through algorithm modifications and integrates it with FACET to facilitate the use of the new features, which calculate time-optimal routes between worldwide airport pairs in a wind field, with existing FACET applications. The implementations of trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations are done by comparing their computational efficiencies and are based on the potential application of optimized trajectories. The paper shows that in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
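The "multiple commercially available computers" option amounts to farming out independent airport-pair optimizations to worker processes. The sketch below shows that pattern with Python's multiprocessing; the route cost used here is a placeholder, not the wind-optimal trajectory algorithm of the study.

    # Sketch of distributing independent airport-pair route optimizations across
    # worker processes, the parallel pattern described above.  The cost computed in
    # optimize_route is a stand-in, not the study's wind-optimal algorithm.
    from multiprocessing import Pool
    from itertools import combinations

    AIRPORTS = {"KSFO": (37.6, -122.4), "KJFK": (40.6, -73.8), "EGLL": (51.5, -0.5)}

    def optimize_route(pair):
        """Placeholder cost: straight-line separation in degrees."""
        (lat1, lon1), (lat2, lon2) = AIRPORTS[pair[0]], AIRPORTS[pair[1]]
        cost = ((lat1 - lat2) ** 2 + (lon1 - lon2) ** 2) ** 0.5
        return pair, cost

    if __name__ == "__main__":
        pairs = list(combinations(AIRPORTS, 2))
        with Pool(processes=4) as pool:           # one worker per available CPU core
            for pair, cost in pool.map(optimize_route, pairs):
                print(pair, round(cost, 2))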
NASA Astrophysics Data System (ADS)
Larour, Eric; Cheng, Daniel; Perez, Gilberto; Quinn, Justin; Morlighem, Mathieu; Duong, Bao; Nguyen, Lan; Petrie, Kit; Harounian, Silva; Halkides, Daria; Hayes, Wayne
2017-12-01
Earth system models (ESMs) are becoming increasingly complex, requiring extensive knowledge and experience to deploy and use in an efficient manner. They run on high-performance architectures that are significantly different from the everyday environments that scientists use to pre- and post-process results (i.e., MATLAB, Python). This results in models that are hard to use for non-specialists and are increasingly specific in their application. It also makes them relatively inaccessible to the wider science community, not to mention to the general public. Here, we present a new software/model paradigm that attempts to bridge the gap between the science community and the complexity of ESMs by developing a new JavaScript application program interface (API) for the Ice Sheet System Model (ISSM). The aforementioned API allows cryosphere scientists to run ISSM on the client side of a web page within the JavaScript environment. When combined with a web server running ISSM (using a Python API), it enables the serving of ISSM computations in an easy and straightforward way. The deep integration and similarities between all the APIs in ISSM (MATLAB, Python, and now JavaScript) significantly shortens and simplifies the turnaround of state-of-the-art science runs and their use by the larger community. We demonstrate our approach via a new Virtual Earth System Laboratory (VESL) website (http://vesl.jpl.nasa.gov, VESL(2017)).
A Compiler and Run-time System for Network Programming Languages
2012-01-01
Monsanto, Christopher (Princeton University); Foster, Nate (Cornell University); Rob...
Cited reference recovered from the record: N. Foster, R. Harrison, M. Freedman, C. Monsanto, J. Rexford, A. Story, and D. Walker. Frenetic: A network programming language. In ICFP, Sep 2011.
NASA Astrophysics Data System (ADS)
Skouteris, Dimitris; Gervasi, Osvaldo; Laganà, Antonio
2009-03-01
A program that uses the time-dependent wavepacket method to study the motion of structureless particles in a force field of quasi-cylindrical symmetry is presented here. The program utilises cylindrical polar coordinates to express the wavepacket, which is subsequently propagated using a Chebyshev expansion of the Schrödinger propagator. Time-dependent exit flux as well as energy-dependent S matrix elements can be obtained for all states of the particle (describing its angular momentum component along the nanotube axis and the excitation of the radial degree of freedom in the cylinder). The program has been used to study the motion of an H atom across a carbon nanotube. Program summaryProgram title: CYLWAVE Catalogue identifier: AECL_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3673 No. of bytes in distributed program, including test data, etc.: 35 237 Distribution format: tar.gz Programming language: Fortran 77 Computer: RISC workstations Operating system: UNIX RAM: 120 MBytes Classification: 16.7, 16.10 External routines: SUNSOFT performance library (not essential) TFFT2D.F (Temperton Fast Fourier Transform), BESSJ.F (from Numerical Recipes, for the calculation of Bessel functions) (included in the distribution file). Nature of problem: Time evolution of the state of a structureless particle in a quasicylindrical potential. Solution method: Time dependent wavepacket propagation. Running time: 50000 secs. The test run supplied with the distribution takes about 10 minutes to complete.
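For reference, the Chebyshev expansion of the Schrödinger propagator mentioned in the summary is, in the standard Tal-Ezer and Kosloff form (the program's own scaling conventions may differ):

    e^{-i\hat{H}t/\hbar}\,\psi \;\approx\; e^{-i(E_{\max}+E_{\min})t/(2\hbar)}
    \sum_{n=0}^{N} (2-\delta_{n0})\,(-i)^{n}\,
    J_{n}\!\left(\frac{(E_{\max}-E_{\min})\,t}{2\hbar}\right) T_{n}(\hat{H}_{s})\,\psi,
    \qquad
    \hat{H}_{s} = \frac{2\hat{H}-(E_{\max}+E_{\min})}{E_{\max}-E_{\min}},

where the J_n are Bessel functions (hence the BESSJ.F routine in the distribution) and the terms T_n(\hat{H}_s)\psi are built by the Chebyshev recurrence T_{n+1} = 2\hat{H}_s T_n - T_{n-1}.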
Software Template for Instruction in Mathematics
NASA Technical Reports Server (NTRS)
Shelton, Robert O.; Moebes, Travis A.; Beall, Anna
2005-01-01
Intelligent Math Tutor (IMT) is a software system that serves as a template for creating software for teaching mathematics. IMT can be easily connected to artificial-intelligence software and other analysis software through input and output of files. IMT provides an easy-to-use interface for generating courses that include tests that contain both multiple-choice and fill-in-the-blank questions, and enables tracking of test scores. IMT makes it easy to generate software for Web-based courses or to manufacture compact disks containing executable course software. IMT also can function as a Web-based application program, with features that run quickly on the Web, while retaining the intelligence of a high-level language application program with many graphics. IMT can be used to write application programs in text, graphics, and/or sound, so that the programs can be tailored to the needs of most handicapped persons. The course software generated by IMT follows a "back to basics" approach of teaching mathematics by inducing the student to apply creative mathematical techniques in the process of learning. Students are thereby made to discover mathematical fundamentals and thereby come to understand mathematics more deeply than they could through simple memorization.
Tri-Laboratory Linux Capacity Cluster 2007 SOW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seager, M
2007-03-22
The Advanced Simulation and Computing (ASC) Program (formerly known as the Accelerated Strategic Computing Initiative, ASCI) has led the world in capability computing for the last ten years. Capability computing is defined as a world-class platform (in the Top10 of the Top500.org list) with scientific simulations running at scale on the platform. Example systems are ASCI Red, Blue-Pacific, Blue-Mountain, White, Q, RedStorm, and Purple. ASC applications have scaled to multiple thousands of CPUs and accomplished a long list of mission milestones on these ASC capability platforms. However, the computing demands of the ASC and Stockpile Stewardship programs also include a vast number of smaller scale runs for day-to-day simulations. Indeed, every 'hero' capability run requires many hundreds to thousands of much smaller runs in preparation and post processing activities. In addition, there are many aspects of the Stockpile Stewardship Program (SSP) that can be directly accomplished with these so-called 'capacity' calculations. The need for capacity is now so great within the program that it is increasingly difficult to allocate the computer resources required by the larger capability runs. To rectify the current 'capacity' computing resource shortfall, the ASC program has allocated a large portion of the overall ASC platforms budget to 'capacity' systems. In addition, within the next five to ten years the Life Extension Programs (LEPs) for major nuclear weapons systems must be accomplished. These LEPs and other SSP programmatic elements will further drive the need for capacity calculations and hence 'capacity' systems as well as future ASC capability calculations on 'capability' systems. To respond to this new workload analysis, the ASC program will be making a large sustained strategic investment in these capacity systems over the next ten years, starting with the United States Government Fiscal Year 2007 (GFY07). However, given the growing need for 'capability' systems as well, the budget demands are extreme and new, more cost effective ways of fielding these systems must be developed. This Tri-Laboratory Linux Capacity Cluster (TLCC) procurement represents the ASC program's first investment vehicle in these capacity systems. It also represents a new strategy for quickly building, fielding and integrating many Linux clusters of various sizes into classified and unclassified production service through a concept of Scalable Units (SU). The programmatic objective is to dramatically reduce the overall Total Cost of Ownership (TCO) of these 'capacity' systems relative to the best practices in Linux Cluster deployments today. This objective only makes sense in the context of these systems quickly becoming very robust and useful production clusters under the crushing load that will be inflicted on them by the ASC and SSP scientific simulation capacity workload.
ERIC Educational Resources Information Center
Cowan, James; Goldhaber, Dan
2014-01-01
We study a popular dual enrollment program in Washington State, "Running Start" using a new administrative database that links high school and postsecondary data. Conditional on prior high school performance, we find that students participating in Running Start are more likely to attend any college but less likely to attend four-year…
Parallel programming of industrial applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heroux, M; Koniges, A; Simon, H
1998-07-21
In the introductory material, we overview the typical MPP environment for real application computing and the special tools available such as parallel debuggers and performance analyzers. Next, we draw from a series of real applications codes and discuss the specific challenges and problems that are encountered in parallelizing these individual applications. The application areas drawn from include biomedical sciences, materials processing and design, plasma and fluid dynamics, and others. We show how it was possible to get a particular application to run efficiently and what steps were necessary. Finally we end with a summary of the lessons learned from these applications and predictions for the future of industrial parallel computing. This tutorial is based on material from a forthcoming book entitled: "Industrial Strength Parallel Computing" to be published by Morgan Kaufmann Publishers (ISBN 1-55860-54).
Experimental particle physics research at Texas Tech University
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akchurin, Nural; Lee, Sung-Won; Volobouev, Igor
The high energy physics group at Texas Tech University (TTU) concentrates its research efforts on the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) and on generic detector R&D for future applications. Our research programs have been continuously supported by the US Department of Energy for over two decades, and this final report summarizes our achievements during the last grant period from May 1, 2012 to March 31, 2016. After having completed the Run 1 data analyses from the CMS detector, including the discovery of the Higgs boson in July 2012, we concentrated on commissioning the CMS hadron calorimeter (HCAL) for Run 2, performing analyses of Run 2 data, and making initial studies and plans for the second phase of upgrades in CMS. Our research has primarily focused on searches for Beyond Standard Model (BSM) physics via dijets, monophotons, and monojets. We also made significant contributions to the analyses of the semileptonic Higgs decays and Standard Model (SM) measurements in Run 1. Our work on the operations of the CMS detector, especially the performance monitoring of the HCAL in Run 1, was indispensable to the experiment. Our team members, holding leadership positions in HCAL, have played key roles in the R&D, construction, and commissioning of these detectors in the last decade. We also maintained an active program in jet studies that builds on our expertise in calorimetry and algorithm development. In Run 2, we extended some of our analyses at 8 TeV to 13 TeV, and we also started to investigate new territory, e.g., dark matter searches with unexplored signatures. The objective of dual-readout calorimetry R&D was intended to explore (and, if possible, eliminate) the obstacles that prevent calorimetric detection of hadrons and jets with a comparable level of precision as we have grown accustomed to for electrons and photons. The initial prototype detector was successfully tested at the SPS/CERN in 2003-2004 and evolved over the last decade. In 2012-2015, several other prototypes were built to further reduce leakage fluctuations, improve Cherenkov light yield, increase fiber attenuation length, and other related phenomena. During this grant period, we graduated two students with Ph.D. degrees, and five undergraduate students from our labs went on to prestigious graduate programs in the US and Europe. Also, the TTU HEP team has participated in the QuarkNet program every year since 2001. We are dedicated to working with area teachers and students at all levels and to training the next generation of scientists. Over 20 high school teachers have participated in our program since its inception.
Synthetic Proxy Infrastructure for Task Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Pavel, Robert
The Synthetic Proxy Infrastructure for Task Evaluation is a proxy application designed to support application developers in gauging the performance of various task granularities when determining how best to utilize task based programming models. The infrastructure is designed to provide examples of common communication patterns with a synthetic workload intended to provide performance data to evaluate programming model and platform overheads for the purpose of determining task granularity for task decomposition purposes. This is presented as a reference implementation of a proxy application with run-time configurable input and output task dependencies ranging from an embarrassingly parallel scenario to patterns with stencil-like dependencies upon their nearest neighbors. Once all, if any, inputs are satisfied each task will execute a synthetic workload (a simple DGEMM in this case) of varying size and output all, if any, outputs to the next tasks. The intent is for this reference implementation to be implemented as a proxy app in different programming models so as to provide the same infrastructure and to allow for application developers to simulate their own communication needs to assist in task decomposition under various models on a given platform.
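The task pattern described above can be mocked up in a few lines: each task consumes whatever inputs its predecessors produced, performs a DGEMM of configurable size as synthetic work, and emits outputs for its successors. The sketch below is a plain-Python stand-in, not the reference implementation; a real evaluation would express the same graph in the task-based programming model being measured.

    # Plain-Python stand-in for the pattern described above: each task consumes its
    # (possibly empty) set of inputs, performs a DGEMM of configurable size as
    # synthetic work, and produces an output for its successors.
    import numpy as np

    def synthetic_task(inputs, n=256):
        """inputs: list of arrays from predecessor tasks (empty = embarrassingly parallel)."""
        a = np.random.rand(n, n)
        b = np.random.rand(n, n)
        work = a @ b                          # the synthetic DGEMM workload
        for dep in inputs:                    # touch inputs so the dependency is real
            work[0, 0] += dep[0, 0]
        return work

    # A three-stage chain (each task depends on the previous task's output):
    out = []
    for stage in range(3):
        out = [synthetic_task(out)]
    print(out[0].shape)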
McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P
2010-01-01
Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
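The antithetic-variates device credited with roughly halving the number of replications can be sketched in a few lines: draw the standard-normal inputs in pairs (z, -z) and average the paired outcomes. The example below uses a toy payoff function, not the UKPDS 68 outcomes equations, and Python rather than VBA or C++.

    # Sketch of antithetic variates: draw standard-normal inputs in pairs (z, -z)
    # and average the paired outcomes, which lowers the variance of the Monte Carlo
    # mean when the model is monotone in its inputs.  The payoff is a toy function,
    # not the UKPDS 68 model.
    import numpy as np

    def payoff(z):
        return np.exp(0.2 * z)          # stand-in for a QALY-type model output

    rng = np.random.default_rng(1)
    n_pairs = 50_000

    z = rng.standard_normal(n_pairs)
    plain = payoff(rng.standard_normal(2 * n_pairs))            # 2N independent draws
    antithetic = 0.5 * (payoff(z) + payoff(-z))                 # N antithetic pairs

    print("plain      mean %.4f  estimator variance %.2e" % (plain.mean(), plain.var() / plain.size))
    print("antithetic mean %.4f  estimator variance %.2e" % (antithetic.mean(), antithetic.var() / antithetic.size))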
RunJumpCode: An Educational Game for Educating Programming
ERIC Educational Resources Information Center
Hinds, Matthew; Baghaei, Nilufar; Ragon, Pedrito; Lambert, Jonathon; Rajakaruna, Tharindu; Houghton, Travers; Dacey, Simon
2017-01-01
Programming promotes critical thinking, problem solving and analytic skills through creating solutions that can solve everyday problems. However, learning programming can be a daunting experience for a lot of students. "RunJumpCode" is an educational 2D platformer video game, designed and developed in Unity, to teach players the…
DNASynth: a software application to optimization of artificial gene synthesis
NASA Astrophysics Data System (ADS)
Muczyński, Jan; Nowak, Robert M.
2017-08-01
DNASynth is a client-server software application in which the client runs in a web browser. The aim of this program is to support and optimize the process of synthesizing artificial genes using the Ligase Chain Reaction (LCR). Thanks to LCR it is possible to obtain a DNA strand coding for a user-defined peptide. The DNA sequence is calculated by an optimization algorithm that considers optimal codon usage, minimal energy of secondary structures and the minimal number of required LCRs. Additionally, the absence of sequences characteristic of a user-defined set of restriction enzymes is guaranteed. The presented software was tested on synthetic and real data.
SuperState: a computer program for the control of operant behavioral experimentation.
Zhang, Fuqiang
2006-09-15
Operant behavioral research requires precise control of experimental devices for delivering stimuli and monitoring behavioral responses. The author developed a software solution named SuperState for controlling hardware devices and running reinforcement schedules. The Microsoft Windows compatible software was written using the object-oriented programming language Borland Delphi 5.0, which simplified the programming of the application. SuperState is a stand-alone, easy-to-use, installation-free ('green') application, and the experimenter does not need to master any scripting languages. It features: (1) control of multiple operant cages running independent reinforcement schedules; (2) enough cage devices (16 digital inputs and 16 digital outputs for each cage) to suit the needs of most operant behavioral equipment; (3) control of most standard ISA-type digital interface cards, including Med-Associates Super-port cards and the PCI-type card AC6412, with high expandability to support other PCI-type interface cards; (4) high-resolution device control (1 ms); (5) a built-in real-time cumulative recorder; (6) extensive data analysis, including an event recorder, a cumulative recorder, and block analysis; the summarized results can be transferred easily to Microsoft Excel spreadsheets through the Clipboard.
Accelerating sino-atrium computer simulations with graphic processing units.
Zhang, Hong; Xiao, Zheng; Lin, Shien-fong
2015-01-01
Sino-atrial node cells (SANCs) play a significant role in rhythmic firing. To investigate their role in arrhythmia and interactions with the atrium, computer simulations based on cellular dynamic mathematical models are generally used. However, the large-scale computation usually makes research difficult, given the limited computational power of Central Processing Units (CPUs). In this paper, an accelerating approach with Graphic Processing Units (GPUs) is proposed in a simulation consisting of the SAN tissue and the adjoining atrium. By using the operator splitting method, the computational task was made parallel. Three parallelization strategies were then put forward. The strategy with the shortest running time was further optimized by considering block size, data transfer and partition. The results showed that for a simulation with 500 SANCs and 30 atrial cells, the execution time taken by the non-optimized program decreased 62% with respect to a serial program running on CPU. The execution time decreased by 80% after the program was optimized. The larger the tissue was, the more significant the acceleration became. The results demonstrated the effectiveness of the proposed GPU-accelerating methods and their promising applications in more complicated biological simulations.
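The operator-splitting step that exposes the parallelism can be sketched as follows: within each time step, every cell's local kinetics are advanced independently (the work that maps onto one GPU thread per cell), and the diffusive coupling is applied afterwards. The sketch below uses FitzHugh-Nagumo kinetics as a stand-in for the SANC and atrial cell models, and NumPy in place of the CUDA kernels.

    # Sketch of the operator-splitting idea described above: the local membrane
    # kinetics of every cell are advanced independently (the part that maps onto one
    # GPU thread per cell), then the diffusive coupling along the fibre is applied.
    # FitzHugh-Nagumo kinetics stand in for the SANC/atrial models.
    import numpy as np

    N, dt, D = 530, 0.05, 0.5          # cells, time step, coupling strength
    v = -1.0 * np.ones(N)              # membrane-like variable
    w = np.zeros(N)                    # recovery variable
    v[:10] = 1.0                       # perturb one end to start a wave

    def reaction_step(v, w):
        dv = v - v**3 / 3.0 - w + 0.5
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        return v + dt * dv, w + dt * dw

    def diffusion_step(v):
        lap = np.zeros_like(v)
        lap[1:-1] = v[:-2] - 2.0 * v[1:-1] + v[2:]   # interior second difference
        return v + dt * D * lap

    for _ in range(2000):
        v, w = reaction_step(v, w)     # independent per cell, hence parallel
        v = diffusion_step(v)          # nearest-neighbour coupling
    print(round(float(v.mean()), 3))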
Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study
Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...
2015-01-01
This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
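The change that removed the bottleneck, replacing a sequential running sum with a binary-tree (pairwise) reduction, is illustrated below in Python for brevity; the paper's own implementation uses Fortran 2008 coarrays, where each leaf of the tree lives on a separate image.

    # Illustration of the change described above: a sequential running sum versus a
    # binary-tree (pairwise) reduction.  Python is used for brevity; the paper's code
    # is Fortran coarrays, with each leaf held by a separate image.
    def sequential_sum(values):
        total = 0.0
        for x in values:               # O(n) dependent steps: the reported bottleneck
            total += x
        return total

    def tree_sum(values):
        """Pairwise reduction: O(log n) levels of independent pairwise additions."""
        vals = list(values)
        while len(vals) > 1:
            if len(vals) % 2:
                vals.append(0.0)
            vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
        return vals[0]

    data = [float(i) for i in range(1, 1001)]
    assert abs(sequential_sum(data) - tree_sum(data)) < 1e-9
    print(tree_sum(data))   # 500500.0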
Computational Methods for Feedback Controllers for Aerodynamics Flow Applications
2007-08-15
Iteration #, and y-translation by:
>> Fy=[unf(:,8);runA(:,8);runB(:,8);runC(:,8);runD(:,8);runE(:,8)];
>> Oy=[unf(:,23);runA(:,23);runB(:,23);runC(:,23);runD(:,23);runE(:,23)];
>> Iter=[unf(:,1);runA(:,1);runB(:,1);runC(:,1);runD(:,1);runE(:,1)];
>> plot(Fy)
(Cobalt version 4.0)
MCdevelop - a universal framework for Stochastic Simulations
NASA Astrophysics Data System (ADS)
Slawinska, M.; Jadach, S.
2011-03-01
We present MCdevelop, a universal computer framework for developing and exploiting the wide class of Stochastic Simulations (SS) software. This powerful universal SS software development tool has been derived from a series of scientific projects for precision calculations in high energy physics (HEP), which feature a wide range of functionality in the SS software needed for advanced precision Quantum Field Theory calculations for the past LEP experiments and for the ongoing LHC experiments at CERN, Geneva. MCdevelop is a "spin-off" product of HEP to be exploited in other areas, while it will still serve to develop new SS software for HEP experiments. Typically SS involve independent generation of large sets of random "events", often requiring considerable CPU power. Since SS jobs usually do not share memory it makes them easy to parallelize. The efficient development, testing and running in parallel SS software requires a convenient framework to develop software source code, deploy and monitor batch jobs, merge and analyse results from multiple parallel jobs, even before the production runs are terminated. Throughout the years of development of stochastic simulations for HEP, a sophisticated framework featuring all the above mentioned functionality has been implemented. MCdevelop represents its latest version, written mostly in C++ (GNU compiler gcc). It uses Autotools to build binaries (optionally managed within the KDevelop 3.5.3 Integrated Development Environment (IDE)). It uses the open-source ROOT package for histogramming, graphics and the mechanism of persistency for the C++ objects. MCdevelop helps to run multiple parallel jobs on any computer cluster with NQS-type batch system. Program summaryProgram title:MCdevelop Catalogue identifier: AEHW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 48 136 No. of bytes in distributed program, including test data, etc.: 355 698 Distribution format: tar.gz Programming language: ANSI C++ Computer: Any computer system or cluster with C++ compiler and UNIX-like operating system. Operating system: Most UNIX systems, Linux. The application programs were thoroughly tested under Ubuntu 7.04, 8.04 and CERN Scientific Linux 5. Has the code been vectorised or parallelised?: Tools (scripts) for optional parallelisation on a PC farm are included. RAM: 500 bytes Classification: 11.3 External routines: ROOT package version 5.0 or higher ( http://root.cern.ch/drupal/). Nature of problem: Developing any type of stochastic simulation program for high energy physics and other areas. Solution method: Object Oriented programming in C++ with added persistency mechanism, batch scripts for running on PC farms and Autotools.
ERIC Educational Resources Information Center
Sheehan, George A.
This book is both a personal and technical account of the experience of running by a heart specialist who began a running program at the age of 45. In its seventeen chapters, there is information presented on the spiritual, psychological, and physiological results of running; treatment of athletic injuries resulting from running; effects of diet…
Identification of Program Signatures from Cloud Computing System Telemetry Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, Nicole M.; Greaves, Mark T.; Smith, William P.
Malicious cloud computing activity can take many forms, including running unauthorized programs in a virtual environment. Detection of these malicious activities while preserving the privacy of the user is an important research challenge. Prior work has shown the potential viability of using cloud service billing metrics as a mechanism for proxy identification of malicious programs. Previously this novel detection method has been evaluated in a synthetic and isolated computational environment. In this paper we demonstrate the ability of billing metrics to identify programs, in an active cloud computing environment, including multiple virtual machines running on the same hypervisor. The open-source cloud computing platform OpenStack is used for private cloud management at Pacific Northwest National Laboratory. OpenStack provides a billing tool (Ceilometer) to collect system telemetry measurements. We identify four different programs running on four virtual machines under the same cloud user account. Programs were identified with up to 95% accuracy. This accuracy is dependent on the distinctiveness of telemetry measurements for the specific programs we tested. Future work will examine the scalability of this approach for a larger selection of programs to better understand the uniqueness needed to identify a program. Additionally, future work should address the separation of signatures when multiple programs are running on the same virtual machine.
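To make the idea of a telemetry-based "program signature" concrete, the sketch below matches a new telemetry observation against per-program centroids. The feature names, numbers, and nearest-centroid rule are illustrative assumptions, not the Ceilometer metrics or the classification method used in the study.

```python
# Hedged sketch of signature matching on telemetry feature vectors: each known
# program is summarized by the mean ("centroid") of its training measurements,
# and a new observation is attributed to the nearest centroid. All names and
# values below are invented for illustration.
import math

FEATURES = ["cpu_util", "disk_write_rate", "net_out_rate"]  # hypothetical metrics

def centroid(samples):
    return [sum(s[i] for s in samples) / len(samples) for i in range(len(FEATURES))]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(observation, signatures):
    """Return the program whose centroid is closest to the observation."""
    return min(signatures, key=lambda name: distance(observation, signatures[name]))

if __name__ == "__main__":
    training = {
        "crypto_miner": [[95.0, 1.0, 0.5], [92.0, 2.0, 0.7]],
        "web_server":   [[20.0, 5.0, 40.0], [25.0, 6.0, 42.0]],
    }
    signatures = {name: centroid(samples) for name, samples in training.items()}
    print(identify([90.0, 1.5, 0.6], signatures))  # -> crypto_miner
```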
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1991-01-01
The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications in a distributed processing environment. Autonomous computers cooperate efficiently in solving large scientific problems with this solution. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.
The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)
1997-01-01
Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore, exploit behavioral variations among/within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library which collects performance data; and a visualization tool-set which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.
Toward smartphone applications for geoparks information and interpretation systems in China
NASA Astrophysics Data System (ADS)
Li, Qian; Tian, Mingzhong; Li, Xingle; Shi, Yihua; Zhou, Xu
2015-11-01
Geopark information and interpretation systems are both necessary infrastructure in geopark planning and construction programs, and they are also essential for geoeducation and geoconservation in geopark tourism. The current state and development of information and interpretation systems in China's geoparks were presented and analyzed in this paper. Statistics showed that fewer than half of the geoparks ran websites, fewer still maintained databases, and less than one percent of all Internet/smartphone applications were used for geopark tourism. The results of our analysis indicated that smartphone applications in geopark information and interpretation systems would provide benefits such as accelerated geopark science popularization and education and facilitated interactive communication between geoparks and tourists.
Measurement and Modeling of Fugitive Dust from Off Road DoD Activities
2017-12-08
each soil and vehicle type (see Table 2). Note: no tracked vehicles were run at YTC. CT is the curve track sampling location, CR is the curve ridge... Soil SL = sandy loam. [List-of-figures excerpt: Figure 35, Single-event Wind Erosion Evaluation Program (SWEEP) Run example results; Figure 36, Single-event Wind Erosion Evaluation Program (SWEEP) Threshold Run example results screen.]
Effects of Physical Training in Military Populations: A Meta-Analytic Summary
2010-10-25
variation on standard training. The experiment introduced ability-group runs, stretching, movement drills, and calisthenics... The new program combined progressive calisthenics with movement exercises, interval running, and ability-group endurance runs. [Truncated table excerpt, Modified Calisthenics Program in Advanced Training outcomes (columns: Outcome, Gender, g, SE, ES, z, Sig): sit-ups, men g = .38, SE = .04, ES = .14, z = 3.45, Sig = .000; women g = .43, ...]
Image-Processing Software For A Hypercube Computer
NASA Technical Reports Server (NTRS)
Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.
1992-01-01
Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.
Application of Detailed Chemical Kinetics to Combustion Instability Modeling
2016-01-04
Application of Detailed Chemical Kinetics to Combustion Instability Modeling. Authors: Harvazinski, Matt; Talley, Doug; Sankaran, Venke (Air Force Research Laboratory, Edwards AFB, CA). Approved for public release; distribution unlimited. [Excerpt on prior work, kinetics used: simulations with 3D real geometry, unsteady, long run-times, and coupled physics.]
Computing Spacecraft Solar-Cell Damage by Charged Particles
NASA Technical Reports Server (NTRS)
Gaddy, Edward M.
2006-01-01
General EQFlux is a computer program that converts the measure of the damage done to solar cells in outer space by impingement of electrons and protons having many different kinetic energies into the measure of the damage done by an equivalent fluence of electrons, each having kinetic energy of 1 MeV. Prior to the development of General EQFlux, there was no single computer program offering this capability: For a given type of solar cell, it was necessary to either perform the calculations manually or to use one of three Fortran programs, each of which was applicable to only one type of solar cell. The problem in developing General EQFlux was to rewrite and combine the three programs into a single program that could perform the calculations for three types of solar cells and run in a Windows environment with a Windows graphical user interface. In comparison with the three prior programs, General EQFlux is easier to use.
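The conversion the abstract describes is, at its core, a weighted sum: each mono-energetic fluence is scaled by a damage coefficient relative to 1 MeV electrons and the weighted fluences are accumulated. The sketch below shows only that arithmetic; the coefficients are placeholders and do not come from General EQFlux's cell-specific tables.

```python
# Hedged sketch of the 1-MeV equivalent-fluence arithmetic: weight each fluence
# by a relative damage coefficient (damage per particle at that energy divided
# by damage per 1 MeV electron) and sum. Coefficients below are placeholders,
# not the tables General EQFlux actually uses.

def equivalent_1mev_fluence(spectrum, rel_damage_coeff):
    """spectrum: {energy_MeV: fluence}; rel_damage_coeff: {energy_MeV: coefficient}."""
    return sum(fluence * rel_damage_coeff[energy] for energy, fluence in spectrum.items())

if __name__ == "__main__":
    electron_spectrum = {0.5: 1.0e13, 1.0: 5.0e12, 2.0: 1.0e12}   # particles/cm^2
    coeffs = {0.5: 0.4, 1.0: 1.0, 2.0: 2.1}                       # illustrative only
    print(f"{equivalent_1mev_fluence(electron_spectrum, coeffs):.3e} 1-MeV e/cm^2")
```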
Highlights of X-Stack ExM Deliverable Swift/T
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wozniak, Justin M.
Swift/T is a key success from the ExM (System support for extreme-scale, many-task applications) X-Stack project, which proposed to use concurrent dataflow as an innovative programming model to exploit extreme parallelism in exascale computers. The Swift/T component of the project reimplemented the Swift language from scratch to allow applications that compose scientific modules together to be built and run on available petascale computers (Blue Gene, Cray). Swift/T does this via a new compiler and runtime that generates and executes the application as an MPI program. We assume that mission-critical emerging exascale applications will be composed as scalable applications using existing software components, connected by data dependencies. Developers wrap native code fragments using a higher-level language, then build composite applications to form a computational experiment. This exemplifies hierarchical concurrency: lower-level messaging libraries are used for fine-grained parallelism; high-level control is used for inter-task coordination. These patterns are best expressed with dataflow, but static DAGs (i.e., other workflow languages) limit the applications that can be built; they do not provide the expressiveness of Swift, such as conditional execution, iteration, and recursive functions.
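As a loose analogy for the dataflow composition described above (not Swift/T itself, which compiles such compositions into an MPI program), the Python sketch below runs leaf tasks concurrently and lets a downstream task fire only once its inputs exist; simulate() and analyze() are hypothetical stand-ins for wrapped native components.

```python
# Small dataflow-style sketch using concurrent.futures; an analogy only, with
# placeholder functions standing in for wrapped native science modules.
from concurrent.futures import ThreadPoolExecutor

def simulate(parameter):
    return parameter * parameter          # placeholder for a native science module

def analyze(results):
    return sum(results) / len(results)    # placeholder for a downstream module

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(simulate, p) for p in range(8)]  # fine-grained tasks
        values = [f.result() for f in futures]                  # data dependencies resolve here
        print(analyze(values))                                  # downstream task fires last
```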
Using Junior Achievement as a Vocational Option for Youth with Special Needs.
ERIC Educational Resources Information Center
Schoff, Patty
Junior Achievement (JA) offers high school students its traditional evening program, in which business advisors help students run their own mini-businesses. In 1980, JA offered this program to mentally, emotionally, and physically disabled students aged 16-21. The special needs component operates an in-class program where students run companies…
Teaching Evaluation: A Student-Run Consulting Firm
ERIC Educational Resources Information Center
Cundiff, Nicole; Nadler, Joel; Scribner, Shauna
2011-01-01
Applied Research Consultants (ARC) is a graduate student run consulting firm that provides experience to students in evaluation and consultation. An overview of this program has been compiled in order to serve as a model of a graduate training practicum that could be applied to similar programs or aid in the development of such programs. Key…
Decay of super-heavy particles: user guide of the SHdecay program
NASA Astrophysics Data System (ADS)
Barbot, C.
2004-02-01
I give here a detailed user guide for the C++ program SHdecay, which has been developed for computing the final spectra of stable particles (protons, photons, LSPs, electrons, neutrinos of the three species and their antiparticles) arising from the decay of a super-heavy X particle. It allows one to compute in great detail the complete decay cascade for any given decay mode into particles of the Minimal Supersymmetric Standard Model (MSSM). In particular, it takes into account all interactions of the MSSM during the perturbative cascade (including not only QCD, but also the electroweak and 3rd generation Yukawa interactions), and includes a detailed treatment of the SUSY decay cascade (for a given set of parameters) and of the non-perturbative hadronization process. All these features allow us to ensure energy conservation over the whole cascade up to a numerical accuracy of a few per mille. Yet, this program also allows one to restrict the computation to QCD or SUSY-QCD frameworks. I detail the input and output files, describe the role of each part of the program, and include some advice for using it best. Program summary: Title of program: SHdecay Catalogue identifier: ADSL Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSL Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer and operating system: Program tested on PC running Linux KDE and Suse 8.1 Programming language used: C with STL C++ library and using the standard gnu g++ compiler No. of lines in distributed program: 14 955 No. of bytes in distributed program, including test data, etc.: 624 487 Distribution format: tar gzip file Keywords: Super-heavy particles, fragmentation functions, DGLAP equations, supersymmetry, MSSM, UHECR Nature of physical problem: Obtaining the energy spectra of the final stable decay products (protons, photons, electrons, the three species of neutrinos and the LSPs) of a decaying super-heavy X particle, within the framework of the Minimal Supersymmetric Standard Model (MSSM). It can be done numerically by solving the full set of DGLAP equations in the MSSM for the perturbative evolution of the fragmentation functions D_{p1}^{p2}(x, Q) of any particle p1 into any other p2 (x is the energy fraction carried by the particle p2 and Q its virtuality), and by treating properly the different decay cascades of all unstable particles and the final hadronization of quarks and gluons. In order to obtain proper results at very low values of x (down to x ~ 10^-13), NLO color coherence effects have been included by using the Modified Leading Log Approximation (MLLA). Method of solution: The DGLAP equations are solved by a fourth-order Runge-Kutta method with a fixed step. Typical running time: Around 35 hours for the first run, but the most time consuming sub-programs can be run only once for most applications.
SSL - THE SIMPLE SOCKETS LIBRARY
NASA Technical Reports Server (NTRS)
Campbell, C. E.
1994-01-01
The Simple Sockets Library (SSL) allows C programmers to develop systems of cooperating programs using Berkeley streaming Sockets running under the TCP/IP protocol over Ethernet. The SSL provides a simple way to move information between programs running on the same or different machines and does so with little overhead. The SSL can create three types of Sockets: namely a server, a client, and an accept Socket. The SSL's Sockets are designed to be used in a fashion reminiscent of the use of FILE pointers so that a C programmer who is familiar with reading and writing files will immediately feel comfortable with reading and writing with Sockets. The SSL consists of three parts: the library, PortMaster, and utilities. The user of the SSL accesses it by linking programs to the SSL library. The PortMaster initializes connections between clients and servers. The PortMaster also supports a "firewall" facility to keep out socket requests from unapproved machines. The "firewall" is a file which contains Internet addresses for all approved machines. There are three utilities provided with the SSL. SKTDBG can be used to debug programs that make use of the SSL. SPMTABLE lists the servers and port numbers on requested machine(s). SRMSRVR tells the PortMaster to forcibly remove a server name from its list. The package also includes two example programs: multiskt.c, which makes multiple accepts on one server, and sktpoll.c, which repeatedly attempts to connect a client to some server at one second intervals. SSL is a machine independent library written in the C-language for computers connected via Ethernet using the TCP/IP protocol. It has been successfully compiled and implemented on a variety of platforms, including Sun series computers running SunOS, DEC VAX series computers running VMS, SGI computers running IRIX, DECstations running ULTRIX, DEC alpha AXPs running OSF/1, IBM RS/6000 computers running AIX, IBM PC and compatibles running BSD/386 UNIX and HP Apollo 3000/4000/9000/400T computers running HP-UX. SSL requires 45K of RAM to run under SunOS and 80K of RAM to run under VMS. For use on IBM PC series computers and compatibles running DOS, SSL requires Microsoft C 6.0 and the Wollongong TCP/IP package. Source code for sample programs and debugging tools are provided. The documentation is available on the distribution medium in TeX and PostScript formats. The standard distribution medium for SSL is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 5.25 inch 360K MS-DOS format diskette. The SSL was developed in 1992 and was updated in 1993.
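The SSL itself is a C library with its own PortMaster and FILE-pointer-style calls; none of that API is reproduced here. The Python sketch below only makes the described server/accept/client pattern and "read and write a socket like a file" usage concrete; the port number and message are arbitrary.

```python
# Python analogue of the server/client/accept pattern described above; not the
# SSL's C API. Port 5050 and the echoed message are arbitrary choices.
import socket
import threading
import time

def server(port=5050):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()                     # the "accept" socket
        with conn, conn.makefile("rw") as stream:  # file-like reads and writes
            line = stream.readline()
            stream.write("echo: " + line)
            stream.flush()

def client(port=5050):
    with socket.create_connection(("127.0.0.1", port)) as sock:
        with sock.makefile("rw") as stream:
            stream.write("hello over TCP/IP\n")
            stream.flush()
            print(stream.readline().strip())       # -> echo: hello over TCP/IP

if __name__ == "__main__":
    t = threading.Thread(target=server)
    t.start()
    time.sleep(0.2)   # crude way to let the server reach accept() first
    client()
    t.join()
```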
Feechan, Angela; Kocsis, Marianna; Riaz, Summaira; Zhang, Wei; Gadoury, David M; Walker, M Andrew; Dry, Ian B; Reisch, Bruce; Cadle-Davidson, Lance
2015-08-01
The Toll/interleukin-1 receptor nucleotide-binding site leucine-rich repeat gene, "resistance to Uncinula necator 1" (RUN1), from Vitis rotundifolia was recently identified and confirmed to confer resistance to the grapevine powdery mildew fungus Erysiphe necator (syn. U. necator) in transgenic V. vinifera cultivars. However, sporulating powdery mildew colonies and cleistothecia of the heterothallic pathogen have been found on introgression lines containing the RUN1 locus growing in New York (NY). Two E. necator isolates collected from RUN1 vines were designated NY1-131 and NY1-137 and were used in this study to inform a strategy for durable RUN1 deployment. In order to achieve this, fitness parameters of NY1-131 and NY1-137 were quantified relative to powdery mildew isolates collected from V. rotundifolia and V. vinifera on vines containing alleles of the powdery mildew resistance genes RUN1, RUN2, or REN2. The results clearly demonstrate the race specificity of RUN1, RUN2, and REN2 resistance alleles, all of which exhibit programmed cell death (PCD)-mediated resistance. The NY1 isolates investigated were found to have an intermediate virulence on RUN1 vines, although this may be allele specific, while the Musc4 isolate collected from V. rotundifolia was virulent on all RUN1 vines. Another powdery mildew resistance locus, RUN2, was previously mapped in different V. rotundifolia genotypes, and two alleles (RUN2.1 and RUN2.2) were identified. The RUN2.1 allele was found to provide PCD-mediated resistance to both an NY1 isolate and Musc4. Importantly, REN2 vines were resistant to the NY1 isolates and RUN1REN2 vines combining both genes displayed additional resistance. Based on these results, RUN1-mediated resistance in grapevine may be enhanced by pyramiding with RUN2.1 or REN2; however, naturally occurring isolates in North America display some virulence on vines with these resistance genes. The characterization of additional resistance sources is needed to identify resistance gene combinations that will further enhance durability. For the resistance gene combinations currently available, we recommend using complementary management strategies, including fungicide application, to reduce populations of virulent isolates.
Program Synthesizes UML Sequence Diagrams
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2006-01-01
A computer program called "Rational Sequence" generates Universal Modeling Language (UML) sequence diagrams of a target Java program running on a Java virtual machine (JVM). Rational Sequence thereby performs a reverse engineering function that aids in the design documentation of the target Java program. Whereas previously, the construction of sequence diagrams was a tedious manual process, Rational Sequence generates UML sequence diagrams automatically from the running Java code.
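Rational Sequence instruments a running JVM, which is not shown here. As a loose Python analogue of the underlying idea, the sketch below observes calls at run time and emits sequence-diagram-style arrows in a PlantUML-like text form; all function names are invented for illustration.

```python
# Loose analogue only: trace calls at run time and print "caller -> callee"
# arrows, the raw material of a sequence diagram. Not taken from Rational Sequence.
import sys

events = []

def tracer(frame, event, arg):
    if event == "call":
        caller = frame.f_back.f_code.co_name if frame.f_back else "<top>"
        events.append(f"{caller} -> {frame.f_code.co_name}")
    return tracer

def fetch(record_id):
    return {"id": record_id}

def render(record):
    return f"record {record['id']}"

def handle_request(record_id):
    return render(fetch(record_id))

if __name__ == "__main__":
    sys.settrace(tracer)
    handle_request(42)
    sys.settrace(None)
    print("\n".join(events))
    # <module> -> handle_request
    # handle_request -> fetch
    # handle_request -> render
```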
Comeau, Donald C.; Liu, Haibin; Islamaj Doğan, Rezarta; Wilbur, W. John
2014-01-01
BioC is a new format and associated code libraries for sharing text and annotations. We have implemented BioC natural language preprocessing pipelines in two popular programming languages: C++ and Java. The current implementations interface with the well-known MedPost and Stanford natural language processing tool sets. The pipeline functionality includes sentence segmentation, tokenization, part-of-speech tagging, lemmatization and sentence parsing. These pipelines can be easily integrated along with other BioC programs into any BioC compliant text mining systems. As an application, we converted the NCBI disease corpus to BioC format, and the pipelines have successfully run on this corpus to demonstrate their functionality. Code and data can be downloaded from http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net PMID:24935050
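The actual BioC pipelines wrap MedPost and the Stanford tools and exchange data in BioC XML; none of that is reproduced here. The toy Python sketch below only illustrates the first two stages named in the abstract (sentence segmentation and tokenization) with plain regular expressions.

```python
# Toy sketch of sentence segmentation and tokenization; illustrative only,
# not the BioC, MedPost, or Stanford implementations.
import re

def split_sentences(text):
    # naive rule: split after ., ! or ? followed by whitespace and a capital letter
    return re.split(r"(?<=[.!?])\s+(?=[A-Z])", text.strip())

def tokenize(sentence):
    # words and numbers, or single punctuation marks
    return re.findall(r"\w+|[^\w\s]", sentence)

if __name__ == "__main__":
    doc = "BioC is a new format. It ships C++ and Java pipelines."
    for sent in split_sentences(doc):
        print(tokenize(sent))
```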
MHSS: a material handling system simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pomernacki, L.; Hollstien, R.B.
1976-04-07
A Material Handling System Simulator (MHSS) program is described that provides specialized functional blocks for modeling and simulation of nuclear material handling systems. Models of nuclear fuel fabrication plants may be built using functional blocks that simulate material receiving, storage, transport, inventory, processing, and shipping operations as well as the control and reporting tasks of operators or on-line computers. Blocks are also provided that allow the user to observe and gather statistical information on the dynamic behavior of simulated plants over single or replicated runs. Although it is currently being developed for the nuclear materials handling application, MHSS can be adapted to other industries in which material accountability is important. In this paper, emphasis is on the simulation methodology of the MHSS program with application to the nuclear material safeguards problem. (auth)
Framework for architecture-independent run-time reconfigurable applications
NASA Astrophysics Data System (ADS)
Lehn, David I.; Hudson, Rhett D.; Athanas, Peter M.
2000-10-01
Configurable Computing Machines (CCMs) have emerged as a technology with the computational benefits of custom ASICs as well as the flexibility and reconfigurability of general-purpose microprocessors. Significant effort from the research community has focused on techniques to move this reconfigurability from a rapid application development tool to a run-time tool. This requires the ability to change the hardware design while the application is executing and is known as Run-Time Reconfiguration (RTR). Widespread acceptance of run-time reconfigurable custom computing depends upon the existence of high-level automated design tools. Such tools must reduce the designer's effort to port applications between different platforms as the architecture, hardware, and software evolve. A Java implementation of a high-level application framework, called Janus, is presented here. In this environment, developers create Java classes that describe the structural behavior of an application. The framework allows hardware and software modules to be freely mixed and interchanged. A compilation phase of the development process analyzes the structure of the application and adapts it to the target platform. Janus is capable of structuring the run-time behavior of an application to take advantage of the memory and computational resources available.
Modal analysis and dynamic stresses for acoustically excited shuttle insulation tiles
NASA Technical Reports Server (NTRS)
Ojalvo, I. U.; Ogilvie, P. L.
1975-01-01
Improvements and extensions to the RESIST computer program developed for determining the normalized modal stress response of shuttle insulation tiles are described. The new version of RESIST can accommodate primary structure panels with closed-cell stringers, in addition to the capability for treating open-cell stringers. In addition, the present version of RESIST numerically solves vibration problems several times faster than its predecessor. A new digital computer program, titled ARREST (Acoustic Response of Reusable Shuttle Tiles) is also described. Starting with modal information contained on output tapes from RESIST computer runs, ARREST determines RMS stresses, deflections and accelerations of shuttle panels with reusable surface insulation tiles. Both programs are applicable to stringer stiffened structural panels with or without reusable surface insulation tiles.
Test Generator for MATLAB Simulations
NASA Technical Reports Server (NTRS)
Henry, Joel
2011-01-01
MATLAB Automated Test Tool, version 3.0 (MATT 3.0) is a software package that provides automated tools that reduce the time needed for extensive testing of simulation models that have been constructed in the MATLAB programming language by use of the Simulink and Real-Time Workshop programs. MATT 3.0 runs on top of the MATLAB engine application-program interface to communicate with the Simulink engine. MATT 3.0 automatically generates source code from the models, generates custom input data for testing both the models and the source code, and generates graphs and other presentations that facilitate comparison of the outputs of the models and the source code for the same input data. Context-sensitive and fully searchable help is provided in HyperText Markup Language (HTML) format.
Implementation of a multi-threaded framework for large-scale scientific applications
Sexton-Kennedy, E.; Gartung, Patrick; Jones, C. D.; ...
2015-05-22
The CMS experiment has recently completed the development of a multi-threaded capable application framework. In this paper, we will discuss the design, implementation and application of this framework to production applications in CMS. For the 2015 LHC run, this functionality is particularly critical for both our online and offline production applications, which depend on faster turn-around times and a reduced memory footprint relative to before. These applications are complex codes, each including a large number of physics-driven algorithms. While the framework is capable of running a mix of thread-safe and 'legacy' modules, algorithms running in our production applications need to be thread-safe for optimal use of this multi-threaded framework at a large scale. Towards this end, we discuss the types of changes that were necessary for our algorithms to achieve good performance of our multithreaded applications in a full-scale application. Lastly, performance numbers for what has been achieved for the 2015 run are presented.
The Katydid system for compiling KEE applications to Ada
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Bock, Conrad; Feldman, Roy
1990-01-01
Components of a system known as Katydid are developed in an effort to compile knowledge-based systems developed in a multimechanism integrated environment (KEE) to Ada. The Katydid core is an Ada library supporting KEE object functionality, and the other elements include a rule compiler, a LISP-to-Ada translator, and a knowledge-base dumper. Katydid employs translation mechanisms that convert LISP knowledge structures and rules to Ada and utilizes basic prototypes of a run-time KEE object-structure library module for Ada. Preliminary results include the semiautomatic compilation of portions of a simple expert system to run in an Ada environment with the described algorithms. It is suggested that Ada can be employed for AI programming and implementation, and the Katydid system is being developed to include concurrency and synchronization mechanisms.
Pánek, J; Vohradský, J
1997-06-01
The principal motivation was to design an environment for the development of image-analysis applications which would allow the integration of independent modules into one frame and make available tools for their build-up, running, management and mutual communication. The system was designed as modular, consisting of the core and work modules. The system core focuses on overall management and provides a library of classes for build-up of the work modules, their user interface and data communication. The work modules carry practical implementation of algorithms and data structures for the solution of a particular problem, and were implemented as dynamic-link libraries. They are mutually independent and run as individual threads, communicating with each other via a unified mechanism. The environment was designed to simplify the development and testing of new algorithms or applications. An example of implementation for the particular problem of the analysis of two-dimensional (2D) gel electrophoretograms is presented. The environment was designed for the Windows NT operating system with the use of Microsoft Foundation Class Library employing the possibilities of C++ programming language. Available on request from the authors.
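The system described above was built on Windows NT with MFC, with work modules compiled as dynamic-link libraries. Purely as an architectural sketch, and in Python rather than C++, the example below shows the same pattern: independent work modules running as threads and exchanging messages through one unified channel; the module names and data are invented.

```python
# Architectural sketch only: work modules as threads communicating through a
# single queue, mirroring the role of the core's unified communication mechanism.
import queue
import threading

bus = queue.Queue()  # stand-in for the unified communication mechanism

def segmentation_module(image_name):
    bus.put(("spots", [(10, 12), (40, 44)]))        # pretend spot coordinates

def quantification_module():
    kind, spots = bus.get()                          # waits for the segmentation result
    bus.put(("report", f"{len(spots)} spots quantified"))

if __name__ == "__main__":
    workers = [
        threading.Thread(target=segmentation_module, args=("gel_image",)),
        threading.Thread(target=quantification_module),
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    kind, payload = bus.get()
    print(kind, payload)   # -> report 2 spots quantified
```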
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft Visual Basic for Applications and implemented as a macro in Microsoft Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
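For readers unfamiliar with the kind of statistic involved, the sketch below (in Python rather than EFASC's Visual Basic for Applications) computes a mean flow and a minimum 7-day average from a daily series; the flow values are invented, whereas real input would come from the program's text or worksheet files.

```python
# Illustrative daily-streamflow statistics, not EFASC's actual algorithms.

def mean(values):
    return sum(values) / len(values)

def min_n_day_average(flows, n=7):
    """Minimum of the running n-day averages of a daily flow series."""
    return min(mean(flows[i:i + n]) for i in range(len(flows) - n + 1))

if __name__ == "__main__":
    daily_flows = [12.0, 11.5, 10.8, 9.9, 9.5, 9.2, 9.0, 9.4, 10.1, 11.0]  # made-up cfs values
    print("mean flow:", round(mean(daily_flows), 2))
    print("7-day minimum:", round(min_n_day_average(daily_flows), 2))
```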
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seyong; Vetter, Jeffrey S
Computer architecture experts expect that non-volatile memory (NVM) hierarchies will play a more significant role in future systems including mobile, enterprise, and HPC architectures. With this expectation in mind, we present NVL-C: a novel programming system that facilitates the efficient and correct programming of NVM main memory systems. The NVL-C programming abstraction extends C with a small set of intuitive language features that target NVM main memory, and can be combined directly with traditional C memory model features for DRAM. We have designed these new features to enable compiler analyses and run-time checks that can improve performance and guard against a number of subtle programming errors, which, when left uncorrected, can corrupt NVM-stored data. Moreover, to enable recovery of data across application or system failures, these NVL-C features include a flexible directive for specifying NVM transactions. So that our implementation might be extended to other compiler front ends and languages, the majority of our compiler analyses are implemented in an extended version of LLVM's intermediate representation (LLVM IR). We evaluate NVL-C on a number of applications to show its flexibility, performance, and correctness.
Physics Incubator at Kansas State University
NASA Astrophysics Data System (ADS)
Flanders, Bret; Chakrabarti, Amitabha
Funded by a major private endowment, the physics department at Kansas State University has recently started a physics incubator program that provides support to research projects with a high probability of commercial application. Some examples of these projects will be discussed in this talk. In a parallel effort, undergraduate physics majors and graduate students are being encouraged to work with our business school to earn an Entrepreneurship minor and a certification in Entrepreneurship. We will discuss how these efforts are promoting a ``culture change'' in the department. We will also discuss the advantages and the difficulties in running such a program in a Midwest college town.
The PyRosetta Toolkit: a graphical user interface for the Rosetta software suite.
Adolf-Bryfogle, Jared; Dunbrack, Roland L
2013-01-01
The Rosetta Molecular Modeling suite is a command-line-only collection of applications that enable high-resolution modeling and design of proteins and other molecules. Although extremely useful, Rosetta can be difficult to learn for scientists with little computational or programming experience. To that end, we have created a Graphical User Interface (GUI) for Rosetta, called the PyRosetta Toolkit, for creating and running protocols in Rosetta for common molecular modeling and protein design tasks and for analyzing the results of Rosetta calculations. The program is highly extensible so that developers can add new protocols and analysis tools to the PyRosetta Toolkit GUI.
Physical Activity and Energy Expenditure during an After-School Running Club: Laps versus Game Play
ERIC Educational Resources Information Center
Kahan, David; McKenzie, Thomas L.
2018-01-01
Background: After-school programs (ASPs) have the potential to contribute to student physical activity (PA), but there is limited empirical evidence to guide program development and implementation. Methods: We used pedometry to assess the overall effectiveness of an elementary school ASP running program relative to national and state PA…
The Long-Run Effect of a Tax-Rebate Program
ERIC Educational Resources Information Center
Wang, Yuntong; Kasper, Hirschel
2007-01-01
In each period of a dynamic tax-rebate program, a (fixed) quantity tax is imposed on each unit of a given good, and the tax revenue is rebated back to the consumer in the next period. The program lasts for an infinite number of periods. The author considers a representative consumer's dynamic consumption behavior, the long-run steady-state…
World Perspective Case Descriptions on Educational Programs for Adults: Hong Kong.
ERIC Educational Resources Information Center
Mak, Grace
Adult basic education (ABE) in Hong Kong includes mostly basic Chinese, but also some arithmetic and English. The emphasis is on teaching learners life skills. Both government-run programs and partially government-subsidized programs run by voluntary agencies such as Caritas and the YMCA are common. A case study was made of the Caritas ABE Centre…
Automating spectral measurements
NASA Astrophysics Data System (ADS)
Goldstein, Fred T.
2008-09-01
This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
PYROLASER - PYROLASER OPTICAL PYROMETER OPERATING SYSTEM
NASA Technical Reports Server (NTRS)
Roberts, F. E.
1994-01-01
The PYROLASER package is an operating system for the Pyrometer Instrument Company's Pyrolaser. There are 6 individual programs in the PYROLASER package: two main programs, two lower level subprograms, and two programs which, although independent, function predominantly as macros. The package provides a quick and easy way to setup, control, and program a standard Pyrolaser. Temperature and emissivity measurements may be either collected as if the Pyrolaser were in the manual operations mode, or displayed on real time strip charts and stored in standard spreadsheet format for post-test analysis. A shell is supplied to allow macros, which are test-specific, to be easily added to the system. The Pyrolaser Simple Operation program provides full on-screen remote operation capabilities, thus allowing the user to operate the Pyrolaser from the computer just as it would be operated manually. The Pyrolaser Simple Operation program also allows the use of "quick starts". Quick starts provide an easy way to permit routines to be used as setup macros for specific applications or tests. The specific procedures required for a test may be ordered in a sequence structure and then the sequence structure can be started with a simple button in the cluster structure provided. One quick start macro is provided for continuous Pyrolaser operation. A subprogram, Display Continuous Pyr Data, is used to display and store the resulting data output. Using this macro, the system is set up for continuous operation and the subprogram is called to display the data in real time on strip charts. The data is simultaneously stored in a spreadsheet format. The resulting spreadsheet file can be opened in any one of a number of commercially available spreadsheet programs. The Read Continuous Pyrometer program is provided as a continuously run subprogram for incorporation of the Pyrolaser software into a process control or feedback control scheme in a multi-component system. The program requires the Pyrolaser to be set up using the Pyrometer String Transfer macro. It requires no inputs and provides temperature and emissivity as outputs. The Read Continuous Pyrometer program can be run continuously and the data can be sampled as often or as seldom as updates of temperature and emissivity are required. PYROLASER is written using the Labview software for use on Macintosh series computers running System 6.0.3 or later, Sun Sparc series computers running OpenWindows 3.0 or MIT's X Window System (X11R4 or X11R5), and IBM PC or compatibles running Microsoft Windows 3.1 or later. Labview requires a minimum of 5Mb of RAM on a Macintosh, 24Mb of RAM on a Sun, and 8Mb of RAM on an IBM PC or compatible. The Labview software is a product of National Instruments (Austin,TX; 800-433-3488), and is not included with this program. The standard distribution medium for PYROLASER is a 3.5 inch 800K Macintosh format diskette. It is also available on a 3.5 inch 720K MS-DOS format diskette, a 3.5 inch diskette in UNIX tar format, and a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation in Macintosh WordPerfect version 2.0.4 format is included on the distribution medium. Printed documentation is included in the price of the program. PYROLASER was developed in 1992.
Astronaut John Glenn running as part of physical training program
NASA Technical Reports Server (NTRS)
1964-01-01
Astronaut John H. Glenn Jr., pilot of the Mercury-Atlas 6 mission, participates in a strict physical training program, as he exemplifies by frequent running. Here he pauses during an exercise period on the beach near Cape Canaveral, Florida.
Ferrauti, Alexander; Bergermann, Matthias; Fernandez-Fernandez, Jaime
2010-10-01
The purpose of this study was to investigate the effects of a concurrent strength and endurance training program on running performance and running economy of middle-aged runners during their marathon preparation. Twenty-two (8 women and 14 men) recreational runners (mean ± SD: age 40.0 ± 11.7 years; body mass index 22.6 ± 2.1 kg·m⁻²) were separated into 2 groups (n = 11; combined endurance running and strength training program [ES]: 9 men, 2 women and endurance running [E]: 7 men, and 4 women). Both completed an 8-week intervention period that consisted of either endurance training (E: 276 ± 108 minute running per week) or a combined endurance and strength training program (ES: 240 ± 121-minute running plus 2 strength training sessions per week [120 minutes]). Strength training was focused on trunk (strength endurance program) and leg muscles (high-intensity program). Before and after the intervention, subjects completed an incremental treadmill run and maximal isometric strength tests. The initial values for VO2peak (ES: 52.0 ± 6.1 vs. E: 51.1 ± 7.5 ml·kg⁻¹·min⁻¹) and anaerobic threshold (ES: 3.5 ± 0.4 vs. E: 3.4 ± 0.5 m·s⁻¹) were identical in both groups. A significant time × intervention effect was found for maximal isometric force of knee extension (ES: from 4.6 ± 1.4 to 6.2 ± 1.0 N·kg⁻¹, p < 0.01), whereas no changes in body mass occurred. No significant differences between the groups and no significant interaction (time × intervention) were found for VO2 (absolute and relative to VO2peak) at defined marathon running velocities (2.4 and 2.8 m·s⁻¹) and submaximal blood lactate thresholds (2.0, 3.0, and 4.0 mmol·L⁻¹). Stride length and stride frequency also remained unchanged. The results suggest no benefits of an 8-week concurrent strength training for running economy and coordination of recreational marathon runners despite a clear improvement in leg strength, maybe because of an insufficient sample size or a short intervention period.
A wirelessly programmable actuation and sensing system for structural health monitoring
NASA Astrophysics Data System (ADS)
Long, James; Büyüköztürk, Oral
2016-04-01
Wireless sensor networks promise to deliver low cost, low power and massively distributed systems for structural health monitoring. A key component of these systems, particularly when sampling rates are high, is the capability to process data within the network. Although progress has been made towards this vision, it remains a difficult task to develop and program 'smart' wireless sensing applications. In this paper we present a system which allows data acquisition and computational tasks to be specified in Python, a high level programming language, and executed within the sensor network. Key features of this system include the ability to execute custom application code without firmware updates, to run multiple users' requests concurrently and to conserve power through adjustable sleep settings. Specific examples of sensor node tasks are given to demonstrate the features of this system in the context of structural health monitoring. The system comprises of individual firmware for nodes in the wireless sensor network, and a gateway server and web application through which users can remotely submit their requests.
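Since the system above lets users write node-side tasks in Python, the sketch below shows what such a task might look like: acquire a short record, reduce it to a scalar on the node, and return only the summary. The acquire() stub and the returned dictionary format are assumptions standing in for the platform's real acquisition and gateway-submission APIs, which are not reproduced here.

```python
# Hedged sketch of an in-network sensing task; acquire() is a stand-in for the
# node's real data-acquisition call and returns synthetic samples.
import math
import random

def acquire(num_samples=1024, rate_hz=500.0):
    """Stand-in for the node's acquisition API: returns synthetic acceleration samples."""
    return [0.01 * random.gauss(0.0, 1.0) for _ in range(num_samples)]

def task():
    samples = acquire()
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))   # reduce on the node
    return {"metric": "rms_acceleration_g", "value": rms}

if __name__ == "__main__":
    print(task())   # in the real system this summary would go back to the gateway
```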
Home Energy Management System - VOLTTRON Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zandi, Helia
In most Home Energy Management Systems (HEMS) available in the market, different devices running different communication protocols cannot interact with each other and exchange information. As a result of this integration, the information about different devices running different communication protocols can be made accessible to other agents and devices running on the VOLTTRON platform. The integration process can be used by any HEMS available in the market regardless of the programming language they use. If the existing HEMS provides an Application Programming Interface (API) based on the RESTful architecture, that API can be used for integration. Our candidate HEMS in this project is home-assistant (Hass). An agent is implemented which can communicate with the Hass API and receives information about the devices loaded on the API. The agent publishes the information it receives on the VOLTTRON message bus so other agents can have access to this information. On the other side, for each type of device, an agent is implemented, such as a Climate Agent, Lock Agent, Switch Agent, Light Agent, etc. Each of these agents is subscribed to the messages published on the message bus about its associated devices. These agents can also change the status of the devices by sending appropriate service calls to the API. Other agents and services on the platform can also access this information and coordinate their decision-making process based on this information.
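The sketch below illustrates the polling side of this pattern: query the HEMS REST API for device states and hand each state to a publish callback. The /api/states path and bearer-token header follow the Home Assistant REST API convention but should be treated as assumptions here, the URL and token are placeholders, and publish() stands in for the VOLTTRON message-bus call rather than reproducing it.

```python
# Hedged sketch of polling a Home Assistant-style REST API and republishing
# device states; endpoint, token, and publish() are assumptions/placeholders.
import json
import urllib.request

HASS_URL = "http://localhost:8123"      # assumed Home Assistant address
TOKEN = "LONG_LIVED_ACCESS_TOKEN"       # placeholder credential

def fetch_states():
    req = urllib.request.Request(
        HASS_URL + "/api/states",
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def publish(topic, message):
    print(topic, json.dumps(message))   # stand-in for publishing on the VOLTTRON bus

if __name__ == "__main__":
    for state in fetch_states():
        publish(f"devices/{state['entity_id']}", {"state": state["state"]})
```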
An Evaluation of an Ada Implementation of the Rete Algorithm for Embedded Flight Processors
1990-12-01
computers was desired. The VAX VMS operating system has many built-in methods for determining program performance (including VAX PCA), but these methods... overview of the target environment -- the MIL-STD-1750A VHSIC Avionic Modular Processor (VAMP), running under the Ada Avionics Real-Time Software (AARTS)... computers. MIL-STD-1750A, the Air Force's standard flight computer architecture, however, places severe constraints on applications software processing
Teleoperation experiments with a Utah/MIT hand and a VPL DataGlove
NASA Technical Reports Server (NTRS)
Clark, D.; Demmel, J.; Hong, J.; Lafferriere, Gerardo; Salkind, L.; Tan, X.
1989-01-01
A teleoperation system capable of controlling a Utah/MIT Dextrous Hand using a VPL DataGlove as a master is presented. Additionally the system is capable of running the dextrous hand in robotic (autonomous) mode as new programs are developed. The software and hardware architecture used is presented and the experiments performed are described. The communication and calibration issues involved are analyzed and applications to the analysis and development of automated dextrous manipulations are investigated.
2017-03-23
performance computing resources made available by the US Department of Defense High Performance Computing Modernization Program at the Air Force... [Author affiliation excerpt: Department of Defense Biotechnology High Performance Computing Software Applications Institute, Telemedicine and Advanced Technology Research Center, United States Army Medical Research and Materiel Command, Fort Detrick, Maryland, USA.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Buhl, Fred; Haves, Philip
2008-09-20
EnergyPlus is a new generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations integrating building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation simulation programs. This has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges of speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most amount of run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and adequate computing platforms. Suggestions of software code and architecture changes to improve EnergyPlus run time based on the code profiling results are also discussed.
RAPPORT: running scientific high-performance computing applications on the cloud.
Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt
2013-01-28
Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.
The Caltech Concurrent Computation Program - Project description
NASA Technical Reports Server (NTRS)
Fox, G.; Otto, S.; Lyzenga, G.; Rogstad, D.
1985-01-01
The Caltech Concurrent Computation Program, which studies basic issues in computational science, is described. The research builds on initial work where novel concurrent hardware, the necessary systems software to use it and twenty significant scientific implementations running on the initial 32, 64, and 128 node hypercube machines have been constructed. A major goal of the program will be to extend this work into new disciplines and more complex algorithms including general packages that decompose arbitrary problems in major application areas. New high-performance concurrent processors with up to 1024-nodes, over a gigabyte of memory and multigigaflop performance are being constructed. The implementations cover a wide range of problems in areas such as high energy and astrophysics, condensed matter, chemical reactions, plasma physics, applied mathematics, geophysics, simulation, CAD for VLSI, graphics and image processing. The products of the research program include the concurrent algorithms, hardware, systems software, and complete program implementations.
A simulation model for wind energy storage systems. Volume 1: Technical report
NASA Technical Reports Server (NTRS)
Warren, A. W.; Edsinger, R. W.; Chan, Y. K.
1977-01-01
A comprehensive computer program for the modeling of wind energy and storage systems utilizing any combination of five types of storage (pumped hydro, battery, thermal, flywheel and pneumatic) was developed. The level of detail of the Simulation Model for Wind Energy Storage (SIMWEST) is consistent with its role of evaluating the economic feasibility as well as the general performance of wind energy systems. The software package consists of two basic programs and a library of system, environmental, and load components. The first program is a precompiler which generates computer models (in FORTRAN) of complex wind source storage application systems, from user specifications using the respective library components. The second program provides the techno-economic system analysis with the respective I/O, the integration of systems dynamics, and the iteration for conveyance of variables. The SIMWEST program, as described, runs on UNIVAC 1100 series computers.
TIERRAS: A package to simulate high energy cosmic ray showers underground, underwater and under-ice
NASA Astrophysics Data System (ADS)
Tueros, Matías; Sciutto, Sergio
2010-02-01
In this paper we present TIERRAS, a Monte Carlo simulation program based on the well-known AIRES air shower simulations system that enables the propagation of particle cascades underground, providing a tool to study particles arriving underground from a primary cosmic ray on the atmosphere or to initiate cascades directly underground and propagate them, exiting into the atmosphere if necessary. We show several cross-checks of its results against CORSIKA, FLUKA, GEANT and ZHS simulations and we make some considerations regarding its possible use and limitations. The first results of full underground shower simulations are presented, as an example of the package capabilities. Program summary: Program title: TIERRAS for AIRES Catalogue identifier: AEFO_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 36 489 No. of bytes in distributed program, including test data, etc.: 3 261 669 Distribution format: tar.gz Programming language: Fortran 77 and C Computer: PC, Alpha, IBM, HP, Silicon Graphics and Sun workstations Operating system: Linux, DEC Unix, AIX, SunOS, Unix System V RAM: 22 Mbytes Classification: 1.1 External routines: TIERRAS requires AIRES 2.8.4 to be installed on the system. AIRES 2.8.4 can be downloaded from http://www.fisica.unlp.edu.ar/auger/aires/eg_AiresDownload.html. Nature of problem: Simulation of high and ultra high energy underground particle showers. Solution method: Modification of the AIRES 2.8.4 code to accommodate underground conditions. Restrictions: In AIRES some processes that are not statistically significant on the atmosphere are not simulated. In particular, it does not include muon photonuclear processes. This imposes a limitation on the application of this package to a depth of 1 km of standard rock (or 2.5 km of water equivalent). Neutrinos are not tracked on the simulation, but their energy is taken into account in decays. Running time: A TIERRAS for AIRES run of a 10 eV shower with statistical sampling (thinning) below 10 eV and 0.2 weight factor (see [1]) uses approximately 1 h of CPU time on an Intel Core 2 Quad Q6600 at 2.4 GHz. It uses only one core, so 4 simultaneous simulations can be run on this computer. Aires includes a spooling system to run several simultaneous jobs of any type. References: S. Sciutto, AIRES 2.6 User Manual, http://www.fisica.unlp.edu.ar/auger/aires/.
CoMD Implementation Suite in Emerging Programming Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haque, Riyaz; Reeve, Sam; Juallmes, Luc
CoMD-Em is a software implementation suite of the CoMD [4] proxy app using different emerging programming models. It is intended to analyze the features and capabilities of novel programming models that could help ensure code and performance portability and scalability across heterogeneous platforms while improving programmer productivity. Another goal is to provide the authors and vendors with some meaningful feedback regarding the capabilities and limitations of their models. The actual application is a classical molecular dynamics (MD) simulation using either the Lennard-Jones method (LJ) or the embedded atom method (EAM) for primary particle interaction. The code can be extended to support alternate interaction models. The code is expected to run on a wide class of heterogeneous hardware configurations like shared/distributed/hybrid memory, GPUs, and any other platform supported by the underlying programming model.
Enhanced TCAS 2/CDTI traffic Sensor digital simulation model and program description
NASA Technical Reports Server (NTRS)
Goka, T.
1984-01-01
Digital simulation models of enhanced TCAS 2/CDTI traffic sensors are developed, based on actual or projected operational and performance characteristics. Two enhanced Traffic (or Threat) Alert and Collision Avoidance Systems are considered. A digital simulation program is developed in FORTRAN. The program contains an executive with a semireal-time batch processing capability. The simulation program can be interfaced with other modules with minimal requirements. Both the traffic sensor and CAS logic modules are validated by means of extensive simulation runs. Selected validation cases are discussed in detail, and capabilities and limitations of the actual and simulated systems are noted. Although the TCAS systems are not specifically intended for Cockpit Display of Traffic Information (CDTI) applications, they are sufficiently general to allow implementation of CDTI functions within the real systems' constraints.
Graphical user interface for image acquisition and processing
Goldberg, Kenneth A.
2002-01-01
An event-driven, GUI-based image acquisition interface for the IDL programming environment is described. It is designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, and it includes a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.
Peterson, Erica L; McGlothlin, James D; Blue, Carolyn L
2004-01-01
Nursing assistants (NAs) who work in nursing and personal care facilities are two and five times more likely to suffer a musculoskeletal disorder than workers in service industries and other health care facilities, respectively. The purpose of this study was to develop an ergonomics training program for selected NAs at a state-run veterans' home to decrease musculoskeletal disorders by 1) developing questionnaires to assess musculoskeletal stress, 2) evaluating the work environment, 3) developing and using a training package, and 4) determining the application of the information from the training package by NAs on the floor. Results show two new risk factors not previously identified for nursing personnel in the peer-reviewed literature. Quizzes given to the nursing personnel before and after training indicated a significant improvement in understanding the principles of ergonomics and patient-handling techniques. Statistical analysis comparing the pre-training and post-training questionnaires indicated no significant decrease in musculoskeletal risk factors and no significant reduction in pain or discomfort or overall mental or physical health.
Kranc: a Mathematica package to generate numerical codes for tensorial evolution equations
NASA Astrophysics Data System (ADS)
Husa, Sascha; Hinder, Ian; Lechner, Christiane
2006-06-01
We present a suite of Mathematica-based computer-algebra packages, termed "Kranc", which comprise a toolbox to convert certain (tensorial) systems of partial differential evolution equations to parallelized C or Fortran code for solving initial boundary value problems. Kranc can be used as a "rapid prototyping" system for physicists or mathematicians handling very complicated systems of partial differential equations, but through integration into the Cactus computational toolkit we can also produce efficient parallelized production codes. Our work is motivated by the field of numerical relativity, where Kranc is used as a research tool by the authors. In this paper we describe the design and implementation of both the Mathematica packages and the resulting code, we discuss some example applications, and provide results on the performance of an example numerical code for the Einstein equations. Program summary Title of program: Kranc Catalogue identifier: ADXS_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXS_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computer for which the program is designed and others on which it has been tested: General computers which run Mathematica (for code generation) and Cactus (for numerical simulations), tested under Linux Programming language used: Mathematica, C, Fortran 90 Memory required to execute with typical data: This depends on the number of variables and grid size; the included ADM example requires 4308 KB Has the code been vectorized or parallelized: The code is parallelized based on the Cactus framework. Number of bytes in distributed program, including test data, etc.: 1 578 142 Number of lines in distributed program, including test data, etc.: 11 711 Nature of physical problem: Solution of partial differential equations in three space dimensions, which are formulated as an initial value problem. In particular, the program is geared towards handling very complex tensorial equations as they appear, e.g., in numerical relativity. The worked-out examples comprise the Klein-Gordon equations, the Maxwell equations, and the ADM formulation of the Einstein equations. Method of solution: The method of numerical solution is finite differencing and method of lines time integration; the numerical code is generated through a high-level Mathematica interface. Restrictions on the complexity of the program: Typical numerical relativity applications will contain up to several dozen evolution variables and thousands of source terms; Cactus applications have shown scaling up to several thousand processors and grid sizes exceeding 500^3. Typical running time: This depends on the number of variables and the grid size: the included ADM example takes approximately 100 seconds on a 1600 MHz Intel Pentium M processor. Unusual features of the program: based on Mathematica and Cactus
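The solution method named in the summary (spatial finite differencing plus method-of-lines time integration) can be illustrated by hand on a simple scalar equation. The Python sketch below is not Kranc-generated code; it applies a second-order centred stencil and an RK4 time step to the 1D wave equation.

```python
# Hand-written illustration (not Kranc-generated code) of the method named in
# the program summary: spatial finite differencing plus method-of-lines time
# integration, here for the 1D wave equation u_tt = u_xx with periodic BCs.
import numpy as np

def rhs(state, dx):
    """Method of lines: reduce u_tt = u_xx to first order in time,
    state = (u, v) with u_t = v, v_t = u_xx (second-order centred stencil)."""
    u, v = state
    u_xx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return np.array([v, u_xx])

def rk4_step(state, dt, dx):
    k1 = rhs(state, dx)
    k2 = rhs(state + 0.5 * dt * k1, dx)
    k3 = rhs(state + 0.5 * dt * k2, dx)
    k4 = rhs(state + dt * k3, dx)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

if __name__ == "__main__":
    n = 200
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = x[1] - x[0]
    state = np.array([np.sin(2 * np.pi * x), np.zeros(n)])  # u and u_t at t = 0
    dt = 0.5 * dx                                            # CFL-limited step
    for _ in range(400):
        state = rk4_step(state, dt, dx)
    print("max |u| after evolution:", np.abs(state[0]).max())
```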
NASA Technical Reports Server (NTRS)
Lopez, Isaac; Follen, Gregory J.; Gutierrez, Richard; Foster, Ian; Ginsburg, Brian; Larsson, Olle; Martin, Stuart; Tuecke, Steven; Woodford, David
2000-01-01
This paper describes a project to evaluate the feasibility of combining Grid and Numerical Propulsion System Simulation (NPSS) technologies, with a view to leveraging the numerous advantages of commodity technologies in a high-performance Grid environment. A team from the NASA Glenn Research Center and Argonne National Laboratory has been studying three problems: a desktop-controlled parameter study using Excel (Microsoft Corporation); a multicomponent application using ADPAC, NPSS, and a controller program; and an aviation safety application running about 100 jobs in near real time. The team has successfully demonstrated (1) a Common-Object-Request-Broker-Architecture- (CORBA-) to-Globus resource manager gateway that allows CORBA remote procedure calls to be used to control the submission and execution of programs on workstations and massively parallel computers, (2) a gateway from the CORBA Trader service to the Grid information service, and (3) a preliminary integration of CORBA and Grid security mechanisms. We have applied these technologies to two applications related to NPSS, namely a parameter study and a multicomponent simulation.
A comparison of five benchmarks
NASA Technical Reports Server (NTRS)
Huss, Janice E.; Pennline, James A.
1987-01-01
Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.
PC-CUBE: A Personal Computer Based Hypercube
NASA Technical Reports Server (NTRS)
Ho, Alex; Fox, Geoffrey; Walker, David; Snyder, Scott; Chang, Douglas; Chen, Stanley; Breaden, Matt; Cole, Terry
1988-01-01
PC-CUBE is an ensemble of IBM PCs or close compatibles connected in the hypercube topology with ordinary computer cables. Communication occurs at a rate of 115.2 kbaud via the RS-232 serial links. Available for PC-CUBE are the Crystalline Operating System III (CrOS III), the Mercury Operating System, and CUBIX and PLOTIX, which are parallel I/O and graphics libraries. A CrOS performance monitor was developed to facilitate the measurement of the communication and computation time of a program and their effects on performance. Also available are CXLISP, a parallel version of the XLISP interpreter; GRAFIX, some graphics routines for the EGA and CGA; and a general execution profiler for determining execution time spent by program subroutines. PC-CUBE provides a programming environment similar to all hypercube systems running CrOS III, Mercury and CUBIX. In addition, every node (personal computer) has its own graphics display monitor and storage devices. These allow data to be displayed or stored at every processor, which has much instructional value and enables easier debugging of applications. Some application programs, taken from the book Solving Problems on Concurrent Processors (Fox 88), were implemented with graphics enhancement on PC-CUBE. The applications range from computing the Mandelbrot set, the Laplace equation, the wave equation, and long-range force interaction, to WaTor, an ecological simulation.
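For readers unfamiliar with the hypercube topology, the small sketch below (plain Python, not CrOS III code) lists each node's neighbours: in a d-dimensional hypercube, two nodes are wired together exactly when their binary labels differ in one bit.

```python
# Small sketch (not CrOS III code): in a d-dimensional hypercube, node i is
# connected to the d nodes whose labels differ from i in exactly one bit.
def hypercube_neighbors(node, dim):
    """Return the labels of the nodes directly connected to `node`."""
    return [node ^ (1 << k) for k in range(dim)]

if __name__ == "__main__":
    d = 3  # an 8-node PC-CUBE-style ensemble
    for n in range(2 ** d):
        print(n, "->", hypercube_neighbors(n, d))
```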
Introduction of Virtualization Technology to Multi-Process Model Checking
NASA Technical Reports Server (NTRS)
Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu
2009-01-01
Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faculjak, D.A.
1988-03-01
Graphics Manager (GFXMGR) is menu-driven, user-friendly software designed to interactively create, edit, and delete graphics displays on the Advanced Electronics Design (AED) graphics controller, Model 767. The software runs on the VAX family of computers and has been used successfully in security applications to create and change site layouts (maps) of specific facilities. GFXMGR greatly benefits graphics development by minimizing display-development time, reducing tedium on the part of the user, and improving system performance. It is anticipated that GFXMGR can be used to create graphics displays for many types of applications. 8 figs., 2 tabs.
Deployment of Directory Service for IEEE N Bus Test System Information
NASA Astrophysics Data System (ADS)
Barman, Amal; Sil, Jaya
2008-10-01
Exchanging information over the Internet and intranets has become a de facto standard in computer applications, among various users and organizations. Distributed system studies, e-governance, and similar efforts require transparent information exchange between applications, constituencies, manufacturers, and vendors. To serve these purposes, a database system is needed for storing system data and other relevant information. A directory service, which is a specialized database along with an access protocol, could be a single solution, since it runs over TCP/IP, is supported by all POSIX-compliant platforms, and is based on open standards. This paper describes a way to deploy a directory service to store IEEE n-bus test system data and to integrate a load flow program with it.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-17
... with the project. Applicant Contact: Daniel R. Irvin, Free Flow Power Corporation, 33 Commercial Street... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 13876-000] South Run Pumped... the Federal Power Act (FPA), proposing to study the feasibility of the South Run Pumped Storage...
NASA Technical Reports Server (NTRS)
Boytos, Matthew A.; Norbury, John W.
1992-01-01
The authors of this paper have provided a set of ready-to-run FORTRAN programs that should be useful in the field of theoretical nuclear physics. The purpose of this document is to provide a simple synopsis of the programs and their use. A separate section is devoted to each program set and includes: abstract; files; compiling, linking, and running; obtaining results; and a tutorial.
NASA Astrophysics Data System (ADS)
van Dyk, Danny; Geveler, Markus; Mallach, Sven; Ribbrock, Dirk; Göddeke, Dominik; Gutwenger, Carsten
2009-12-01
We present HONEI, an open-source collection of libraries offering a hardware-oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a Finite Element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straightforward C++ code using HONEI's SSE backend, and an additional 3-4 and 4-16 times faster execution on the Cell and on a GPU, respectively. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all necessary infrastructure for the development and evaluation of such kernels, significantly simplifying their development. Program summary Program title: HONEI Catalogue identifier: AEDW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv2 No. of lines in distributed program, including test data, etc.: 216 180 No. of bytes in distributed program, including test data, etc.: 1 270 140 Distribution format: tar.gz Programming language: C++ Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3 Operating system: Linux RAM: at least 500 MB free Classification: 4.8, 4.3, 6.1 External routines: SSE: none; [1] for GPU, [2] for Cell backend Nature of problem: Computational science in general and numerical simulation in particular have reached a turning point. The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the underlying hardware towards heterogeneity and parallelism. This is particularly relevant for data-intensive problems stemming from discretisations with local support, such as finite differences, volumes and elements. Solution method: To address these issues, we present a hardware-aware collection of libraries combining the advantages of modern software techniques and hardware-oriented programming. Applications built on top of these libraries can be configured trivially to execute on CPUs, GPUs or the Cell processor. In order to evaluate the performance and accuracy of our approach, we provide two domain-specific applications; a multigrid solver for the Poisson problem and a fully explicit solver for the 2D shallow water equations. Restrictions: HONEI is actively being developed, and its feature list is continuously expanded. Not all combinations of operations and architectures might be supported in earlier versions of the code. Obtaining snapshots from http://www.honei.org is recommended. Unusual features: The considered applications as well as all library operations can be run on NVIDIA GPUs and the Cell BE. Running time: Depending on the application and the input sizes. The Poisson solver executes in a few seconds, while the SWE solver requires up to 5 minutes for large spatial discretisations or small timesteps. References: http://www.nvidia.com/cuda. http://www.ibm.com/developerworks/power/cell.
Using Achievement Goal Theory to Assess an Elementary Physical Education Running Program
ERIC Educational Resources Information Center
Xiang, Ping; Bruene, April; McBride, Ron E.
2004-01-01
Using Achievement Goal Theory as a theoretical framework, this study examined an elementary physical education running program called Roadrunners and assessed relationships among achievement goals, perceived motivational climate, and student achievement behavior. Roadrunners promotes cardiovascular health, physically active lifestyles, and mastery…
ORNL Cray X1 evaluation status report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agarwal, P.K.; Alexander, R.A.; Apra, E.
2004-05-01
On August 15, 2002 the Department of Energy (DOE) selected the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) to deploy a new scalable vector supercomputer architecture for solving important scientific problems in climate, fusion, biology, nanoscale materials and astrophysics. ''This program is one of the first steps in an initiative designed to provide U.S. scientists with the computational power that is essential to 21st century scientific leadership,'' said Dr. Raymond L. Orbach, director of the department's Office of Science. In FY03, CCS procured a 256-processor Cray X1 to evaluate the processors, memory subsystem, scalability of the architecture, and software environment, and to predict the expected sustained performance on key DOE applications codes. The results of the micro-benchmarks and kernel benchmarks show the architecture of the Cray X1 to be exceptionally fast for most operations. The best results are shown on large problems, where it is not possible to fit the entire problem into the cache of the processors. These large problems are exactly the types of problems that are important for the DOE and ultra-scale simulation. Application performance is found to be markedly improved by this architecture: - Large-scale simulations of high-temperature superconductors run 25 times faster than on an IBM Power4 cluster using the same number of processors. - Best performance of the parallel ocean program (POP v1.4.3) is 50 percent higher than on Japan's Earth Simulator and 5 times higher than on an IBM Power4 cluster. - A fusion application, global GYRO transport, was found to be 16 times faster on the X1 than on an IBM Power3. The increased performance allowed simulations to fully resolve questions raised by a prior study. - The transport kernel in the AGILE-BOLTZTRAN astrophysics code runs 15 times faster than on an IBM Power4 cluster using the same number of processors. - Molecular dynamics simulations related to the phenomenon of photon echo run 8 times faster than previously achieved. Even at 256 processors, the Cray X1 system is already outperforming other supercomputers with thousands of processors for a certain class of applications such as climate modeling and some fusion applications. This evaluation is the outcome of a number of meetings with both high-performance computing (HPC) system vendors and application experts over the past 9 months and has received broad-based support from the scientific community and other agencies.
uPy: a ubiquitous CG Python API with biological-modeling applications.
Autin, Ludovic; Johnson, Graham; Hake, Johan; Olson, Arthur; Sanner, Michel
2012-01-01
The uPy Python extension module provides a uniform abstraction of the APIs of several 3D computer graphics programs (called hosts), including Blender, Maya, Cinema 4D, and DejaVu. A plug-in written with uPy can run in all uPy-supported hosts. Using uPy, researchers have created complex plug-ins for molecular and cellular modeling and visualization. uPy can simplify programming for many types of projects (not solely science applications) intended for multihost distribution. It's available at http://upy.scripps.edu. The first featured Web extra is a video that shows interactive analysis of a calcium dynamics simulation. YouTube URL: http://youtu.be/wvs-nWE6ypo. The second featured Web extra is a video that shows rotation of the HIV virus. YouTube URL: http://youtu.be/vEOybMaRoKc.
Comeau, Donald C; Liu, Haibin; Islamaj Doğan, Rezarta; Wilbur, W John
2014-01-01
BioC is a new format and associated code libraries for sharing text and annotations. We have implemented BioC natural language preprocessing pipelines in two popular programming languages: C++ and Java. The current implementations interface with the well-known MedPost and Stanford natural language processing tool sets. The pipeline functionality includes sentence segmentation, tokenization, part-of-speech tagging, lemmatization and sentence parsing. These pipelines can be easily integrated along with other BioC programs into any BioC compliant text mining systems. As an application, we converted the NCBI disease corpus to BioC format, and the pipelines have successfully run on this corpus to demonstrate their functionality. Code and data can be downloaded from http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net. © The Author(s) 2014. Published by Oxford University Press.
Cartwright, William S
2008-04-01
Researchers have been at the forefront of applying new costing methods to drug abuse treatment programs and innovations. The motivation for such work has been to improve costing accuracy. Recent work has seen applications initiated in establishing charts of account and cost accounting for service delivery. As a result, researchers now have available five methods to apply to the costing of drug abuse treatment programs. In all areas of costing, there is room for more research on costing concepts and measurement applications. Additional work would be useful in establishing studies with activity-based costing for both research and managerial purposes. Studies of economies of scope are particularly relevant because of the integration of social services and criminal justice in drug abuse treatment. In the long run, managerial initiatives to improve the administration and quality of drug abuse treatment will benefit directly from research with new information on costing techniques.
A Practical Solution Using A New Approach To Robot Vision
NASA Astrophysics Data System (ADS)
Hudson, David L.
1984-01-01
Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower-cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others. The user then had to assemble the pieces, and in most instances he had to write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.
Research on memory management in embedded systems
NASA Astrophysics Data System (ADS)
Huang, Xian-ying; Yang, Wu
2005-12-01
Memory is a scarce resource in embedded systems due to cost and size. Applications in embedded systems therefore cannot use memory as freely as desktop applications do, yet data and code must still be stored in memory to run. The purpose of this paper is to save memory when developing embedded applications and to guarantee that they run under limited memory conditions. Embedded systems often have small memories and are required to run for a long time. Thus, one goal of this study is to construct an allocator that allocates memory effectively, withstands long-running operation, and reduces memory fragmentation and memory exhaustion. Fragmentation and exhaustion are related to the memory allocation algorithm. Static memory allocation cannot produce fragmentation; this paper attempts to find an effective dynamic allocation algorithm that reduces memory fragmentation. Data is the critical part that ensures an application runs correctly, and it takes up a large amount of memory. The amount of data that can be stored in a given amount of memory depends on the selected data structure. Techniques for designing application data in mobile phones are also explained and discussed.
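As a generic illustration of the kind of allocator the paper argues for (this is not the paper's design), the sketch below implements a fixed-size block pool with a free list; because all blocks share one size, allocation and release are O(1) and external fragmentation cannot accumulate over a long run.

```python
# Generic illustration (not the paper's allocator): a fixed-size block pool.
# Every block has the same size, so allocation and release are O(1) and the
# pool cannot suffer external fragmentation, which suits long-running
# embedded workloads with a bounded memory budget.
class BlockPool:
    def __init__(self, block_size, block_count):
        self.block_size = block_size
        self.storage = bytearray(block_size * block_count)   # the whole budget, up front
        self.free_list = list(range(block_count))            # indices of free blocks

    def alloc(self):
        """Return (index, memoryview) for a free block, or None if exhausted."""
        if not self.free_list:
            return None
        idx = self.free_list.pop()
        start = idx * self.block_size
        return idx, memoryview(self.storage)[start:start + self.block_size]

    def free(self, idx):
        self.free_list.append(idx)

if __name__ == "__main__":
    pool = BlockPool(block_size=64, block_count=4)
    handles = [pool.alloc() for _ in range(4)]
    print("exhausted:", pool.alloc() is None)
    pool.free(handles[0][0])
    print("recycled block index:", pool.alloc()[0])
```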
A web-server of cell type discrimination system.
Wang, Anyou; Zhong, Yan; Wang, Yanhua; He, Qianchuan
2014-01-01
Discriminating cell types is a daily request for stem cell biologists. However, there is no user-friendly system available to date for public users to discriminate the common cell types: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web-server of cell type discrimination system, to discriminate the three cell types and their subtypes, such as fetal versus adult SCs. WCTDS is developed as a top-layer application of our recent publication regarding cell type discrimination, which employs DNA methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive user input, run mathematical models for analyzing data, and then present results to users. This framework is flexible and easy to extend for other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types such as cancer cells.
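To illustrate the general approach described above (and only that; this is not the WCTDS model or its Django layer), the sketch below trains a generic scikit-learn classifier on synthetic DNA-methylation-style features for three hypothetical cell-type labels.

```python
# Illustration only (not the WCTDS model): classifying cell types from
# DNA-methylation-style features with a generic scikit-learn classifier.
# The feature matrix and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(60, 20))        # beta values at 20 CpG sites
y = np.repeat(["ESC", "iPSC", "SC"], 20)        # three cell-type labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy on synthetic data:", clf.score(X_te, y_te))
```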
A Web-Server of Cell Type Discrimination System
Zhong, Yan
2014-01-01
Discriminating cell types is a daily request for stem cell biologists. However, there is no user-friendly system available to date for public users to discriminate the common cell types: embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web-server of cell type discrimination system, to discriminate the three cell types and their subtypes, such as fetal versus adult SCs. WCTDS is developed as a top-layer application of our recent publication regarding cell type discrimination, which employs DNA methylation as biomarkers and machine learning models to discriminate cell types. Implemented with Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive user input, run mathematical models for analyzing data, and then present results to users. This framework is flexible and easy to extend for other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types such as cancer cells. PMID:24578634
An experiment in software reliability: Additional analyses using data from automated replications
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Lauterbach, Linda A.
1988-01-01
A study undertaken to collect software error data of laboratory quality for use in the development of credible methods for predicting the reliability of software used in life-critical applications is summarized. The software error data reported were acquired through automated repetitive-run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications to accumulate a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies, reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that the program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.
NASA Astrophysics Data System (ADS)
Cha, Moon Hoe
2007-02-01
The NearFar program is a package for carrying out an interactive nearside-farside decomposition of heavy-ion elastic scattering amplitudes. The program is implemented in Java to perform numerical operations on the nearside and farside angular distributions. It contains a graphical display interface for the numerical results. A test run has been applied to elastic 16O+28Si scattering at E=1503 MeV. Program summary Title of program: NearFar Catalogue identifier: ADYP_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADYP_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Computers: designed for any machine capable of running Java, developed on PC-Pentium-4 Operating systems under which the program has been tested: Microsoft Windows XP (Home Edition) Program language used: Java Number of bits in a word: 64 Memory required to execute with typical data: case dependent No. of lines in distributed program, including test data, etc.: 3484 Number of bytes in distributed program, including test data, etc.: 142 051 Distribution format: tar.gz Other software required: A Java runtime interpreter, or the Java Development Kit, version 5.0 Nature of physical problem: Interactive nearside-farside decomposition of heavy-ion elastic scattering amplitudes. Method of solution: The user must supply an external data file or PPSM parameters from which the program calculates theoretical values of the quantities to be decomposed. Typical running time: Problem dependent. In a test run, it is about 35 s on a 2.40 GHz Intel P4-processor machine.
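For reference, a nearside-farside split of a partial-wave amplitude is commonly written in the Fuller form below; the sign and normalization conventions used inside NearFar may differ.

```latex
% One common (Fuller-type) convention for the nearside-farside split; the
% conventions used inside NearFar may differ from those quoted here.
f(\theta) = f_{\mathrm N}(\theta) + f_{\mathrm F}(\theta), \qquad
P_\ell(\cos\theta) = Q_\ell^{(+)}(\cos\theta) + Q_\ell^{(-)}(\cos\theta),
\qquad
Q_\ell^{(\pm)}(\cos\theta) = \tfrac12 \Bigl[ P_\ell(\cos\theta)
   \mp \tfrac{2i}{\pi}\, Q_\ell(\cos\theta) \Bigr],
```

where Q_ell denotes the Legendre function of the second kind; substituting one of the two travelling-wave components for P_ell in the partial-wave sum yields the corresponding subamplitude.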
Lambda: A Mathematica package for operator product expansions in vertex algebras
NASA Astrophysics Data System (ADS)
Ekstrand, Joel
2011-02-01
We give an introduction to the Mathematica package Lambda, designed for calculating λ-brackets both in vertex algebras and in SUSY vertex algebras. This is equivalent to calculating operator product expansions in two-dimensional conformal field theory. The syntax of λ-brackets is reviewed, and some simple examples are shown, both in component notation and in N=1 superfield notation. Program summary Program title: Lambda Catalogue identifier: AEHF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License No. of lines in distributed program, including test data, etc.: 18 087 No. of bytes in distributed program, including test data, etc.: 131 812 Distribution format: tar.gz Programming language: Mathematica Computer: See specifications for running Mathematica V7 or above. Operating system: See specifications for running Mathematica V7 or above. RAM: Varies greatly depending on the calculation to be performed. Classification: 4.2, 5, 11.1. Nature of problem: Calculate operator product expansions (OPEs) of composite fields in 2d conformal field theory. Solution method: Implementation of the algebraic formulation of OPEs given by vertex algebras, and especially by λ-brackets. Running time: Varies greatly depending on the calculation requested. The example notebook provided takes about 3 s to run.
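As a reminder of the notation the package implements (standard λ-bracket conventions; the package's internal conventions may differ in detail), the λ-bracket packages the singular part of an OPE into a polynomial in λ:

```latex
% Standard lambda-bracket notation (as in Kac's formal calculus); it packs the
% singular part of the OPE of two fields a, b into a polynomial in lambda.
[a_\lambda b] = \sum_{j \ge 0} \frac{\lambda^{j}}{j!}\, a_{(j)}b,
\qquad
a(z)\, b(w) \sim \sum_{j \ge 0} \frac{\bigl(a_{(j)}b\bigr)(w)}{(z-w)^{j+1}} .
```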
The Future of Earthquake Relocation Tools
NASA Astrophysics Data System (ADS)
Lecocq, T.; Caudron, C.
2010-12-01
Many scientists around the world use earthquake relocation software for their research. Some use well-known software such as HYPODD or COMPLOC, while others use their own algorithms and codes. Often, beginners struggle to get one tool running or to properly configure input parameters. This poster will record the debates that take place during the meeting, for example addressing questions like "Which program for which application?"; "Standardized In/Outs?"; "Tectonic / Volcanic / Other?"; "All programs inside one single Super-Package?"; "Common/Base Bibliography for the Relocation-Beginner?"; "Continuous or Layered Velocity Model?"; etc. We will also present the scheme of a Super-Package we are working on, grouping HYPODD [Waldhauser 2001], COMPLOC [Lin & Shearer 2006], and LOTOS [Koulakov 2009], allowing standard in/outs for the three programs and, thus, comparison of their outputs.
Using Multi-Objective Genetic Programming to Synthesize Stochastic Processes
NASA Astrophysics Data System (ADS)
Ross, Brian; Imada, Janine
Genetic programming is used to automatically construct stochastic processes written in the stochastic π-calculus. Grammar-guided genetic programming constrains search to useful process algebra structures. The time-series behaviour of a target process is denoted with a suitable selection of statistical feature tests. Feature tests can permit complex process behaviours to be effectively evaluated. However, they must be selected with care, in order to accurately characterize the desired process behaviour. Multi-objective evaluation is shown to be appropriate for this application, since it permits heterogeneous statistical feature tests to reside as independent objectives. Multiple undominated solutions can be saved and evaluated after a run, for determination of those that are most appropriate. Since there can be a vast number of candidate solutions, however, strategies for filtering and analyzing this set are required.
NASA Technical Reports Server (NTRS)
1997-01-01
Kennedy Space Center specialists aided Space, Energy, Time Saving (SETS) Systems, Inc. in working out the problems they encountered with their new electronic "tankless" water heater. The flow switch design suffered intermittent problems. Hiring several testing and engineering firms produced only graphs, printouts, and a large expense, but no solutions. Then through the Kennedy Space Center/State of Florida Technology Outreach Program, SETS was referred to Michael Brooks, a 21-year space program veteran and flowmeter expert. Run throughout Florida to provide technical service to businesses at no cost, the program applies scientific and engineering expertise originally developed for space applications to the Florida business community. Brooks discovered several key problems, resulting in a new design that turned out to be simpler, yielding a 63 percent reduction in labor and material costs over the old design.
System and method for controlling power consumption in a computer system based on user satisfaction
Yang, Lei; Dick, Robert P; Chen, Xi; Memik, Gokhan; Dinda, Peter A; Shy, Alex; Ozisikyilmaz, Berkin; Mallik, Arindam; Choudhary, Alok
2014-04-22
Systems and methods for controlling power consumption in a computer system. For each of a plurality of interactive applications, the method changes a frequency at which a processor of the computer system runs, receives an indication of user satisfaction, determines a relationship between the changed frequency and the user satisfaction of the interactive application, and stores the determined relationship information. The determined relationship can distinguish between different users and different interactive applications. A frequency may be selected from the discrete frequencies at which the processor of the computer system runs based on the determined relationship information for a particular user and a particular interactive application running on the processor of the computer system. The processor may be adapted to run at the selected frequency.
A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jack Dongarra; Shirley Moore; Bart Miller, Jeffrey Hollingsworth
2005-03-15
The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale, long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK projects have made use of this infrastructure to build performance measurement and analysis tools that scale to long-running programs on large parallel and distributed systems and that automate much of the search for performance bottlenecks.
NASA Astrophysics Data System (ADS)
Kim, C. S.; Osborn, J.; Smith, M.
2014-12-01
Effectively recruiting and engaging community college students in STEM research experiences is an increasingly important goal of the NSF but has not historically been the primary focus of most NSF-REU Site programs. The Summer Undergraduate Research Fellowship in Earth and Environmental Sciences (SURFEES) program at Chapman University, a primarily undergraduate institution in Southern California, is the site of the first NSF-REU program in the NSF's Division of Earth Sciences that selects participants exclusively from local partnering community colleges. Building on and now running parallel with a successful internally-funded summer research program already in place and available only to Chapman undergraduates, the SURFEES program incorporates specific mentor and participant pre-experience training, pre-, mid-, and post-assessment instruments, and programming targeted to the earth and environmental sciences as well as to community college students. Perhaps most importantly, the application, selection and pairing of student participants with faculty mentors was conducted with specific goals of identifying those applicants with the greatest potential for a transformative experience while also meeting self-defined targets of under-represented minority, female, and low-income participants. Initial assessment results of the first participant cohort from summer 2014 and lessons learned for creating/adapting an NSF-REU site to involve community college students will be discussed.
Home Energy Scoring Tools (website) and Application Programming Interfaces, APIs (aka HEScore)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Evan; Bourassa, Norm; Rainer, Leo
A web-based residential energy rating tool with APIs that runs on the LBNL website: it provides customized estimates of residential energy use and energy bills based on building description information provided by the user. Energy use is estimated using engineering models developed at LBNL. Space heating and cooling use is based on the DOE-2.1E building simulation model. Other end uses (water heating, appliances, lighting, and miscellaneous equipment) are based on engineering models developed by LBNL.
NASA Astrophysics Data System (ADS)
Bartoletti, Massimo
Usage automata are an extension of finite state automata, with some additional features (e.g. parameters and guards) that improve their expressivity. Usage automata are expressive enough to model security requirements of real-world applications; at the same time, they are simple enough to be amenable to static analysis, e.g. they can be model-checked against abstractions of program usages. We study here some foundational aspects of usage automata. In particular, we discuss their expressive power and their effective use in run-time mechanisms for enforcing usage policies.
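A minimal sketch of the run-time enforcement idea, with parameters but without guards, is shown below in Python; it is an illustration in the spirit of usage automata, not the paper's formalism. The policy forbids reading a resource after it has been closed.

```python
# Minimal sketch of run-time usage-policy enforcement in the spirit of usage
# automata (parametric in the resource, guards omitted); not the paper's
# formalism. Policy: a "read x" event after "close x" is a violation.
class NoReadAfterClose:
    def __init__(self):
        self.closed = set()   # automaton state, parametric in the resource name

    def step(self, action, resource):
        if action == "close":
            self.closed.add(resource)
        elif action == "read" and resource in self.closed:
            raise RuntimeError(f"policy violated: read {resource!r} after close")

if __name__ == "__main__":
    monitor = NoReadAfterClose()
    trace = [("open", "f"), ("read", "f"), ("close", "f"), ("read", "g"),
             ("read", "f")]                      # the last event violates the policy
    for act, res in trace:
        try:
            monitor.step(act, res)
        except RuntimeError as err:
            print(err)
```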
Home Energy Scoring Tools (website) and Application Programming Interfaces, APIs (aka HEScore)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Evan; Bourassa, Norm; Rainer, Leo
2016-04-22
A web-based residential energy rating tool with APIs that runs on the LBNL website: it provides customized estimates of residential energy use and energy bills based on building description information provided by the user. Energy use is estimated using engineering models developed at LBNL. Space heating and cooling use is based on the DOE-2.1E building simulation model. Other end uses (water heating, appliances, lighting, and miscellaneous equipment) are based on engineering models developed by LBNL.
The Snowmelt-Runoff Model (SRM) user's manual
NASA Technical Reports Server (NTRS)
Martinec, J.; Rango, A.; Major, E.
1983-01-01
A manual is presented to provide a means by which a user may apply the snowmelt runoff model (SRM) unaided. Model structure, conditions of application, and data requirements, including remote sensing, are described. Guidance is given for determining various model variables and parameters. Possible sources of error are discussed, and conversion of the snowmelt runoff model (SRM) from the simulation mode to the operational forecasting mode is explained. A computer program for running SRM is presented; it is easily adaptable to most systems used by water resources agencies.
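For orientation, the degree-day recursion at the core of SRM is commonly quoted in the form below; symbols and the unit-conversion factor follow the manual, which remains the authoritative statement.

```latex
% Degree-day recursion at the core of SRM, in the commonly quoted form
% (daily time step n; Q discharge [m^3/s]; c_S, c_R runoff coefficients for
% snow and rain; a degree-day factor [cm C^-1 d^-1]; T + Delta T degree-days;
% S snow-covered area fraction; P precipitation [cm]; A basin area [km^2];
% k recession coefficient).
Q_{n+1} = \bigl[ c_{S,n}\, a_n \,(T_n + \Delta T_n)\, S_n + c_{R,n}\, P_n \bigr]
          \, \frac{10000\,A}{86400} \, (1 - k_{n+1}) + Q_n\, k_{n+1}
```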
A network identity authentication system based on Fingerprint identification technology
NASA Astrophysics Data System (ADS)
Xia, Hong-Bin; Xu, Wen-Bo; Liu, Yuan
2005-10-01
Fingerprint verification is one of the most reliable personal identification methods. However, most automatic fingerprint identification systems (AFIS) are not run in an Internet/intranet environment to meet today's increasing electronic commerce requirements. This paper describes the design and implementation of a prototype identity authentication system based on fingerprint biometrics technology that can run in an Internet environment. In our system, COM and ASP technologies are used to integrate fingerprint technology with Web database technology; the fingerprint image preprocessing algorithms are programmed into COM components, which are deployed on the Internet information server. The system's design and structure are presented, and the key points are discussed. The prototype fingerprint-based identity authentication system has been successfully tested and evaluated on our university's distance education applications in an Internet environment.
Eastman, Peter; Friedrichs, Mark S; Chodera, John D; Radmer, Randall J; Bruns, Christopher M; Ku, Joy P; Beauchamp, Kyle A; Lane, Thomas J; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R; Pande, Vijay S
2013-01-08
OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added.
Eastman, Peter; Friedrichs, Mark S.; Chodera, John D.; Radmer, Randall J.; Bruns, Christopher M.; Ku, Joy P.; Beauchamp, Kyle A.; Lane, Thomas J.; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R.; Pande, Vijay S.
2012-01-01
OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added. PMID:23316124
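A minimal sketch of driving OpenMM through its Python application layer is shown below; it assumes the `openmm` package with the module naming used from version 7.6 onward and builds a toy two-particle system, so it is illustrative rather than taken from the paper.

```python
# Minimal sketch of driving OpenMM through its Python application layer
# (assumes the `openmm` package, version >= 7.6 module naming): a toy
# two-particle system held by a harmonic bond, run with Langevin dynamics.
from openmm import System, HarmonicBondForce, LangevinIntegrator, Context, Vec3
from openmm.unit import kelvin, picosecond, nanometer, dalton, kilojoule_per_mole

system = System()
for _ in range(2):
    system.addParticle(12.0 * dalton)                 # two carbon-mass particles

bond = HarmonicBondForce()
bond.addBond(0, 1, 0.15 * nanometer,                  # equilibrium length
             1000.0 * kilojoule_per_mole / nanometer**2)  # force constant
system.addForce(bond)

integrator = LangevinIntegrator(300 * kelvin, 1 / picosecond, 0.002 * picosecond)
context = Context(system, integrator)                 # OpenMM selects a platform
context.setPositions([Vec3(0, 0, 0), Vec3(0.15, 0, 0)] * nanometer)
context.setVelocitiesToTemperature(300 * kelvin)

integrator.step(1000)
state = context.getState(getEnergy=True)
print("potential energy:", state.getPotentialEnergy())
```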
History of Satellite Orbit Determination at NSWCDD
2018-01-31
run. Segment 40 did pass editing and its use was optional after Segment 20. Segment 30 needed to be run before Segment 80. Segment 70 was run as... control cards required to run the program. These included a CHARGE card related to usage charges and various REQUEST, ATTACH, and CATALOG cards... each) could be done in a single run after the long-arc solution had converged. These short arcs used the pass matrices from the long-arc run in their
Factors Influencing Running-Related Musculoskeletal Injury Risk Among U.S. Military Recruits.
Molloy, Joseph M
2016-06-01
Running-related musculoskeletal injuries among U.S. military recruits negatively impact military readiness. Low aerobic fitness, prior injury, and weekly running distance are known risk factors. Physical fitness screening and remedial physical training (or discharging the most poorly fit recruits) before entry-level military training have tended to reduce injury rates while decreasing attrition, training, and medical costs. Incorporating anaerobic running sessions into training programs can offset decreased weekly running distance and decrease injury risk. Varying lower extremity loading patterns, stride length or cadence manipulation, and hip stability/strengthening programming may further decrease injury risk. No footstrike pattern is ideal for all runners; transitioning to forefoot striking may reduce risk for hip, knee, or tibial injuries, but increase risk for calf, Achilles, foot or ankle injuries. Minimal evidence associates running surfaces with injury risk. Footwear interventions should focus on proper fit and comfort; the evidence does not support running shoe prescription per foot type to reduce injury risk among recruits. Primary injury mitigation efforts should focus on physical fitness screening, remedial physical training (or discharge for unfit recruits), and continued inclusion of anaerobic running sessions to offset decreased weekly running distance. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
NASA Technical Reports Server (NTRS)
Dreher, Joseph G.
2009-01-01
For expedience in delivering dispersion guidance in a diversity of operational situations, National Weather Service Melbourne (MLB) and the Spaceflight Meteorology Group (SMG) are becoming increasingly reliant on the PC-based version of the HYSPLIT model run through a graphical user interface (GUI). While the GUI offers unique advantages when compared to traditional methods, it is difficult for forecasters to run and manage in an operational environment. To alleviate the difficulty in providing scheduled real-time trajectory and concentration guidance, the Applied Meteorology Unit (AMU) configured a Linux version of the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model that ingests National Centers for Environmental Prediction (NCEP) guidance, such as the North American Mesoscale (NAM) and Rapid Update Cycle (RUC) models. The AMU configured the HYSPLIT system to automatically download the NCEP model products, convert the meteorological grids into HYSPLIT binary format, run the model from several pre-selected latitude/longitude sites, and post-process the data to create output graphics. In addition, the AMU configured several software programs to convert local Weather Research and Forecasting (WRF) model output into HYSPLIT format.
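The automation described above (download, convert, run per site, post-process) can be sketched as a scheduled script. In the Python sketch below, every URL, executable name, and argument is a placeholder rather than an actual HYSPLIT command line or NCEP path.

```python
# Schematic of the scheduled automation described above: fetch model guidance,
# convert it to HYSPLIT format, run the model for preselected sites, and
# post-process. Every URL, executable name, and argument is a placeholder,
# not an actual HYSPLIT command line or NCEP path.
import subprocess
import urllib.request

DRY_RUN = True   # set to False only after replacing the placeholders below
SITES = [("KSC", 28.57, -80.65), ("MLB", 28.10, -80.65)]     # example lat/lon sites

def fetch(url, dest):
    if DRY_RUN:
        print("would download", url, "->", dest)
        return
    urllib.request.urlretrieve(url, dest)                     # download model guidance

def run(cmd):
    if DRY_RUN:
        print("would run:", " ".join(cmd))
        return
    subprocess.run(cmd, check=True)                           # abort the cycle on failure

def cycle():
    fetch("https://example.invalid/nam.grib2", "nam.grib2")   # placeholder URL
    run(["./convert_to_hysplit", "nam.grib2", "nam.bin"])     # placeholder converter
    for name, lat, lon in SITES:
        run(["./run_trajectory", "nam.bin", str(lat), str(lon),
             "-o", f"{name}_traj.txt"])                       # placeholder model driver
        run(["./plot_results", f"{name}_traj.txt"])           # placeholder post-processing

if __name__ == "__main__":
    cycle()   # in operations this would be invoked from cron on the model cycle schedule
```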
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugmire, R.J.; Solum, M.S.
This study was designed to apply 13C nuclear magnetic resonance (NMR) spectrometry to the analysis of direct coal liquefaction process-stream materials. 13C-NMR was shown to have a high potential for application to direct coal liquefaction-derived samples in Phase II of this program. In this Phase III project, 13C-NMR was applied to a set of samples derived from the HRI Inc. bench-scale liquefaction Run CC-15. The samples include the feed coal, net products, and intermediate streams from three operating periods of the run. High-resolution 13C-NMR data were obtained for the liquid samples, and solid-state CP/MAS 13C-NMR data were obtained for the coal and filter-cake samples. The 13C-NMR technique is used to derive a set of twelve carbon structural parameters for each sample (CONSOL Table A). Average molecular structural descriptors can then be derived from these parameters (CONSOL Table B).
mr: A C++ library for the matching and running of the Standard Model parameters
NASA Astrophysics Data System (ADS)
Kniehl, Bernd A.; Pikelner, Andrey F.; Veretin, Oleg L.
2016-09-01
We present the C++ program library mr that allows us to reliably calculate the values of the running parameters in the Standard Model at high energy scales. The initial conditions are obtained by relating the running parameters in the MS bar renormalization scheme to observables at lower energies with full two-loop precision. The evolution is then performed in accordance with the renormalization group equations with full three-loop precision. Pure QCD corrections to the matching and running are included through four loops. We also provide a Mathematica interface for this program library. Catalogue identifier: AFAI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AFAI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 517613 No. of bytes in distributed program, including test data, etc.: 2358729 Distribution format: tar.gz Programming language: C++. Computer: IBM PC. Operating system: Linux, Mac OS X. RAM: 1 GB Classification: 11.1. External routines: TSIL [1], OdeInt [2], boost [3] Nature of problem: The running parameters of the Standard Model renormalized in the MS bar scheme at some high renormalization scale, which is chosen by the user, are evaluated in perturbation theory as precisely as possible in two steps. First, the initial conditions at the electroweak energy scale are evaluated from the Fermi constant GF and the pole masses of the W, Z, and Higgs bosons and the bottom and top quarks including the full two-loop threshold corrections. Second, the evolution to the high energy scale is performed by numerically solving the renormalization group evolution equations through three loops. Pure QCD corrections to the matching and running are included through four loops. Solution method: Numerical integration of analytic expressions Additional comments: Available for download from URL: http://apik.github.io/mr/. The MathLink interface is tested to work with Mathematica 7-9 and, with an additional flag, also with Mathematica 10 under Linux and with Mathematica 10 under Mac OS X. Running time: less than 1 second References: [1] S. P. Martin and D. G. Robertson, Comput. Phys. Commun. 174 (2006) 133-151 [hep-ph/0501132]. [2] K. Ahnert and M. Mulansky, AIP Conf. Proc. 1389 (2011) 1586-1589 [arxiv:1110.3397 [cs.MS
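The "running" such a library performs amounts to integrating renormalization group equations; as a reminder (illustrative notation, not code or conventions taken from mr itself), the generic form and the familiar one-loop QCD example are:

```latex
% Generic form of the RG evolution integrated by such a library, with the
% familiar one-loop QCD example (illustrative; not taken from mr itself).
\mu^2 \frac{d X_i(\mu)}{d \mu^2} = \beta_{X_i}\bigl(X_1(\mu), \dots, X_n(\mu)\bigr),
\qquad
\mu^2 \frac{d a_s}{d \mu^2} = -\beta_0\, a_s^2 + \mathcal{O}(a_s^3),
\quad a_s \equiv \frac{\alpha_s}{4\pi}, \qquad \beta_0 = 11 - \tfrac{2}{3}\, n_f .
```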
Physical Activity and Psychological Correlates during an After-School Running Club
ERIC Educational Resources Information Center
Kahan, David; McKenzie, Thomas L.
2018-01-01
Background: After-school programs (ASPs) have the potential to contribute to moderate-to-vigorous physical activity (MVPA), but there is limited empirical evidence to guide their development and implementation. Purpose: This study assessed the replication of an elementary school running program and identified psychological correlates of children's…
Fourth Graders' Motivation in an Elementary Physical Education Running Program
ERIC Educational Resources Information Center
Xiang, Ping; McBride, Ron E.; Bruene, April
2004-01-01
In this study we examined students' motivation in an elementary physical education running program using achievement goal theory and an expectancy-value model of achievement choice as theoretical frameworks. Fourth graders (N = 119) completed questionnaires assessing their achievement goals, expectancy-related beliefs, subjective task values, and…
Astronaut John Glenn running as part of physical training program
1962-02-20
S64-14883 (1962) --- Astronaut John H. Glenn Jr., pilot of the Mercury-Atlas 6 mission, participates in a strict physical training program, as he exemplifies by frequent running. Here he pauses during an exercise period on the beach near Cape Canaveral, Florida. Photo credit: NASA
ERIC Educational Resources Information Center
Gildersleeve, Robert; Williams, Jill
The intramural program at Arizona State University has recently undergone major reorganization. Three highlights of this year's program were the "Run to Tucson," the powerlifting meet, and the rodeo. The "Run to Tucson" involved a 126-mile football relay race from Arizona State University's campus in Tempe to the University of…
Mount, D W; Conrad, B
1986-01-01
We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780
Self-Scheduling Parallel Methods for Multiple Serial Codes with Application to WOPWOP
NASA Technical Reports Server (NTRS)
Long, Lyle N.; Brentner, Kenneth S.
2000-01-01
This paper presents a scheme for efficiently running a large number of serial jobs on parallel computers. Two examples are given of computer programs that run relatively quickly, but often they must be run numerous times to obtain all the results needed. It is very common in science and engineering to have codes that are not massive computing challenges in themselves, but due to the number of instances that must be run, they do become large-scale computing problems. The two examples given here represent common problems in aerospace engineering: aerodynamic panel methods and aeroacoustic integral methods. The first example simply solves many systems of linear equations. This is representative of an aerodynamic panel code where someone would like to solve for numerous angles of attack. The complete code for this first example is included in the appendix so that it can be readily used by others as a template. The second example is an aeroacoustics code (WOPWOP) that solves the Ffowcs Williams Hawkings equation to predict the far-field sound due to rotating blades. In this example, one quite often needs to compute the sound at numerous observer locations, hence parallelization is utilized to automate the noise computation for a large number of observers.
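The self-scheduling idea can be sketched in a few lines outside the paper's setting; the toy below (not the authors' code) farms out many independent dense linear solves, standing in for the one-system-per-angle-of-attack panel-code example, and collects results as workers finish.

```python
# Hedged sketch: many small, independent serial jobs run in parallel and
# self-scheduled, analogous to the panel-code example of one linear system per
# angle of attack.  Task sizes and counts are illustrative only.
import numpy as np
from concurrent.futures import ProcessPoolExecutor, as_completed

def solve_case(seed):
    """One 'serial job': build and solve a dense linear system."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((500, 500))
    b = rng.standard_normal(500)
    x = np.linalg.solve(a, b)
    return seed, float(np.linalg.norm(a @ x - b))   # residual as a cheap check

if __name__ == "__main__":
    cases = range(32)                               # e.g. 32 angles of attack
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(solve_case, c) for c in cases]
        for fut in as_completed(futures):           # self-scheduling: whichever
            case, residual = fut.result()           # job finishes first reports first
            print(f"case {case:2d}  residual = {residual:.2e}")
```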
The MOLDY short-range molecular dynamics package
NASA Astrophysics Data System (ADS)
Ackland, G. J.; D'Mellow, K.; Daraszewicz, S. L.; Hepburn, D. J.; Uhrin, M.; Stratford, K.
2011-12-01
We describe a parallelised version of the MOLDY molecular dynamics program. This Fortran code is aimed at systems which may be described by short-range potentials and specifically those which may be addressed with the embedded atom method. This includes a wide range of transition metals and alloys. MOLDY provides a range of options in terms of the molecular dynamics ensemble used and the boundary conditions which may be applied. A number of standard potentials are provided, and the modular structure of the code allows new potentials to be added easily. The code is parallelised using OpenMP and can therefore be run on shared memory systems, including modern multicore processors. Particular attention is paid to the updates required in the main force loop, where synchronisation is often required in OpenMP implementations of molecular dynamics. We examine the performance of the parallel code in detail and give some examples of applications to realistic problems, including the dynamic compression of copper and carbon migration in an iron-carbon alloy. Program summary Program title: MOLDY Catalogue identifier: AEJU_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJU_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 2 No. of lines in distributed program, including test data, etc.: 382 881 No. of bytes in distributed program, including test data, etc.: 6 705 242 Distribution format: tar.gz Programming language: Fortran 95/OpenMP Computer: Any Operating system: Any Has the code been vectorised or parallelized?: Yes. OpenMP is required for parallel execution RAM: 100 MB or more Classification: 7.7 Nature of problem: MOLDY addresses the problem of many atoms (of order 10^6) interacting via a classical interatomic potential on a timescale of microseconds. It is designed for problems where statistics must be gathered over a number of equivalent runs, such as measuring thermodynamic properties, diffusion, radiation damage, fracture, twinning deformation, nucleation and growth of phase transitions, sputtering, etc. In the vast majority of materials, the interactions are non-pairwise, and the code must be able to deal with many-body forces. Solution method: Molecular dynamics involves integrating Newton's equations of motion. MOLDY uses Verlet (for good energy conservation) or predictor-corrector (for accurate trajectories) algorithms. It is parallelised using OpenMP. It also includes a static minimisation routine to find the lowest energy structure. Boundary conditions for surfaces, clusters, grain boundaries, thermostat (Nosé), barostat (Parrinello-Rahman), and externally applied strain are provided. The initial configuration can be either a repeated unit cell or have all atoms given explicitly. Initial velocities are generated internally, but it is also possible to specify the velocity of a particular atom. A wide range of interatomic force models are implemented, including embedded atom, Morse or Lennard-Jones. Thus the program is especially well suited to calculations of metals. Restrictions: The code is designed for short-ranged potentials, and there is no Ewald sum. Thus for long range interactions where all particles interact with all others, the order-N scaling will fail. Different interatomic potential forms require recompilation of the code. Additional comments: There is a set of associated open-source analysis software for postprocessing and visualisation.
This includes local crystal structure recognition and identification of topological defects. Running time: A set of test modules for running time are provided. The code scales as order N. The parallelisation shows near-linear scaling with number of processors in a shared memory environment. A typical run of a few tens of nanometers for a few nanoseconds will run on a timescale of days on a multiprocessor desktop.
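For readers unfamiliar with the integration step at the heart of such codes, here is a minimal, self-contained velocity-Verlet loop with a plain Lennard-Jones pair potential (one of the potential forms listed above). It is an illustration only, not MOLDY code: there is no neighbour list, no boundary conditions, and the system size and units are arbitrary.

```python
# Sketch of velocity-Verlet time integration for a tiny Lennard-Jones cluster.
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces and potential energy (no cutoff, O(N^2))."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            inv6 = (sigma**2 / r2) ** 3
            energy += 4 * eps * (inv6**2 - inv6)
            # f_ij = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2 * rij
            fij = 24 * eps * (2 * inv6**2 - inv6) / r2 * rij
            forces[i] += fij
            forces[j] -= fij
    return forces, energy

def velocity_verlet(pos, vel, dt=1e-3, steps=1000, mass=1.0):
    f, _ = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass      # half kick
        pos += dt * vel                 # drift
        f, _ = lj_forces(pos)
        vel += 0.5 * dt * f / mass      # second half kick
    return pos, vel

if __name__ == "__main__":
    grid = np.arange(2)                 # 2x2x2 cubic lattice, spacing 1.2*sigma
    pos = 1.2 * np.array([[x, y, z] for x in grid for y in grid for z in grid], dtype=float)
    vel = np.zeros_like(pos)
    pos, vel = velocity_verlet(pos, vel)
    print("kinetic energy:", 0.5 * np.sum(vel**2))
```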
Case Studies in Application of System Engineering Practices to Capstone Projects
NASA Technical Reports Server (NTRS)
Murphy, Gloria; vanSusante, Paul; Carmen, Christina; Morris, Tommy; Schmidt, Peter; Zalewski, Janusz
2011-01-01
The Exploration Systems Mission Directorate (ESMD) of the National Aeronautics and Space Administration (NASA) sponsors a faculty fellowship program that engages researchers with interests aligned with current ESMD development programs. The faculty members are committed to running a capstone senior design project based on the materials and experience gained during the fellowship. For the 2010-2011 academic year, 5 projects were approved. These projects are in the areas of mechanical and electrical hardware design and optimization, fault prediction, and extra-planetary civil site preparation. This work summarizes the projects, describes the student teams performing the work, and comments on the integration of Systems Engineering principles into the projects, as well as the affected course curricula.
Aerodynamic preliminary analysis system 2. Part 1: Theory
NASA Technical Reports Server (NTRS)
Bonner, E.; Clever, W.; Dunn, K.
1981-01-01
A subsonic/supersonic/hypersonic aerodynamic analysis was developed by integrating the Aerodynamic Preliminary Analysis System (APAS) and the inviscid force calculation modules of the Hypersonic Arbitrary Body Program. APAS analysis was extended for nonlinear vortex forces using a generalization of the Polhamus analogy. The interactive system provides appropriate aerodynamic models for a single input geometry data base and has a run/output format similar to a wind tunnel test program. The user's manual was organized to cover the principal system activities of a typical application: geometric input/editing, aerodynamic evaluation, and post-analysis review/display. Sample sessions are included to illustrate the specific tasks involved and are followed by a comprehensive command/subcommand dictionary used to operate the system.
Automata-Based Verification of Temporal Properties on Running Programs
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)
2001-01-01
This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Buchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
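A toy version of the observer idea, not the JPaX tool or its automata generator: a two-state monitor that checks the finite-trace property "every request is eventually followed by an ack", with the verdict taken from the state reached at the end of the trace. The event names are hypothetical; a real monitor would be fed by program instrumentation.

```python
# Minimal finite-state observer for a response-style property on a finite trace.

def make_response_monitor():
    """Two-state observer: IDLE (accepting) and WAITING (non-accepting)."""
    state = "IDLE"

    def step(event):
        nonlocal state
        if state == "IDLE" and event == "request":
            state = "WAITING"
        elif state == "WAITING" and event == "ack":
            state = "IDLE"
        return state

    def verdict():
        # Finite-trace semantics: the property holds iff the trace ends in an accepting state.
        return state == "IDLE"

    return step, verdict

if __name__ == "__main__":
    step, verdict = make_response_monitor()
    trace = ["start", "request", "work", "ack", "request", "shutdown"]
    for event in trace:
        step(event)
    print("property satisfied on this trace:", verdict())   # False: last request unanswered
```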
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Aiman; Laguna, Ignacio; Sato, Kento
Future high-performance computing systems may face frequent failures with their rapid increase in scale and complexity. Resilience to faults has become a major challenge for large-scale applications running on supercomputers, which demands fault tolerance support for prevalent MPI applications. Among failure scenarios, process failures are one of the most severe issues as they usually lead to termination of applications. However, the widely used MPI implementations do not provide mechanisms for fault tolerance. We propose FTA-MPI (Fault Tolerance Assistant MPI), a programming model that provides support for failure detection, failure notification and recovery. Specifically, FTA-MPI exploits a try/catch model that enables failure localization and transparent recovery of process failures in MPI applications. We demonstrate FTA-MPI with synthetic applications and a molecular dynamics code CoMD, and show that FTA-MPI provides high programmability for users and enables convenient and flexible recovery of process failures.
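The try/catch style of failure handling can be illustrated without MPI at all; the sketch below (plain Python, not FTA-MPI) simulates a process failure with an exception, "detects" it in the except block, and recovers by re-executing the failed work unit. The failure probability and retry limit are arbitrary.

```python
# Illustration only: try/catch-based failure detection, notification and recovery.
import random

class SimulatedProcessFailure(RuntimeError):
    pass

def work_unit(task_id):
    if random.random() < 0.2:                      # 20% chance of a simulated failure
        raise SimulatedProcessFailure(f"task {task_id} lost its process")
    return task_id * task_id

def run_with_recovery(tasks, max_retries=3):
    results = {}
    for task in tasks:
        for attempt in range(1, max_retries + 1):
            try:
                results[task] = work_unit(task)        # "try" block: normal execution
                break
            except SimulatedProcessFailure as err:     # "catch" block: detect, notify...
                print(f"detected failure ({err}), attempt {attempt}, recovering")
        else:
            results[task] = None                       # gave up after max_retries
    return results

if __name__ == "__main__":
    random.seed(1)
    print(run_with_recovery(range(8)))
```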
Using high-performance networks to enable computational aerosciences applications
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1992-01-01
One component of the U.S. Federal High Performance Computing and Communications Program (HPCCP) is the establishment of a gigabit network to provide a communications infrastructure for researchers across the nation. This gigabit network will provide new services and capabilities, in addition to increased bandwidth, to enable future applications. An understanding of these applications is necessary to guide the development of the gigabit network and other high-performance networks of the future. In this paper we focus on computational aerosciences applications run remotely using the Numerical Aerodynamic Simulation (NAS) facility located at NASA Ames Research Center. We characterize these applications in terms of network-related parameters and relate user experiences that reveal limitations imposed by the current wide-area networking infrastructure. Then we investigate how the development of a nationwide gigabit network would enable users of the NAS facility to work in new, more productive ways.
Sofradir latest developments for infrared space detectors
NASA Astrophysics Data System (ADS)
Chorier, Philippe; Delannoy, Anne
2011-06-01
Sofradir is one of the leading companies that develop and produce infrared detectors. Space applications have become a significant activity, and Sofradir now relies on 20 years of experience in the development and production of MCT infrared detectors of the 2nd and 3rd generation for space applications. Thanks to its capabilities and experience, Sofradir is now able to offer high-reliability infrared detectors for space applications. These detectors cover various kinds of applications such as hyperspectral observation, earth observation for meteorological or scientific purposes, and science experiments. In this paper, we present a review of Sofradir's latest developments for infrared space applications. A presentation of Sofradir infrared detectors answering hyperspectral needs from the visible up to the VLWIR waveband will be made. In addition, particular emphasis will be placed on the different programs currently running, with a presentation of the associated results as they relate to performance and qualification for space use.
TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series
NASA Astrophysics Data System (ADS)
Czerwinski, Fabian; Oddershede, Lene B.
2011-02-01
With modern data acquisition devices that work fast and very precisely, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series at MHz sampling rates. Its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: For a photodiode detection system that tracks the position of an optically trapped particle and for a measurement of ionic current through a glass capillary. The program is easy to use and versatile as the input can be any type of analog signal. Also, the data streaming software is simple, highly reliable, and can be easily customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification. Program summary Program title: TimeSeriesStreaming.VI Catalogue identifier: AEHT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 250 No. of bytes in distributed program, including test data, etc.: 63 259 Distribution format: tar.gz Programming language: LabVIEW (http://www.ni.com/labview/) Computer: Any machine running LabVIEW 8.6 or higher Operating system: Windows XP and Windows 7 RAM: 60-360 Mbyte Classification: 3 Nature of problem: For numerous scientific and engineering applications, it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled with high frequencies and possibly for long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods. Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, this program provides extremely reliable data acquisition. In particular, the program is optimized to deal with large amounts of data, e.g., taken with high sampling frequencies and over long time intervals. The program can be easily customized for time series analyses. Restrictions: Only tested in Windows-operating LabVIEW environments, must use TDMS format, acquisition cards must be LabVIEW compatible, driver DAQmx installed. Running time: As desirable: microseconds to hours
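The producer/consumer structure behind such streaming can be sketched independently of LabVIEW; the following toy (not the TimeSeriesStreaming.VI code) has an acquisition thread pushing fixed-size sample blocks through a bounded queue to a writer thread that appends them to a binary file. The synthetic sine signal, block size, and output path are placeholders for real DAQ input.

```python
# Producer/consumer sketch: high-rate sample blocks streamed to disk via a queue.
import numpy as np
import queue, threading

BLOCK = 65536           # samples per block
N_BLOCKS = 32           # total blocks to stream in this demo
OUTFILE = "stream.bin"  # placeholder output path

def producer(q):
    t0 = 0
    for _ in range(N_BLOCKS):
        t = np.arange(t0, t0 + BLOCK) / 1e6          # pretend 1 MHz sampling
        q.put(np.sin(2 * np.pi * 1000 * t).astype(np.float32))
        t0 += BLOCK
    q.put(None)                                      # sentinel: acquisition done

def consumer(q):
    with open(OUTFILE, "wb") as f:
        while True:
            block = q.get()
            if block is None:
                break
            block.tofile(f)                          # append raw float32 samples

if __name__ == "__main__":
    q = queue.Queue(maxsize=8)                       # bounded buffer between threads
    threads = [threading.Thread(target=producer, args=(q,)),
               threading.Thread(target=consumer, args=(q,))]
    for th in threads: th.start()
    for th in threads: th.join()
    print("wrote", N_BLOCKS * BLOCK * 4, "bytes to", OUTFILE)
```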
A guide to State programs for the reclamation of surface mined areas
Imhoff, Edgar A.; Friz, Thomas O.; LaFevers, James R.
1976-01-01
During 1975, inquiries of agencies in each State and review of State statutes and related administrative codes revealed that 38 States have established programs requiring the reclamation of surface-mined lands. Results of analyses of those programs and ancillary data are presented in: (1) a table (matrix) which has been designed for the notation and elaboration of information pertaining to the mined-area reclamation programs of the 50 States; (2) a primer on surface mining activities and related reclamation practices and problems; and (3) a listing of types of non-Federal governmental controls applicable to reclamation. Interpretations of the status and content of State programs suggest that although a common thread runs through State statutory language, administrative requirements vary from State to State in order to meet different natural, economic, social, and political considerations. A general trend is seen in State programs toward requiring the integration of land-use planning and mine planning, with increased local governmental involvement.
ASC Tri-lab Co-design Level 2 Milestone Report 2015
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornung, Rich; Jones, Holger; Keasler, Jeff
2015-09-23
In 2015, the three Department of Energy (DOE) National Laboratories that make up the Advanced Scientific Computing (ASC) Program (Sandia, Lawrence Livermore, and Los Alamos) collaboratively explored performance portability programming environments in the context of several ASC co-design proxy applications as part of a tri-lab L2 milestone executed by the co-design teams at each laboratory. The programming environments that were studied included Kokkos (developed at Sandia), RAJA (LLNL), and Legion (Stanford University). The proxy apps studied included: miniAero, LULESH, CoMD, Kripke, and SNAP. These programming models and proxy-apps are described herein. Each lab focused on a particular combination of abstractions and proxy apps, with the goal of assessing performance portability using those. Performance portability was determined by: a) the ability to run a single application source code on multiple advanced architectures, b) comparing runtime performance between…
A theoretical perspective on running-related injuries.
Gallant, Jodi Lynn; Pierrynowski, Michael Raymond
2014-03-01
The etiology of running-related injuries remains unknown; however, an implicit theory underlies much of the conventional research and practice in the prevention of these injuries. This theory posits that the cause of running-related injuries lies in the high-impact forces experienced when the foot contacts the ground and the subsequent abnormal movement of the subtalar joint. The application of this theory is seen in the design of the modern running shoe, with cushioning, support, and motion control. However, a new theory is emerging that suggests that it is the use of these modern running shoes that has caused a maladaptive running style, which contributes to a high incidence of injury among runners. The suggested application of this theory is to cease use of the modern running shoe and transition to barefoot or minimalist running. This new running paradigm, which is at present inadequately defined, is proposed to avoid the adverse biomechanical effects of the modern running shoe. Future research should rigorously define and then test both theories regarding their ability to discover the etiology of running-related injury. Once discovered, the putative cause of running-related injury will then provide an evidence-based rationale for clinical prevention and treatment.
40 CFR 63.848 - Emission monitoring requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... primary control system to determine compliance with the applicable emission limit. The owner or operator... with the applicable emission limit. The owner or operator must include all valid runs in the quarterly... from at least three runs to determine compliance with the applicable emission limits. The owner or...
NASA Astrophysics Data System (ADS)
Plummer, M.; Armour, E. A. G.; Todd, A. C.; Franklin, C. P.; Cooper, J. N.
2009-12-01
We present a program used to calculate intricate three-particle integrals for variational calculations of solutions to the leptonic Schrödinger equation with two nuclear centres in which inter-leptonic distances (electron-electron and positron-electron) are included directly in the trial functions. The program has been used so far in calculations of He-H¯ interactions and positron-H2 scattering; however, the precisely defined integrals are applicable to other situations. We include a summary discussion of how the program has been optimized from a 'legacy'-type code to a more modern high-performance code with a performance improvement factor of up to 1000. Program summary Program title: tripleint.cc Catalogue identifier: AEEV_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 12 829 No. of bytes in distributed program, including test data, etc.: 91 798 Distribution format: tar.gz Programming language: Fortran 95 (fixed format) Computer: Modern PC (tested on AMD processor) [1], IBM Power5 [2], Cray XT4 [3], similar Operating system: Red Hat Linux [1], IBM AIX [2], UNICOS [3] Has the code been vectorized or parallelized?: Serial (multi-core shared memory may be needed for some large jobs) RAM: Dependent on parameter sizes and option to use intermediate I/O. Estimates for practical use: 0.5-2 GBytes (with intermediate I/O); 1-4 GBytes (all-memory: the preferred option). Classification: 2.4, 2.6, 2.7, 2.9, 16.5, 16.10, 20 Nature of problem: The 'tripleint.cc' code evaluates three-particle integrals needed in certain variational (in particular: Rayleigh-Ritz and generalized-Kohn) matrix elements for solution of the Schrödinger equation with two fixed centres (the solutions may then be used in subsequent dynamic nuclear calculations). Specifically the integrals are defined by Eq. (16) in the main text and contain terms proportional to r_ij×r_ik/r_jk, i≠j, i≠k, j≠k, with r_ij the distance between leptons i and j. The article also briefly describes the performance optimizations used to increase the speed of evaluation of the integrals enough to allow detailed testing and mapping of the effect of varying non-linear parameters in the variational trial functions. Solution method: Each integral is solved using prolate spheroidal coordinates and series expansions (with cut-offs) of the many-lepton expressions. 1-d integrals and sub-integrals are solved analytically by various means (the program automatically chooses the most accurate of the available methods for each set of parameters and function arguments), while two of the three integrations over the prolate spheroidal coordinates 'λ' are carried out numerically. Many similar integrals with identical non-linear variational parameters may be calculated with one call of the code. Restrictions: There are limits to the number of points for the numerical integrations, to the cut-off variable itaumax for the many-lepton series expansions, and to the maximum powers of Slater-like input functions. For runs near the limit of the cut-off variable and with certain small-magnitude values of variational non-linear parameters, the code can require large amounts of memory (an option using some intermediate I/O is included to offset this).
Unusual features: In addition to the program, we also present a summary description of the techniques and ideology used to optimize the code, together with accuracy tests and indications of performance improvement. Running time: The test runs take 1-15 minutes on HPCx [2] as indicated in Section 5 of the main text. A practical run with 729 integrals, 40 quadrature points per dimension and itaumax = 8 took 150 minutes on a PC (e.g., [1]); a similar run with 'medium' accuracy, e.g. for parameter optimization (see Section 2 of the main text), with 30 points per dimension and itaumax = 6 took 35 minutes. References: [1] PC: Memory: 2.72 GB, CPU: AMD Opteron 246 dual-core, 2×2 GHz, OS: GNU/Linux, kernel: Linux 2.6.9-34.0.2.ELsmp. [2] HPCx, IBM eServer 575 running IBM AIX, http://www.hpcx.ac.uk/ (visited May 2009). [3] HECToR, Cray XT4 running UNICOS/lc, http://www.hector.ac.uk/ (visited May 2009).
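As a generic illustration of the numerical step mentioned in the solution method (two of the integrations carried out numerically), the sketch below performs a nested two-dimensional Gauss-Legendre quadrature of a smooth placeholder integrand; it is not the tripleint.cc integrand or algorithm, and the integrand and interval are arbitrary.

```python
# Nested 2D Gauss-Legendre quadrature of a smooth placeholder integrand.
import numpy as np

def gauss_legendre_2d(f, a, b, c, d, n=40):
    """Integrate f(x, y) over [a,b] x [c,d] with an n-point rule per dimension."""
    x, w = np.polynomial.legendre.leggauss(n)        # nodes/weights on [-1, 1]
    xm = 0.5 * (b - a) * x + 0.5 * (b + a)           # map nodes to [a, b]
    ym = 0.5 * (d - c) * x + 0.5 * (d + c)           # map nodes to [c, d]
    total = 0.0
    for xi, wi in zip(xm, w):
        for yj, wj in zip(ym, w):
            total += wi * wj * f(xi, yj)
    return 0.25 * (b - a) * (d - c) * total          # Jacobian of both mappings

if __name__ == "__main__":
    f = lambda x, y: np.exp(-x * y) / (x + y)        # smooth placeholder integrand
    print(gauss_legendre_2d(f, 1.0, 5.0, 1.0, 5.0, n=40))
```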
Smits, Dirk-Wouter; Huisstede, Bionka; Verhagen, Evert; van der Worp, Henk; Kluitenberg, Bas; van Middelkoop, Marienke; Hartgens, Fred; Backx, Frank
2016-11-01
To describe absenteeism and health care utilization (HCU) within 6 weeks after occurrence of running-related injuries (RRIs) among novice runners and to explore differences relating to injury and personal characteristics. Prospective cohort study. Primary care. One thousand six hundred ninety-six novice runners (18-65 years) participating in a 6-week running program ("Start-to-Run"). Injury characteristics were assessed by weekly training logs and personal characteristics by a baseline questionnaire. Data on absenteeism and HCU were collected using questionnaires at 2 and 6 weeks after the RRI occurred. A total of 185 novice runners (11%) reported an RRI during the 6-week program. Of these injured novice runners, 78% reported absence from sports, whereas only 4% reported absence from work. Fifty-one percent of the injured novice runners visited a health care professional, mostly physical therapists (PTs) rather than physicians. Absenteeism was more common among women than men and was also more common with acute RRIs than gradual-onset RRIs. As regards HCU, both the variety of professionals visited and the number of PT visits were higher among runners with muscle-tendon injuries in the ankle/foot region than among those with other RRIs. Among novice runners sustaining an RRI during a 6-week running program, over three quarters reported short-term absence from sports, whereas absence from work was very limited, and over half used professional health care. Both absence and HCU are associated with injury characteristics. In future running promotion programs (eg in Start-to-Run programs), specific attention should be paid to acute injuries and to muscle-tendon injuries in the ankle/foot region.
Injecting Artificial Memory Errors Into a Running Computer Program
NASA Technical Reports Server (NTRS)
Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.
2008-01-01
Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of effect of the SEU on a floating-point value or other program variable.
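The fault model can be illustrated in a few lines, independently of Valgrind; the sketch below (not BITFLIPS itself) flips one random bit in each 64-bit floating-point word selected with a given fault probability, mimicking the probability-based injection mode described above. The probability value is arbitrary.

```python
# Illustration of probabilistic single-bit upsets injected into float64 data.
import numpy as np

def inject_seus(values, fault_probability, rng=None):
    """Flip one random bit in each 64-bit float selected with the given probability."""
    rng = rng or np.random.default_rng()
    bits = values.copy().view(np.uint64)             # reinterpret floats as raw bits
    hit = rng.random(bits.shape) < fault_probability
    bit_index = rng.integers(0, 64, size=bits.shape).astype(np.uint64)
    bits[hit] ^= (np.uint64(1) << bit_index[hit])    # flip one bit per selected word
    return bits.view(np.float64), int(hit.sum())

if __name__ == "__main__":
    data = np.linspace(0.0, 1.0, 10)
    corrupted, n_hits = inject_seus(data, fault_probability=0.3,
                                    rng=np.random.default_rng(42))
    print(f"{n_hits} upsets injected")
    for before, after in zip(data, corrupted):
        if before != after:
            print(f"  {before!r} -> {after!r}")
```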
NASA Technical Reports Server (NTRS)
Loftin, Richard B.
1987-01-01
Turbo Prolog is a recently available, compiled version of the programming language Prolog. Turbo Prolog is designed to provide not only a Prolog compiler, but also a program development environment for the IBM Personal Computer family. An evaluation of Turbo Prolog was made, comparing its features to other versions of Prolog and to the community of languages commonly used in artificial intelligence (AI) research and development. Three programs were employed to determine the execution speed of Turbo Prolog applied to various problems. The results of this evaluation demonstrated that Turbo Prolog can perform much better than many commonly employed AI languages for numerically intensive problems and can equal the speed of development languages such as OPS5+ and CLIPS, running on the IBM PC. Applications for which Turbo Prolog is best suited include those which (1) lend themselves naturally to backward-chaining approaches, (2) require extensive use of mathematics, (3) contain few rules, (4) seek to make use of the window/color graphics capabilities of the IBM PC, and (5) require linkage to programs in other languages to form a complete executable image.
Web-based application on employee performance assessment using exponential comparison method
NASA Astrophysics Data System (ADS)
Maryana, S.; Kurnia, E.; Ruyani, A.
2017-02-01
Employee performance assessment, also called a performance review, performance evaluation, or employee appraisal, is an effort to assess staff achievements with the aim of increasing the productivity of employees and companies. This application supports the assessment of employee performance using five criteria: Presence, Quality of Work, Quantity of Work, Discipline, and Teamwork. The system uses the Exponential Comparison Method and Eckenrode weighting. Calculation results are presented as graphs so that each employee's assessment can be reviewed. The system was developed using Notepad++ as the editor and MySQL as the database. Testing showed that the application conforms to the design and runs properly. The tests conducted were structural testing, functional testing, validation, sensitivity analysis, and SUMI testing.
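One common formulation of the exponential comparison method scores each alternative as the sum over criteria of rating raised to the criterion weight; the sketch below uses that formulation with invented weights and ratings, and it is an assumption that the paper's system (and its Eckenrode-derived weights) follows exactly this variant.

```python
# Hedged sketch of an exponential-comparison-style scoring over five criteria.

CRITERIA = ["presence", "quality", "quantity", "discipline", "teamwork"]
WEIGHTS  = [3, 2, 2, 1, 1]        # hypothetical integer importance weights

# hypothetical ratings on a 1-9 scale, one row per employee
RATINGS = {
    "employee_a": [8, 7, 6, 9, 7],
    "employee_b": [6, 8, 8, 7, 8],
    "employee_c": [9, 5, 7, 6, 6],
}

def exponential_comparison(ratings, weights):
    """Score = sum over criteria of rating ** weight (exponential comparison)."""
    return {name: sum(r ** w for r, w in zip(row, weights))
            for name, row in ratings.items()}

if __name__ == "__main__":
    scores = exponential_comparison(RATINGS, WEIGHTS)
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name:12s} {score}")
```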
75 FR 12521 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-16
...On February 26, 2010, the Department of Education published a 30-day comment period notice in the Federal Register (Page 8928, Column 3) seeking public comment for an information collection entitled, ``Native American Career and Technical Education Program (NACTEP)''. This notice is hereby cancelled. NACTEP 1830-0542 is the application portion of the NACTEP grant. The application does not need extension as it is the performance reporting stage of the grant. The performance report will need its own OMB number and run under a full clearance with a 60-day/30-day public comment period. The application will be discontinued until reinstatement in 2012. The Acting Director, Information Collection Clearance Division, Regulatory Information Management Services, Office of Management, hereby issues a correction notice as required by the Paperwork Reduction Act of 1995.
Running Start: 2000-01 Annual Progress Report.
ERIC Educational Resources Information Center
Hanson, Sally Zeiger
This document is a report on Washington State's Running Start program, which allows eleventh- and twelfth-grade high school students to take college courses for free at any of the 34 state community and technical colleges or at Washington State, Eastern Washington, or Western Washington universities. The program, which was started in 1990, is…
Fourth-Grade Students' Motivational Changes in an Elementary Physical Education Running Program
ERIC Educational Resources Information Center
Xiang, Ping; McBride, Ron E.; Bruene, April
2006-01-01
Achievement goal theory and the expectancy-value model of achievement choice were used to examine fourth-grade students' motivational changes in an elementary physical education running program. In fall and spring of the school year, participants (N = 113; 66 boys, 47 girls) completed questionnaires assessing achievement goals, expectancy beliefs,…
Space science for applications - The history of Landsat
NASA Technical Reports Server (NTRS)
Mach, P. E.
1981-01-01
The history of the Landsat project is discussed in terms of three historical phases, each characterized by a dominant problem. From 1964 to 1967, the challenge was to develop interagency cooperation and to achieve consensus on basic plans for the satellite. Between 1968 and 1971, the cooperating agencies had to persuade the Bureau of the Budget to provide funding for the project. Since 1972, the challenge to NASA has been to encourage applications of the Landsat data and plan the shift from an experimental program to an operational one. The tension between experimental and operational goals has run through all these phases, and the conflicts between agencies are detailed, as well as the interaction between technological and political systems.
The Chimera II Real-Time Operating System for advanced sensor-based control applications
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1992-01-01
Attention is given to the Chimera II Real-Time Operating System, which has been developed for advanced sensor-based control applications. The Chimera II provides a high-performance real-time kernel and a variety of IPC features. The hardware platform required to run Chimera II consists of commercially available hardware, and allows custom hardware to be easily integrated. The design allows it to be used with almost any type of VMEbus-based processors and devices. It allows radically differing hardware to be programmed using a common system, thus providing a first and necessary step towards the standardization of reconfigurable systems that results in a reduction of development time and cost.
RiboSketch: Versatile Visualization of Multi-stranded RNA and DNA Secondary Structure.
Lu, Jacob S; Bindewald, Eckart; Kasprzak, Wojciech; Shapiro, Bruce A
2018-06-15
Creating clear, visually pleasing 2D depictions of RNA and DNA strands and their interactions is important to facilitate and communicate insights related to nucleic acid structure. Here we present RiboSketch, a secondary structure image production application that enables the visualization of multistranded structures via layout algorithms, comprehensive editing capabilities, and a multitude of simulation modes. These interactive features allow RiboSketch to create publication quality diagrams for structures with a wide range of composition, size, and complexity. The program may be run in any web browser without the need for installation, or as a standalone Java application. https://binkley2.ncifcrf.gov/users/bindewae/ribosketch_web.
Cloud-based robot remote control system for smart factory
NASA Astrophysics Data System (ADS)
Wu, Zhiming; Li, Lianzhong; Xu, Yang; Zhai, Jingmei
2015-12-01
With the development of internet technologies and the wide application of robots, there is a trend toward integration between networks and robots. A cloud-based robot remote control system over networks for smart factories is proposed, which enables remote users to control robots and thereby realize intelligent production. To achieve this, a three-layer system architecture is designed, comprising a user layer, a service layer and a physical layer. The remote control applications running on the cloud server are developed on Microsoft Azure. Moreover, DIV+CSS technologies are used to design the human-machine interface to lower maintenance cost and improve development efficiency. Finally, an experiment is implemented to verify the feasibility of the program.
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
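The work-farm pattern described above can be rendered as a loose sketch in ordinary multiprocessing terms (the MPPA uses hardware channels and RISC processor objects, not Python): one input queue feeds a parallel set of identical workers, and a single output queue collects their results.

```python
# Work-farm sketch: one input stream, parallel workers, one output stream.
import multiprocessing as mp

def worker(inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:                       # poison pill: shut this worker down
            break
        outbox.put((item, item * item))        # placeholder "processing" step

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    n_workers = 4
    workers = [mp.Process(target=worker, args=(inbox, outbox)) for _ in range(n_workers)]
    for w in workers:
        w.start()
    jobs = list(range(20))
    for job in jobs:                           # feed the single input stream
        inbox.put(job)
    for _ in workers:                          # one poison pill per worker
        inbox.put(None)
    results = dict(outbox.get() for _ in jobs) # drain the single output stream
    for w in workers:
        w.join()
    print(results)
```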
NASA Technical Reports Server (NTRS)
Wright, Jeffrey; Thakur, Siddharth
2006-01-01
Loci-STREAM is an evolving computational fluid dynamics (CFD) software tool for simulating possibly chemically reacting, possibly unsteady flows in diverse settings, including rocket engines, turbomachines, oil refineries, etc. Loci-STREAM implements a pressure- based flow-solving algorithm that utilizes unstructured grids. (The benefit of low memory usage by pressure-based algorithms is well recognized by experts in the field.) The algorithm is robust for flows at all speeds from zero to hypersonic. The flexibility of arbitrary polyhedral grids enables accurate, efficient simulation of flows in complex geometries, including those of plume-impingement problems. The present version - Loci-STREAM version 0.9 - includes an interface with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library for access to enhanced linear-equation-solving programs therein that accelerate convergence toward a solution. The name "Loci" reflects the creation of this software within the Loci computational framework, which was developed at Mississippi State University for the primary purpose of simplifying the writing of complex multidisciplinary application programs to run in distributed-memory computing environments including clusters of personal computers. Loci has been designed to relieve application programmers of the details of programming for distributed-memory computers.
Educational aspects of molecular simulation
NASA Astrophysics Data System (ADS)
Allen, Michael P.
This article addresses some aspects of teaching simulation methods to undergraduates and graduate students. Simulation is increasingly a cross-disciplinary activity, which means that the students who need to learn about simulation methods may have widely differing backgrounds. Also, they may have a wide range of views on what constitutes an interesting application of simulation methods. Almost always, a successful simulation course includes an element of practical, hands-on activity: a balance always needs to be struck between treating the simulation software as a 'black box', and becoming bogged down in programming issues. With notebook computers becoming widely available, students often wish to take away the programs to run themselves, and access to raw computer power is not the limiting factor that it once was; on the other hand, the software should be portable and, if possible, free. Examples will be drawn from the author's experience in three different contexts. (1) An annual simulation summer school for graduate students, run by the UK CCP5 organization, in which practical sessions are combined with an intensive programme of lectures describing the methodology. (2) A molecular modelling module, given as part of a doctoral training centre in the Life Sciences at Warwick, for students who might not have a first degree in the physical sciences. (3) An undergraduate module in Physics at Warwick, also taken by students from other disciplines, teaching high performance computing, visualization, and scripting in the context of a physical application such as Monte Carlo simulation.
Semantic Web Infrastructure Supporting NextFrAMES Modeling Platform
NASA Astrophysics Data System (ADS)
Lakhankar, T.; Fekete, B. M.; Vörösmarty, C. J.
2008-12-01
Emerging modeling frameworks offer modelers new ways to develop model applications by providing a wide range of software components that handle common modeling tasks such as managing space and time, distributing computational tasks in parallel processing environments, performing input/output, and providing diagnostic facilities. NextFrAMES, the next-generation update to the Framework for Aquatic Modeling of the Earth System, originally developed at the University of New Hampshire and currently hosted at The City College of New York, takes a step further by hiding most of these services from the modeler behind a platform-agnostic modeling platform that allows scientists to focus on the implementation of scientific concepts, through a new modeling markup language and a minimalist application programming interface that provides the means to implement model processes. At the core of the NextFrAMES modeling platform is a run-time engine that interprets the modeling markup language, loads the module plugins, establishes the model I/O, and executes the model defined by the modeling XML and the accompanying plugins. The current implementation of the run-time engine is designed for single-processor or symmetric multiprocessing (SMP) systems, but future implementations of the run-time engine optimized for different hardware architectures are anticipated. The modeling XML and the accompanying plugins define the model structure and the computational processes in a highly abstract manner, which is not only suitable for the run-time engine but also has the potential to integrate into semantic web infrastructure, where intelligent parsers can extract information about the model configuration, such as input/output requirements, applicable space and time scales, and underlying modeling processes. The NextFrAMES run-time engine itself is also designed to tap into web-enabled data services directly; therefore it can be incorporated into complex workflows to implement End-to-End applications from observation to the delivery of highly aggregated information. Our presentation will discuss the web services, ranging from OpenDAP and WaterOneFlow data services to metadata provided through catalog services, that could serve NextFrAMES modeling applications. We will also discuss the support infrastructure needed to streamline the integration of NextFrAMES into an End-to-End application to deliver highly processed information to end users. The End-to-End application will be demonstrated through examples from the State of the Global Water System effort, which builds on data services provided through WMO's Global Terrestrial Network for Hydrology to deliver water-resources-related information to policy makers for better water management. Key components of this E2E system are promoted as Community of Practice examples for the Global Observing System of Systems; therefore the State of the Global Water System can be viewed as a test case for the interoperability of the incorporated web service components.
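The division of labour between markup and plugins can be sketched very loosely as follows; the element names, plugin registry, and toy water-balance model below are invented for illustration and do not reflect the actual NextFrAMES markup, API, or processes.

```python
# Toy "run-time engine": read an XML model description and dispatch to plugins.
import xml.etree.ElementTree as ET

MODEL_XML = """
<model name="toy_water_balance">
    <process plugin="precipitation" amount="5.0"/>
    <process plugin="evaporation"   amount="2.0"/>
    <process plugin="runoff"        fraction="0.3"/>
</model>
"""

# plugin registry: each plugin transforms the model state dictionary
PLUGINS = {
    "precipitation": lambda state, amount: state.update(storage=state["storage"] + float(amount)),
    "evaporation":   lambda state, amount: state.update(storage=state["storage"] - float(amount)),
    "runoff":        lambda state, fraction: state.update(storage=state["storage"] * (1 - float(fraction))),
}

def run(model_xml):
    state = {"storage": 10.0}                     # arbitrary initial state
    for process in ET.fromstring(model_xml):      # interpret the markup...
        plugin = PLUGINS[process.attrib.pop("plugin")]
        plugin(state, **process.attrib)           # ...and dispatch to the plugin
    return state

if __name__ == "__main__":
    print(run(MODEL_XML))                         # {'storage': ~9.1}
```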
Van Metre, P.C.
1990-01-01
A computer-program interface between a geographic-information system and a groundwater flow model links two unrelated software systems for use in developing the flow models. The interface program allows the modeler to compile and manage geographic components of a groundwater model within the geographic information system. A significant savings of time and effort is realized in developing, calibrating, and displaying the groundwater flow model. Four major guidelines were followed in developing the interface program: (1) no changes to the groundwater flow model code were to be made; (2) a data structure was to be designed within the geographic information system that follows the same basic data structure as the groundwater flow model; (3) the interface program was to be flexible enough to support all basic data options available within the model; and (4) the interface program was to be as efficient as possible in terms of computer time used and online-storage space needed. Because some programs in the interface are written in control-program language, the interface will run only on a computer with the PRIMOS operating system. (USGS)
Tong, Tom K; Fu, Frank H; Eston, Roger; Chung, Pak-Kwong; Quach, Binh; Lu, Kui
2010-11-01
This study examined the hypothesis that chronic (training) and acute (warm-up) loaded ventilatory activities applied to the inspiratory muscles (IM) in an integrated manner would augment the training volume of an interval running program. This in turn would result in additional improvement in the maximum performance of the Yo-Yo intermittent recovery test in comparison with interval training alone. Eighteen male nonprofessional athletes were allocated to either an inspiratory muscle loading (IML) group or control group. Both groups participated in a 6-week interval running program consisting of 3-4 workouts (1-3 sets of various repetitions of selected distance [100-2,400 m] per workout) per week. For the IML group, 4-week IM training (30 inspiratory efforts at 50% maximal static inspiratory pressure [P0] per set, 2 sets·d⁻¹, 6 d·wk⁻¹) was applied before the interval program. Specific IM warm-up (2 sets of 30 inspiratory efforts at 40% P0) was performed before each workout of the program. For the control group, neither form of IML was applied. In comparison with the control group, the interval training volume as indicated by the repeatability of running bouts at high intensity was approximately 27% greater in the IML group. A greater increase in the maximum performance of the Yo-Yo test (control: 16.9 ± 5.5%; IML: 30.7 ± 4.7% baseline value) was also observed after training. The enhanced exercise performance was partly attributable to the greater reductions in the sensation of breathlessness and whole-body metabolic stress during the Yo-Yo test. These findings show that the combination of chronic and acute IML into a high-intensity interval running program is a beneficial training strategy for enhancing the tolerance to high-intensity intermittent bouts of running.
NASA Astrophysics Data System (ADS)
Petersen, T. C.; Ringer, S. P.
2010-03-01
Upon discerning the mere shape of an imaged object, as portrayed by projected perimeters, the full three-dimensional scattering density may not be of particular interest. In this situation considerable simplifications to the reconstruction problem are possible, allowing calculations based upon geometric principles. Here we describe and provide an algorithm which reconstructs the three-dimensional morphology of specimens from tilt series of images for application to electron tomography. Our algorithm uses a differential approach to infer the intersection of projected tangent lines with surfaces which define boundaries between regions of different scattering densities within and around the perimeters of specimens. Details of the algorithm implementation are given and explained using reconstruction calculations from simulations, which are built into the code. An experimental application of the algorithm to a nano-sized Aluminium tip is also presented to demonstrate practical analysis for a real specimen. Program summary Program title: STOMO version 1.0 Catalogue identifier: AEFS_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEFS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2988 No. of bytes in distributed program, including test data, etc.: 191 605 Distribution format: tar.gz Programming language: C/C++ Computer: PC Operating system: Windows XP RAM: Depends upon the size of experimental data as input, ranging from 200 MB to 1.5 GB Supplementary material: Sample output files, for the test run provided, are available. Classification: 7.4, 14 External routines: Dev-C++ (http://www.bloodshed.net/devcpp.html) Nature of problem: Electron tomography of specimens for which conventional back projection may fail and/or data for which there is a limited angular range. The algorithm does not solve the tomographic back-projection problem but rather reconstructs the local 3D morphology of surfaces defined by varied scattering densities. Solution method: Reconstruction using differential geometry applied to image analysis computations. Restrictions: The code has only been tested with square images and has been developed for only single-axis tilting. Running time: For high quality reconstruction, 5-15 min
The recovery of running ability in an adolescent male after traumatic brain injury: a case study.
Moriello, Gabriele; Frear, Matthew; Seaburg, Kristin
2009-06-01
The purpose of this case study was to document outcomes after a rehabilitation program in an adolescent male after traumatic brain injury. Three years after sustaining an injury in a skiing accident, a 17-year-old boy participated in a rehabilitation program with the goal of acquiring the ability to run one mile with his peers. On initial evaluation, the individual had significant left lower extremity weakness, impaired standing balance, limited endurance, and running limitations. He was able to run 10 m wearing a plastic ankle-foot orthosis on the left side but required supervision for safety. The intervention included strength training once weekly for 17 weeks, body weight-supported, treadmill-based locomotor training once weekly for 15 weeks followed by a combination of overground locomotor training and strengthening exercise once weekly for six weeks. After the intervention, muscle strength of the lower extremities increased and the individual was able to run one mile independently. The quality of his running improved, with better mechanics to absorb forces at impact during the absorption phase and increased lower extremity extension during the propulsion phase. A rehabilitation program consisting of strengthening and locomotor training improved running speed, quality, and endurance in an adolescent male after traumatic brain injury. He was able to progress to a less restrictive carbon fiber brace as a result of gains in lower extremity strength. This change in ability allowed him to participate in physical education by running on a track and playing softball with his peers.
Code C# for chaos analysis of relativistic many-body systems
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Felea, D.; Stan, E.; Esanu, T.
2010-08-01
This work presents a new Microsoft Visual C# .NET code library, conceived as a general object oriented solution for chaos analysis of three-dimensional, relativistic many-body systems. In this context, we implemented the Lyapunov exponent and the “fragmentation level” (defined using the graph theory and the Shannon entropy). Inspired by existing studies on billiard nuclear models and clusters of galaxies, we tried to apply the virial theorem for a simplified many-body system composed by nucleons. A possible application of the “virial coefficient” to the stability analysis of chaotic systems is also discussed. Catalogue identifier: AEGH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEGH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30 053 No. of bytes in distributed program, including test data, etc.: 801 258 Distribution format: tar.gz Programming language: Visual C# .NET 2005 Computer: PC Operating system: .Net Framework 2.0 running on MS Windows Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread RAM: 128 Megabytes Classification: 6.2, 6.5 External routines: .Net Framework 2.0 Library Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems. Solution method: Second order Runge-Kutta algorithm for simulating relativistic many-body systems. Object oriented solution, easy to reuse, extend and customize, in any development environment which accepts .Net assemblies or COM components. Implementation of: Lyapunov exponent, “fragmentation level”, “average system radius”, “virial coefficient”, and energy conservation precision test. Additional comments: Easy copy/paste based deployment method. Running time: Quadratic complexity.
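To make the first of these indicators concrete, the sketch below (not the C# library) estimates the Lyapunov exponent of the one-dimensional logistic map as the orbit average of log|f'(x)|; the relativistic many-body case in the paper instead tracks the divergence of nearby trajectories, so this is only an illustration of the quantity being computed.

```python
# Lyapunov exponent of the logistic map x -> r*x*(1-x) as an orbit average of log|f'(x)|.
import math

def lyapunov_logistic(r, x0=0.4, n_transient=1000, n_samples=100_000):
    x = x0
    for _ in range(n_transient):       # discard transient behaviour
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_samples):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)|
    return total / n_samples

if __name__ == "__main__":
    for r in (2.8, 3.5, 4.0):
        print(f"r = {r}: lambda ~ {lyapunov_logistic(r):+.3f}")
    # negative for the periodic regimes (r = 2.8, 3.5), ~ +ln 2 ~ +0.693 at r = 4.0
```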
Projection display technology for avionics applications
NASA Astrophysics Data System (ADS)
Kalmanash, Michael H.; Tompkins, Richard D.
2000-08-01
Avionics displays often require custom image sources tailored to demanding program needs. Flat panel devices are attractive for cockpit installations, however recent history has shown that it is not possible to sustain a business manufacturing custom flat panels in small volume specialty runs. As the number of suppliers willing to undertake this effort shrinks, avionics programs unable to utilize commercial-off-the-shelf (COTS) flat panels are placed in serious jeopardy. Rear projection technology offers a new paradigm, enabling compact systems to be tailored to specific platform needs while using a complement of COTS components. Projection displays enable improved performance, lower cost and shorter development cycles based on inter-program commonality and the wide use of commercial components. This paper reviews the promise and challenges of projection technology and provides an overview of Kaiser Electronics' efforts in developing advanced avionics displays using this approach.
Asynchronous Messaging and Data Transfer in a Spacecraft: An Implementation
NASA Technical Reports Server (NTRS)
Moholt, Joseph M.
2005-01-01
Data transfer and messaging are an important part of a spacecraft. Creating a standard protocol for messaging that can be used for a variety of applications is an extremely beneficial project at the Jet Propulsion Laboratory (JPL). The Asynchronous Messaging Service (AMS) is a protocol outlining how subsystems initialize and conduct communication between each other. There are currently two implementations of AMS in the works. At JPL, my task is to get a working implementation of AMS onto VxWorks as a proof of concept. An Autocoder, a program used to convert visually created state chart diagrams to C++, has also been created to accomplish a part of the implementation. I was assigned to make the program portable to any Unix-type environment. Lastly, I was to develop a program to demonstrate messaging between two FireWire cards running VxWorks.

BossPro: a biometrics-based obfuscation scheme for software protection
NASA Astrophysics Data System (ADS)
Kuseler, Torben; Lami, Ihsan A.; Al-Assam, Hisham
2013-05-01
This paper proposes to integrate biometric-based key generation into an obfuscated interpretation algorithm to protect authentication application software from illegitimate use or reverse-engineering. This is especially necessary for mCommerce because application programmes on mobile devices, such as Smartphones and Tablet-PCs, are typically open for misuse by hackers. Therefore, the scheme proposed in this paper ensures that a correct interpretation / execution of the obfuscated program code of the authentication application requires a valid biometric generated key of the actual person to be authenticated, in real-time. Without this key, the real semantics of the program cannot be understood by an attacker even if he/she gains access to this application code. Furthermore, the security provided by this scheme can be a vital aspect in protecting any application running on mobile devices that are increasingly used to perform business/financial or other security related applications, but are easily lost or stolen. The scheme starts by creating a personalised copy of any application based on the biometric key generated during an enrolment process with the authenticator, as well as a nonce created at the time of communication between the client and the authenticator. The obfuscated code is then shipped to the client's mobile device and integrated with real-time biometric extracted data of the client to form the unlocking key during execution. The novelty of this scheme is achieved by the close binding of this application program to the biometric key of the client, thus making this application unusable for others. Trials and experimental results on biometric key generation, based on clients' faces, and an implemented scheme prototype, based on the Android emulator, prove the concept and novelty of this proposed scheme.
NASA Astrophysics Data System (ADS)
Patanè, Domenico; Ferrari, Ferruccio; Giampiccolo, Elisabetta; Gresta, Stefano
Few automated data acquisition and processing systems operate on mainframes; some run on UNIX-based workstations and others on personal computers, equipped with either DOS/WINDOWS or UNIX-derived operating systems. Several large and complex software packages for automatic and interactive analysis of seismic data have been developed in recent years (mainly for UNIX-based systems). Some of these programs use a variety of artificial intelligence techniques. The first operational version of a new software package, named PC-Seism, for analyzing seismic data from a local network is presented in Patanè et al. (1999). This package, composed of three separate modules, provides an example of a new generation of visual object-oriented programs for interactive and automatic seismic data processing running on a personal computer. In this work, we mainly discuss the automatic procedures implemented in the ASDP (Automatic Seismic Data-Processing) module and its real-time application to data acquired by a seismic network running in eastern Sicily. This software uses a multi-algorithm approach and a new procedure, MSA (multi-station analysis), for signal detection, phase grouping, and event identification and location. It is designed for efficient and accurate processing of local earthquake records provided by single-site and array stations. Results from ASDP processing of two different data sets recorded at Mt. Etna volcano by a regional network are analyzed to evaluate its performance. By comparing the ASDP pickings with those revised manually, the detection and subsequently the location capabilities of this software are assessed. The first data set is composed of 330 local earthquakes recorded in the Mt. Etna area during 1997 by the telemetry analog seismic network. The second data set comprises about 970 automatic locations of more than 2600 local events recorded at Mt. Etna during the last eruption (July 2001) by the present network. For the former data set, a comparison of the automatic results with the manual picks indicates that the ASDP module can accurately pick 80% of the P-waves and 65% of the S-waves. The on-line application to the latter data set shows that automatic locations are affected by larger errors, due to the preliminary setting of the configuration parameters in the program. However, both automatic ASDP and manual hypocenter locations are comparable within the estimated error bounds. New improvements of the PC-Seism software for on-line analysis are also discussed.
Pybus -- A Python Software Bus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavrijsen, Wim T.L.P.
2004-10-14
A software bus, just like its hardware equivalent, allows for the discovery, installation, configuration, loading, unloading, and run-time replacement of software components, as well as channeling of inter-component communication. Python, a popular open-source programming language, encourages a modular design of software written in it, but it offers little or no component functionality. However, the language and its interpreter provide sufficient hooks to implement a thin, integral layer of component support. This functionality can be presented to the developer in the form of a module, making it very easy to use. This paper describes a Python module, PyBus, with which the concept of a "software bus" can be realized in Python. It demonstrates, within the context of the ATLAS software framework Athena, how PyBus can be used for the installation and (run-time) configuration of software, not necessarily Python modules, from a Python application in a way that is transparent to the end-user.
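As a rough illustration of the "software bus" idea (a sketch of the concept only, not the PyBus interface), the snippet below keeps a registry of named components, loads them as ordinary Python modules, supports run-time replacement via reload, and channels inter-component messages through named channels.

```python
# Sketch of a minimal software-bus module (concept only; not the PyBus API).
import importlib

class Bus:
    def __init__(self):
        self._components = {}   # name -> loaded module
        self._channels = {}     # channel -> list of subscriber callbacks

    def load(self, name, module_path):
        """Load (or run-time replace) a component given its importable module path."""
        module = importlib.import_module(module_path)
        if name in self._components:
            module = importlib.reload(module)   # replace an already-loaded component
        self._components[name] = module
        return module

    def unload(self, name):
        self._components.pop(name, None)

    def subscribe(self, channel, callback):
        self._channels.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        """Channel inter-component communication through the bus."""
        for callback in self._channels.get(channel, []):
            callback(message)

bus = Bus()
math_component = bus.load("math", "math")     # any importable module can act as a component
bus.subscribe("log", print)
bus.publish("log", f"sqrt(2) = {math_component.sqrt(2):.4f}")
```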
Wind Energy Conversion System Analysis Model (WECSAM) computer program documentation
NASA Astrophysics Data System (ADS)
Downey, W. T.; Hendrick, P. L.
1982-07-01
Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a data base containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any data-base value written to a scratch file prior to the technical or economic evaluation.
Certification trails and software design for testability
NASA Technical Reports Server (NTRS)
Sullivan, Gregory F.; Wilson, Dwight S.; Masson, Gerald M.
1993-01-01
Design techniques which may be applied to make program testing easier were investigated. Methods for modifying a program to generate additional data, which we refer to as a certification trail, are presented. This additional data is designed to allow the program output to be checked more quickly and effectively. Certification trails were described primarily from a theoretical perspective. A comprehensive attempt to assess experimentally the performance and overall value of the certification trail method is reported. The method was applied to nine fundamental, well-known algorithms for the following problems: convex hull, sorting, Huffman tree, shortest path, closest pair, line segment intersection, longest increasing subsequence, skyline, and Voronoi diagram. Run-time performance data for each of these problems is given, and selected problems are described in more detail. Our results indicate that there are many cases in which certification trails allow for significantly faster overall program execution time than a 2-version programming approach, and also give further evidence of the breadth of applicability of this method.
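A concrete, simplified example of the certification-trail idea for the sorting case: the program emits, alongside its output, the permutation it applied, and a checker then verifies the output against the trail instead of re-running a second independent version. This is an illustrative sketch, not the paper's implementation.

```python
# Sketch: sorting with a certification trail (the applied permutation), plus a fast checker.
def sort_with_trail(values):
    order = sorted(range(len(values)), key=lambda i: values[i])  # trail: permutation indices
    return [values[i] for i in order], order

def check_trail(values, output, trail):
    """Accept only if 'trail' is a permutation mapping 'values' onto a sorted 'output'."""
    if sorted(trail) != list(range(len(values))):   # permutation check (linear with counting)
        return False
    if any(output[k] != values[trail[k]] for k in range(len(values))):
        return False
    return all(output[k] <= output[k + 1] for k in range(len(output) - 1))

data = [5, 1, 4, 2, 3]
out, trail = sort_with_trail(data)
assert check_trail(data, out, trail)
```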
Massively parallel implementation of 3D-RISM calculation with volumetric 3D-FFT.
Maruyama, Yutaka; Yoshida, Norio; Tadano, Hiroto; Takahashi, Daisuke; Sato, Mitsuhisa; Hirata, Fumio
2014-07-05
A new three-dimensional reference interaction site model (3D-RISM) program for massively parallel machines combined with the volumetric 3D fast Fourier transform (3D-FFT) was developed and tested on the RIKEN K supercomputer. The ordinary parallel 3D-RISM program has a limitation on the degree of parallelization because of the limitations of the slab-type 3D-FFT. The volumetric 3D-FFT relieves this limitation drastically. We tested the 3D-RISM calculation on a large, fine calculation cell (2048^3 grid points) on 16,384 nodes, each having eight CPU cores. The new 3D-RISM program achieved excellent parallel scalability running on the RIKEN K supercomputer. As a benchmark application, we employed the program, combined with molecular dynamics simulation, to analyze the oligomerization process of a chymotrypsin inhibitor 2 mutant. The results demonstrate that the massively parallel 3D-RISM program is effective for analyzing the hydration properties of large biomolecular systems. Copyright © 2014 Wiley Periodicals, Inc.
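The parallelization limit mentioned above can be made concrete with a little arithmetic. This is a sketch under the usual decomposition assumptions, not a figure from the paper: a slab decomposition of an N^3 grid assigns whole planes to processes, so at most N processes are usable, while a volumetric (pencil) decomposition distributes lines and admits roughly N^2.

```python
# Rough upper bounds on usable processes for an N^3 FFT grid (illustrative only).
N = 2048                       # grid points per dimension, as in the reported calculation
slab_limit = N                 # at most one xy-plane per process
pencil_limit = N * N           # at most one x-line (pencil) per process
print(f"slab decomposition:  <= {slab_limit:,} processes")
print(f"volumetric (pencil): <= {pencil_limit:,} processes")
print(f"cores used in the reported run: {16_384 * 8:,}")
```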
Optimum Vehicle Component Integration with InVeST (Integrated Vehicle Simulation Testbed)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, W; Paddack, E; Aceves, S
2001-12-27
We have developed an Integrated Vehicle Simulation Testbed (InVeST). InVeST is based on the concept of co-simulation, and it allows the development of virtual vehicles that can be analyzed and optimized as an overall integrated system. The virtual vehicle is defined by selecting different vehicle components from a component library. Vehicle component models can be written in multiple programming languages running on different computer platforms. At the same time, InVeST provides full protection for proprietary models. Co-simulation is a cost-effective alternative to competing methodologies, such as developing a translator or selecting a single programming language for all vehicle components. InVeST has recently been demonstrated using a transmission model and a transmission controller model. The transmission model was written in SABER and ran on a Sun/Solaris workstation, while the transmission controller was written in MATRIXx and ran on a PC running Windows NT. The demonstration was performed successfully. Future plans include applying co-simulation and InVeST to the analysis and optimization of multiple complex systems, including those of Intelligent Transportation Systems.
Exploiting variability for energy optimization of parallel programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavrijsen, Wim; Iancu, Costin; de Jong, Wibe
2016-04-18
In this paper we present optimizations that use DVFS mechanisms to reduce the total energy usage in scientific applications. Our main insight is that noise is intrinsic to large-scale parallel executions and it appears whenever shared resources are contended. The presence of noise allows us to identify and manipulate any program regions amenable to DVFS. When compared to previous energy optimizations that make per-core decisions using predictions of the running time, our scheme uses a qualitative approach to recognize the signature of executions amenable to DVFS. By recognizing the "shape of variability" we can optimize codes with highly dynamic behavior, which pose challenges to all existing DVFS techniques. We validate our approach using offline and online analyses for one-sided and two-sided communication paradigms. We have applied our methods to NWChem, and we show best-case improvements in energy use of 12% at no loss in performance when using online optimizations running on 720 Haswell cores with one-sided communication. With NWChem on MPI two-sided and offline analysis, capturing the initialization, we find energy savings of up to 20%, with less than 1% performance cost.
Windfield and trajectory models for tornado-propelled objects. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redmann, G.H.; Radbill, J.R.; Marte, J.E.
1983-03-01
This is the final report of a three-phased research project to develop a six-degree-of-freedom mathematical model to predict the trajectories of tornado-propelled objects. The model is based on the meteorological, aerodynamic, and dynamic processes that govern the trajectories of missiles in a tornadic windfield. The aerodynamic coefficients for the postulated missiles were obtained from full-scale wind tunnel tests on a 12-inch pipe and a car and from drop tests. Rocket sled tests were run whereby the 12-inch pipe and car were injected into a worst-case tornado windfield in order to verify the trajectory model. To simplify and facilitate the use of the trajectory model for design applications without having to run the computer program, this report gives the trajectory data for NRC-postulated missiles in tables based on given variables of initial conditions of injection and tornado windfield. Complete descriptions of the tornado windfield and trajectory models are presented. The trajectory model computer program is also included for those desiring to perform trajectory or sensitivity analyses beyond those included in the report or for those wishing to examine other missiles and use other variables.
PREMER: a Tool to Infer Biological Networks.
Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R
2017-10-04
Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features - such as distinguishing between direct and indirect interactions or determining the direction of a causal link - requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
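To illustrate the information-theoretic criterion at the heart of such tools, the toy sketch below estimates pairwise mutual information from binned data and keeps the strongest links as a candidate network. It is a generic illustration with synthetic data and an ad hoc threshold, not PREMER's FORTRAN core or its entropy-reduction procedure.

```python
# Toy pairwise mutual-information network inference from binned data (illustration only).
import numpy as np

def mutual_information(x, y, bins=8):
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = 2.0 * a + rng.normal(scale=0.3, size=500)   # b depends on a
c = rng.normal(size=500)                        # c is independent
data = {"a": a, "b": b, "c": c}

edges = {(u, v): mutual_information(data[u], data[v])
         for u in data for v in data if u < v}
network = [pair for pair, mi in edges.items() if mi > 0.2]   # ad hoc threshold
print(edges, network)
```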
40 CFR 86.1237-96 - Dynamometer runs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Dynamometer runs. 86.1237-96 Section 86.1237-96 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Methanol-Fueled Heavy-Duty Vehicles § 86.1237-96 Dynamometer runs. Section 86.1237-96 includes text that...
40 CFR 86.1237-96 - Dynamometer runs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 19 2010-07-01 2010-07-01 false Dynamometer runs. 86.1237-96 Section 86.1237-96 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS... Methanol-Fueled Heavy-Duty Vehicles § 86.1237-96 Dynamometer runs. Section 86.1237-96 includes text that...
Global Reference Atmosphere Model (GRAM)
NASA Technical Reports Server (NTRS)
Woodrum, A. W.
1989-01-01
GRAM series of four-dimensional atmospheric models validated by years of data. Original GRAM program still available. More current are GRAM 86, which includes atmospheric data from 1986 and runs on DEC VAX, and GRAM 88, which runs on IBM 3084. Program generates altitude profiles of atmospheric parameters along any simulated trajectory through atmosphere, and also useful for global circulation and diffusion studies.
Before-School Running/Walking Club and Student Physical Activity Levels: An Efficacy Study
ERIC Educational Resources Information Center
Stylianou, Michalis; van der Mars, Hans; Kulinna, Pamela Hodges; Adams, Marc A.; Mahar, Matthew; Amazeen, Eric
2016-01-01
Purpose: Before-school programs, one of the least studied student-related comprehensive school physical activity program (CSPAP) components, may be a promising strategy to help youth meet the physical activity (PA) guidelines. This study's purpose was to examine: (a) how much PA children accrued during a before-school running/walking club and…
Development and deployment of a water-crop-nutrient simulation model embedded in a web application
NASA Astrophysics Data System (ADS)
Langella, Giuliano; Basile, Angelo; Coppola, Antonio; Manna, Piero; Orefice, Nadia; Terribile, Fabio
2016-04-01
Scientific research on environmental and agricultural issues has long devoted substantial effort to the development and application of models for prediction and simulation in spatial and temporal domains. This is done by studying and observing natural processes (e.g. rainfall, water and chemical transport in soils, crop growth) whose spatiotemporal behavior can be reproduced, for instance, to predict irrigation and fertilizer requirements and yield quantities/qualities. In this work a mechanistic model to simulate water flow and solute transport in the soil-plant-atmosphere continuum is presented. This desktop computer program was written according to the specific requirements of developing web applications. The model addresses the following issues together: (a) water balance and (b) solute transport; (c) crop modelling; (d) GIS interoperability; (e) embedability in web-based geospatial Decision Support Systems (DSS); (f) adaptability to different scales of application; and (g) ease of code modification. We maintained the desktop character in order to further develop (e.g. integrate novel features) and run the key program modules for testing and validation purposes, but we also developed a middleware component that allows the model to run simulations directly over the web, without any software to be installed. The GIS capabilities allow the web application to make simulations in a user-defined region of interest (delimited over a geographical map) without the need to specify the proper combination of model parameters. This is possible because the geospatial database collects information on pedology, climate, crop parameters and soil hydraulic characteristics. Pedological attributes include the spatial distribution of key soil data such as soil profile horizons and texture. Further, hydrological parameters are selected according to the knowledge of the spatial distribution of soils. The availability and definition of these attributes in the geospatial domain allow simulation outputs at different spatial scales. Two different applications were implemented using the same framework but with different configurations of the software components making up the physically based modelling chain: an irrigation tool that simulates water requirements and their timing, and a fertilization tool for optimizing, in particular, mineral nitrogen additions.
High-Frequency Axial Fatigue Test Procedures for Spectrum Loading
2016-07-20
Spectrum load histories can be performed at frequencies much higher than on standard servo-hydraulic test frames by using a test frame that is optimized to run at higher frequencies. AIR 4.3 has conducted a research program to develop a test capability for spectrum loading at high frequency. An Applied Research (BAR) program (219BAR-10-008) was initiated in 2010; the program investigated the influence of a generic rotorcraft main rotor blade root
ALPS - A LINEAR PROGRAM SOLVER
NASA Technical Reports Server (NTRS)
Viterna, L. A.
1994-01-01
Linear programming is a widely-used engineering and management tool. Scheduling, resource allocation, and production planning are all well-known applications of linear programs (LP's). Most LP's are too large to be solved by hand, so over the decades many computer codes for solving LP's have been developed. ALPS, A Linear Program Solver, is a full-featured LP analysis program. ALPS can solve plain linear programs as well as more complicated mixed integer and pure integer programs. ALPS also contains an efficient solution technique for pure binary (0-1 integer) programs. One of the many weaknesses of LP solvers is the lack of interaction with the user. ALPS is a menu-driven program with no special commands or keywords to learn. In addition, ALPS contains a full-screen editor to enter and maintain the LP formulation. These formulations can be written to and read from plain ASCII files for portability. For those less experienced in LP formulation, ALPS contains a problem "parser" which checks the formulation for errors. ALPS creates fully formatted, readable reports that can be sent to a printer or output file. ALPS is written entirely in IBM's APL2/PC product, Version 1.01. The APL2 workspace containing all the ALPS code can be run on any APL2/PC system (AT or 386). On a 32-bit system, this configuration can take advantage of all extended memory. The user can also examine and modify the ALPS code. The APL2 workspace has also been "packed" to be run on any DOS system (without APL2) as a stand-alone "EXE" file, but has limited memory capacity on a 640K system. A numeric coprocessor (80X87) is optional but recommended. The standard distribution medium for ALPS is a 5.25 inch 360K MS-DOS format diskette. IBM, IBM PC and IBM APL2 are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.
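As a small illustration of the kind of problem ALPS handles, the sketch below formulates and solves a generic two-variable production-planning LP. It uses SciPy's linprog rather than ALPS itself, and the coefficients are invented for the example.

```python
# A small production-planning LP: maximize 3x + 5y subject to resource limits.
from scipy.optimize import linprog

# linprog minimizes, so negate the objective to maximize profit 3x + 5y.
c = [-3.0, -5.0]
A_ub = [[1.0, 2.0],    # machine hours:  x + 2y <= 14
        [3.0, 1.0]]    # labour hours:  3x +  y <= 18
b_ub = [14.0, 18.0]
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal plan:", result.x, "profit:", -result.fun)   # x = 4.4, y = 4.8, profit = 37.2
```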
Coordinated scheduling for dynamic real-time systems
NASA Technical Reports Server (NTRS)
Natarajan, Swaminathan; Zhao, Wei
1994-01-01
In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on the design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions which are needed by the specific application. With this approach, we avoid the need for a sophisticated OS which provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.
Computer program for plotting and fairing wind-tunnel data
NASA Technical Reports Server (NTRS)
Morgan, H. L., Jr.
1983-01-01
A detailed description of the Langley computer program PLOTWD, which plots and fairs experimental wind-tunnel data, is presented. The program was written for use primarily on the Langley CDC computer and CALCOMP plotters. The fundamental operating features of the program are that the input data are read and written to a random-access file for use during program execution, that the data for a selected run can be sorted and edited to delete duplicate points, and that the data can be plotted and faired using tension splines, least-squares polynomials, or least-squares cubic-spline curves. The most noteworthy feature of the program is the simplicity of the user-supplied input requirements. Several subroutines are also included that can be used to draw grid lines, zero lines, axis scale values and labels, and legends. A detailed description of the program operational features and each sub-program is presented. The general application of the program is also discussed, together with the input and output for two typical plot types. A listing of the program code, user guide, and output description are presented in appendices. The program has been in use at Langley for several years and has proven to be both easy to use and versatile.
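The fairing options described above (least-squares polynomials and spline curves) can be approximated in a few lines with standard tools. The sketch below uses NumPy/SciPy stand-ins on synthetic lift-curve data rather than the original CDC FORTRAN routines; the variable names and smoothing parameter are illustrative.

```python
# Sketch: fairing noisy wind-tunnel-style data with a least-squares polynomial
# and a smoothing cubic spline (stand-ins for PLOTWD's fairing options).
import numpy as np
from scipy.interpolate import UnivariateSpline

alpha = np.linspace(-4, 12, 17)                        # angle of attack, deg
cl = 0.11 * alpha + 0.25 + np.random.default_rng(1).normal(scale=0.02, size=alpha.size)

poly = np.polynomial.Polynomial.fit(alpha, cl, deg=3)  # least-squares cubic polynomial
spline = UnivariateSpline(alpha, cl, k=3, s=0.01)      # smoothing ("faired") cubic spline

alpha_fine = np.linspace(alpha.min(), alpha.max(), 200)
faired_poly = poly(alpha_fine)
faired_spline = spline(alpha_fine)
```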
Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing
NASA Astrophysics Data System (ADS)
Tang, Jingyin; Matyas, Corene J.
2018-02-01
Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility of the market-dominating ArcGIS software stack and the Linux operating system. This manuscript details a cross-platform geospatial library, "arc4nix", to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server and other functions to run in the native Python environment. It uses functional programming and meta-programming techniques to dynamically construct Python code containing the actual geospatial calculations, send it to a server, and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks using multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arcpy scales linearly in a distributed environment. Arc4nix is open-source software.
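The decoupled client-server pattern described for arc4nix can be sketched generically: a lightweight proxy object intercepts attribute access, serialises the call, and ships it to a remote worker for execution. The class, tool, and transport names below are hypothetical and do not reflect the arc4nix API.

```python
# Generic sketch of a call-forwarding proxy for a remote geoprocessing service
# (hypothetical names; illustrates the client/server split, not arc4nix itself).
import json

class RemoteToolbox:
    def __init__(self, send):
        self._send = send            # callable that delivers a request to the server

    def __getattr__(self, tool_name):
        def call(*args, **kwargs):
            request = json.dumps({"tool": tool_name, "args": args, "kwargs": kwargs})
            return self._send(request)   # the server runs the real GIS function
        return call

def fake_server(request):
    """Stand-in for the remote side; a real server would dispatch to the GIS engine."""
    payload = json.loads(request)
    return f"ran {payload['tool']} with {payload['args']}"

gis = RemoteToolbox(fake_server)
print(gis.Slope("dem.tif"))          # looks like a local call, executes "remotely"
```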
μπ: A Scalable and Transparent System for Simulating MPI Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perumalla, Kalyan S
2010-01-01
μπ is a scalable, transparent system for experimenting with the execution of parallel programs on simulated computing platforms. The level of simulated detail can be varied for application behavior as well as for machine characteristics. Unique features of μπ are repeatability of execution, scalability to millions of simulated (virtual) MPI ranks, scalability to hundreds of thousands of host (real) MPI ranks, portability of the system to a variety of host supercomputing platforms, and the ability to experiment with scientific applications whose source code is available. The set of source-code interfaces supported by μπ is being expanded to support a wider set of applications, and MPI-based scientific computing benchmarks are being ported. In proof-of-concept experiments, μπ has been successfully exercised to spawn and sustain very large-scale executions of an MPI test program given in source code form. Low slowdowns are observed, due to its use of a purely discrete event style of execution, and due to the scalability and efficiency of the underlying parallel discrete event simulation engine, µsik. In the largest runs, μπ has been executed on up to 216,000 cores of a Cray XT5 supercomputer, successfully simulating over 27 million virtual MPI ranks, each virtual rank containing its own thread context, and all ranks fully synchronized by virtual time.
Coordinating complex decision support activities across distributed applications
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1994-01-01
Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (API's), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform-independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks!) for integrating and coordinating heterogeneous distributed applications. The top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level API's to implement the desired interactions between distributed applications.
Simplified programming and control of automated radiosynthesizers through unit operations.
Claggett, Shane B; Quinn, Kevin M; Lazari, Mark; Moore, Melissa D; van Dam, R Michael
2013-07-15
Many automated radiosynthesizers for producing positron emission tomography (PET) probes provide a means for the operator to create custom synthesis programs. The programming interfaces are typically designed with the engineer rather than the radiochemist in mind, requiring lengthy programs to be created from sequences of low-level, non-intuitive hardware operations. In some cases, the user is even responsible for adding steps to update the graphical representation of the system. In light of these unnecessarily complex approaches, we have created software to perform radiochemistry on the ELIXYS radiosynthesizer with the goal of being intuitive and easy to use. Radiochemists were consulted, and a wide range of radiosyntheses were analyzed to determine a comprehensive set of basic chemistry unit operations. Based around these operations, we created a software control system with a client-server architecture. In an attempt to maximize flexibility, the client software was designed to run on a variety of portable multi-touch devices. The software was used to create programs for the synthesis of several 18F-labeled probes on the ELIXYS radiosynthesizer, with [18F]FDG detailed here. To gauge the user-friendliness of the software, program lengths were compared to those from other systems. A small sample group with no prior radiosynthesizer experience was tasked with creating and running a simple protocol. The software was successfully used to synthesize several 18F-labeled PET probes, including [18F]FDG, with synthesis times and yields comparable to literature reports. The resulting programs were significantly shorter and easier to debug than programs from other systems. The sample group of naive users created and ran a simple protocol within a couple of hours, revealing a very short learning curve. The client-server architecture provided reliability, enabling continuity of the synthesis run even if the computer running the client software failed. The architecture enabled a single user to control the hardware while others observed the run in progress or created programs for other probes. We developed a novel unit operation-based software interface to control automated radiosynthesizers that reduced the program length and complexity and also exhibited a short learning curve. The client-server architecture provided robustness and flexibility.
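The unit-operation idea can be sketched as data plus a small interpreter: a synthesis program becomes a list of high-level chemistry steps that a hardware driver executes. This is an illustration of the concept only; the operation names and parameters below are hypothetical, not the ELIXYS command set.

```python
# Sketch: a synthesis program expressed as unit operations plus a tiny interpreter.
program = [
    {"op": "add_reagent", "reactor": 1, "reagent": "precursor", "volume_ml": 1.0},
    {"op": "react",       "reactor": 1, "temp_c": 110, "minutes": 5},
    {"op": "evaporate",   "reactor": 1, "temp_c": 95,  "minutes": 3},
    {"op": "transfer",    "source": 1, "destination": 2},
]

def run(program, hardware):
    """Execute each unit operation by delegating to a driver method of the same name."""
    for step in program:
        step = dict(step)
        handler = getattr(hardware, step.pop("op"))
        handler(**step)

class LoggingHardware:
    """Stand-in driver that just logs; a real driver would move valves, heaters, etc."""
    def __getattr__(self, name):
        return lambda **kwargs: print(f"{name}: {kwargs}")

run(program, LoggingHardware())
```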
Simplified programming and control of automated radiosynthesizers through unit operations
2013-01-01
Background Many automated radiosynthesizers for producing positron emission tomography (PET) probes provide a means for the operator to create custom synthesis programs. The programming interfaces are typically designed with the engineer rather than the radiochemist in mind, requiring lengthy programs to be created from sequences of low-level, non-intuitive hardware operations. In some cases, the user is even responsible for adding steps to update the graphical representation of the system. In light of these unnecessarily complex approaches, we have created software to perform radiochemistry on the ELIXYS radiosynthesizer with the goal of being intuitive and easy to use. Methods Radiochemists were consulted, and a wide range of radiosyntheses were analyzed to determine a comprehensive set of basic chemistry unit operations. Based around these operations, we created a software control system with a client–server architecture. In an attempt to maximize flexibility, the client software was designed to run on a variety of portable multi-touch devices. The software was used to create programs for the synthesis of several 18F-labeled probes on the ELIXYS radiosynthesizer, with [18F]FDG detailed here. To gauge the user-friendliness of the software, program lengths were compared to those from other systems. A small sample group with no prior radiosynthesizer experience was tasked with creating and running a simple protocol. Results The software was successfully used to synthesize several 18F-labeled PET probes, including [18F]FDG, with synthesis times and yields comparable to literature reports. The resulting programs were significantly shorter and easier to debug than programs from other systems. The sample group of naive users created and ran a simple protocol within a couple of hours, revealing a very short learning curve. The client–server architecture provided reliability, enabling continuity of the synthesis run even if the computer running the client software failed. The architecture enabled a single user to control the hardware while others observed the run in progress or created programs for other probes. Conclusions We developed a novel unit operation-based software interface to control automated radiosynthesizers that reduced the program length and complexity and also exhibited a short learning curve. The client–server architecture provided robustness and flexibility. PMID:23855995
Level-2 Milestone 3244: Deploy Dawn ID Machine for Initial Science Runs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, D
2009-09-21
This report documents the delivery, installation, integration, testing, and acceptance of the Dawn system, ASC L2 milestone 3244: Deploy Dawn ID Machine for Initial Science Runs, due September 30, 2009. The full text of the milestone is included in Attachment 1. The description of the milestone is: This milestone will be a result of work started three years ago with the planning for a multi-petaFLOPS UQ-focused platform (Sequoia) and will be satisfied when a smaller ID version of the final system is delivered, installed, integrated, tested, accepted, and deployed at LLNL for initial science runs in support of the SSP mission. The deliverable for this milestone will be an LA petascale computing system (named Dawn) usable for code development and scaling necessary to ensure effective use of a final Sequoia platform (expected in 2011-2012), and for urgent SSP program needs. Allocation and scheduling of Dawn as an LA system will likely be performed informally, similar to what has been used for BlueGene/L. However, provision will be made to allow for dedicated access times for application scaling studies across the entire Dawn resource. The milestone was completed on April 1, 2009, when science runs began on the Dawn system. The following sections describe the Dawn system architecture, current status, installation and integration time line, and the testing and acceptance process. A project plan is included as Attachment 2. Attachment 3 is a letter certifying the handoff of the system to a nuclear weapons stockpile customer. Attachment 4 presents the results of science runs completed on the system.
40 CFR 86.1438 - Test run-EPA.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 20 2012-07-01 2012-07-01 false Test run-EPA. 86.1438 Section 86.1438 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... Short Test Procedures § 86.1438 Test run—EPA. (a) This section describes the test run performed by the...
40 CFR 86.1438 - Test run-EPA.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 19 2011-07-01 2011-07-01 false Test run-EPA. 86.1438 Section 86.1438 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... Short Test Procedures § 86.1438 Test run—EPA. (a) This section describes the test run performed by the...
40 CFR 86.1438 - Test run-EPA.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 20 2013-07-01 2013-07-01 false Test run-EPA. 86.1438 Section 86.1438 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF... Short Test Procedures § 86.1438 Test run—EPA. (a) This section describes the test run performed by the...
Jennings, M.E.; Thomas, W.O.; Riggs, H.C.
1994-01-01
For many years, the U.S. Geological Survey (USGS) has been involved in the development of regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally these equations have been developed on a statewide or metropolitan area basis as part of cooperative study programs with specific State Departments of Transportation or specific cities. The USGS, in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency, has compiled all the current (as of September 1993) statewide and metropolitan area regression equations into a micro-computer program titled the National Flood Frequency Program. This program includes regression equations for estimating flood-peak discharges and techniques for estimating a typical flood hydrograph for a given recurrence interval peak discharge for unregulated rural and urban watersheds. These techniques should be useful to engineers and hydrologists for planning and design applications. This report summarizes the statewide regression equations for rural watersheds in each State, summarizes the applicable metropolitan area or statewide regression equations for urban watersheds, describes the National Flood Frequency Program for making these computations, and provides much of the reference information on the extrapolation variables needed to run the program.
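Regional regression equations of the type compiled in the program typically take a log-linear form, for example Q_T = a * A^b * S^c with drainage area A and channel slope S. The sketch below evaluates such an equation; the coefficients and inputs are made up purely for illustration and are not taken from any state's published equations.

```python
# Evaluate a hypothetical regional flood-frequency regression (coefficients illustrative).
def peak_discharge(area_sq_mi, slope_ft_per_mi, a=120.0, b=0.75, c=0.30):
    """Q_T = a * A^b * S^c, a common log-linear regression form (result in cfs)."""
    return a * area_sq_mi ** b * slope_ft_per_mi ** c

q100 = peak_discharge(area_sq_mi=52.0, slope_ft_per_mi=18.5)
print(f"estimated 100-year peak discharge: {q100:,.0f} cfs")
```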
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gleicher, Frederick N.; Williamson, Richard L.; Ortensi, Javier
The MOOSE neutron transport application RATTLESNAKE was coupled to the fuels performance application BISON to provide a higher fidelity tool for fuel performance simulation. This project is motivated by the desire to couple a high fidelity core analysis program (based on the self-adjoint angular flux equations) to a high fidelity fuel performance program, both of which can simulate on unstructured meshes. RATTLESNAKE solves the self-adjoint angular flux transport equation and provides a sub-pin level resolution of the multigroup neutron flux with resonance treatment during burnup or a fast transient. BISON solves the coupled thermomechanical equations for the fuel on a sub-millimeter scale. Both applications are able to solve their respective systems on aligned and unaligned unstructured finite element meshes. The power density and local burnup were transferred from RATTLESNAKE to BISON with the MOOSE MultiApp transfer system. Multiple depletion cases were run with one-way data transfer from RATTLESNAKE to BISON. The eigenvalues are shown to agree well with values obtained from the lattice physics code DRAGON. The one-way data transfer of power density is shown to agree with the power density obtained from an internal Lassman-style model in BISON.
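The one-way transfer of a nodal power-density field between non-matching meshes can be sketched with a nearest-node lookup. This is a deliberate simplification for illustration; MOOSE MultiApp transfers provide several more rigorous interpolation options, and the meshes and field below are synthetic.

```python
# Sketch: one-way transfer of a nodal power-density field onto a non-matching mesh
# via nearest-neighbour lookup (illustrative; not the MOOSE MultiApp implementation).
import numpy as np
from scipy.spatial import cKDTree

neutronics_nodes = np.random.default_rng(2).uniform(0, 1, size=(500, 3))   # source mesh nodes
power_density = 1.0 + neutronics_nodes[:, 2]                               # field on source mesh

fuel_nodes = np.random.default_rng(3).uniform(0, 1, size=(200, 3))         # target mesh nodes
_, nearest = cKDTree(neutronics_nodes).query(fuel_nodes)
power_on_fuel_mesh = power_density[nearest]      # field handed to the fuels-performance side
```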
Leschke, John M; Hunt, Matthew A
2018-05-01
Resident applicants in neurosurgery often wonder what factors impact their chances of successfully matching. Using data published by the National Residency Match Program for 2009-2016, we examined which components of the Electronic Residency Application Service application correlated with successful residency matching. Data were collected from the National Residency Match Program publication Charting Outcomes in the Match for all years it was available for neurosurgery (2009, 2011, 2014, 2016). Individual factors reported (number of contiguous ranks, research projects, publications and presentations, work experiences, volunteer experiences, United States Medical Licensing Examination Step 1 and 2 score deciles, and categorical data about Alpha Omega Alpha status, Ph.D. degree, other degree, and strength of medical school National Institutes of Health funding) were aggregated across all available years. Categorical data were available only for U.S. seniors. Spearman correlation and χ2 tests were used for ranked data and categorical data, respectively. Separate analyses were run for U.S. seniors and independent applicants. For U.S. seniors applying to neurosurgery, number of contiguous ranks, United States Medical Licensing Examination Step 1 and 2 scores, research projects, Alpha Omega Alpha status, and medical school top 40 National Institutes of Health funding were significantly associated with successful matching. Number of volunteer experiences was nearly statistically significant. For independent applicants, only United States Medical Licensing Examination Step 1 and 2 scores and number of research projects were statistically significant. This is the first study to analyze National Residency Match Program data for predictors of success in neurosurgical matching. Students applying to neurosurgery residency and their mentors should be aware of which baseline objective factors are associated with match success. Copyright © 2018 Elsevier Inc. All rights reserved.
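The two tests named above are straightforward to reproduce on aggregated tables. The sketch below shows the general pattern with invented numbers; it is not the study's data or code.

```python
# Sketch of the two tests used: Spearman correlation for ranked factors,
# chi-square for categorical factors (all numbers below are invented).
from scipy.stats import spearmanr, chi2_contingency

step1_decile = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
match_rate   = [0.55, 0.60, 0.68, 0.72, 0.78, 0.81, 0.85, 0.88, 0.90, 0.93]
rho, p_rank = spearmanr(step1_decile, match_rate)

#             matched  unmatched
aoa_table = [[120,      40],       # AOA members (hypothetical counts)
             [300,     240]]       # non-members
chi2, p_cat, dof, _ = chi2_contingency(aoa_table)
print(rho, p_rank, chi2, p_cat)
```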
IRDS prototyping with applications to the representation of EA/RA models
NASA Technical Reports Server (NTRS)
Lekkos, Anthony A.; Greenwood, Bruce
1988-01-01
The requirements and system overview for the Information Resources Dictionary System (IRDS) are described. A formal design specification for a scaled-down IRDS implementation compatible with the proposed FIPS IRDS standard is contained. The major design objectives for this IRDS include a menu-driven user interface, implementation of basic IRDS operations, and PC compatibility. The IRDS was implemented using the Smalltalk/V object-oriented programming system on an AT&T 6300 personal computer running under MS-DOS 3.1. The difficulties encountered in using Smalltalk are discussed.
SMT-Aware Instantaneous Footprint Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Probir; Liu, Xu; Song, Shuaiwen
Modern architectures employ simultaneous multithreading (SMT) to increase thread-level parallelism. SMT threads share many functional units and the whole memory hierarchy of a physical core. Without a careful code design, SMT threads can easily contend with each other for these shared resources, causing severe performance degradation. Minimizing SMT thread contention for HPC applications running on dedicated platforms is very challenging, because they usually spawn threads within Single Program Multiple Data (SPMD) models. To address this important issue, we introduce a simple scheme for SMT-aware code optimization, which aims to reduce the memory contention across SMT threads.
IRACproc: IRAC Post-BCD Processing
NASA Astrophysics Data System (ADS)
Schuster, Mike; Marengo, Massimo; Patten, Brian
2012-09-01
IRACproc is a software suite that facilitates the co-addition of dithered or mapped Spitzer/IRAC data to make them ready for further analysis, with application to a wide variety of IRAC observing programs. The software runs within PDL, a numeric extension for Perl available from pdl.perl.org, and as stand-alone Perl scripts. In acting as a wrapper for the Spitzer Science Center's MOPEX software, IRACproc improves the rejection of cosmic rays and other transients in the co-added data. In addition, IRACproc performs (optional) Point Spread Function (PSF) fitting, subtraction, and masking of saturated stars.
Experience with ActiveX control for simple channel access
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timossi, C.; Nishimura, H.; McDonald, J.
2003-05-15
Accelerator control system applications at Berkeley Lab's Advanced Light Source (ALS) are typically deployed on operator consoles running Microsoft Windows 2000 and utilize EPICS [2] channel access for data access. In an effort to accommodate the wide variety of Windows-based development tools and developers with little experience in network programming, ActiveX controls have been deployed on the operator stations. Use of ActiveX controls in the accelerator control environment has been presented previously [1]. Here we report on some of our experiences with the use and development of these controls.
NASA Astrophysics Data System (ADS)
Fisher, W. I.
2017-12-01
The rise in cloud computing, coupled with the growth of "Big Data", has led to a migration away from local scientific data storage. The increasing size of remote scientific data sets, however, makes it difficult for scientists to subject them to large-scale analysis and visualization. These large datasets can take an inordinate amount of time to download; subsetting is a potential solution, but subsetting services are not yet ubiquitous. Data providers may also pay steep prices, as many cloud providers meter data based on how much data leaves their cloud service. The solution to this problem is a deceptively simple one: move data analysis and visualization tools to the cloud, so that scientists may perform data-proximate analysis and visualization. This results in increased transfer speeds, while egress costs are lowered or completely eliminated. Moving standard desktop analysis and visualization tools to the cloud is enabled via a technique called "Application Streaming". This technology allows a program to run entirely on a remote virtual machine while still allowing for interactivity and dynamic visualizations. When coupled with containerization technology such as Docker, we are able to easily deploy legacy analysis and visualization software to the cloud whilst retaining access via a desktop, a netbook, a smartphone, or the next generation of hardware, whatever it may be. Unidata has created a Docker-based solution for easily adapting legacy software for Application Streaming. This technology stack, dubbed Cloudstream, allows desktop software to run in the cloud with little-to-no effort. The Docker container is configured by editing text files, and the legacy software does not need to be modified in any way. This work will discuss the underlying technologies used by Cloudstream and outline how to use Cloudstream to run and access an existing desktop application in the cloud.
Distributed run of a one-dimensional model in a regional application using SOAP-based web services
NASA Astrophysics Data System (ADS)
Smiatek, Gerhard
This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
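The dispatch pattern described (a pool of threads feeding independent model runs to remote web-service hosts) translates naturally into a few lines of code. The sketch below is written in Python with a placeholder for the remote call, rather than the original Perl/SOAP implementation; the host URLs and work items are hypothetical.

```python
# Sketch: distributing independent 1-D model runs over several remote hosts.
from concurrent.futures import ThreadPoolExecutor

hosts = ["http://host1/model", "http://host2/model", "http://host3/model"]  # hypothetical URLs
grid_cells = list(range(120))                                               # work items

def run_remote(host, cell):
    """Placeholder for the SOAP/HTTP call that runs the model for one cell on 'host'."""
    return cell, f"result from {host}"

def worker(index_cell):
    index, cell = index_cell
    return run_remote(hosts[index % len(hosts)], cell)       # round-robin over hosts

with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    results = dict(pool.map(worker, enumerate(grid_cells)))
```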
Numerical modeling of exciton-polariton Bose-Einstein condensate in a microcavity
NASA Astrophysics Data System (ADS)
Voronych, Oksana; Buraczewski, Adam; Matuszewski, Michał; Stobińska, Magdalena
2017-06-01
A novel, optimized numerical method for modeling an exciton-polariton superfluid in a semiconductor microcavity was proposed. Exciton-polaritons are spin-carrying quasiparticles formed from photons strongly coupled to excitons. They possess unique properties, interesting from the point of view of fundamental research as well as numerous potential applications. However, their numerical modeling is challenging due to the structure of the nonlinear differential equations describing their evolution. In this paper, we propose to solve the equations with a modified Runge-Kutta method of 4th order, further optimized for efficient computations. The algorithms were implemented in the form of C++ programs fitted for parallel environments and utilizing vector instructions. The programs form the EPCGP suite, which has been used for theoretical investigation of exciton-polaritons.
Catalogue identifier: AFBQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBQ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: BSD-3
No. of lines in distributed program, including test data, etc.: 2157
No. of bytes in distributed program, including test data, etc.: 498994
Distribution format: tar.gz
Programming language: C++ with OpenMP extensions (main numerical program), Python (helper scripts)
Computer: Modern PC (tested on AMD and Intel processors), HP BL2x220
Operating system: Unix/Linux and Windows
Has the code been vectorized or parallelized?: Yes (OpenMP)
RAM: 200 MB for a single run
Classification: 7, 7.7
Nature of problem: An exciton-polariton superfluid is a novel, interesting physical system allowing investigation of high-temperature Bose-Einstein condensation of exciton-polaritons, quasiparticles carrying spin. They have attracted a lot of attention due to their unique properties and potential applications in polariton-based optoelectronic integrated circuits. This is an out-of-equilibrium quantum system confined within a semiconductor microcavity. It is described by a set of nonlinear differential equations similar in spirit to the Gross-Pitaevskii (GP) equation, but their unique properties do not allow standard GP solving frameworks to be utilized. Finding an accurate and efficient numerical algorithm, as well as developing optimized numerical software, is necessary for effective theoretical investigation of exciton-polaritons.
Solution method: A Runge-Kutta method of 4th order was employed to solve the set of differential equations describing exciton-polariton superfluids. The method was fitted to the exciton-polariton equations and further optimized. The C++ programs utilize OpenMP extensions and vector operations in order to fully utilize the computer hardware.
Running time: 6 h for 100 ps of evolution, depending on the values of parameters
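A classical 4th-order Runge-Kutta step of the kind the authors adapt can be written compactly. The sketch below integrates a generic first-order system dy/dt = f(t, y) with a stand-in right-hand side; it is not the optimised, vectorised EPCGP implementation.

```python
# Generic RK4 time step for dy/dt = f(t, y); the EPCGP suite applies an adapted,
# vectorised variant of this scheme to the exciton-polariton equations.
import numpy as np

def rk4_step(f, t, y, dt):
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def rhs(t, y):
    """Damped driven oscillator as a placeholder right-hand side."""
    x, v = y
    return np.array([v, -x - 0.1 * v + np.cos(t)])

y = np.array([1.0, 0.0])
for step in range(1000):
    y = rk4_step(rhs, step * 0.01, y, 0.01)
```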
The New Approach to Self-Achievement (N.A.S.A.) Project 2004
NASA Technical Reports Server (NTRS)
Thomas, Candace J.
2004-01-01
The New Approach to Self-Achievement Program is designed to target rising seventh, eighth, and ninth grade students who require assistance in refining their mathematical skills, science awareness and knowledge, and test-taking strategies. During the six-week duration of the program, students are challenged in these areas through robotic and aeronautic projects which encourage them to apply their mathematical and science knowledge practically. The first three weeks of my tenure were designated to assisting Mrs. Tammy Allen in the preparation of the 2004 NASA Project. As her assistant, I was held accountable for organizing, filing, preparing, analyzing, and completing the applications for the NASA Project. Additionally, I constructed the appropriate databases containing the information needed to aid in the selection of our participants. During the latter portion of those three weeks, Mrs. Allen, various staff members, and I interviewed the numerous first-time applicants to the NASA Project. Furthermore, I was assigned to contact the accepted applicants of the program and provide all necessary information for the initiation of each child into the NASA Project. During the six-week duration of the program, I will be working as a Project Leader at the Lorain Middle School site in Lorain, Ohio, with Mr. Fondriest Fountain. Mr. Fountain and I will work with the eighth and ninth grade students on constructing robots, which the students are told are being built for NASA research conducted on the surface of Mars. The robots, which are built from LEGOs and programmed through RoboLab computer software, are prepared to complete assigned missions such as running obstacle courses, plowing and retrieving LEGOs, and scanning surfaces for intense regions of light.
Telescience Resource Kit (TReK)
NASA Technical Reports Server (NTRS)
Lippincott, Jeff
2015-01-01
Telescience Resource Kit (TReK) is one of the Huntsville Operations Support Center (HOSC) remote operations solutions. It can be used to monitor and control International Space Station (ISS) payloads from anywhere in the world. It comprises a suite of software applications and libraries that provide generic data system capabilities and access to HOSC services. The TReK software has been operational since 2000. A new cross-platform version of TReK is under development. The new software is being released in phases during the 2014-2016 timeframe. The TReK Release 3.x series of software is the original TReK software that has been operational since 2000. This software runs on Windows. It contains capabilities to support traditional telemetry and commanding using CCSDS (Consultative Committee for Space Data Systems) packets. The TReK Release 4.x series of software is the new cross-platform software. It runs on Windows and Linux. The new TReK software will support communication using standard IP protocols and traditional telemetry and commanding. All the software listed above is compatible and can be installed and run together on Windows. The new TReK software contains a suite of software that can be used by payload developers on the ground and onboard (TReK Toolkit). TReK Toolkit is a suite of lightweight libraries and utility applications for use onboard and on the ground. TReK Desktop is the full suite of TReK software and is most useful on the ground. When TReK Desktop is released, the TReK installation program will provide the option to choose just the TReK Toolkit portion of the software or the full TReK Desktop suite. The ISS program is providing the TReK Toolkit software as a generic flight software capability offered as a standard service to payloads. TReK software verification was conducted during the April/May 2015 timeframe. Payload teams using the TReK software onboard can reference the TReK software verification. TReK will be demonstrated on-orbit running on an ISS-provided T61p laptop. Target timeframe: September 2015-2016. The on-orbit demonstration will collect benchmark metrics and will be used in the future to provide live demonstrations during ISS Payload Conferences. Benchmark metrics and demonstrations will address the protocols described in SSP 52050-0047 Ku Forward section 3.3.7. (Associated term: CCSDS File Delivery Protocol (CFDP)).
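Since the toolkit's traditional telemetry and commanding path is built around CCSDS packets, a minimal header decode illustrates the data format involved. This is a generic sketch of the 6-byte CCSDS Space Packet primary header, not TReK library code, and the example packet bytes are invented.

```python
# Decode the 6-byte CCSDS Space Packet primary header (generic sketch, not TReK code).
def parse_ccsds_primary_header(packet: bytes):
    first_word = int.from_bytes(packet[0:2], "big")
    second_word = int.from_bytes(packet[2:4], "big")
    length_field = int.from_bytes(packet[4:6], "big")
    return {
        "version":        (first_word >> 13) & 0x7,
        "type":           (first_word >> 12) & 0x1,    # 0 = telemetry, 1 = telecommand
        "sec_hdr_flag":   (first_word >> 11) & 0x1,
        "apid":           first_word & 0x7FF,
        "sequence_flags": (second_word >> 14) & 0x3,
        "sequence_count": second_word & 0x3FFF,
        "data_length":    length_field + 1,            # field stores (length - 1)
    }

header = parse_ccsds_primary_header(bytes([0x08, 0x65, 0xC0, 0x01, 0x00, 0x0F]) + bytes(16))
print(header)   # APID 0x065, unsegmented telemetry packet, 16 data bytes
```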
Compilation time analysis to minimize run-time overhead in preemptive scheduling on multiprocessors
NASA Astrophysics Data System (ADS)
Wauters, Piet; Lauwereins, Rudy; Peperstraete, J.
1994-10-01
This paper describes a scheduling method for hard real-time Digital Signal Processing (DSP) applications implemented on a multi-processor. Due to the very high operating frequencies of DSP applications (typically hundreds of kHz), run-time overhead should be kept as small as possible. Because static scheduling introduces very little run-time overhead, it is used as much as possible. Dynamic pre-emption of tasks is allowed if and only if it leads to better performance in spite of the extra run-time overhead. We essentially combine static scheduling with dynamic pre-emption using static priorities. Since we are dealing with hard real-time applications, we must be able to guarantee at compile time that all timing requirements will be satisfied at run time. We will show that our method performs at least as well as any static scheduling method. It also reduces the total amount of dynamic pre-emptions compared with run-time methods such as deadline-monotonic scheduling.
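The hybrid policy, static ordering plus pre-emption by static priority only when it pays off, can be distilled into a simple decision rule. The sketch below is an illustrative toy with made-up parameters; it is not the paper's compile-time analysis, which makes these guarantees before run time.

```python
# Toy rule distilled from the approach: pre-empt only when a higher-priority task's
# deadline cannot be met by waiting, given a known pre-emption overhead (all units abstract).
def should_preempt(running_remaining, incoming_priority, running_priority,
                   incoming_deadline, now, incoming_wcet, preempt_overhead):
    if incoming_priority <= running_priority:        # static priorities decide eligibility
        return False
    meets_deadline_by_waiting = (
        now + running_remaining + incoming_wcet <= incoming_deadline)
    # Pre-empt only if waiting would miss the deadline and pre-emption itself still meets it.
    return (not meets_deadline_by_waiting and
            now + preempt_overhead + incoming_wcet <= incoming_deadline)

print(should_preempt(running_remaining=4, incoming_priority=2, running_priority=1,
                     incoming_deadline=10, now=3, incoming_wcet=5, preempt_overhead=1))
```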
Using Earned Value Information to Predict Program Cancellation
2014-09-02
Our most significant finding across models is that when there is high cost growth in the EAC reported by the contractor, programs run far larger risks of cancellation. We find less robust results for MDAPs.
Running ANSYS Fluent on the WinHPC System | High-Performance Computing |
Instructions for running ANSYS Fluent on the WinHPC system. A WinHPC account is required; if you don't have one, see WinHPC system user basics. Check license use status before starting a run. Start Fluent Launcher by opening: Start > All Programs > ... Available node groups can be found from HPC Job Manager: Start > All Programs > Microsoft HPC Pack.
NEQAIR96,Nonequilibrium and Equilibrium Radiative Transport and Spectra Program: User's Manual
NASA Technical Reports Server (NTRS)
Whiting, Ellis E.; Park, Chul; Liu, Yen; Arnold, James O.; Paterson, John A.
1996-01-01
This document is the User's Manual for a new version of the NEQAIR computer program, NEQAIR96. The program is a line-by-line and line-of-sight code. It calculates the emission and absorption spectra for atoms and diatomic molecules and the transport of radiation through a nonuniform gas mixture to a surface. The program has been rewritten to make it easier to use, run faster, and include many run-time options that tailor a calculation to the user's requirements. The accuracy and capability have also been improved by including the rotational Hamiltonian matrix formalism for calculating rotational energy levels and Hoenl-London factors for dipole and spin-allowed singlet, doublet, triplet, and quartet transitions. Three sample cases are also included to help the user become familiar with the steps taken to produce a spectrum. A new user interface is included for selecting run-time options and entering run data, making NEQAIR96 easier to use than older versions of the code. Its ease of use and the speed of its algorithms make NEQAIR96 a valuable educational code as well as a practical spectroscopic prediction and diagnostic code.
Machine characterization and benchmark performance prediction
NASA Technical Reports Server (NTRS)
Saavedra-Barrera, Rafael H.
1988-01-01
From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine or to predict the run time of other benchmarks that have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters that characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict the run time of a given benchmark on a given machine with good accuracy. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone, and Sieve of Eratosthenes.
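A minimal sketch of the core idea as described in the abstract: the predicted run time is the sum, over source-language operations, of the program's execution counts times the machine's per-operation times. The operation names and numbers below are illustrative, not the paper's actual parameters.

```python
def predict_run_time(machine_params, program_profile):
    """machine_params: seconds per source-language operation (from the machine analyzer).
    program_profile: execution counts per operation (from the program analyzer).
    Predicted run time is the sum of count * cost over the common operations."""
    return sum(program_profile[op] * machine_params[op]
               for op in program_profile if op in machine_params)

# Hypothetical machine parameters and program profile (illustrative numbers only)
machine = {"fadd": 2.0e-7, "fmul": 3.5e-7, "branch": 1.0e-7, "load": 1.5e-7}
profile = {"fadd": 4_000_000, "fmul": 3_000_000, "branch": 1_000_000, "load": 6_000_000}
print(f"predicted run time: {predict_run_time(machine, profile):.3f} s")   # 2.850 s
```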
Tristan code and its application
NASA Astrophysics Data System (ADS)
Nishikawa, K.-I.
Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of the global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code to the study of the global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners, the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a Geospace General Circulation Model (GGCM) with predictive capability (for the Space Weather Program) is discussed.
2013-01-01
Background The use of the organized sports sector as a setting for health-promotion is a relatively new strategy. In the past few years, different countries have been investing resources in the organized sports sector for promoting health-enhancing physical activity. In the Netherlands, National Sports Federations were funded to develop and implement “easily accessible” sporting programs, aimed at the least active population groups. Start to Run, a 6-week training program for novice runners, developed by the Dutch Athletics Organization, is one of these programs. In this study, the effects of Start to Run on health-enhancing physical activity were investigated. Methods Physical activity levels of Start to Run participants were assessed by means of the Short QUestionnaire to ASsess Health-enhancing physical activity (SQUASH) at baseline, immediately after completing the program and six months after baseline. A control group, matched for age and sex, was assessed at baseline and after six months. Compliance with the Dutch physical activity guidelines was the primary outcome measure. Secondary outcome measures were the total time spent in physical activity and the time spent in each physical activity intensity category and domain. Changes in physical activity within groups were tested with paired t-tests and McNemar tests. Changes between groups were examined with multiple linear and logistic regression analyses. Results In the Start to Run group, the percentage of people who met the Dutch Norm for Health-enhancing Physical Activity, Fit-norm and Combi-norm increased significantly, both in the short- and longer-term. In the control group, no significant changes in physical activity were observed. When comparing results between groups, significantly more Start to Run participants compared with control group participants were meeting the Fit-norm and Combi-norm after six months. The differences in physical activity between groups in favor of the Start to Run group could be explained by an increase in the time spent in vigorous-intensity activities and sports activities. Conclusions Start to Run positively influences levels of health-enhancing physical activity of participants, both in the short- and longer-term. Based on these results, the use of the organized sports sector as a setting to promote health-enhancing physical activity seems promising. PMID:23898920
Design, Implementation and Case Study of WISEMAN: WIreless Sensors Employing Mobile AgeNts
NASA Astrophysics Data System (ADS)
González-Valenzuela, Sergio; Chen, Min; Leung, Victor C. M.
We describe the practical implementation of Wiseman: our proposed scheme for running mobile agents in Wireless Sensor Networks. Wiseman's architecture derives from a much earlier agent system originally conceived for distributed process coordination in wired networks. Given the memory constraints of small sensor devices, we revised the architecture of the original agent system to make it applicable to this type of network. Agents are programmed as compact text scripts that are interpreted at the sensor nodes. Wiseman is currently implemented in TinyOS ver. 1; its binary image occupies 19 Kbytes of ROM, and it requires 3 Kbytes of RAM to operate. We describe the rationale behind Wiseman's interpreter architecture and the unique programming features that can help reduce packet overhead in sensor networks. In addition, we gauge the proposed system's efficiency in terms of task duration for different network topologies through a case study involving an early-fire-detection application in a fictitious forest setting.
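Wiseman's actual script syntax is not given in the abstract; the toy interpreter below uses an entirely invented command set and only illustrates the general idea of agents as compact text scripts interpreted at each node. Every command, field, and value here is a hypothetical placeholder.

```python
# Toy node-side interpreter for a hypothetical agent script language.
# Not Wiseman's syntax: commands and semantics are invented for illustration.
def run_agent(script: str, node_state: dict) -> list:
    """Interpret a ';'-separated agent script against a node's local state."""
    actions = []
    for cmd in (c.strip() for c in script.split(";") if c.strip()):
        op, *args = cmd.split()
        if op == "read":                       # read a local sensor value
            node_state["acc"] = node_state["sensors"][args[0]]
        elif op == "gt":                       # suppress the next send unless acc > threshold
            node_state["skip"] = not (node_state["acc"] > float(args[0]))
        elif op == "send":                     # queue a report for the sink
            if not node_state.pop("skip", False):
                actions.append(("send", args[0], node_state["acc"]))
        elif op == "hop":                      # migrate the agent to a neighbour
            actions.append(("hop", args[0]))
    return actions

node = {"sensors": {"temp": 31.5}}
print(run_agent("read temp; gt 30; send alert; hop next", node))
```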
Scilab software package for the study of dynamical systems
NASA Astrophysics Data System (ADS)
Bordeianu, C. C.; Beşliu, C.; Jipa, Al.; Felea, D.; Grossu, I. V.
2008-05-01
This work presents a new software package for the study of chaotic flows and maps. The codes were written using Scilab, a software package for numerical computations providing a powerful open computing environment for engineering and scientific applications. It was found that Scilab provides various functions for ordinary differential equation solving, Fast Fourier Transform, autocorrelation, and excellent 2D and 3D graphical capabilities. The chaotic behavior of the nonlinear dynamical systems was analyzed using phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents, and Kolmogorov-Sinai entropy. Various well-known examples are implemented, with the capability for users to insert their own ODEs.
Program summary
Program title: Chaos
Catalogue identifier: AEAP_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAP_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 885
No. of bytes in distributed program, including test data, etc.: 5925
Distribution format: tar.gz
Programming language: Scilab 3.1.1
Computer: PC-compatible running Scilab on MS Windows or Linux
Operating system: Windows XP, Linux
RAM: below 100 Megabytes
Classification: 6.2
Nature of problem: Any physical model containing linear or nonlinear ordinary differential equations (ODE).
Solution method: Numerical solving of ordinary differential equations. The chaotic behavior of the nonlinear dynamical system is analyzed using Poincaré sections, phase-space maps, autocorrelation functions, power spectra, Lyapunov exponents, and Kolmogorov-Sinai entropies.
Restrictions: The package routines are normally able to handle ODE systems of high order (up to order twelve and possibly higher), depending on the nature of the problem.
Running time: 10 to 20 seconds for problems that do not involve Lyapunov exponent calculation; 60 to 1000 seconds for problems that involve high-order ODEs and Lyapunov exponent calculation.
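The package itself is written in Scilab; as a language-agnostic illustration (Python here, not the package's code) of one of the diagnostics listed above, the largest Lyapunov exponent of the logistic map can be estimated as the orbit average of ln|f'(x)|. Parameter values below are the usual textbook choices.

```python
import math

def logistic_lyapunov(r, x0=0.1, transient=1000, n=100_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) as the orbit average
    of ln|f'(x)| = ln|r*(1-2x)|."""
    x = x0
    for _ in range(transient):          # discard transient behaviour
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n

print(logistic_lyapunov(3.5))   # negative: periodic regime
print(logistic_lyapunov(4.0))   # close to ln 2: chaotic regime
```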
14 CFR 23.59 - Takeoff distance and takeoff run.
Code of Federal Regulations, 2011 CFR
2011-01-01
For each commuter category airplane, the takeoff distance and, at the option of the applicant, the takeoff run, must be determined. (a) Takeoff distance is the greater of— (1...
14 CFR 23.59 - Takeoff distance and takeoff run.
Code of Federal Regulations, 2010 CFR
2010-01-01
For each commuter category airplane, the takeoff distance and, at the option of the applicant, the takeoff run, must be determined. (a) Takeoff distance is the greater of— (1...
Using Runtime Analysis to Guide Model Checking of Java Programs
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Norvig, Peter (Technical Monitor)
2001-01-01
This paper describes how two runtime analysis algorithms, an existing data race detection algorithm and a new deadlock detection algorithm, have been implemented to analyze Java programs. Runtime analysis is based on the idea of executing the program once and observing the generated run to extract various kinds of information. This information can then be used to predict whether other, different runs may violate some properties of interest, in addition, of course, to demonstrating whether the generated run itself violates such properties. These runtime analyses can be performed stand-alone to generate a set of warnings. It is furthermore demonstrated how these warnings can be used to guide a model checker, thereby reducing the search space. The described techniques have been implemented in the home-grown Java model checker called PathFinder.
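The paper's deadlock detection algorithm is not spelled out in the abstract; a common single-run approach, sketched here only as an assumption about the technique's general shape, records which locks each thread holds when it acquires a new one, builds a lock-order graph, and warns when that graph contains a cycle, even if the observed run itself did not deadlock.

```python
from collections import defaultdict

def lock_order_cycles(trace):
    """trace: list of (thread, op, lock) with op in {'acquire', 'release'}.
    Builds a lock-order graph (edge a -> b if some thread acquired b while holding a)
    and reports whether it contains a cycle, i.e. a potential deadlock."""
    held = defaultdict(list)            # thread -> stack of held locks
    edges = defaultdict(set)            # lock -> locks acquired while holding it
    for thread, op, lock in trace:
        if op == "acquire":
            for h in held[thread]:
                edges[h].add(lock)
            held[thread].append(lock)
        else:
            held[thread].remove(lock)

    # simple DFS cycle detection over the lock-order graph
    WHITE, GREY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def dfs(node):
        color[node] = GREY
        for nxt in edges[node]:
            if color[nxt] == GREY or (color[nxt] == WHITE and dfs(nxt)):
                return True
        color[node] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in list(edges))

# One interleaving that runs fine but reveals opposite lock orders -> warning
trace = [("T1", "acquire", "A"), ("T1", "acquire", "B"),
         ("T1", "release", "B"), ("T1", "release", "A"),
         ("T2", "acquire", "B"), ("T2", "acquire", "A"),
         ("T2", "release", "A"), ("T2", "release", "B")]
print(lock_order_cycles(trace))   # True: A->B and B->A form a cycle
```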
NASA Technical Reports Server (NTRS)
Mcenulty, R. E.
1977-01-01
The G189A simulation of the Shuttle Orbiter ECLSS was upgraded. All simulation library versions and simulation models were converted from the EXEC2 to the EXEC8 computer system, and a new program, G189PL, was added to the combination master program library. The program permits the post-plotting of up to 100 frames of plot data over any time interval of a G189 simulation run. The overlay structure of the G189A simulations was restructured to conserve computer core and minimize run time.
NASA Technical Reports Server (NTRS)
Jacobson, Allan S.; Berkin, Andrew L.
1995-01-01
The Linked Windows Interactive Data System (LinkWinds) is a prototype visual data exploration system resulting from a NASA Jet Propulsion Laboratory (JPL) program of research into the application of graphical methods for rapidly accessing, displaying, and analyzing large multivariate, multidisciplinary data sets. Running under UNIX, it is an integrated multi-application execution environment using a data-linking paradigm to dynamically interconnect and control multiple windows containing a variety of displays and manipulators. This paradigm, resulting in a system similar to a graphical spreadsheet, is not only a powerful method for organizing large amounts of data for analysis, but leads to a highly intuitive, easy-to-learn user interface. It provides great flexibility in rapidly interacting with large masses of complex data to detect trends, correlations, and anomalies. The system, containing an expanding suite of non-domain-specific applications, provides for the ingestion of a variety of database formats and hard-copy output of all displays. Remote networked workstations running LinkWinds may be interconnected, providing a multiuser science environment (MUSE) for collaborative data exploration by a distributed science team. The system is being developed in close collaboration with investigators in a variety of science disciplines using both archived and real-time data. It is currently being used to support the Microwave Limb Sounder (MLS) in orbit aboard the Upper Atmosphere Research Satellite (UARS). This paper describes the application of LinkWinds to these data to rapidly detect features, such as the ozone hole configuration, and to analyze correlations between chemical constituents of the atmosphere.
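LinkWinds' own API is not described in the abstract; the observer-style sketch below only illustrates the data-linking idea, in which changing a shared manipulator value propagates to every window linked to it. Class and window names are invented for illustration.

```python
class LinkedValue:
    """A shared value (e.g. a time step or slice index) that linked windows observe."""
    def __init__(self, value):
        self._value = value
        self._listeners = []

    def link(self, callback):
        self._listeners.append(callback)

    def set(self, value):
        self._value = value
        for cb in self._listeners:      # every linked window updates itself
            cb(value)

# Two hypothetical "windows" linked to one manipulator value
time_step = LinkedValue(0)
time_step.link(lambda t: print(f"map window: redraw ozone field at step {t}"))
time_step.link(lambda t: print(f"profile window: replot MLS profile at step {t}"))
time_step.set(42)
```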
One University's Strategy for Keeping International Projects Running Smoothly
ERIC Educational Resources Information Center
Fischer, Karin
2009-01-01
This article describes how a university tackled some of the basic challenges of internationalizing its campuses. The University of Washington created the Global Support Project, a one-stop shop for faculty and staff members doing research or running programs abroad. The project is run by senior administrators but relies on designated go-to people…
Accounting utility for determining individual usage of production level software systems
NASA Technical Reports Server (NTRS)
Garber, S. C.
1984-01-01
An accounting package was developed that determines the computer resources utilized by a user during the execution of a particular program and updates a file containing accumulated resource totals. The package is divided into two separate programs: the first determines the total amount of computer resources utilized by a user during the execution of a particular program, and the second uses these totals to update a file containing accumulated totals of computer resources utilized by each user for each program. The package is useful to those who have several other users continually accessing and running programs from their accounts; it provides the ability to determine which users are accessing and running specified programs, along with their total level of usage.
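The original package's file layout is not described; the sketch below, with an assumed JSON totals file and invented field names, only illustrates the second program's job of folding one run's resource figures into accumulated per-user, per-program totals.

```python
import json, os

def update_totals(totals_path, user, program, usage):
    """Add one run's resource usage (e.g. {'cpu_s': 12.4, 'io_ops': 3100})
    to the accumulated totals kept per (user, program)."""
    totals = {}
    if os.path.exists(totals_path):
        with open(totals_path) as f:
            totals = json.load(f)
    key = f"{user}:{program}"
    entry = totals.setdefault(key, {"runs": 0})
    entry["runs"] += 1
    for resource, amount in usage.items():
        entry[resource] = entry.get(resource, 0) + amount
    with open(totals_path, "w") as f:
        json.dump(totals, f, indent=2)

update_totals("totals.json", "smith", "orbit_sim", {"cpu_s": 12.4, "io_ops": 3100})
```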
GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography
NASA Technical Reports Server (NTRS)
Roark, J. H.; Masuoka, C. M.; Frey, H. V.
2004-01-01
GRIDVIEW is being developed by the GEODYNAMICS Branch at NASA's Goddard Space Flight Center and can be downloaded from http://geodynamics.gsfc.nasa.gov/gridview/. The program is very mature and has been used successfully for more than four years, but it is still under development as we add new features for data analysis and visualization. The software can run on any computer supported by the IDL virtual machine application supplied by RSI. The virtual machine application is currently available for recent versions of MS Windows, MacOS X, Red Hat Linux, and UNIX. The minimum system memory requirement is 32 MB; however, loading large data sets may require more RAM to function adequately.
NASA Technical Reports Server (NTRS)
Horne, W. B.; Yager, T. J.; Sleeper, R. K.; Merritt, L. R.
1977-01-01
The stopping distance, brake application velocity, and time of brake application were measured for two modern jet transports, along with the NASA diagonal-braked vehicle and the British Mu-Meter, on several runways which, when wetted, cover the range of slipperiness likely to be encountered in the United States. The tests were designed to determine whether a correlation between the aircraft and the friction-measuring vehicles exists. The test procedure, data reduction techniques, and preliminary test results obtained with the Boeing 727, the Douglas DC-9, and the ground vehicles are given. Time histories of the aircraft test run parameters are included.
NIST biometric evaluations and developments
NASA Astrophysics Data System (ADS)
Garris, Michael D.; Wilson, Charles L.
2005-05-01
This paper presents an R&D framework used by the National Institute of Standards and Technology (NIST) for biometric technology testing and evaluation. The focus of this paper is on fingerprint-based verification and identification. Since 9-11, the NIST Image Group has been mandated by Congress to run a program for biometric technology assessment and biometric systems certification. Four essential areas of activity are discussed: (1) developing test datasets; (2) conducting performance assessments; (3) developing technology; and (4) participating in standards. A description of activities and accomplishments is provided for each of these areas. In the process, methods of performance testing are described and results from specific biometric technology evaluations are presented. This framework is anticipated to have broad applicability to other technology and application domains.
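The NIST evaluation protocols themselves are not detailed in the abstract; a standard way (assumed here, not taken from the paper) to summarize fingerprint verification performance is to compute false accept and false reject rates from impostor and genuine comparison scores at a decision threshold. The scores below are illustrative only.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """False reject rate over genuine comparisons and false accept rate over
    impostor comparisons, for a 'match if score >= threshold' decision rule."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

# Illustrative scores only
genuine = [0.91, 0.82, 0.77, 0.64, 0.95, 0.58]
impostor = [0.12, 0.33, 0.41, 0.72, 0.25, 0.09]
far, frr = far_frr(genuine, impostor, threshold=0.6)
print(f"FAR={far:.2%}  FRR={frr:.2%}")
```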
Katome: de novo DNA assembler implemented in rust
NASA Astrophysics Data System (ADS)
Neumann, Łukasz; Nowak, Robert M.; Kuśmirek, Wiktor
2017-08-01
Katome is a new de novo sequence assembler written in the Rust programming language, designed with future parallelization of its algorithms and run-time and memory-usage optimization in mind. The application uses new algorithms for the correct assembly of repetitive sequences. Performance and quality tests were performed on various data, comparing the new application to the `dnaasm', `ABySS' and `Velvet' genome assemblers. Quality tests indicate that the new assembler creates more contigs than these well-established solutions, but the contigs have better quality with regard to mismatches per 100 kbp and indels per 100 kbp. Additionally, benchmarks indicate that the Rust-based implementation outperforms the `dnaasm', `ABySS' and `Velvet' assemblers, written in C++, in terms of assembly time. Lower memory usage in comparison to `dnaasm' is also observed.
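Katome's internal algorithms are not detailed in the abstract; the sketch below (Python, not the Rust implementation) only illustrates the de Bruijn graph idea common to this class of assemblers, under the assumption that Katome follows it: k-mers become edges between (k-1)-mer nodes, and maximal non-branching paths are emitted as contigs.

```python
from collections import defaultdict

def debruijn_contigs(reads, k):
    """Build a de Bruijn graph of (k-1)-mers and emit maximal non-branching
    paths as contigs. A textbook sketch, not Katome's implementation."""
    out_edges, in_deg = defaultdict(list), defaultdict(int)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            left, right = kmer[:-1], kmer[1:]
            out_edges[left].append(right)
            in_deg[right] += 1

    def branching(node):
        # a node breaks a contig unless it has exactly one in- and one out-edge
        return len(out_edges[node]) != 1 or in_deg[node] != 1

    contigs = []
    for node in list(out_edges):
        if branching(node):
            for nxt in out_edges[node]:
                path = node + nxt[-1]
                while not branching(nxt):
                    nxt = out_edges[nxt][0]
                    path += nxt[-1]
                contigs.append(path)
    return contigs

print(debruijn_contigs(["ATGGCTCA"], k=3))   # one repeat-free read -> a single contig
```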
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Barringer, Howard
2012-01-01
TraceContract is an API (Application Programming Interface) for trace analysis. A trace is a sequence of events and can, for example, be generated by a running program instrumented appropriately to generate events. An event can be any data object. An example of a trace is a log file containing events that a programmer has found important to record during a program execution. TraceContract takes as input such a trace together with a specification formulated using the API and reports any violations of the specification, potentially calling code (reactions) to be executed when violations are detected. The software is developed as an internal DSL (Domain Specific Language) in the Scala programming language. Scala is a relatively new programming language that is especially convenient for defining such internal DSLs due to a number of language characteristics, including Scala's elegant combination of object-oriented and functional programming, a succinct notation, and an advanced type system. The DSL offers a novel combination of data-parameterized state machines and temporal logic. As an extension of Scala, it is a very expressive and convenient log file analysis framework.
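TraceContract itself is a Scala DSL; the heavily simplified sketch below (Python, not the TraceContract API) only illustrates what a data-parameterized trace check looks like: one small piece of state is tracked per observed parameter value, here requiring every open of a file to be matched by a later close. Event names and the example log are assumptions.

```python
def check_open_close(trace):
    """Data-parameterized safety check: every ('open', f) must be matched by a later
    ('close', f), and no file may be closed while not open. Returns violations."""
    open_files, violations = set(), []
    for i, (event, f) in enumerate(trace):
        if event == "open":
            if f in open_files:
                violations.append(f"event {i}: {f} opened twice")
            open_files.add(f)
        elif event == "close":
            if f not in open_files:
                violations.append(f"event {i}: {f} closed while not open")
            open_files.discard(f)
    violations.extend(f"end of trace: {f} never closed" for f in sorted(open_files))
    return violations

log = [("open", "a.dat"), ("open", "b.dat"), ("close", "a.dat"), ("close", "c.dat")]
print(check_open_close(log))
# ['event 3: c.dat closed while not open', 'end of trace: b.dat never closed']
```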
Direct liquefaction proof-of-concept program. Topical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comolli, A.G.; Lee, L.K.; Pradhan, V.R.
This report presents the results of work conducted under the DOE Proof-of-Concept Program in direct coal liquefaction at Hydrocarbon Technologies, Inc. in Lawrenceville, New Jersey, from February 1994 through April 1995. The work includes modifications to HRI's existing 3 ton per day Process Development Unit (PDU) and completion of the second PDU run (POC Run 2) under the Program. The 45-day POC Run 2 demonstrated scale-up of the Catalytic Two-Stage Liquefaction (CTSL) Process for a subbituminous Wyoming Black Thunder Mine coal to produce distillate liquid products at a rate of up to 4 barrels per ton of moisture-ash-free coal. The combined processing of organic hydrocarbon wastes, such as waste plastics and used tire rubber, with coal was also successfully demonstrated during the last nine days of operations of Run POC-02. Prior to the first PDU run (POC-01) in this program, a major effort was made to modify the PDU to improve reliability and to provide the flexibility to operate in several alternative modes. The Kerr McGee Rose-SR(SM) unit from Wilsonville, Alabama, was redesigned and installed next to the U.S. Filter installation to allow a comparison of the two solids removal systems. The 45-day CTSL Wyoming Black Thunder Mine coal demonstration run achieved several milestones in the effort to further reduce the cost of liquid fuels from coal. The primary objective of PDU Run POC-02 was to scale up the CTSL extinction recycle process for subbituminous coal to produce a total distillate product using an in-line fixed-bed hydrotreater. Of major concern was whether calcium-carbon deposits would occur in the system, as has happened in other low-rank coal conversion processes. An additional objective of major importance was to study the co-liquefaction of plastics with coal and of waste tire rubber with coal.
NASA Technical Reports Server (NTRS)
Knight, J. C.; Hamm, R. W.
1984-01-01
PASCAL/48 is a programming language for the Intel MCS-48 series of microcomputers. In particular, it can be used with the Intel 8748. It is designed to allow the programmer to control most of the instructions being generated and the allocation of storage. The language can be used instead of ASSEMBLY language in most applications while allowing the user the necessary degree of control over hardware resources. Although it is called PASCAL/48, the language differs in many ways from PASCAL. The program structure and statements of the two languages are similar, but the expression mechanism and data types are different. The PASCAL/48 cross-compiler is written in PASCAL and runs on the CDC CYBER NOS system. It generates object code in Intel hexadecimal format that can be used to program the MCS-48 series of microcomputers. This reference manual defines the language, describes the predeclared procedures, lists error messages, illustrates use, and includes language syntax diagrams.
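The compiler's internals are not described here; as a small illustration of the output format it targets, an Intel hexadecimal record is a ':'-prefixed line containing a byte count, load address, record type, data bytes, and a two's-complement checksum. The helper and example bytes below are illustrative, not the compiler's actual output.

```python
def intel_hex_record(address, data, record_type=0x00):
    """Format one Intel hexadecimal object-file record (the output format the
    PASCAL/48 cross-compiler targets): :LLAAAATT<data>CC."""
    body = bytes([len(data), (address >> 8) & 0xFF, address & 0xFF, record_type]) + bytes(data)
    checksum = (-sum(body)) & 0xFF            # two's complement of the byte sum
    return ":" + body.hex().upper() + f"{checksum:02X}"

# A hypothetical 4-byte program fragment at address 0x0000, plus the end-of-file record
print(intel_hex_record(0x0000, [0x23, 0x55, 0xA9, 0x39]))   # :040000002355A939A2
print(intel_hex_record(0x0000, [], record_type=0x01))        # :00000001FF
```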
The systematic evolution of a NASA software technology, Appendix C
NASA Technical Reports Server (NTRS)
Deregt, M. P.; Dulfer, J. E.
1972-01-01
A long-range program is described whose ultimate purpose is to make possible the production of software in NASA within predictable schedule and budget constraints, and with major characteristics such as size, run time, and correctness predictable within reasonable tolerances. As part of the program, a pilot NASA computer center will be chosen to apply software development and management techniques systematically and determine a set which is effective. The techniques will be developed by a Technology Group, which will guide the pilot project and be responsible for its success. The application of the technology will involve a sequence of NASA programming tasks graduated from simpler ones at first to complex systems in later phases of the project. The evaluation of the technology will be made by monitoring the operation of the software at the users' installations. In this way a coherent discipline for software design, production, maintenance, and management will be evolved.
Artificial intelligence (AI) based tactical guidance for fighter aircraft
NASA Technical Reports Server (NTRS)
Mcmanus, John W.; Goodrich, Kenneth H.
1990-01-01
A research program investigating the use of artificial intelligence (AI) techniques to aid in the development of a Tactical Decision Generator (TDG) for Within Visual Range air combat engagements is discussed. The application of AI programming and problem-solving methods in the development and implementation of the Computerized Logic For Air-to-Air Warfare Simulations (CLAWS), a second-generation TDG, is presented. The knowledge-based systems used by CLAWS to aid in the tactical decision-making process are outlined in detail, and the results of tests to evaluate the performance of CLAWS against a baseline TDG, developed in FORTRAN to run in real time in the Langley Differential Maneuvering Simulator, are presented. To date, these test results have shown significant performance gains over the baseline TDG in one-versus-one air combat engagements, and the AI-based TDG software has proven to be much easier to modify and maintain than the baseline FORTRAN TDG programs.
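CLAWS's knowledge bases are not reproduced in the abstract; the fragment below only gestures at the general shape of a knowledge-based tactical decision step, with ordered situation rules mapping an engagement state to a candidate maneuver. Every rule, state field, and value is invented for illustration.

```python
# Illustrative only: invented rules and state fields, not CLAWS's knowledge base.
RULES = [
    (lambda s: s["range_nm"] < 1.0 and abs(s["angle_off_deg"]) < 30, "guns tracking"),
    (lambda s: s["closure_kt"] > 200,                                "lead turn"),
    (lambda s: s["energy_margin"] < 0,                               "extend and regain energy"),
    (lambda s: True,                                                 "maintain offensive turn"),
]

def select_maneuver(state):
    """Return the action of the first rule whose condition matches the state."""
    return next(action for condition, action in RULES if condition(state))

print(select_maneuver({"range_nm": 2.5, "angle_off_deg": 10,
                       "closure_kt": 250, "energy_margin": 5}))   # "lead turn"
```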