U.S. Coast Guard 1994 Oil Pollution Research Grants Publications - Part 1
DOT National Transportation Integrated Search
1996-09-01
The aim of the USCG's research program has been to bring a measure of standardization to the investigation of the acute toxicity of dispersants, oil, and their mixtures. Compilation of scientifically defensible, realistic, and easily comparable...
PCAL: Language Support for Proof-Carrying Authorization Systems
2009-10-16
behavior of a compiled program is the same as that of the source program (Theorem 4.1) and that successfully compiled programs cannot fail due to access... semantics, formalize our compilation procedure and show that it preserves the behavior of programs. For simplicity of presentation, we abstract various... H; L ⊢ s. (6) If γ :: H; L ⊢ s, then H; L ⊢ s ↘ γ′ for some γ′. We can now show that compilation preserves the behavior of programs. More precisely, if
ToxMiner: Relating ToxCast bioactivity profiles to phenotypic outcomes
One aim of the U.S. EPA ToxCast program is to develop predictive models that use in vitro assays to screen and prioritize environmental chemicals for further evaluation of potential toxicity. One aspect of this task is the compilation, quality control and analysis of large amount...
Ada technology support for NASA-GSFC
NASA Technical Reports Server (NTRS)
1986-01-01
Utilization of the Ada programming language and environments to perform directorate functions was reviewed. The Mission and Data Operations Directorate Network (MNET) conversion effort was chosen as the first task for evaluation and assistance. The MNET project required the rewriting of the existing Network Control Program (NCP) in the Ada programming language. The DEC Ada compiler running on the VAX under VMS was used for the initial development efforts. Stress tests were performed on the newly delivered version of the DEC Ada compiler. The new Alsys Ada compiler was purchased for the IBM PC AT. A prevalidated version of the compiler was obtained, and the compiler was then validated.
OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seyong; Vetter, Jeffrey S
2014-01-01
Directive-based accelerator programming models such as OpenACC have arisen as an alternative solution for programming emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity of SHC systems poses several challenges in terms of portability and productivity. This paper presents an open-source OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in directive-based accelerator programming. This paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, this paper demonstrates the efficacy of OpenARC as a research framework for directive-based programming study, by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of the unified memory and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while it serves as a high-level research framework.
Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramshaw, M. J.
2017-07-28
Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year, in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs, but such networks require large amounts of data to be effective. Toward that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, the version of the compiler, and the options used in compilation; such information can be critical in determining where a malware program came from and even who authored it.
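To make the multicompilation idea concrete, here is a minimal sketch (ours, not the project's actual pipeline), assuming gcc and clang are on PATH and that NumPy and Pillow are installed; the file names and flag sets are illustrative. It compiles one source file under several compiler/flag combinations and renders each resulting binary as the kind of grayscale image the abstract describes feeding to a neural network:

```python
# Sketch: multicompile a source file, then view each binary as an image.
import subprocess
from pathlib import Path

import numpy as np
from PIL import Image

COMPILERS = ["gcc", "clang"]                      # assumed available
FLAG_SETS = [["-O0"], ["-O2"], ["-O3", "-funroll-loops"]]

def multicompile(source: Path, outdir: Path) -> list[Path]:
    """Compile `source` under every compiler/flag combination."""
    outdir.mkdir(exist_ok=True)
    binaries = []
    for cc in COMPILERS:
        for flags in FLAG_SETS:
            out = outdir / f"{source.stem}_{cc}_{'_'.join(flags)}"
            subprocess.run([cc, *flags, str(source), "-o", str(out)], check=True)
            binaries.append(out)
    return binaries

def binary_to_image(path: Path, width: int = 256) -> Image.Image:
    """Reinterpret the raw bytes of a binary as a fixed-width grayscale image."""
    data = np.frombuffer(path.read_bytes(), dtype=np.uint8)
    rows = len(data) // width
    return Image.fromarray(data[: rows * width].reshape(rows, width), mode="L")

if __name__ == "__main__":
    for b in multicompile(Path("hello.c"), Path("builds")):   # hello.c is a placeholder
        binary_to_image(b).save(b.with_suffix(".png"))
```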
Distributed memory compiler design for sparse problems
NASA Technical Reports Server (NTRS)
Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema
1991-01-01
A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.
A software methodology for compiling quantum programs
NASA Astrophysics Data System (ADS)
Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias
2018-04-01
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.
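As a toy illustration of such layered lowering (not the paper's actual stack; the gate names, angles, and native gate set below are assumptions, and qubit operands are omitted for brevity), each layer rewrites abstract gates into lower-level ones until only hardware-native gates remain:

```python
# Sketch: rewrite gates through decomposition layers down to a native gate set.
HW_GATES = {"rx", "rz", "cz"}   # assumed native gates of some hypothetical target

# Each rule maps one abstract gate to a list of lower-level gates.
DECOMPOSE = {
    "h":    [("rz", 1.5707963), ("rx", 1.5707963), ("rz", 1.5707963)],  # H up to phase
    "cnot": [("h", None), ("cz", None), ("h", None)],
}

def lower(circuit):
    """Repeatedly expand non-native gates until everything is hardware-native."""
    done, work = [], list(circuit)
    while work:
        name, arg = work.pop(0)
        if name in HW_GATES:
            done.append((name, arg))
        else:
            work = DECOMPOSE[name] + work   # expand, then re-examine the result
    return done

print(lower([("cnot", None)]))   # ends in rx/rz/cz gates only
```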
2013-12-01
First, any subproject that involved an implementation shared some implementation infrastructure with other subprojects. For example, the Plaid backend ...very same language. We followed this advice in Plaid, and we therefore implemented the compiler backend in Plaid (code generation, type checker, Æminim...programming language aimed at enforcing security properties in web and mobile applications [Nistor et al., 2013]. Wyvern therefore provides an excellent
HAL/S - The programming language for Shuttle
NASA Technical Reports Server (NTRS)
Martin, F. H.
1974-01-01
HAL/S is a higher order language and system, now operational, adopted by NASA for programming Space Shuttle on-board software. Program reliability is enhanced through language clarity and readability, modularity through program structure, and protection of code and data. Salient features of HAL/S include output orientation, automatic checking (with strictly enforced compiler rules), the availability of linear algebra, real-time control, a statement-level simulator, and compiler transferability (for applying HAL/S to additional object and host computers). The compiler is described briefly.
Residents' responses to wildland fire programs: a review of cognitive and behavioral studies
James D. Absher; Jerry J. Vaske; Lori B. Shelby
2009-01-01
A compilation and summary of four research studies is presented. They were aimed at developing a theoretical and practical understanding of homeowners' attitudes and behaviors in the wildland-urban interface (WUI) in relation to the threat from wildland fires. Individual studies focused on models and methods that measured (1) value orientations (patterns of basic...
A comparative study of programming languages for next-generation astrodynamics systems
NASA Astrophysics Data System (ADS)
Eichhorn, Helge; Cano, Juan Luis; McLean, Frazer; Anderl, Reiner
2018-03-01
Due to the computationally intensive nature of astrodynamics tasks, astrodynamicists have relied on compiled programming languages such as Fortran for the development of astrodynamics software. Interpreted languages such as Python, on the other hand, offer higher flexibility and development speed thereby increasing the productivity of the programmer. While interpreted languages are generally slower than compiled languages, recent developments such as just-in-time (JIT) compilers or transpilers have been able to close this speed gap significantly. Another important factor for the usefulness of a programming language is its wider ecosystem which consists of the available open-source packages and development tools such as integrated development environments or debuggers. This study compares three compiled languages and three interpreted languages, which were selected based on their popularity within the scientific programming community and technical merit. The three compiled candidate languages are Fortran, C++, and Java. Python, Matlab, and Julia were selected as the interpreted candidate languages. All six languages are assessed and compared to each other based on their features, performance, and ease-of-use through the implementation of idiomatic solutions to classical astrodynamics problems. We show that compiled languages still provide the best performance for astrodynamics applications, but JIT-compiled dynamic languages have reached a competitive level of speed and offer an attractive compromise between numerical performance and programmer productivity.
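The JIT point can be illustrated with a small sketch (ours, not code from the study), assuming the third-party Numba package is installed; the same Newton iteration for Kepler's equation runs interpreted or JIT-compiled from identical source:

```python
# Sketch: one numeric kernel, timed interpreted and (optionally) JIT-compiled.
import math
import time

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton's method."""
    E = M
    for _ in range(50):
        d = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= d
        if abs(d) < tol:
            break
    return E

try:
    from numba import njit             # assumed installed; falls back if not
    kepler_E_jit = njit(kepler_E)      # same source, machine code via LLVM
    kepler_E_jit(0.1, 0.3)             # warm-up call triggers compilation
except ImportError:
    kepler_E_jit = kepler_E

for label, f in (("interpreted", kepler_E), ("jit", kepler_E_jit)):
    t0 = time.perf_counter()
    for i in range(200_000):
        f(0.1 + 1e-6 * i, 0.3)
    print(label, round(time.perf_counter() - t0, 3), "s")
```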
14 CFR 1203.302 - Combination, interrelation or compilation.
Code of Federal Regulations, 2011 CFR
2011-01-01
... INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302 Combination.... Compilations of unclassified information are considered unclassified unless some additional significant factor is added in the process of compilation. For example: (a) The way unclassified information is compiled...
14 CFR 1203.302 - Combination, interrelation or compilation.
Code of Federal Regulations, 2010 CFR
2010-01-01
... INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302 Combination.... Compilations of unclassified information are considered unclassified unless some additional significant factor is added in the process of compilation. For example: (a) The way unclassified information is compiled...
Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gupta, Manish
1992-01-01
Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, that accepts Fortran 77 programs, and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of scientific application programs with regular computations.
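A heavily simplified sketch of the constraints idea follows (the array names, distributions, and quality values are invented for illustration, not Paradigm's internals): each reference contributes a weighted preference, and the distribution with the best combined quality measure wins.

```python
# Sketch: combine weighted distribution constraints per array.
from collections import defaultdict

# (array, preferred distribution, quality measure from static cost estimation)
constraints = [
    ("A", "block_rows", 10.0),   # a row stencil favors row blocks
    ("A", "block_cols",  2.0),   # one transposed access, less important
    ("B", "block_rows",  6.0),
]

def resolve(constraints):
    score = defaultdict(float)
    for array, dist, quality in constraints:
        score[(array, dist)] += quality       # accumulate evidence per choice
    best = {}
    for (array, dist), s in score.items():
        if array not in best or s > score[(array, best[array])]:
            best[array] = dist                # keep the highest-quality choice
    return best

print(resolve(constraints))   # {'A': 'block_rows', 'B': 'block_rows'}
```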
Effective Compiler Error Message Enhancement for Novice Programming Students
ERIC Educational Resources Information Center
Becker, Brett A.; Glanville, Graham; Iwashima, Ricardo; McDonnell, Claire; Goslin, Kyle; Mooney, Catherine
2016-01-01
Programming is an essential skill that many computing students are expected to master. However, programming can be difficult to learn. Successfully interpreting compiler error messages (CEMs) is crucial for correcting errors and progressing toward success in programming. Yet these messages are often difficult to understand and pose a barrier to…
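A minimal sketch of CEM enhancement in the spirit of the study (the patterns and hint wording here are invented, not the authors'): match common compiler error messages and append a novice-friendly restatement.

```python
# Sketch: rewrite raw compiler error messages with beginner-oriented hints.
import re

ENHANCEMENTS = [
    (re.compile(r"';' expected"),
     "You are probably missing a semicolon at the end of the previous line."),
    (re.compile(r"cannot find symbol"),
     "The compiler does not recognize this name. Check the spelling, and make "
     "sure the variable or method is declared before it is used."),
]

def enhance(cem: str) -> str:
    for pattern, hint in ENHANCEMENTS:
        if pattern.search(cem):
            return f"{cem}\n  hint: {hint}"
    return cem   # pass through anything we have no rule for

print(enhance("Main.java:3: error: ';' expected"))
```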
Testing New Programming Paradigms with NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.
2000-01-01
Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also in the increasing complexity of real applications. Technologies have been developed that aim at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g. MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new effort has been made in defining new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress in the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." In light of testing these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage was applied to several benchmarks, noticeably BT and SP, resulting in better sequential performance. In order to overcome the lack of an HPF performance model and guide the development of the HPF codes, we employed an empirical performance model for several primitives found in the benchmarks. We encountered a few limitations of HPF, such as the lack of support for the "REDISTRIBUTION" directive and no easy way to handle irregular computation. The parallelization with OpenMP directives was done at the outermost loop level to achieve the largest granularity. The performance of six HPF and OpenMP benchmarks is compared with their MPI counterparts for the Class-A problem size. These results were obtained on an SGI Origin2000 (195 MHz) with the MIPSpro f77 compiler 7.2.1 for the OpenMP and MPI codes and the PGI pghpf 2.4.3 compiler with MPI interface for the HPF programs.
The paradigm compiler: Mapping a functional language for the connection machine
NASA Technical Reports Server (NTRS)
Dennis, Jack B.
1989-01-01
The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.
1987-04-30
Ada (Trade Name) Compiler Validation Summary Report: Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H1200 and H800. Information Systems and Technology Center, Wright-Patterson AFB, OH. Validation period: 30 APR 1986 to 30 APR 1987. Ada is a registered trademark of the United States Government (Ada Joint Program Office). Compiler name: HARRIS Ada Compiler, Version 1.0; Host: Harris H1200.
Program structure-based blocking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.
2017-09-26
Embodiments relate to program structure-based blocking. An aspect includes receiving source code corresponding to a computer program by a compiler of a computer system. Another aspect includes determining a prefetching section in the source code by a marking module of the compiler. Yet another aspect includes performing, by a blocking module of the compiler, blocking of instructions located in the prefetching section into instruction blocks, such that the instruction blocks of the prefetching section only contain instructions that are located in the prefetching section.
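A sketch of what such a pass might do (ours, not the patent's implementation; the section markers and block size are assumptions): instructions between the markers of a prefetching section are grouped into fixed-size blocks, and everything outside is passed through unchanged.

```python
# Sketch: group instructions inside a marked prefetching section into blocks.
def block_prefetch_section(instrs, block_size=4):
    out, buf, inside = [], [], False
    for ins in instrs:
        if ins == "PREFETCH_BEGIN":
            inside = True
        elif ins == "PREFETCH_END":
            if buf:
                out.append(tuple(buf))   # flush a partial final block
                buf = []
            inside = False
        elif inside:
            buf.append(ins)
            if len(buf) == block_size:   # block only holds in-section instructions
                out.append(tuple(buf))
                buf = []
        else:
            out.append(ins)              # instructions outside are untouched
    return out

print(block_prefetch_section(
    ["a", "PREFETCH_BEGIN", "l1", "l2", "l3", "l4", "l5", "PREFETCH_END", "b"]))
# ['a', ('l1', 'l2', 'l3', 'l4'), ('l5',), 'b']
```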
1988-08-01
...such as those in the vicinity of the ELF antenna, because they are pollinators of flowering plants and are therefore important to the reproductive... Compilation of 1987 Annual Reports of the Navy ELF Communications System Ecological Monitoring Program, Volume 2 of 3 Volumes: Tabs D-G.
Ada (Trade Name) Compiler Validation Summary Report: Alsys Inc., AlsyCOMP 003, V3.1, Wang PC 280.
1988-06-04
Compiler Validation Capability: a set of programs that evaluates the conformity of a compiler to the Ada language specification, ANSI/MIL-STD-1815... Ada is a registered trademark of the United States Government (Ada Joint Program Office)... The National Computing Centre Limited, Manchester, UK.
Testing-Based Compiler Validation for Synchronous Languages
NASA Technical Reports Server (NTRS)
Garoche, Pierre-Loic; Howar, Falk; Kahsai, Temesghen; Thirioux, Xavier
2014-01-01
In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test-suite with high behavioral coverage and geared towards discovery of faults for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.
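The approach can be sketched as per-artifact differential testing (a simplification of the paper's coverage-directed generation; the reference function and binary name below are hypothetical):

```python
# Sketch: compare a compiled artifact against an executable reference semantics.
import random
import subprocess

def reference_step(x: int) -> int:
    """Executable reference semantics of a (tiny) hypothetical source program."""
    return x * 2 + 1

def run_compiled(binary: str, x: int) -> int:
    out = subprocess.run([binary, str(x)], capture_output=True, text=True)
    return int(out.stdout)

def validate(binary: str, n_tests: int = 1000) -> None:
    for _ in range(n_tests):
        x = random.randint(-10**6, 10**6)
        expect, got = reference_step(x), run_compiled(binary, x)
        assert expect == got, f"divergence at input {x}: {expect} != {got}"

# validate("./step")   # './step' is a hypothetical compiled artifact
```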
SEGY to ASCII: Conversion and Plotting Program
Goldman, Mark R.
1999-01-01
This report documents a computer program to convert standard 4-byte, IBM floating point SEGY files to ASCII xyz format. The program then optionally plots the seismic data using the GMT plotting package. The material for this publication is contained in a standard tar file (of99-126.tar) that is uncompressed and 726 K in size. It can be downloaded by any Unix machine. Move the tar file to the directory you wish to use it in, then type 'tar xvf of99-126.tar'. The archive files (and diskette) contain a NOTE file, a README file, a version-history file, source code, a makefile for easy compilation, and an ASCII version of the documentation. The archive files (and diskette) also contain example test files, including a typical SEGY file along with the resulting ASCII xyz and postscript files. Compiling the source code into an executable requires a C++ compiler. The program has been successfully compiled using Gnu's g++ version 2.8.1; use of other compilers may require modifications to the existing source code. The g++ compiler is a free, high quality C++ compiler and may be downloaded from the ftp site: ftp://ftp.gnu.org/gnu Plotting the seismic data requires the GMT plotting package, which may be downloaded from the web site: http://www.soest.hawaii.edu/gmt/
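The core numeric step of such a conversion, decoding 4-byte IBM System/360 floats into native floats, can be sketched as follows (a generic reimplementation from the well-known format definition, not the report's C++ source; big-endian byte order is assumed, as in standard SEGY):

```python
# Sketch: decode an IBM hexadecimal float (sign, base-16 exponent biased by 64,
# 24-bit fraction) into a Python float.
import struct

def ibm32_to_float(b: bytes) -> float:
    (word,) = struct.unpack(">I", b)        # big-endian 32-bit word
    sign = -1.0 if word >> 31 else 1.0
    exponent = (word >> 24) & 0x7F          # base-16 exponent, bias 64
    fraction = (word & 0x00FFFFFF) / float(1 << 24)   # 0 <= f < 1
    return sign * fraction * 16.0 ** (exponent - 64)

# 0x42640000 encodes 100.0: fraction 0x640000/2^24 = 0.390625, 16^(66-64) = 256
print(ibm32_to_float(bytes.fromhex("42640000")))   # 100.0
```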
Using Food as a Tool to Teach Science to 3rd Grade Students in Appalachian Ohio
ERIC Educational Resources Information Center
Duffrin, Melani W.; Hovland, Jana; Carraway-Stage, Virginia; McLeod, Sara; Duffrin, Christopher; Phillips, Sharon; Rivera, David; Saum, Diana; Johanson, George; Graham, Annette; Lee, Tammy; Bosse, Michael; Berryman, Darlene
2010-01-01
The Food, Math, and Science Teaching Enhancement Resource (FoodMASTER) Initiative is a compilation of programs aimed at using food as a tool to teach mathematics and science. In 2007 to 2008, a foods curriculum developed by professionals in nutrition and education was implemented in 10 3rd-grade classrooms in Appalachian Ohio; teachers in these…
ERIC Educational Resources Information Center
Tennessee State Board for Vocational Education, Murfreesboro. Vocational Curriculum Lab.
Practical nurse instructors, in conference, compiled this individually planned and tested material to be used in practical nurse education. Thirty-two lesson plans on the subject of mother and infant care cover topics ranging from the reproductive system to complications involving the newborn. Each plan includes aim, references, materials,…
A Compiler and Run-time System for Network Programming Languages
2012-01-01
A Compiler and Run-time System for Network Programming Languages. Christopher Monsanto (Princeton University), Nate Foster (Cornell University), Rob... Foster, R. Harrison, M. Freedman, C. Monsanto, J. Rexford, A. Story, and D. Walker. Frenetic: A network programming language. In ICFP, Sep 2011.
SLEEC: Semantics-Rich Libraries for Effective Exascale Computation. Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulkarni, Milind
SLEEC (Semantics-rich Libraries for Effective Exascale Computation) was a project funded by the Department of Energy X-Stack Program, award number DE-SC0008629. The initial project period was September 2012–August 2015. The project was renewed for an additional year, expiring August 2016. Finally, the project received a no-cost extension, leading to a final expiry date of August 2017. Modern applications, especially those intended to run at exascale, are not written from scratch. Instead, they are built by stitching together various carefully-written, hand-tuned libraries. Correctly composing these libraries is difficult, but traditional compilers are unable to effectively analyze and transform across abstraction layers. Domain-specific compilers integrate semantic knowledge into compilers, allowing them to transform applications that use particular domain-specific languages, or domain libraries. But they do not help when new domains are developed, or applications span multiple domains. SLEEC aims to fix these problems. To do so, we are building generic compiler and runtime infrastructures that are semantics-aware but not domain-specific. By performing optimizations related to the semantics of a domain library, the same infrastructure can be made generic and apply across multiple domains.
Optimization guide for programs compiled under IBM FORTRAN H (OPT=2)
NASA Technical Reports Server (NTRS)
Smith, D. M.; Dobyns, A. H.; Marsh, H. M.
1977-01-01
Guidelines are given to provide the programmer with various techniques for optimizing programs when the FORTRAN IV H compiler is used with OPT=2. Subroutines and programs are described in the appendices along with a timing summary of all the examples given in the manual.
Compile-time estimation of communication costs in multicomputers
NASA Technical Reports Server (NTRS)
Gupta, Manish; Banerjee, Prithviraj
1991-01-01
An important problem facing numerous research projects on parallelizing compilers for distributed memory machines is that of automatically determining a suitable data partitioning scheme for a program. Any strategy for automatic data partitioning needs a mechanism for estimating the performance of a program under a given partitioning scheme, the most crucial part of which involves determining the communication costs incurred by the program. A methodology is described for estimating the communication costs at compile time as functions of the numbers of processors over which various arrays are distributed. A strategy is described, along with its theoretical basis, for making program transformations that expose opportunities for combining messages, leading to considerable savings in communication costs. For certain loops with regular dependences, the compiler can detect the possibility of pipelining, and thus estimate communication costs more accurately than it could otherwise. These results are of great significance to any parallelization system supporting numeric applications on multicomputers. In particular, they lay down a framework for effective synthesis of communication on multicomputers from sequential program references.
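A toy version of such a compile-time cost model (the linear startup-plus-bandwidth form is standard; the constants below are invented) makes the payoff of message combining explicit:

```python
# Sketch: communication time as per-message startup plus per-word transfer.
STARTUP = 100.0    # assumed per-message startup cost (arbitrary cycle units)
PER_WORD = 1.0     # assumed per-word transfer cost

def comm_cost(n_messages: int, words_per_message: int) -> float:
    return n_messages * (STARTUP + PER_WORD * words_per_message)

# 1000 single-word messages vs. one combined 1000-word message:
print(comm_cost(1000, 1))    # 101000.0 -- startup dominates
print(comm_cost(1, 1000))    # 1100.0   -- combining saves ~99%
```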
SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER
NASA Technical Reports Server (NTRS)
Scotti, S. J.
1994-01-01
SOL is a computer language geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN, but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of the over 100 ADS optimization choices such as Sequential Quadratic Programming, Modified Feasible Directions, and interior and exterior penalty function and variable metric methods. Default choices of the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters desired for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by an LALR(1) grammar and the SOL compiler's parser was generated automatically from the LALR(1) grammar with a parser-generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis ensures that the SOL compiler recognizes all legal SOL programs, can recover from and correct for many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute.
Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seyong; Kim, Jungwon; Vetter, Jeffrey S
This paper presents a directive-based, high-level programming framework for high-performance reconfigurable computing. It takes a standard, portable OpenACC C program as input and generates a hardware configuration file for execution on FPGAs. We implemented this prototype system using our open-source OpenARC compiler; it performs source-to-source translation and optimization of the input OpenACC program into OpenCL code, which is further compiled into an FPGA program by the backend Altera Offline OpenCL compiler. Internally, the design of OpenARC uses a high-level intermediate representation that separates concerns of program representation from underlying architectures, which facilitates portability of OpenARC. In fact, this design allowed us to create the OpenACC-to-FPGA translation framework with minimal extensions to our existing system. In addition, we show that our proposed FPGA-specific compiler optimizations and novel OpenACC pragma extensions assist the compiler in generating more efficient FPGA hardware configuration files. Our empirical evaluation on an Altera Stratix V FPGA with eight OpenACC benchmarks demonstrates the benefits of our strategy. To demonstrate the portability of OpenARC, we show results for the same benchmarks executing on other heterogeneous platforms, including NVIDIA GPUs, AMD GPUs, and Intel Xeon Phis. This initial evidence helps support the goal of using a directive-based, high-level programming strategy for performance portability across heterogeneous HPC architectures.
MetaJC++: A flexible and automatic program transformation technique using meta framework
NASA Astrophysics Data System (ADS)
Beevi, Nadera S.; Reghu, M.; Chitraprasad, D.; Vinodchandra, S. S.
2014-09-01
A compiler is a tool that translates abstract code containing natural-language terms into machine code. Meta compilers are available that can compile more than one language. We have developed a meta framework that intends to combine two dissimilar programming languages, namely C++ and Java, to provide a flexible object oriented programming platform for the user. Suitable constructs from both languages have been combined, thereby forming a new and stronger meta-language. The framework is developed using the compiler writing tools Flex and Yacc to design the front end of the compiler. The lexer and parser have been developed to accommodate the complete keyword set and syntax set of both languages. Two intermediate representations are used in the translation of the source program to machine code. An Abstract Syntax Tree is used as a high-level intermediate representation that preserves the hierarchical properties of the source program. A new machine-independent, stack-based byte-code has also been devised to act as a low-level intermediate representation. The byte-code is essentially organised into an output class file that can be used to produce an interpreted output. The results, especially in the sphere of providing C++ concepts in Java, have given an insight into the potential strong features of the resultant meta-language.
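The two intermediate representations can be illustrated with a tiny expression compiler (the instruction names are ours, not MetaJC++'s): a post-order walk of the AST emits stack-based byte-code, which a small loop then interprets.

```python
# Sketch: AST lowered to stack byte-code, then interpreted.
def compile_expr(node, code):
    """Post-order walk: children first, then the operator (stack discipline)."""
    if isinstance(node, int):
        code.append(("PUSH", node))
    else:
        op, left, right = node
        compile_expr(left, code)
        compile_expr(right, code)
        code.append(({"+": "ADD", "*": "MUL"}[op], None))
    return code

def run(code):
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

ast = ("+", 2, ("*", 3, 4))          # the expression 2 + 3 * 4
print(run(compile_expr(ast, [])))    # 14
```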
Map and data for Quaternary faults and folds in New Mexico
Machette, M.N.; Personius, S.F.; Kelson, K.I.; Haller, K.M.; Dart, R.L.
1998-01-01
The "World Map of Major Active Faults" Task Group is compiling a series of digital maps for the United States and other countries in the Western Hemisphere that show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds; the companion database includes published information on these seismogenic features. The Western Hemisphere effort is sponsored by International Lithosphere Program (ILP) Task Group H-2, whereas the effort to compile a new map and database for the United States is funded by the Earthquake Reduction Program (ERP) through the U.S. Geological Survey. The maps and accompanying databases represent a key contribution to the new Global Seismic Hazards Assessment Program (ILP Task Group II-O) for the International Decade for Natural Disaster Reduction. This compilation, which describes evidence for surface faulting and folding in New Mexico, is the third of many similar State and regional compilations that are planned for the U.S. The compilation for West Texas is available as U.S. Geological Survey Open-File Report 96-002 (Collins and others, 1996 #993) and the compilation for Montana will be released as a Montana Bureau of Mines product (Haller and others, in press #1750).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yates, K.R.; Schreiber, A.M.; Rudolph, A.W.
The US Nuclear Regulatory Commission has initiated the Fuel Cycle Risk Assessment Program to provide risk assessment methods for assistance in the regulatory process for nuclear fuel cycle facilities other than reactors. Both the once-through cycle and plutonium recycle are being considered. A previous report generated by this program defines and describes fuel cycle facilities, or elements, considered in the program. This report, the second from the program, describes the survey and computer compilation of fuel cycle risk-related literature. Sources of available information on the design, safety, and risk associated with the defined set of fuel cycle elements were searched, and documents obtained were catalogued and characterized with respect to fuel cycle elements and specific risk/safety information. Both US and foreign surveys were conducted. Battelle's computer-based BASIS information management system was used to facilitate the establishment of the literature compilation. A complete listing of the literature compilation and several useful indexes are included. Future updates of the literature compilation will be published periodically. 760 annotated citations are included.
ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amarasinghe, Saman
This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first-class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
Program package for multicanonical simulations of U(1) lattice gauge theory-Second version
NASA Astrophysics Data System (ADS)
Bazavov, Alexei; Berg, Bernd A.
2013-03-01
A new version STMCMUCA_V1_1 of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers. New version program summary. Program title: STMC_U1MUCA_v1_1. Catalogue identifier: AEET_v1_1. Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html. Programming language: Fortran 77, compatible with Fortran 90 and 95. Computers: any capable of compiling and executing Fortran code. Operating systems: any capable of compiling and executing Fortran code. RAM: 10 MB and up, depending on the lattice size used. No. of lines in distributed program, including test data, etc.: 15059. No. of bytes in distributed program, including test data, etc.: 215733. Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems. Classification: 11.5. Catalogue identifier of previous version: AEET_v1_0. Journal reference of previous version: Computer Physics Communications 180 (2009) 2339-2347. Does the new version supersede the previous version?: Yes. Nature of problem: efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition; measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors. Solution method: multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors; reweighting to physical values using logarithmic coding and calculating jackknife error bars. Reasons for the new version: the previous version was developed for the g77 compiler's Fortran 77 version; compiler errors were encountered with the Fortran 90 and Fortran 95 compilers specified below. Summary of revisions: epsilon=one/10**10 is replaced by epsilon/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl_backup.f, u1wlread_backup.f of the folder Libs/U1_par. For the tested compilers, script files are added in the folder ExampleRuns, and readme.txt files are now provided in all subfolders of ExampleRuns. The gnuplot driver files produced by the routine hist_gnu.f of Libs/Fortran are adapted to the syntax required by gnuplot version 4.0 and higher. Restrictions: due to the use of explicit real*8 initialization, conversion into real*4 will require extra changes besides replacing the implicit.sta file by its real*4 version. Unusual features: the programs have to be compiled using script files like those contained in the folder ExampleRuns, as explained in the original paper. Running time: the prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
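The jackknife error bars mentioned in the solution method are standard statistics; a generic delete-one estimator for the mean (not code from the package) looks like this:

```python
# Sketch: delete-one jackknife mean and error bar.
import math

def jackknife(samples):
    n = len(samples)
    mean = sum(samples) / n
    # delete-one ("leave-one-out") estimates of the mean
    loo = [(sum(samples) - s) / (n - 1) for s in samples]
    var = (n - 1) / n * sum((m - mean) ** 2 for m in loo)
    return mean, math.sqrt(var)

print(jackknife([1.02, 0.98, 1.05, 0.97, 1.01]))   # (mean, error bar)
```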
Evaluation of HAL/S language compilability using SAMSO's Compiler Writing System (CWS)
NASA Technical Reports Server (NTRS)
Feliciano, M.; Anderson, H. D.; Bond, J. W., III
1976-01-01
NASA/Langley is engaged in a program to develop an adaptable guidance and control software concept for spacecraft such as shuttle-launched payloads. It is envisioned that this flight software be written in a higher-order language, such as HAL/S, to facilitate changes or additions. To make this adaptable software transferable to various onboard computers, a compiler writing system capability is necessary. A joint program with the Air Force Space and Missile Systems Organization was initiated to determine if the Compiler Writing System (CWS) owned by the Air Force could be utilized for this purpose. The present study explores the feasibility of including the HAL/S language constructs in CWS and the effort required to implement these constructs. This will determine the compilability of HAL/S using CWS and permit NASA/Langley to identify the HAL/S constructs desired for their applications. The study consisted of comparing the implementation of the Space Programming Language using CWS with the requirements for the implementation of HAL/S. It is the conclusion of the study that CWS already contains many of the language features of HAL/S and that it can be expanded for compiling part or all of HAL/S. It is assumed that persons reading and evaluating this report have a basic familiarity with (1) the principles of compiler construction and operation, and (2) the logical structure and applications characteristics of HAL/S and SPL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moss, Nicholas
The Kokkos Clang compiler is a version of the Clang C++ compiler that has been modified to perform targeted code generation for Kokkos constructs, with the goal of generating highly optimized code and providing semantic (domain) awareness of constructs such as parallel for and parallel reduce throughout the compilation toolchain. This approach is taken to explore the possibilities of exposing the developer's intentions to the underlying compiler infrastructure (e.g. optimization and analysis passes within the middle stages of the compiler) instead of relying solely on the restricted capabilities of C++ template metaprogramming. To date, our activities have focused on correct GPU code generation, and thus we have not yet focused on improving overall performance. The compiler is implemented by recognizing specific (syntactic) Kokkos constructs in order to bypass normal template expansion mechanisms and instead use the semantic knowledge of Kokkos to directly generate code in the compiler's intermediate representation (IR), which is then translated into an NVIDIA-centric GPU program and supporting runtime calls. In addition, capturing and maintaining the higher-level semantics of Kokkos directly within the lower levels of the compiler has the potential to significantly improve the ability of the compiler to communicate with the developer in the terms of their original programming model/semantics.
Continued advancement of the programming language HAL to an operational status
NASA Technical Reports Server (NTRS)
1971-01-01
The continued advancement of the programming language HAL to operational status is reported. It is demonstrated that the compiler itself can be written in HAL. A HAL-in-HAL experiment proves conclusively that HAL can be used successfully as a compiler implementation tool.
Algorithmic synthesis using Python compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej
2015-09-01
This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGA combines many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. By using a higher level of abstraction and a High-Level Synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation, and results of the created tools.
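The flavor of such a translation can be sketched with Python's own ast module (the VHDL template below is ours and greatly simplified relative to a real HLS flow): a Python arithmetic assignment is parsed and emitted as a VHDL signal assignment.

```python
# Sketch: translate a Python assignment into a VHDL signal assignment.
import ast

VHDL_OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

def emit_expr(node):
    if isinstance(node, ast.BinOp):
        return f"({emit_expr(node.left)} {VHDL_OPS[type(node.op)]} {emit_expr(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return str(node.value)
    raise NotImplementedError(type(node))

def py_to_vhdl(line: str) -> str:
    stmt = ast.parse(line).body[0]
    assert isinstance(stmt, ast.Assign)      # only simple assignments handled
    return f"{stmt.targets[0].id} <= {emit_expr(stmt.value)};"

print(py_to_vhdl("y = a + b * 3"))   # y <= (a + (b * 3));
```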
Exploiting loop level parallelism in nonprocedural dataflow programs
NASA Technical Reports Server (NTRS)
Gokhale, Maya B.
1987-01-01
This paper discusses how loop-level parallelism is detected in a nonprocedural dataflow program, and how a procedural program with concurrent loops is scheduled. Also discussed is a program restructuring technique which may be applied to recursive equations so that concurrent loops may be generated for a seemingly iterative computation. A compiler which generates C code for the language has been implemented. The scheduling component of the compiler and the restructuring transformation are described.
Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers
NASA Technical Reports Server (NTRS)
Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj
1995-01-01
The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler based on the PTD representation allowed symbolic array sizes, affine loop bounds and array subscripts, and a variable number of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.
NASA Astrophysics Data System (ADS)
Cieszewski, Radoslaw; Linczuk, Maciej
2016-09-01
The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. A compiler interprets an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translates it to a Hardware Description Language (HDL). This paper presents an RPython based High-Level Synthesis (HLS) compiler. The compiler reads the configuration parameters and maps an RPython program to VHDL. The VHDL code can then be used to program FPGA chips. Compared with software implementations, FPGAs have the potential to achieve far greater performance by omitting the fetch-decode-execute operations of General Purpose Processors (GPPs) and by introducing more parallel computation. This can be exploited by utilizing many resources at the same time. Creating parallel algorithms computed with FPGAs in pure HDL is difficult and time consuming. Implementation time can be greatly reduced with a High-Level Synthesis compiler. This article describes the design methodologies and tools, the implementation, and the first results of the created VHDL backend for the RPython compiler.
Programs for Testing Processor-in-Memory Computing Systems
NASA Technical Reports Server (NTRS)
Katz, Daniel S.
2006-01-01
The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards.]
Compiler-assisted static checkpoint insertion
NASA Technical Reports Server (NTRS)
Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.
1992-01-01
This paper describes a compiler-assisted approach for static checkpoint insertion. Instead of fixing the checkpoint location before program execution, a compiler-enhanced polling mechanism is utilized to maintain both the desired checkpoint intervals and reproducible checkpoint locations. The technique has been implemented in a GNU CC compiler for Sun 3 and Sun 4 (Sparc) processors. Experiments demonstrate that the approach provides stable checkpoint intervals and reproducible checkpoint placements with performance overhead comparable to a previously presented compiler-assisted dynamic scheme (CATCH) utilizing the system clock.
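The polling mechanism can be sketched in a few lines (ours, not the paper's GNU CC modification; the interval and poll frequency are assumptions): the compiler would insert a cheap counter-guarded poll at reproducible program points, and the poll saves state when the desired interval has elapsed.

```python
# Sketch: polling-based checkpointing at a reproducible program location.
import pickle
import time

CHECKPOINT_INTERVAL = 1.0            # desired seconds between checkpoints
_last = time.monotonic()

def poll_checkpoint(state, path="ckpt.pkl"):
    """What compiler-inserted polling code would do at each poll point."""
    global _last
    if time.monotonic() - _last >= CHECKPOINT_INTERVAL:
        with open(path, "wb") as f:
            pickle.dump(state, f)    # always saved at the same program point
        _last = time.monotonic()

total = 0
for i in range(5_000_000):
    total += i
    if i % 4096 == 0:                # cheap counter guard at the loop head
        poll_checkpoint({"i": i, "total": total})
```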
1987-06-03
Ada Compiler Validation Summary Report: Harris Corporation, Harris Ada Compiler, Version 1.0, Harris H1200 Host, Tektronix 8540A/1750A Target. Ada Joint Program Office, Arlington, VA. Validation period: 3 June 1987 to 3 June 1988.
Fast computation of close-coupling exchange integrals using polynomials in a tree representation
NASA Astrophysics Data System (ADS)
Wallerberger, Markus; Igenbergs, Katharina; Schweinzer, Josef; Aumayr, Friedrich
2011-03-01
The semi-classical atomic-orbital close-coupling method is a well-known approach for the calculation of cross sections in ion-atom collisions. It relies strongly on the fast and stable computation of exchange integrals. We present an upgrade to earlier implementations of the Fourier-transform method. For this purpose, we implement an extensive library for symbolic storage of polynomials, relying on sophisticated tree structures to allow fast manipulation and numerically stable evaluation. Using this library, we considerably speed up the creation and computation of exchange integrals. This enables us to compute cross sections for more complex collision systems. Program summary. Program title: TXINT. Catalogue identifier: AEHS_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHS_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 12 332. No. of bytes in distributed program, including test data, etc.: 157 086. Distribution format: tar.gz. Programming language: Fortran 95. Computer: all with a Fortran 95 compiler. Operating system: all with a Fortran 95 compiler. RAM: depends heavily on input, usually less than 100 MiB. Classification: 16.10. Nature of problem: analytical calculation of one- and two-center exchange matrix elements for the close-coupling method in the impact parameter model. Solution method: similar to the code of Hansen and Dubois [1], we use the Fourier-transform method suggested by Shakeshaft [2] to compute the integrals; however, we heavily speed up the calculation using a library for symbolic manipulation of polynomials. Restrictions: we restrict ourselves to a defined collision system in the impact parameter model. Unusual features: a library for symbolic manipulation of polynomials, where polynomials are stored in a space-saving left-child right-sibling binary tree; this provides stable numerical evaluation and fast mutation while maintaining full compatibility with the original code. Additional comments: this program makes heavy use of features introduced by the Fortran 90 standard, most prominently pointers, derived types and allocatable structures, and a small portion of Fortran 95; only newer compilers support these features. The following compilers support all features needed by the program: GNU Fortran Compiler "gfortran" from version 4.3.0; GNU Fortran 95 Compiler "g95" from version 4.2.0; Intel Fortran Compiler "ifort" from version 11.0.
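The left-child right-sibling storage mentioned under "Unusual features" can be sketched as follows (a simplified Python analogue, not the Fortran 95 library itself): each tree level carries one variable's exponent, siblings vary the exponent at that level, and children extend the monomial with the next variable.

```python
# Sketch: a left-child right-sibling polynomial tree with evaluation.
class Node:
    __slots__ = ("exp", "coef", "child", "sibling")

    def __init__(self, exp, coef=0.0):
        self.exp, self.coef = exp, coef
        self.child = None      # next variable, same monomial prefix
        self.sibling = None    # same variable, different exponent

def evaluate(node, values, depth=0):
    """Evaluate the subpolynomial rooted at `node` at the point `values`."""
    total = 0.0
    while node is not None:
        term = values[depth] ** node.exp
        inner = node.coef if node.child is None else evaluate(node.child, values, depth + 1)
        total += term * inner
        node = node.sibling
    return total

# 3*x^2*y + 2*x: the root level holds powers of x, children hold powers of y.
root = Node(2)
root.child = Node(1, 3.0)      # x^2 * (3 * y^1)
root.sibling = Node(1, 2.0)    # + 2 * x^1
print(evaluate(root, (2.0, 5.0)))   # 3*4*5 + 2*2 = 64.0
```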
Perspex machine: V. Compilation of C programs
NASA Astrophysics Data System (ADS)
Spanner, Matthew P.; Anderson, James A. D. W.
2006-01-01
The perspex machine arose from the unification of the Turing machine with projective geometry. The original, constructive proof used four special, perspective transformations to implement the Turing machine in projective geometry. These four transformations are now generalised and applied in a compiler, implemented in Pop11, that converts a subset of the C programming language into perspexes. This is interesting both from a geometrical and a computational point of view. Geometrically, it is interesting that program source can be converted automatically to a sequence of perspective transformations and conditional jumps, though we find that the product of homogeneous transformations with normalisation can be non-associative. Computationally, it is interesting that program source can be compiled for a Reduced Instruction Set Computer (RISC), the perspex machine, that is a Single Instruction, Zero Exception (SIZE) computer.
Using Agent Base Models to Optimize Large Scale Network for Large System Inventories
NASA Technical Reports Server (NTRS)
Shameldin, Ramez Ahmed; Bowling, Shannon R.
2010-01-01
The aim of this paper is to use Agent-Based Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ either computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language, using clusters that provide high computational performance to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.
Programming languages for circuit design.
Pedersen, Michael; Yordanov, Boyan
2015-01-01
This chapter provides an overview of a programming language for Genetic Engineering of Cells (GEC). A GEC program specifies a genetic circuit at a high level of abstraction through constraints on otherwise unspecified DNA parts. The GEC compiler then selects parts which satisfy the constraints from a given parts database. GEC further provides more conventional programming language constructs for abstraction, e.g., through modularity. The GEC language and compiler are available through a Web tool which also provides functionality, e.g., for simulation of designed circuits.
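A toy sketch of the constraint-selection idea follows (hypothetical parts database and property names, not the real GEC language or its Web tool): the program fixes only the properties a part must have, and the compiler enumerates database parts satisfying them.

```python
# Toy sketch of GEC-style part selection: constrain otherwise
# unspecified DNA parts, then enumerate database entries that
# satisfy the constraints. All parts and properties are invented.

PARTS_DB = [
    {"id": "P1", "kind": "promoter", "induced_by": "aTc"},
    {"id": "P2", "kind": "promoter", "induced_by": "IPTG"},
    {"id": "R1", "kind": "rbs", "strength": "strong"},
    {"id": "G1", "kind": "pcr", "codes": "gfp"},
]

def select(kind, **constraints):
    """Return all parts of `kind` whose properties satisfy `constraints`."""
    return [p for p in PARTS_DB
            if p["kind"] == kind
            and all(p.get(k) == v for k, v in constraints.items())]

# "a promoter induced by IPTG; any strong RBS; a coding region for GFP"
device = [select("promoter", induced_by="IPTG"),
          select("rbs", strength="strong"),
          select("pcr", codes="gfp")]
print(device)   # one candidate part per slot in this toy database
```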
Continuous-time quantum Monte Carlo impurity solvers
NASA Astrophysics Data System (ADS)
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states.
Program summary
Program title: dmft
Catalogue identifier: AEIL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: ALPS LIBRARY LICENSE version 1.1
No. of lines in distributed program, including test data, etc.: 899 806
No. of bytes in distributed program, including test data, etc.: 32 153 916
Distribution format: tar.gz
Programming language: C++
Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher)
RAM: 10 MB-1 GB
Classification: 7.3
External routines: ALPS [1], BLAS/LAPACK, HDF5
Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions.
Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2].
Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper.
Running time: 60 s-8 h per iteration.
Ground Operations Aerospace Language (GOAL). Volume 2: Compiler
NASA Technical Reports Server (NTRS)
1973-01-01
The principal elements and functions of the Ground Operations Aerospace Language (GOAL) compiler are presented. The technique used to transcribe the syntax diagrams into machine processable format for use by the parsing routines is described. An explanation of the parsing technique used to process GOAL source statements is included. The compiler diagnostics and the output reports generated during a GOAL compilation are explained. A description of the GOAL program package is provided.
HAL/S-360 compiler test activity report
NASA Technical Reports Server (NTRS)
Helmers, C. T.
1974-01-01
The levels of testing employed in verifying the HAL/S-360 compiler were as follows: (1) typical applications program case testing; (2) functional testing of the compiler system and its generated code; and (3) machine oriented testing of compiler implementation on operational computers. Details of the initial test plan and subsequent adaptation are reported, along with complete test results for each phase which examined the production of object codes for every possible source statement.
Praxis language reference manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, J.H.
1981-01-01
This document is a language reference manual for the programming language Praxis. The document contains the specifications that must be met by any compiler for the language. The Praxis language was designed for systems programming in real-time process applications. Goals for the language and its implementations are: (1) highly efficient code generated by the compiler; (2) program portability; (3) completeness, that is, all programming requirements can be met by the language without needing an assembler; and (4) separate compilation to aid in design and management of large systems. The language does not provide any facilities for input/output, stack and queue handling, string operations, parallel processing, or coroutine processing. These features can be implemented as routines in the language, using machine-dependent code to take advantage of facilities in the control environment on different machines.
Functional Programming in Computer Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Loren James; Davis, Marion Kei
We explore functional programming through a 16-week internship at Los Alamos National Laboratory. Functional programming is a branch of computer science that has exploded in popularity over the past decade due to its high-level syntax, ease of parallelization, and abundant applications. First, we summarize functional programming by listing the advantages of functional programming languages over the usual imperative languages, and we introduce the concept of parsing. Second, we discuss the importance of lambda calculus in the theory of functional programming. Lambda calculus was invented by Alonzo Church in the 1930s to formalize the concept of effective computability, and every functional language is essentially some implementation of lambda calculus. Finally, we display the lasting products of the internship: additions to a compiler and runtime system for the pure functional language STG, including both a set of tests that indicate the validity of updates to the compiler and a compiler pass that checks for illegal instances of duplicate names.
A Compilation of Internship Reports - 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stegman M.; Morris, M.; Blackburn, N.
This compilation documents all research projects undertaken by the 2012 summer Department of Energy Workforce Development for Teachers and Scientists interns during their internship program at Brookhaven National Laboratory.
ERIC Educational Resources Information Center
Novak, Gordon S., Jr.
GLISP is a high-level computer language (based on Lisp and including Lisp as a sublanguage) which is compiled into Lisp. GLISP programs are compiled relative to a knowledge base of object descriptions, a form of abstract datatypes. A primary goal of the use of abstract datatypes in GLISP is to allow program code to be written in terms of objects,…
A review of psychiatric literature for residency training programs, 1980s.
Malmquist, C; Soth, N
1984-01-01
The authors obtained cumulated reading lists from sixteen nationally recognized psychiatric residency programs to assess the common body of knowledge shared by recent psychiatry graduates and to learn which works in psychiatry had survived from an earlier compilation in 1964 (Woods, Pieper, and Frazier, "Basic Psychiatric Literature" [2]). The new list was compiled by consensus, with the working assumptions that books of importance would appear on the lists of more than one program and that a book or article's relative usefulness was related to the number of appearances on different residency lists. An updated list for the 1980s is provided from the survey and is compared to the 1964 list compiled from a survey of experts in the field of psychiatry. PMID:6378287
Compiling global name-space programs for distributed execution
NASA Technical Reports Server (NTRS)
Koelbel, Charles; Mehrotra, Piyush
1990-01-01
Distributed memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for the analysis of a high-level source program, along with its translation to a set of independently executing tasks communicating via messages. If the compiler has enough information, this translation can be carried out at compile-time. Otherwise run-time code is generated to implement the required data movement. The analysis required in both situations is described and the performance of the generated code on the Intel iPSC/2 is presented.
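A minimal sketch of the kind of compile-time analysis described, under the assumption of a one-dimensional block distribution and an owner-computes rule (our simplification, not the paper's full method): from the distribution alone, the compiler can decide which references cross processor boundaries and emit matching send/receive pairs.

```python
# Sketch: for a block-distributed array, determine which processor
# owns each element and which off-processor elements a stencil
# reference needs, so matching messages can be generated at
# compile time. Names and the example loop are ours.

N, P = 16, 4                 # array size, number of processors
BLOCK = N // P

def owner(i):
    return i // BLOCK        # processor that stores element i

def messages_for_stencil(offset=1):
    """(sender, receiver, index) triples needed for a[i] = f(a[i-offset])."""
    msgs = []
    for i in range(offset, N):
        writer, reader = owner(i), owner(i - offset)
        if writer != reader:
            msgs.append((reader, writer, i - offset))
    return msgs

print(messages_for_stencil())
# [(0, 1, 3), (1, 2, 7), (2, 3, 11)]: one message per block boundary
```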
The Automated Instrumentation and Monitoring System (AIMS): Design and Architecture. 3.2
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Schmidt, Melisa; Schulbach, Cathy; Bailey, David (Technical Monitor)
1997-01-01
Whether a researcher is designing the 'next parallel programming paradigm', another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of such information can help computer and software architects to capture, and therefore exploit, behavioral variations among and within various parallel programs to take advantage of specific hardware characteristics. A software tool-set that facilitates performance evaluation of parallel applications on multiprocessors has been put together at NASA Ames Research Center under the sponsorship of NASA's High Performance Computing and Communications Program over the past five years. The Automated Instrumentation and Monitoring System (AIMS) has three major software components: a source code instrumentor which automatically inserts active event recorders into program source code before compilation; a run-time performance monitoring library which collects performance data; and a visualization tool-set which reconstructs program execution based on the data collected. Besides being used as a prototype for developing new techniques for instrumenting, monitoring and presenting parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Currently, the execution of FORTRAN and C programs on the Intel Paragon and PALM workstations can be automatically instrumented and monitored. Performance data thus collected can be displayed graphically on various workstations. The process of performance tuning with AIMS will be illustrated using various NAS Parallel Benchmarks. This report includes a description of the internal architecture of AIMS and a listing of the source code.
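The instrumentation step can be sketched in miniature. The following Python fragment (ours; AIMS instruments Fortran and C sources) inserts enter/exit event recorders into a function definition before the code is compiled and run.

```python
# A minimal sketch, in Python rather than AIMS's Fortran/C, of
# automatically inserting event recorders into source code before
# compilation. Early returns are not handled here; a real
# instrumentor rewrites those too.

import ast, time

TRACE = []

def record(event, name):
    TRACE.append((time.perf_counter(), event, name))

class Instrument(ast.NodeTransformer):
    def visit_FunctionDef(self, node):
        enter = ast.parse(f"record('enter', '{node.name}')").body
        leave = ast.parse(f"record('exit', '{node.name}')").body
        node.body = enter + node.body + leave
        return node

source = "def work():\n    s = sum(range(1000))\n"
tree = ast.fix_missing_locations(Instrument().visit(ast.parse(source)))
exec(compile(tree, "<instrumented>", "exec"))
work()
print(TRACE)   # [(t0, 'enter', 'work'), (t1, 'exit', 'work')]
```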
Machine tools and fixtures: A compilation
NASA Technical Reports Server (NTRS)
1971-01-01
As part of NASA's Technology Utilization Program, a compilation was made of technological developments regarding machine tools, jigs, and fixtures that have been produced, modified, or adapted to meet requirements of the aerospace program. The compilation is divided into three sections that include: (1) a variety of machine tool applications that offer easier and more efficient production techniques; (2) methods, techniques, and hardware that aid in the setup, alignment, and control of machines and machine tools to further quality assurance in finished products; and (3) jigs, fixtures, and adapters that are ancillary to basic machine tools and aid in realizing their greatest potential.
Proceedings of the workshop on Compilation of (Symbolic) Languages for Parallel Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, I.; Tick, E.
1991-11-01
This report comprises the abstracts and papers for the talks presented at the Workshop on Compilation of (Symbolic) Languages for Parallel Computers, held October 31-November 1, 1991, in San Diego. These unrefereed contributions were provided by the participants for the purpose of this workshop; many of them will be published elsewhere in peer-reviewed conferences and publications. Our goal in planning this workshop was to bring together researchers from different disciplines with common problems in compilation. In particular, we wished to encourage interaction between researchers working in compilation of symbolic languages and those working on compilation of conventional, imperative languages. The fundamental problems facing researchers interested in compilation of logic, functional, and procedural programming languages for parallel computers are essentially the same. However, differences in the basic programming paradigms have led to different communities emphasizing different species of the parallel compilation problem. For example, parallel logic and functional languages provide dataflow-like formalisms in which control dependencies are unimportant. Hence, a major focus of research in compilation has been on techniques that try to infer when sequential control flow can safely be imposed. Granularity analysis for scheduling is a related problem. The single-assignment property leads to a need for analysis of memory use in order to detect opportunities for reuse. Much of the work in each of these areas relies on the use of abstract interpretation techniques.
Hardware-Independent Proofs of Numerical Programs
NASA Technical Reports Server (NTRS)
Boldo, Sylvie; Nguyen, Thi Minh Tuyen
2010-01-01
On recent architectures, a numerical program may give different answers depending on the execution hardware and the compilation. Our goal is to formally prove properties of numerical programs that hold for multiple architectures and compilers. We propose an approach that states the rounding error of each floating-point computation regardless of the environment. This approach is implemented in the Frama-C platform for static analysis of C code. Small case studies using this approach are proved entirely and automatically.
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
A compiler and validator for flight operations on NASA space missions
NASA Astrophysics Data System (ADS)
Fonte, Sergio; Politi, Romolo; Capria, Maria Teresa; Giardino, Marco; De Sanctis, Maria Cristina
2016-07-01
In NASA missions, the management and programming of the flight systems is performed with a specific scripting language, the SASF (Spacecraft Activity Sequence File). In order to check the syntax and grammar, a compiler is needed that flags any errors found in the sequence file produced for an instrument on board the flight system. In our experience on the Dawn mission, we developed VIRV (VIR Validator), a tool that performs checks on the syntax and grammar of SASF, runs simulations of VIR acquisitions, and detects violations of the flight rules in the sequences produced. The project of a SASF compiler (SSC - Spacecraft Sequence Compiler) is ready for a new implementation: generalization to different NASA missions. In fact, VIRV is a compiler for a dialect of SASF; it includes VIR commands as part of the SASF language. Our goal is to produce a general compiler for SASF, in which every instrument has a library to be introduced into the compiler. The SSC can analyze a SASF, produce a log of events, perform a simulation of the instrument acquisition, and check the flight rules for the instrument selected. The output of the program can be produced in GRASS GIS format and may help the operator to analyze the geometry of the acquisition.
ERIC Educational Resources Information Center
McLaughlin, James L.; Burr, Marjorie
In spring 1991, the Council of Chief Instructional Officers of New Mexico two-year institutions compiled information on current and proposed allied health programs in order to foster cooperation and planning in allied health education among the 17 institutions in the state. In summer 1991, the compilation was updated to include allied health…
A Language for Specifying Compiler Optimizations for Generic Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcock, Jeremiah J.
2007-01-01
Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization: the libraries are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.
Parallel machine architecture and compiler design facilities
NASA Technical Reports Server (NTRS)
Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex
1990-01-01
The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility to allow rapid prototyping of parallelized compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.
On Fusing Recursive Traversals of K-d Trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, the Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement.
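The transformation can be illustrated on a plain binary tree: two recursive operators in a producer-consumer relationship are fused so the tree is traversed once instead of twice. This hand-written Python sketch shows the effect FuseT automates; it is not FuseT itself.

```python
# Two recursive operators (scale, then shift) versus their fused
# form: one walk over the tree producing the same values.

class Tree:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def scale(t, a):                  # operator 1: t.val *= a everywhere
    if t:
        t.val *= a
        scale(t.left, a); scale(t.right, a)

def shift(t, b):                  # operator 2 consumes operator 1's output
    if t:
        t.val += b
        shift(t.left, b); shift(t.right, b)

def scale_shift_fused(t, a, b):   # fused: a single traversal
    if t:
        t.val = t.val * a + b
        scale_shift_fused(t.left, a, b)
        scale_shift_fused(t.right, a, b)

t1 = Tree(1, Tree(2), Tree(3))
scale(t1, 10); shift(t1, 1)                  # two walks
t2 = Tree(1, Tree(2), Tree(3))
scale_shift_fused(t2, 10, 1)                 # one walk, same result
print(t1.val, t1.left.val, t1.right.val)     # 11 21 31
print(t2.val, t2.left.val, t2.right.val)     # 11 21 31
```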
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1992-01-01
FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
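For illustration, here is a bottom-up evaluation of a small fault tree, assuming independent basic events (a sketch of the problem FTC solves, not FTC's solution technique; generalising XOR to "exactly one input occurs" is our assumption).

```python
# Toy bottom-up evaluation of a fault tree with the five gate types
# named in the abstract, assuming independent basic events.

from itertools import combinations
from math import prod

def gate_prob(kind, ps, m=None):
    if kind == "AND":
        return prod(ps)
    if kind == "OR":
        return 1.0 - prod(1.0 - p for p in ps)
    if kind == "XOR":                    # exactly one input occurs
        return sum(p * prod(1.0 - q for j, q in enumerate(ps) if j != i)
                   for i, p in enumerate(ps))
    if kind == "INVERT":
        return 1.0 - ps[0]
    if kind == "M_OF_N":                 # at least m of the n inputs occur
        n = len(ps)
        return sum(prod(ps[i] for i in c) *
                   prod(1.0 - ps[i] for i in range(n) if i not in c)
                   for k in range(m, n + 1)
                   for c in combinations(range(n), k))
    raise ValueError(kind)

# Top = OR(AND(A, B), C) with P(A)=0.1, P(B)=0.2, P(C)=0.05
top = gate_prob("OR", [gate_prob("AND", [0.1, 0.2]), 0.05])
print(round(top, 6))    # 0.069 = 1 - (1-0.02)*(1-0.05)
```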
Materials sciences programs: Fiscal Year 1987
NASA Astrophysics Data System (ADS)
1987-09-01
The purpose of this report is to provide a convenient compilation and index of the DOE Materials Sciences Division programs. This compilation is primarily intended for use by administrators, managers, and scientists to help coordinate research. The report is divided into seven sections. Section A contains all laboratory projects, Section B all contract research projects, Section C projects funded under the Small Business Innovation Research Program, Sections D and E information on DOE collaborative research centers, Section F the distribution of funding, and Section G various indexes.
Verified compilation of Concurrent Managed Languages
2017-11-01
…designs for compiler intermediate representations that facilitate mechanized proofs and verification; and (d) a realistic case study that combines these ideas to prove the correctness of a state-of-the-art concurrent garbage collector. Subject terms: program verification, compiler design. Even though concurrency is a pervasive part of modern software and hardware systems, it has often been ignored in safety-critical system designs.
The LHEA PDP 11/70 graphics processing facility users guide
NASA Technical Reports Server (NTRS)
1978-01-01
A compilation of all necessary and useful information needed to allow the inexperienced user to program on the PDP 11/70. Information regarding the use of editing and file manipulation utilities as well as operational procedures are included. The inexperienced user is taken through the process of creating, editing, compiling, task building and debugging his/her FORTRAN program. Also, documentation on additional software is included.
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
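The abstract does not reproduce the model itself; as a hedged illustration of this style of characterization, a generic serial-fraction-plus-overhead model might look as follows (the parameter names and the linear overhead term are our assumptions, not the paper's).

```python
# Generic sketch: execution time splits into a serial fraction, a
# perfectly parallel fraction, and an architecture-specific overhead
# term that grows with the processor count (e.g. data locality cost).

def predicted_speedup(t_seq, par_frac, procs, overhead_per_proc):
    t_par = (t_seq * (1.0 - par_frac)        # serial part
             + t_seq * par_frac / procs      # parallelized part
             + overhead_per_proc * procs)    # overhead grows with procs
    return t_seq / t_par

for p in (1, 4, 16, 64, 256):
    print(p, round(predicted_speedup(100.0, 0.95, p, 0.05), 2))
# 1 1.0 / 4 3.45 / 16 8.52 / 64 10.33 / 256 5.5:
# speedup peaks, then degrades as overhead dominates the gains
```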
Systems test facilities existing capabilities compilation
NASA Technical Reports Server (NTRS)
Weaver, R.
1981-01-01
Systems test facilities (STFs) to test total photovoltaic systems and their interfaces are described. The systems development (SD) plan is a compilation of existing and planned STFs, as well as subsystem and key-component testing facilities. It is recommended that the existing-capabilities compilation be updated annually to provide an assessment of STF activity and to disseminate STF capabilities, status, and availability to the photovoltaics program.
PCG: A prototype incremental compilation facility for the SAGA environment, appendix F
NASA Technical Reports Server (NTRS)
Kimball, Joseph John
1985-01-01
A programming environment supports the activity of developing and maintaining software. New environments provide language-oriented tools such as syntax-directed editors, whose usefulness is enhanced because they embody language-specific knowledge. When syntactic and semantic analysis occur early in the cycle of program production, that is, during editing, the use of a standard compiler is inefficient, for it must re-analyze the program before generating code. Likewise, it is inefficient to recompile an entire file, when the editor can determine that only portions of it need updating. The pcg, or Pascal code generation, facility described here generates code directly from the syntax trees produced by the SAGA syntax directed Pascal editor. By preserving the intermediate code used in the previous compilation, it can limit recompilation to the routines actually modified by editing.
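The incremental idea can be sketched with a dirty flag over routine subtrees (a toy stand-in for the SAGA syntax trees and Pascal code generation): the editor marks the subtrees it touches, and only those routines are regenerated.

```python
# Toy sketch of incremental recompilation: regenerate code only for
# routines whose subtrees were modified by editing. The "object
# code" here is a placeholder string.

class Routine:
    def __init__(self, name, body):
        self.name, self.body = name, body
        self.dirty = True          # no code generated yet
        self.code = None

    def edit(self, new_body):
        self.body = new_body
        self.dirty = True          # the editor marks the modified subtree

def recompile(program):
    regenerated = []
    for r in program:
        if r.dirty:                # limit recompilation to edited routines
            r.code = f"<object code for {r.name}: {r.body}>"
            r.dirty = False
            regenerated.append(r.name)
    return regenerated

prog = [Routine("init", "x := 0"), Routine("step", "x := x + 1")]
print(recompile(prog))             # ['init', 'step']  (first compilation)
prog[1].edit("x := x + 2")
print(recompile(prog))             # ['step']          (incremental)
```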
20 CFR 637.230 - Use of incentive bonuses.
Code of Federal Regulations, 2010 CFR
2010-04-01
... in paragraph (d) of this section, technical assistance, data and information collection and compilation, management information systems, post-program followup activities, and research and evaluation... information collection and compilation, recordkeeping, or the preparation of applications for incentive...
OSCAR API for Real-Time Low-Power Multicores and Its Performance on Multicores and SMP Servers
NASA Astrophysics Data System (ADS)
Kimura, Keiji; Mase, Masayoshi; Mikami, Hiroki; Miyamoto, Takamichi; Shirako, Jun; Kasahara, Hironori
OSCAR (Optimally Scheduled Advanced Multiprocessor) API has been designed for real-time embedded low-power multicores to generate parallel programs for various multicores from different vendors by using the OSCAR parallelizing compiler. The OSCAR API has been developed by Waseda University in collaboration with Fujitsu Laboratory, Hitachi, NEC, Panasonic, Renesas Technology, and Toshiba in a METI/NEDO project entitled "Multicore Technology for Realtime Consumer Electronics." By using the OSCAR API as an interface between the OSCAR compiler and backend compilers, the OSCAR compiler enables hierarchical multigrain parallel processing with memory optimization under capacity restriction for cache memory, local memory, distributed shared memory, and on-chip/off-chip shared memory; data transfer using a DMA controller; and power reduction control using DVFS (Dynamic Voltage and Frequency Scaling), clock gating, and power gating for various embedded multicores. In addition, a parallelized program automatically generated by the OSCAR compiler with the OSCAR API can be compiled by ordinary OpenMP compilers, since the OSCAR API is designed as a subset of OpenMP. This paper describes the OSCAR API and its compatibility with the OSCAR compiler by showing code examples. Performance evaluations of the OSCAR compiler and the OSCAR API are carried out using an IBM Power5+ workstation, an IBM Power6 high-end SMP server, and a newly developed consumer electronics multicore chip RP2 by Renesas, Hitachi and Waseda. From the results of the scalability evaluation, it is found that, on average, the OSCAR compiler with the OSCAR API can exploit 5.8 times speedup over sequential execution on the Power5+ workstation with eight cores and 2.9 times speedup on RP2 with four cores, respectively. In addition, the OSCAR compiler can accelerate an IBM XL Fortran compiler up to 3.3 times on the Power6 SMP server. Due to low-power optimization on RP2, the OSCAR compiler with the OSCAR API achieves a maximum power reduction of 84% in the real-time execution mode.
Compilation of seismic-refraction crustal data in the Soviet Union
Rodriguez, Robert; Durbin, William P.; Healy, J.H.; Warren, David H.
1964-01-01
The U.S. Geological Survey is preparing a series of terrain atlases of the Sino-Soviet bloc of nations for use in a possible nuclear-test detection program. Part of this project is concerned with the compilation and evaluation of crustal-structure data. To date, a compilation has been made of data from Russian publications that discuss seismic refraction and gravity studies of crustal structure. Although this compilation deals mainly with explosion seismic-refraction measurements, some results from earthquake studies are also included. None of the data have been evaluated.
The Automated Instrumentation and Monitoring System (AIMS) reference manual
NASA Technical Reports Server (NTRS)
Yan, Jerry; Hontalas, Philip; Listgarten, Sherry
1993-01-01
Whether a researcher is designing the 'next parallel programming paradigm,' another 'scalable multiprocessor' or investigating resource allocation algorithms for multiprocessors, a facility that enables parallel program execution to be captured and displayed is invaluable. Careful analysis of execution traces can help computer designers and software architects to uncover system behavior and to take advantage of specific application characteristics and hardware features. A software tool kit that facilitates performance evaluation of parallel applications on multiprocessors is described. The Automated Instrumentation and Monitoring System (AIMS) has four major software components: a source code instrumentor which automatically inserts active event recorders into the program's source code before compilation; a run-time performance-monitoring library, which collects performance data; a trace file animation and analysis tool kit which reconstructs program execution from the trace file; and a trace post-processor which compensates for data collection overhead. Besides being used as a prototype for developing new techniques for instrumenting, monitoring, and visualizing parallel program execution, AIMS is also being incorporated into the run-time environments of various hardware test beds to evaluate their impact on user productivity. Currently, AIMS instrumentors accept FORTRAN and C parallel programs written for Intel's NX operating system on the iPSC family of multicomputers. A run-time performance-monitoring library for the iPSC/860 is included in this release. We plan to release monitors for other platforms (such as PVM and TMC's CM-5) in the near future. Performance data collected can be displayed graphically on workstations (e.g. Sun Sparc and SGI) supporting X-Windows (in particular, X11R5, Motif 1.1.3).
NASA Astrophysics Data System (ADS)
Hayashi, Akihiro; Wada, Yasutaka; Watanabe, Takeshi; Sekiguchi, Takeshi; Mase, Masayoshi; Shirako, Jun; Kimura, Keiji; Kasahara, Hironori
Heterogeneous multicores have been attracting much attention as a way to attain high performance while keeping power consumption low in a wide range of areas. However, heterogeneous multicores make programming very difficult, and the long application development period lowers product competitiveness. In order to overcome this situation, this paper proposes a compilation framework which bridges the gap between programmers and heterogeneous multicores. In particular, this paper describes a compilation framework based on the OSCAR compiler. It realizes coarse-grain task parallel processing, data transfer using a DMA controller, and power reduction control from user programs with DVFS and clock gating on various heterogeneous multicores from different vendors. This paper also evaluates the processing performance and power reduction achieved by the proposed framework on a newly developed 15-core heterogeneous multicore chip named RP-X, integrating 8 general-purpose processor cores and 3 types of accelerator cores, which was developed by Renesas Electronics, Hitachi, Tokyo Institute of Technology and Waseda University. The framework attains speedups of up to 32x for an optical flow program with eight general-purpose processor cores and four DRP (Dynamically Reconfigurable Processor) accelerator cores against sequential execution by a single processor core, and a power reduction of 80% for real-time AAC encoding.
Considerations in the Development of Reversibly Binding PET Radioligands for Brain Imaging
Pike, Victor W.
2017-01-01
The development of reversibly binding radioligands for imaging brain proteins in vivo, such as enzymes, neurotransmitter transporters, receptors and ion channels, with positron emission tomography (PET) is keenly sought for biomedical studies of neuropsychiatric disorders and for drug discovery and development, but is recognized as being highly challenging at the medicinal chemistry level. This article aims to compile and discuss the main considerations to be taken into account by chemists embarking on programs of radioligand development for PET imaging of brain protein targets. PMID:27087244
Portable Just-in-Time Specialization of Dynamically Typed Scripting Languages
NASA Astrophysics Data System (ADS)
Williams, Kevin; McCandless, Jason; Gregg, David
In this paper, we present a portable approach to JIT compilation for dynamically typed scripting languages. At runtime we generate ANSI C code and use the system's native C compiler to compile this code. The C compiler runs on a separate thread from the interpreter, allowing program execution to continue during JIT compilation. Dynamic languages have variables which may change type at any point in execution. Our interpreter profiles variable types at both whole-method and partial-method granularity. When a frequently executed region of code is discovered, the compilation thread generates a specialized version of the region based on the profiled types. In this paper, we evaluate the level of instruction specialization achieved by our profiling scheme as well as the overall performance of our JIT.
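A minimal sketch of the approach follows, with illustrative file handling and cc flags (the paper's interpreter, profiling, and specialization machinery are omitted): emit C for a hot region profiled as integer-typed, compile it on a worker thread with the system C compiler, and load the result.

```python
# Sketch: runtime code generation to C, compiled by the native C
# compiler on a worker thread, loaded via ctypes. The cc invocation
# and file names are illustrative, not the paper's system.

import ctypes, subprocess, tempfile, threading

C_SOURCE = """
long hot_loop(long n) {          /* specialized: n profiled as integer */
    long i, acc = 0;
    for (i = 0; i < n; i++) acc += i;
    return acc;
}
"""

def compile_region(out):
    src = tempfile.NamedTemporaryFile(suffix=".c", delete=False)
    src.write(C_SOURCE.encode()); src.close()
    lib = src.name.replace(".c", ".so")
    subprocess.check_call(["cc", "-O2", "-shared", "-fPIC",
                           src.name, "-o", lib])
    out.append(ctypes.CDLL(lib).hot_loop)

compiled = []
worker = threading.Thread(target=compile_region, args=(compiled,))
worker.start()                 # interpretation would continue here
worker.join()
fn = compiled[0]
fn.restype, fn.argtypes = ctypes.c_long, [ctypes.c_long]
print(fn(10))                  # 45, now executed as native code
```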
The Mystro system: A comprehensive translator toolkit
NASA Technical Reports Server (NTRS)
Collins, W. R.; Noonan, R. E.
1985-01-01
Mystro is a system that facilitates the construction of compilers, assemblers, code generators, query interpreters, and similar programs. It provides features to encourage the use of iterative enhancement. Mystro was developed in response to the needs of NASA Langley Research Center (LaRC) and enjoys a number of advantages over similar systems. There are other programs available that can be used in building translators. These typically build parser tables, usually supply the source of a parser and parts of a lexical analyzer, but provide little or no aid for code generation. In general, only the front end of the compiler is addressed. Mystro, on the other hand, emphasizes tools for both ends of a compiler.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
Fault-Tree Compiler (FTC) program, is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.
Runtime support and compilation methods for user-specified data distributions
NASA Technical Reports Server (NTRS)
Ponnusamy, Ravi; Saltz, Joel; Choudhury, Alok; Hwang, Yuan-Shin; Fox, Geoffrey
1993-01-01
This paper describes two new ideas by which an HPF compiler can deal with irregular computations effectively. The first mechanism invokes a user specified mapping procedure via a set of compiler directives. The directives allow use of program arrays to describe graph connectivity, spatial location of array elements, and computational load. The second mechanism is a simple conservative method that in many cases enables a compiler to recognize that it is possible to reuse previously computed information from inspectors (e.g. communication schedules, loop iteration partitions, information that associates off-processor data copies with on-processor buffer locations). We present performance results for these mechanisms from a Fortran 90D compiler implementation.
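The reuse mechanism builds on the inspector/executor pattern, which can be sketched as follows (our simplification: one processor's view, with a fetch callback standing in for communication). The inspector scans the irregular index array once to build a schedule of off-processor references; the executor reuses that schedule on every iteration while the index pattern is unchanged.

```python
# Sketch of the inspector/executor pattern behind schedule reuse.

def inspect(indices, my_lo, my_hi):
    """Inspector: collect the off-processor indices the loop reads."""
    return sorted({i for i in indices if not (my_lo <= i < my_hi)})

def execute(local, my_lo, indices, schedule, fetch_remote):
    """Executor: one gather per schedule, then the actual loop."""
    ghost = {i: fetch_remote(i) for i in schedule}   # communication step
    return [ghost[i] if i in ghost else local[i - my_lo] for i in indices]

# A processor owning elements [0, 4) of a distributed size-8 array:
local = [10, 11, 12, 13]
indices = [0, 2, 5, 7, 2]                 # irregular accesses, e.g. x(ia(k))
sched = inspect(indices, 0, 4)            # [5, 7] live on other processors
print(execute(local, 0, indices, sched, fetch_remote=lambda i: 100 + i))
# [10, 12, 105, 107, 12]; sched is reused while `indices` is unchanged
```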
Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.
2013-01-01
SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user specified parameters. Each of the three separate analysis modules also can be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software also may choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
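The core of the microsatellite-detection step can be sketched with a regular expression (ours, not SSR_pipeline's implementation; the parameter defaults are illustrative): a captured motif of 2-6 bases followed by enough back-reference repeats.

```python
# Toy SSR finder: motifs of min_motif..max_motif bases repeated at
# least min_repeats times in a DNA sequence.

import re

def find_ssrs(seq, min_motif=2, max_motif=6, min_repeats=4):
    hits = []
    for k in range(min_motif, max_motif + 1):
        # ([ACGT]{k}) captures a motif; \1{n-1,} requires n-1 more copies
        pattern = rf"([ACGT]{{{k}}})\1{{{min_repeats - 1},}}"
        for m in re.finditer(pattern, seq):
            hits.append((m.start(), m.group(1), len(m.group()) // k))
    return hits

print(find_ssrs("TTACACACACACGGGATCATCATCATCA"))
# [(2, 'AC', 5), (15, 'ATC', 4)]: position, motif, repeat count
```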
A Type-Preserving Compiler Infrastructure
2002-12-01
…understand this code. This is, in essence, the object encoding we use to compile Java. Before embarking on the formal translation, we must explore one more…call. This solution works quite well. We used Jasmin, a JVML assembler (Meyer and Downing 1997), to generate…European Symp. on Program. 135-149. Flanagan, Cormac, Amr Sabry, Bruce F. Duba, and Matthias Felleisen. 1993, June. "The Essence of Compiling with…
Criteria for Evaluating the Performance of Compilers
1974-10-01
…cannot be made to fit, then an auxiliary mechanism outside the parser might be used. Finally, changing the choice of parsing technique to a…was not useful in providing a basis for compiler evaluation. The study of the first question established criteria and methods for assigning four…program. The study of the second question established criteria for defining a "compiler Gibson mix", and established methods for using this "mix" to…
Automatic recognition of vector and parallel operations in a higher level language
NASA Technical Reports Server (NTRS)
Schneck, P. B.
1971-01-01
A compiler for recognizing statements of a FORTRAN program which are suited for fast execution on a parallel or pipeline machine such as Illiac-4, Star or ASC is described. The technique employs interval analysis to provide flow information to the vector/parallel recognizer. Where profitable the compiler changes scalar variables to subscripted variables. The output of the compiler is an extension to FORTRAN which shows parallel and vector operations explicitly.
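The distinction the recognizer draws can be shown with two loops (Python stand-ins for the FORTRAN forms, with the corresponding DO loops in the comments): independent iterations can become one vector operation, while a recurrence cannot.

```python
# Sketch of what the recognizer decides: the first loop has no
# cross-iteration dependence and can execute as a vector operation;
# the second reads the previous iteration's result and must stay
# scalar (as written).

a = list(range(8))
b = [2.0] * 8

# Vectorizable:   DO 10 I = 1, N / C(I) = A(I) + B(I)
c = [a[i] + b[i] for i in range(8)]     # every iteration independent

# Recurrence:     DO 20 I = 2, N / A(I) = A(I-1) + B(I)
for i in range(1, 8):
    a[i] = a[i - 1] + b[i]              # iteration i needs iteration i-1

print(c, a)
```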
ERIC Educational Resources Information Center
Office of Postsecondary Education, Washington DC. Student Financial Assistance Programs.
This compilation includes regulations for student financial aid programs as published in the Federal Register through December 31, 1996; it includes the major regulation packages published in November and December 1996 as well as regulations going back to 1974. An introduction provides guidance on reading and understanding federal regulations. The…
Publications - GPR 2016-1 | Alaska Division of Geological & Geophysical Surveys
GPR 2016-1: electromagnetic and magnetic airborne geophysical survey data compilation. Authors: Burns, L.E., Fugro Airborne… Alaska Division of Geological & Geophysical Surveys.
Publications - GPR 2015-4 | Alaska Division of Geological & Geophysical Surveys
GPR 2015-4: airborne geophysical survey data compilation. Authors: Burns, L.E., Geoterrex-Dighem, Stevens Exploration… Alaska Division of Geological & Geophysical Surveys.
Publications - GPR 2015-3 | Alaska Division of Geological & Geophysical Surveys
GPR 2015-3: electromagnetic and magnetic airborne geophysical survey data compilation. Authors: Burns, L.E., Fugro Airborne… Alaska Division of Geological & Geophysical Surveys.
Solidify, An LLVM pass to compile LLVM IR into Solidity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kothapalli, Abhiram
The software currently compiles LLVM IR into Solidity (Ethereum’s dominant programming language) using LLVM’s pass library. Specifically, this compiler allows us to convert an arbitrary DSL into Solidity. We focus on converting domain-specific languages into Solidity due to their ease of use and provable properties. By creating a toolchain to compile lightweight domain-specific languages into Ethereum's dominant language, Solidity, we allow non-specialists to effectively develop safe and useful smart contracts. For example, lawyers from a certain firm can have a proprietary DSL that codifies basic laws safely converted to Solidity to be securely executed on the blockchain. In another example, a simple provenance-tracking language can be compiled and securely executed on the blockchain.
Engineering Amorphous Systems, Using Global-to-Local Compilation
NASA Astrophysics Data System (ADS)
Nagpal, Radhika
Emerging technologies are making it possible to assemble systems that incorporate myriad of information-processing units at almost no cost: smart materials, selfassembling structures, vast sensor networks, pervasive computing. How does one engineer robust and prespecified global behavior from the local interactions of immense numbers of unreliable parts? We discuss organizing principles and programming methodologies that have emerged from Amorphous Computing research, that allow us to compile a specification of global behavior into a robust program for local behavior.
Graduate Student Compiles Broadcast Courses Syllabi.
ERIC Educational Resources Information Center
Kalergis, Karen
1980-01-01
Summarizes the responses of 50 people to a questionnaire asking about the way broadcast journalism is being taught in high schools and the types of radio programs students are producing; describes syllabi for two broadcast courses that were compiled on the basis of the survey responses. (GT)
Writing and compiling code into biochemistry.
Shea, Adam; Fett, Brian; Riedel, Marc D; Parhi, Keshab
2010-01-01
This paper presents a methodology for translating iterative arithmetic computation, specified as high-level programming constructs, into biochemical reactions. From an input/output specification, we generate biochemical reactions that produce output quantities of proteins as a function of input quantities performing operations such as addition, subtraction, and scalar multiplication. Iterative constructs such as "while" loops and "for" loops are implemented by transferring quantities between protein types, based on a clocking mechanism. Synthesis first is performed at a conceptual level, in terms of abstract biochemical reactions - a task analogous to high-level program compilation. Then the results are mapped onto specific biochemical reactions selected from libraries - a task analogous to machine language compilation. We demonstrate our approach through the compilation of a variety of standard iterative functions: multiplication, exponentiation, discrete logarithms, raising to a power, and linear transforms on time series. The designs are validated through transient stochastic simulation of the chemical kinetics. We are exploring DNA-based computation via strand displacement as a possible experimental chassis.
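At the conceptual level, arithmetic on molecule counts can be sketched as transfer reactions (a deterministic toy of the abstract-reaction stage only; the paper validates its designs with stochastic chemical kinetics, which this does not attempt).

```python
# Toy sketch: arithmetic as abstract biochemical reactions on
# molecule counts. Addition is two transfer reactions (x1 -> y,
# x2 -> y); scalar multiplication by 2 is x -> 2y. Reactions are
# "run" to completion on counts, ignoring rates.

def run(reactions, state):
    """Fire each (consumed, produced, factor) reaction to completion."""
    for consumed, produced, factor in reactions:
        state[produced] = state.get(produced, 0) + factor * state[consumed]
        state[consumed] = 0
    return state

# y := x1 + x2
print(run([("x1", "y", 1), ("x2", "y", 1)], {"x1": 3, "x2": 4}))
# {'x1': 0, 'x2': 0, 'y': 7}

# y := 2 * x
print(run([("x", "y", 2)], {"x": 5}))
# {'x': 0, 'y': 10}
```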
Execution models for mapping programs onto distributed memory parallel computers
NASA Technical Reports Server (NTRS)
Sussman, Alan
1992-01-01
The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. On the other hand, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.
How to compile a curriculum vitae.
Fish, J
The previous article in this series tackled the best way to apply for a job. Increasingly, employers request a curriculum vitae as part of the application process. This article aims to assist you in compiling a c.v. by discussing its essential components and content.
NASA Technical Reports Server (NTRS)
Chien, Andrew A.; Karamcheti, Vijay; Plevyak, John; Sahrawat, Deepak
1993-01-01
Concurrent object-oriented languages, particularly fine-grained approaches, reduce the difficulty of large scale concurrent programming by providing modularity through encapsulation while exposing large degrees of concurrency. Despite these programmability advantages, such languages have historically suffered from poor efficiency. This paper describes the Concert project whose goal is to develop portable, efficient implementations of fine-grained concurrent object-oriented languages. Our approach incorporates aggressive program analysis and program transformation with careful information management at every stage from the compiler to the runtime system. The paper discusses the basic elements of the Concert approach along with a description of the potential payoffs. Initial performance results and specific plans for system development are also detailed.
In defense of compilation: A response to Davis' form and content in model-based reasoning
NASA Technical Reports Server (NTRS)
Keller, Richard
1990-01-01
In a recent paper entitled 'Form and Content in Model Based Reasoning', Randy Davis argues that model-based reasoning research aimed at compiling task-specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his basic conclusions about the value of compilation research to the model-based reasoning community are challenged. In particular, Davis' claim that model-based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions about the role of representational form in compilation are clarified. It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model-based reasoning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seyong; Vetter, Jeffrey S
Computer architecture experts expect that non-volatile memory (NVM) hierarchies will play a more significant role in future systems, including mobile, enterprise, and HPC architectures. With this expectation in mind, we present NVL-C: a novel programming system that facilitates the efficient and correct programming of NVM main memory systems. The NVL-C programming abstraction extends C with a small set of intuitive language features that target NVM main memory, and can be combined directly with traditional C memory model features for DRAM. We have designed these new features to enable compiler analyses and run-time checks that can improve performance and guard against a number of subtle programming errors, which, when left uncorrected, can corrupt NVM-stored data. Moreover, to enable recovery of data across application or system failures, these NVL-C features include a flexible directive for specifying NVM transactions. So that our implementation might be extended to other compiler front ends and languages, the majority of our compiler analyses are implemented in an extended version of LLVM's intermediate representation (LLVM IR). We evaluate NVL-C on a number of applications to show its flexibility, performance, and correctness.
Proving Correctness for Pointer Programs in a Verifying Compiler
NASA Technical Reports Server (NTRS)
Kulczycki, Gregory; Singh, Amrinder
2008-01-01
This research describes a component-based approach to proving the correctness of programs involving pointer behavior. The approach supports modular reasoning and is designed to be used within the larger context of a verifying compiler. The approach consists of two parts. When a system component requires the direct manipulation of pointer operations in its implementation, we implement it using a built-in component specifically designed to capture the functional and performance behavior of pointers. When a system component requires pointer behavior via a linked data structure, we ensure that the complexities of the pointer operations are encapsulated within the data structure and are hidden from the client component. In this way, programs that rely on pointers can be verified modularly, without requiring special rules for pointers. The ultimate objective of a verifying compiler is to prove, with as little human intervention as possible, that proposed program code is correct with respect to a full behavioral specification. Full verification for software is especially important for an agency like NASA that is routinely involved in the development of mission-critical systems.
76 FR 4703 - Statement of Organization, Functions, and Delegations of Authority
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-26
... regarding medical loss ratio standards and the insurance premium rate review process, and issues premium... Oriented Plan program. Collects, compiles and maintains comparative pricing data for an Internet portal... benefit from the new health insurance system. Collects, compiles and maintains comparative pricing data...
Compilation of SFA Regulations as of 6/1/2000.
ERIC Educational Resources Information Center
Department of Education, Washington, DC. Student Financial Assistance.
This compilation includes regulations for student financial aid programs as published in the Federal Register through June 1, 2000. An introduction provides guidance on reading and understanding federal regulations. The following regulations are covered: Drug Free Schools and Campuses; Family Educational Rights and Privacy; institutional…
Python based high-level synthesis compiler
NASA Astrophysics Data System (ADS)
Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard
2014-11-01
This paper presents a Python-based high-level synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation and first results of the created Python-based compiler.
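The front-end idea can be sketched with Python's own ast module (a toy of ours, far short of real high-level synthesis): walk the tree of a combinational expression and print a VHDL-style concurrent assignment.

```python
# Toy HLS front end: translate a Python expression AST into a
# VHDL-style assignment string. Only +, - and plain names are handled.

import ast

OPS = {ast.Add: "+", ast.Sub: "-"}

def emit(node):
    if isinstance(node, ast.BinOp):
        return f"({emit(node.left)} {OPS[type(node.op)]} {emit(node.right)})"
    if isinstance(node, ast.Name):
        return node.id
    raise NotImplementedError(ast.dump(node))

source = "def adder(a, b, c): return a + b - c"
fn = ast.parse(source).body[0]
expr = fn.body[0].value                     # the return expression
print(f"result <= {emit(expr)};")           # result <= ((a + b) - c);
```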
NASA Astrophysics Data System (ADS)
Steiger, Damian S.; Haener, Thomas; Troyer, Matthias
Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
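A short example in ProjectQ's documented gate syntax (written from memory of the public API; consult the project documentation before relying on it): allocate qubits, apply gates with the | operator, and flush the engine to trigger compilation and execution.

```python
# Build and run a Bell pair with ProjectQ's default simulator backend.
from projectq import MainEngine
from projectq.ops import H, CNOT, Measure, All

eng = MainEngine()                 # compiler engine + simulator backend
q = eng.allocate_qureg(2)
H | q[0]                           # gate syntax: Gate | qubits
CNOT | (q[0], q[1])
All(Measure) | q
eng.flush()                        # triggers compilation and execution
print(int(q[0]), int(q[1]))        # random but always equal (correlated)
```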
Computer Language For Optimization Of Design
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.; Lucas, Stephen H.
1991-01-01
SOL is computer language geared to solution of design problems. Includes mathematical modeling and logical capabilities of computer language like FORTRAN; also includes additional power of nonlinear mathematical programming methods at language level. SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. Provides syntactic and semantic checking for recovery from errors and provides detailed reports containing cross-references to show where each variable used. Implemented on VAX/VMS computer systems. Requires VAX FORTRAN compiler to produce executable program.
Compiled MPI: Cost-Effective Exascale Applications Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Quinlan, D; Lumsdaine, A
2012-04-10
The complexity of petascale and exascale machines makes it increasingly difficult to develop applications that can take advantage of them. Future systems are expected to feature billion-way parallelism, complex heterogeneous compute nodes and poor availability of memory (Peter Kogge, 2008). This new challenge for application development is motivating a significant amount of research and development on new programming models and runtime systems designed to simplify large-scale application development. Unfortunately, DoE has significant multi-decadal investment in a large family of mission-critical scientific applications. Scaling these applications to exascale machines will require a significant investment that will dwarf the costs of hardware procurement. A key reason for the difficulty in transitioning today's applications to exascale hardware is their reliance on explicit programming techniques, such as the Message Passing Interface (MPI) programming model, to enable parallelism. MPI provides a portable and high performance message-passing system that enables scalable performance on a wide variety of platforms. However, it also forces developers to lock the details of parallelization together with application logic, making it very difficult to adapt the application to significant changes in the underlying system. Further, MPI's explicit interface makes it difficult to separate the application's synchronization and communication structure, reducing the amount of support that can be provided by compiler and run-time tools. This is in contrast to the recent research on more implicit parallel programming models such as Chapel, OpenMP and OpenCL, which promise to provide significantly more flexibility at the cost of reimplementing significant portions of the application. We are developing CoMPI, a novel compiler-driven approach to enable existing MPI applications to scale to exascale systems with minimal modifications that can be made incrementally over the application's lifetime. It includes: (1) A new set of source code annotations, inserted either manually or automatically, that will clarify the application's use of MPI to the compiler infrastructure, enabling greater accuracy where needed; (2) A compiler transformation framework that leverages these annotations to transform the original MPI source code to improve its performance and scalability; (3) Novel MPI runtime implementation techniques that will provide a rich set of functionality extensions to be used by applications that have been transformed by our compiler; and (4) A novel compiler analysis that leverages simple user annotations to automatically extract the application's communication structure and synthesize most complex code annotations.
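For readers unfamiliar with the explicit style the abstract criticizes, here is a minimal sketch of plain MPI message passing via mpi4py; it is ordinary MPI, not CoMPI, whose annotation syntax is not shown in the abstract.

```python
# Plain MPI (via mpi4py): ranks, tags, sends, and receives are written
# directly into the application logic -- the coupling that CoMPI's
# annotations and transformations are meant to loosen.
# Run with: mpiexec -n 2 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {"step": 1, "payload": [1.0, 2.0, 3.0]}
    comm.send(data, dest=1, tag=11)      # explicit point-to-point send
elif rank == 1:
    data = comm.recv(source=0, tag=11)   # matching explicit receive
    print("rank 1 received:", data)
```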
1981-12-01
library-file.library-unit{.subunit}.SYMAP Statement Map: library-file.library-unit{.subunit}.SMAP Type Map: library-file.library-unit{.subunit}.TMAP ... SYMAP: Symbol Map code generator; SMAP: Updated Statement Map code generator; TMAP: Type Map code generator. A.3.5 The PUNIT Command ... NAME Tmap (Core.Typemap) END -- Example A-3: Compiler Command Stream for the Code Generator (Texas Instruments Ada Optimizing Compiler).
The New Southern FIA Data Compilation System
V. Clark Baldwin; Larry Royer
2001-01-01
In general, the major national Forest Inventory and Analysis annual inventory emphasis has been on data-base design and not on data processing and calculation of various new attributes. Two key programming techniques required for efficient data processing are indexing and modularization. The Southern Research Station Compilation System utilizes modular and indexing...
Solid Waste Management Available Information Materials. Total Listing 1966-1976.
ERIC Educational Resources Information Center
Larsen, Julie L.
This publication is a compiled and indexed bibliography of solid waste management documents produced in the last ten years. This U.S. Environmental Protection Agency (EPA) publication is compiled from the Office of Solid Waste Management Programs (OSWMP) publications and the National Technical Information Service (NTIS) reports. Included are…
An Ada programming support environment
NASA Technical Reports Server (NTRS)
Tyrrill, AL; Chan, A. David
1986-01-01
The toolset of an Ada Programming Support Environment (APSE) being developed at North American Aircraft Operations (NAAO) of Rockwell International is described. The APSE is resident on three different hosts and must support developments for the hosts and for embedded targets. Tools and developed software must be freely portable between the hosts. The toolset includes the usual editors, compilers, linkers, debuggers, configuration managers, and documentation tools. Generally, these are being supplied by the host computer vendors. Other tools, for example a pretty printer, cross referencer, compilation order tool, and management tools, were obtained from public-domain sources, are implemented in Ada, and are being ported to the hosts. Several tools being implemented in-house are of interest; these include an Ada Design Language processor based on compilable Ada. A Standalone Test Environment Generator facilitates test tool construction and partially automates unit-level testing. A Code Auditor/Static Analyzer permits the Ada programs to be evaluated against measures of quality. An Ada Comment Box Generator partially automates generation of header comment boxes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kou, Stephen; Palsberg, Jens; Brooks, Jeffrey
Consumer electronics today, such as cell phones, often have one or more low-power FPGAs to assist with energy-intensive operations in order to reduce overall energy consumption and increase battery life. However, current techniques for programming FPGAs require people to be specially trained to do so. Ideally, software engineers could more readily take advantage of the benefits FPGAs offer by being able to program them using their existing skills, a common one being object-oriented programming. However, traditional techniques for compiling object-oriented languages are at odds with today's FPGA tools, which support neither pointers nor complex data structures. Open until now is the problem of compiling an object-oriented language to an FPGA in a way that harnesses this potential for huge energy savings. In this paper, we present a new compilation technique that feeds into an existing FPGA tool chain and produces FPGAs with up to almost an order of magnitude in energy savings compared to a low-power microprocessor while still retaining comparable performance and area usage.
A new model for programming software in body sensor networks.
de A Barbosa, Talles M G; Sene, Iwens G; da Rocha, Adson F; de O Nascimento, Francisco A A; Carvalho, Joao L A; Carvalho, Hervaldo S
2007-01-01
A Body Sensor Network (BSN) must be designed to work autonomously. On the other hand, BSNs need mechanisms that allow changes in their behavior in order to become a clinically useful tool. The purpose of this paper is to present a new programming model that will be useful for programming BSN sensor nodes. This model is based on an intelligent intermediate-level compiler. The main purpose of the proposed compiler is to increase the efficiency of system use and the lifetime of the application, considering its requirements, hardware possibilities, and specialist knowledge. With this model, it is possible to maintain the autonomous operation capability of the BSN and still offer tools that allow users with little grasp of programming techniques to program these systems.
In Search of Unicorns and Effective Teacher Education.
ERIC Educational Resources Information Center
Watts, Doyle
1985-01-01
Describes (1) characteristics of effective preservice teacher preparation programs, (2) characteristics of teachers who have compiled an effective program, and (3) undesirable qualities of teachers not likely to emerge in graduates of effective programs. (EL)
Guidelines for developing structured FORTRAN programs
NASA Technical Reports Server (NTRS)
Earnest, B. M.
1984-01-01
Computer programming and coding standards were compiled to serve as guidelines for the uniform writing of FORTRAN 77 programs at NASA Langley. Software development philosophy, documentation, general coding conventions, and specific FORTRAN coding constraints are discussed.
Usefulness of Compile-Time Restructuring of LGDF Programs in Throughput- Critical Applications
1993-09-01
efficiency of the buffers. Much overhead can be reduced effectively by using the node and arc attributes of the data flow graph at compile-time to...intolerable delays and insufficient buffer space, especially under high loads. A. THESIS SCOPE AND CONTRIBUTION. The focus of this work is on compile-time
ERIC Educational Resources Information Center
Astin, Helen S.; Cross, Patricia H.
Data tables are compiled on the characteristics of black freshmen entering a representative sample of 393 predominately black and predominately white academic institutions. Using a ten percent random subsample of original data compiled by Alexander W. Astin for the Cooperative Institutional Research program, the researchers present extensive…
The dc power circuits: A compilation
NASA Technical Reports Server (NTRS)
1972-01-01
A compilation of reports concerning power circuits is presented for the dissemination of aerospace information to the general public as part of the NASA Technology Utilization Program. The descriptions for the electronic circuits are grouped as follows: dc power supplies, power converters, current-voltage power supply regulators, overload protection circuits, and dc constant current power supplies.
ERIC Educational Resources Information Center
Danaher, Joan; Goode, Sue; Lazara, Alex
2007-01-01
"Part C Updates" is a compilation of information on various aspects of the Early Intervention Program for Infants and Toddlers with Disabilities (Part C) of the Individuals with Disabilities Education Act (IDEA). This is the ninth volume in a series of compilations, which included two editions of Part H Updates, the former name of the…
Computer programs: Information retrieval and data analysis, a compilation
NASA Technical Reports Server (NTRS)
1972-01-01
The items presented in this compilation are divided into two sections. Section one treats of computer usage devoted to the retrieval of information that affords the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.
1992-01-01
Excerpts of Ada source module headers from the Portable Ada Programming System: TEXT_IO (dated 31 October 1983; programmer Soeren Prehn, with Knud Joergen Kirkegaard), SEQIO.ADA (programmer Peter Haff, with Soeren Prehn and Knud Joergen Kirkegaard), and DIR_IO.ADA (module specification, same project).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1990-08-01
This is the eighth compilation of annual reports for the Navy's ELF Communications Systems Ecological Monitoring Program. The reports document the progress of eight studies performed during 1989 near the Naval Radio Transmitting Facility -- Republic, Michigan. The purpose of the monitoring is to determine whether electromagnetic fields produced by the ELF Communications System will affect resident biota or their ecological relationships. Studies include: Soil Amoeba; Arthropoda and Earthworms; Pollinating Insects; and Small Mammals and Nesting Birds.
Using Coarrays to Parallelize Legacy Fortran Applications: Strategy and Case Study
Radhakrishnan, Hari; Rouson, Damian W. I.; Morris, Karla; ...
2015-01-01
This paper summarizes a strategy for parallelizing a legacy Fortran 77 program using the object-oriented (OO) and coarray features that entered Fortran in the 2003 and 2008 standards, respectively. OO programming (OOP) facilitates the construction of an extensible suite of model-verification and performance tests that drive the development. Coarray parallel programming facilitates a rapid evolution from a serial application to a parallel application capable of running on multicore processors and many-core accelerators in shared and distributed memory. We delineate 17 code modernization steps used to refactor and parallelize the program and study the resulting performance. Our initial studies were done using the Intel Fortran compiler on a 32-core shared memory server. Scaling behavior was very poor, and profile analysis using TAU showed that the bottleneck in the performance was due to our implementation of a collective, sequential summation procedure. We were able to improve the scalability and achieve nearly linear speedup by replacing the sequential summation with a parallel, binary tree algorithm. We also tested the Cray compiler, which provides its own collective summation procedure. Intel provides no collective reductions. With Cray, the program shows linear speedup even in distributed-memory execution. We anticipate similar results with other compilers once they support the new collective procedures proposed for Fortran 2015.
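The binary-tree reduction mentioned above is a general trick, independent of Fortran coarrays: it replaces an O(n) chain of dependent additions with O(log n) rounds of independent pairwise sums. A toy Python stand-in (not the paper's Fortran code):

```python
# Each round sums disjoint pairs, so all additions within a round are
# independent and could run in parallel; only O(log n) rounds depend on
# each other, versus n dependent steps in a sequential running sum.
def tree_sum(xs):
    xs = list(xs)
    while len(xs) > 1:
        if len(xs) % 2:
            xs.append(0)                           # pad to even length
        xs = [xs[i] + xs[i + 1] for i in range(0, len(xs), 2)]
    return xs[0]

assert tree_sum(range(100)) == sum(range(100))
```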
Intergenerational Education: Young and Old Learning Together.
ERIC Educational Resources Information Center
Ginnane, Patrick
A compilation of descriptions of 8 intergenerational educational programs, this report emphasizes the positive relationship between children and older people. These programs were selected to highlight differing program components, administrative structures, and funding bases. Two programs involve children visiting nursing homes. Elders work in…
Emerge - A Python environment for the modeling of subsurface transfers
NASA Astrophysics Data System (ADS)
Lopez, S.; Smai, F.; Sochala, P.
2014-12-01
The simulation of subsurface mass and energy transfers often relies on specific codes that were mainly developed using compiled languages, which usually ensure computational efficiency at the expense of relatively long development times and relatively rigid software. Even if a very detailed, possibly graphical, user interface is developed, the core numerical aspects are rarely accessible and the smallest modification will always need a compilation step. Thus, user-defined physical laws or alternative numerical schemes may be relatively difficult to use. Over the last decade, Python has emerged as a popular and widely used language in the scientific community. There already exist several libraries for the pre- and post-treatment of input and output files for reservoir simulators (e.g. pytough). Development times in Python are considerably reduced compared to compiled languages, and programs can be easily interfaced with libraries written in compiled languages, including several comprehensive numerical libraries that provide sequential and parallel solvers (e.g. PETSc, Trilinos…). The core objective of the Emerge project is to explore the possibility of developing a modeling environment in full Python. Consequently, we are developing an open python package with the classes/objects necessary to express, discretize and solve the physical problems encountered in the modeling of subsurface transfers. We relied heavily on Python to have a convenient and concise way of manipulating potentially complex concepts with a few lines of code and a high level of abstraction. Our result aims to be a friendly numerical environment targeting both numerical engineers and physicists or geoscientists, with the possibility to quickly specify and handle geometries, arbitrary meshes, spatially or temporally varying properties, PDE formulations, boundary conditions…
Compendium of student papers : 2008 Undergraduate Transportation Scholars Program.
DOT National Transportation Integrated Search
2008-08-01
This report is a compilation of research papers written by students participating in the 2008 Undergraduate : Transportation Scholars Program. The ten-week summer program, now in its eighteenth year, provides : undergraduate students in Civil Enginee...
Statewide Transportation Improvement Program 1997-2001
DOT National Transportation Integrated Search
1996-07-01
The Utah Department of Transportation's Statewide Transportation Improvement Plan (STIP) is a five-year program of highway and transit projects for the State of Utah. It is a compilation of projects utilizing various federal and state funding program...
Compendium of student papers : 2009 undergraduate transportation engineering fellows program.
DOT National Transportation Integrated Search
2009-10-01
This report is a compilation of research papers written by students participating in the 2009 Undergraduate : Transportation Scholars Program. The ten-week summer program, now in its nineteenth year, provides : undergraduate students in Civil Enginee...
Computer programs: Electronic circuit design criteria: A compilation
NASA Technical Reports Server (NTRS)
1973-01-01
A Technology Utilization Program for the dissemination of information on technological developments which have potential utility outside the aerospace community is presented. The 21 items reported herein describe programs that are applicable to electronic circuit design procedures.
Starck, Patricia L; Love, Karen; McPherson, Robert
2008-01-01
In recent years, the focus has been on increasing the number of registered nurse (RN) graduates. Numerous states have initiated programs to increase the number and quality of students entering nursing programs, and to expand the capacity of their programs to enroll additional qualified students. However, little attention has been focused on an equally, if not more, effective method for increasing the number of RNs produced: increasing the graduation rate of students who enroll. This article describes a project that undertook the task of compiling graduation data for 15 entry-level programs, standardizing terms and calculations for compiling the data, and producing a regional report on graduation rates of RN students overall and by type of program. The methodology is outlined in this article. This effort produced results that were surprising to program deans and directors and is expected to produce greater collaborative efforts to improve these rates both locally and statewide.
Aspect-Oriented Monitoring of C Programs
NASA Technical Reports Server (NTRS)
Havelund, Klaus; VanWyk, Eric
2008-01-01
The paper presents current work on extending ASPECTC with state machines, resulting in a framework for aspect-oriented monitoring of C programs. Such a framework can be used for testing purposes, or it can be part of a fault protection strategy. The long-term goal is to explore the synergy between the fields of runtime verification, focused on program monitoring, and aspect-oriented programming, focused on more general program development issues. The work is inspired by the observation that most work in this direction has been done for JAVA, partly due to the lack of easily accessible extensible compiler frameworks for C. The work is performed using the SILVER extensible attribute grammar compiler framework, in which C has been defined as a host language. Our work consists of extending C with ASPECTC, and subsequently extending ASPECTC with state machines.
A powerful graphical pulse sequence programming tool for magnetic resonance imaging.
Jie, Shen; Ying, Liu; Jianqi, Li; Gengying, Li
2005-12-01
A powerful graphical pulse sequence programming tool has been designed for creating magnetic resonance imaging (MRI) applications. It allows rapid development of pulse sequences in graphical mode (allowing for the visualization of sequences), and consists of three modules: a graphical sequence editor, a parameter management module, and a sequence compiler. Its key features are ease of use, flexibility, and hardware independence. When graphic elements are combined with certain text expressions, graphical pulse sequence programming is as flexible as a text-based programming tool. In addition, a hardware-independent design is implemented by using a two-step compilation strategy. To demonstrate the flexibility and capability of this graphical sequence programming tool, a multi-slice fast spin echo experiment was performed on our home-made 0.3 T permanent magnet MRI system.
Towards Implementation of a Generalized Architecture for High-Level Quantum Programming Language
NASA Astrophysics Data System (ADS)
Ameen, El-Mahdy M.; Ali, Hesham A.; Salem, Mofreh M.; Badawy, Mahmoud
2017-08-01
This paper investigates a novel architecture for the problem of quantum computer programming. A generalized architecture for a high-level quantum programming language is proposed, enabling the evolution from complicated quantum-based programming to high-level, quantum-independent programming. The proposed architecture receives high-level source code and automatically transforms it into the equivalent quantum representation. This architecture involves two layers: the programmer layer and the compilation layer. These layers have been implemented in three main stages: pre-classification, classification, and post-classification. The basic building block of each stage has been divided into subsequent phases, and each phase has been implemented to perform the required transformations from one representation to another. A verification process using a case study investigated the ability of the compiler to perform all transformation processes. Experimental results showed that the proposed compiler achieves a correspondence correlation coefficient of about R ≈ 1 between outputs and targets. A clear improvement was also obtained in the time consumed by the optimization process compared to other techniques: in online optimization, the consumed time increases exponentially with the amount of accuracy needed, whereas in the proposed offline optimization process it increases only gradually.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busby, Lee
IREP reads external program input using the Lua C Library, organizes the input into native language structures, and shares those structures among compiled program objects written in either (or both) C/C++ or Fortran.
76 FR 66284 - Wind and Water Power Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-26
... projects and the overall Water Power Program research portfolio, a report will be compiled by DOE, which... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy Wind and Water Power Program... projects. The 2011 Wind and Water Power Program, Water Power Peer Review Meeting will review the Program's...
Enhancement to HITRAN to Support the NASA EOS Program
NASA Technical Reports Server (NTRS)
Kirby, Kate P.; Rothman, Laurence S.
1999-01-01
The HITRAN molecular database has been enhanced with the object of providing improved capabilities for the EOS program scientists. HITRAN itself is the database of high-resolution line parameters of gaseous species expected to be observed by the EOS program in its remote sensing activities. The database is part of a larger compilation that includes IR cross-sections, aerosol indices of refraction, and software for filtering and plotting portions of the database. These properties have also been improved. The software has been advanced in order to work on multiple platforms. Besides the delivery of the compilation on CD-ROM, the effort has been directed toward making timely access of data and software on the world wide web.
A survey of compiler optimization techniques
NASA Technical Reports Server (NTRS)
Schneck, P. B.
1972-01-01
Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
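As a small worked example of an architecture-independent, source-level optimization of the kind the survey reviews, consider loop-invariant code motion (shown here in Python purely for illustration):

```python
# Before: the invariant expression a + b is recomputed on every iteration.
def scale_before(xs, a, b):
    return [x * (a + b) for x in xs]

# After source-level loop-invariant code motion: the invariant is hoisted
# out of the loop, reducing work without any machine-specific knowledge.
def scale_after(xs, a, b):
    k = a + b
    return [x * k for x in xs]

assert scale_before([1, 2, 3], 4, 5) == scale_after([1, 2, 3], 4, 5)
```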
ERIC Educational Resources Information Center
National School Resource Network, Washington, DC.
This program resource guide is a compilation of all programs and projects on preventing school violence and vandalism referenced in National School Resource Network (NSRN) materials. The programs cited are described in NSRN trainers' guides, participant guides, technical assistance bulletins, an "Aha" listing, and a compendium. The index is…
Compendium of student papers : 2010 undergraduate transportation scholars program.
DOT National Transportation Integrated Search
2011-06-01
This report is a compilation of research papers written by students participating in the 2010 Undergraduate : Transportation Scholars Program. The 10-week summer program, now in its 20th year, provides : undergraduate students in Civil Engineering th...
Compendium of student papers : 2012 undergraduate transportation scholars program.
DOT National Transportation Integrated Search
2013-05-01
This report is a compilation of research papers written by students participating in the 2012 Undergraduate : Transportation Scholars Program. The 10-week summer program, now in its 22nd year, provides : undergraduate students in Civil Engineering th...
Compendium of student papers : 2011 undergraduate transportation scholars program.
DOT National Transportation Integrated Search
2012-05-01
This report is a compilation of research papers written by students participating in the 2011 Undergraduate : Transportation Scholars Program. The 10-week summer program, now in its 21st year, provides : undergraduate students in Civil Engineering th...
Compendium of student papers : 2013 undergraduate transportation scholars program.
DOT National Transportation Integrated Search
2013-11-01
This report is a compilation of research papers written by students participating in the 2013 Undergraduate Transportation Scholars Program. The 10-week summer program, now in its 23rd year, provides undergraduate students in Civil Engineering the op...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1989-11-16
This VSR documents the results of the validation testing performed on an Ada compiler. Testing was carried out for the following purposes: to attempt to identify any language constructs supported by the compiler that do not conform to the Ada Standard; to attempt to identify any language constructs not supported by the compiler but required by the Ada Standard; and to determine that the implementation-dependent behavior is allowed by the Ada Standard. Testing of this compiler was conducted by SofTech, Inc. under the direction of the AVF according to procedures established by the Ada Joint Program Office and administered by the Ada Validation Organization (AVO). On-site testing was completed 16 November 1989 at Aloha, OR.
Power-Aware Compiler Controllable Chip Multiprocessor
NASA Astrophysics Data System (ADS)
Shikano, Hiroaki; Shirako, Jun; Wada, Yasutaka; Kimura, Keiji; Kasahara, Hironori
A power-aware compiler controllable chip multiprocessor (CMP) is presented and its performance and power consumption are evaluated with the optimally scheduled advanced multiprocessor (OSCAR) parallelizing compiler. The CMP is equipped with power control registers that change clock frequency and power supply voltage to functional units including processor cores, memories, and an interconnection network. The OSCAR compiler carries out coarse-grain task parallelization of programs and reduces power consumption using architectural power control support and the compiler's power saving scheme. The performance evaluation shows that MPEG-2 encoding on the proposed CMP with four CPUs results in 82.6% power reduction in real-time execution mode with a deadline constraint on its sequential execution time. Furthermore, MP3 encoding on a heterogeneous CMP with four CPUs and four accelerators results in 53.9% power reduction at 21.1-fold speed-up in performance against its sequential execution in the fastest execution mode.
A translator writing system for microcomputer high-level languages and assemblers
NASA Technical Reports Server (NTRS)
Collins, W. R.; Knight, J. C.; Noonan, R. E.
1980-01-01
In order to implement high-level languages whenever possible, a translator writing system of advanced design was developed. It is intended for routine production use by many programmers working on different projects. As well as a fairly conventional parser generator, it includes a system for the rapid generation of table-driven code generators. The parser generator was developed from a prototype version. The translator writing system includes various tools for the management of the source text of a compiler under construction. In addition, it supplies various default source code sections so that its output is always compilable and executable. The system thereby encourages iterative enhancement as a development methodology by ensuring an executable program from the earliest stages of a compiler development project. The translator writing system includes a PASCAL/48 compiler, three assemblers, and two compilers for a subset of HAL/S.
A First Look at Novice Compilation Behaviour Using BlueJ
ERIC Educational Resources Information Center
Jadud, Matthew C.
2005-01-01
Syntactically correct code does not fall from the sky; the process that leads to a student's first executable program is not well understood. At the University of Kent we have begun to explore the "compilation behaviours" of novice programmers, or the behaviours that students exhibit while authoring code; in our initial study, we have…
Final report: Compiled MPI. Cost-Effective Exascale Application Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, William Douglas
2015-12-21
This is the final report on Compiled MPI: Cost-Effective Exascale Application Development, and summarizes the results under this project. The project investigated runtime environments that improve the performance of MPI (Message-Passing Interface) programs; work at Illinois in the last period of this project looked at data access optimizations expressed with MPI datatypes.
Map and database of Quaternary faults in Venezuela and its offshore regions
Audemard, F.A.; Machette, M.N.; Cox, J.W.; Dart, R.L.; Haller, K.M.
2000-01-01
As part of the International Lithosphere Program’s “World Map of Major Active Faults,” the U.S. Geological Survey is assisting in the compilation of a series of digital maps of Quaternary faults and folds in Western Hemisphere countries. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. They are accompanied by databases that describe these features and document current information on their activity in the Quaternary. The project is a key part of the Global Seismic Hazards Assessment Program (ILP Project II-0) for the International Decade for Natural Disaster Reduction. The project is sponsored by the International Lithosphere Program and funded by the USGS’s National Earthquake Hazards Reduction Program. The primary elements of the project are general supervision and interpretation of geologic/tectonic information, data compilation and entry for the fault catalog, database design and management, and digitization and manipulation of data in ARC/INFO. For the compilation of data, we engaged experts in Quaternary faulting, neotectonics, paleoseismology, and seismology.
Status and future of extraterrestrial mapping programs
NASA Technical Reports Server (NTRS)
Batson, R. M.
1981-01-01
Extensive mapping programs have been completed for the Earth's Moon and for the planet Mercury. Mars, Venus, and the Galilean satellites of Jupiter (Io, Europa, Ganymede, and Callisto), are currently being mapped. The two Voyager spacecraft are expected to return data from which maps can be made of as many as six of the satellites of Saturn and two or more of the satellites of Uranus. The standard reconnaissance mapping scales used for the planets are 1:25,000,000 and 1:5,000,000; where resolution of data warrants, maps are compiled at the larger scales of 1:2,000,000, 1:1,000,000 and 1:250,000. Planimetric maps of a particular planet are compiled first. The first spacecraft to visit a planet is not designed to return data from which elevations can be determined. As exploration becomes more intensive, more sophisticated missions return photogrammetric and other data to permit compilation of contour maps.
NASA Technical Reports Server (NTRS)
Heinmiller, J. P.
1971-01-01
This document is the programmer's guide for the GNAT computer program developed under MSC/TRW Task 705-2 (Apollo cryogenic storage system analysis), subtask 2. Detailed logic flow charts and compiled program listings are provided for all program elements.
Nuclear Education and Training Programs of Potential Interest to Utilities.
ERIC Educational Resources Information Center
Atomic Energy Commission, Washington, DC.
This compilation of education and training programs related to nuclear applications in electric power generation covers programs conducted by nuclear reactor vendors, public utilities, universities, technical institutes, and community colleges, which were available in December 1968. Several training-program consultant services are also included.…
Reports of planetary geology and geophysics program, 1989
NASA Technical Reports Server (NTRS)
Holt, Henry (Editor)
1990-01-01
Abstracts of reports from Principal Investigators of NASA's Planetary Geology and Geophysics Program are compiled. The research conducted under this program during 1989 is summarized. Each report includes significant accomplishments in the area of the author's funded grant or contract.
ERIC Educational Resources Information Center
Tesler, Lawrence G.
1984-01-01
Discusses the nature of programing languages, considering the features of BASIC, LOGO, PASCAL, COBOL, FORTH, APL, and LISP. Also discusses machine/assembly codes, the operation of a compiler, and trends in the evolution of programing languages (including interest in notational systems called object-oriented languages). (JN)
The Fault Tree Compiler (FTC): Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Martensen, Anna L.
1989-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m-of-n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precisely (within the limits of double-precision floating point arithmetic) to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
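The gate arithmetic behind such a tool is standard for statistically independent basic events; the sketch below illustrates the formulas in Python (it is not the FTC input language or its solution technique):

```python
# Top-event probability for a toy tree TOP = OR(AND(e1, e2), e3),
# assuming independent basic events. P(AND) is the product of input
# probabilities; P(OR) is 1 minus the product of the complements.
from math import prod

def p_and(ps):
    return prod(ps)

def p_or(ps):
    return 1.0 - prod(1.0 - p for p in ps)

e1, e2, e3 = 1e-3, 2e-3, 5e-4
top = p_or([p_and([e1, e2]), e3])
print(f"P(top) = {top:.6e}")       # ~5.02e-04, dominated by e3
```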
PolyCheck: Dynamic Verification of Iteration Space Transformations on Affine Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Wenlei; Krishnamoorthy, Sriram; Pouchet, Louis-noel
2016-01-11
High-level compiler transformations, especially loop transformations, are widely recognized as critical optimizations to restructure programs to improve data locality and expose parallelism. Guaranteeing the correctness of program transformations is essential, and to date three main approaches have been developed: proof of equivalence of affine programs, matching the execution traces of programs, and checking bit-by-bit equivalence of the outputs of the programs. Each technique suffers from limitations in either the kind of transformations supported, space complexity, or the sensitivity to the testing dataset. In this paper, we take a novel approach addressing all three limitations to provide an automatic bug checker to verify any iteration reordering transformations on affine programs, including non-affine transformations, with space consumption proportional to the original program data, and robust to arbitrary datasets of a given size. We achieve this by exploiting the structure of affine program control- and data-flow to generate at compile-time lightweight checker code to be executed within the transformed program. Experimental results assess the correctness and effectiveness of our method, and its increased coverage over previous approaches.
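PolyCheck's checkers are generated at compile time; as a toy runtime illustration of the property being verified (that an iteration-reordering transformation leaves the computed values unchanged), consider a loop interchange:

```python
# Toy illustration only -- not the paper's compile-time checker.
# Both schedules perform the same multiset of updates to A, so a
# correct interchange must produce (numerically) the same result.
import numpy as np

def original(B):
    n = len(B)
    A = np.zeros(n)
    for i in range(n):            # reference schedule: i outer, j inner
        for j in range(n):
            A[i] += B[j]
    return A

def interchanged(B):
    n = len(B)
    A = np.zeros(n)
    for j in range(n):            # candidate transformation: loops swapped
        for i in range(n):
            A[i] += B[j]
    return A

B = np.random.rand(64)
assert np.allclose(original(B), interchanged(B))  # tolerate FP reordering
```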
A distributed programming environment for Ada
NASA Technical Reports Server (NTRS)
Brennan, Peter; Mcdonnell, Tom; Mcfarland, Gregory; Timmins, Lawrence J.; Litke, John D.
1986-01-01
Despite considerable commercial exploitation of fault tolerance systems, significant and difficult research problems remain in such areas as fault detection and correction. A research project is described which constructs a distributed computing test bed for loosely coupled computers. The project is constructing a tool kit to support research into distributed control algorithms, including a distributed Ada compiler, distributed debugger, test harnesses, and environment monitors. The Ada compiler is being written in Ada and will implement distributed computing at the subsystem level. The design goal is to provide a variety of control mechanisms for distributed programming while retaining total transparency at the code level.
NASA Technical Reports Server (NTRS)
Fincannon, James
2009-01-01
This compilation of trade studies performed from 2005 to 2006 addressed a number of power system design issues for the Constellation Program Extravehicular Activity Spacesuit. Spacesuits were required for spacewalks and in-space activities as well as lunar and Mars surface operations. The trades documented here considered whether solar power was feasible for spacesuits, whether spacesuit power generation should be a distributed or a centralized function, whether self-powered in-space spacesuits were better than umbilically powered ones, and whether the suit power system should be recharged in place or replaced.
Data Collection Answers - SEER Registrars
Read clarifications to existing coding rules, which should be implemented immediately. Data collection experts from American College of Surgeons Commission on Cancer, CDC National Program of Cancer Registries, and SEER Program compiled these answers.
Reports of planetary geology program, 1980. [Bibliography
NASA Technical Reports Server (NTRS)
Holt, H. E. (Compiler); Kosters, E. C. (Compiler)
1980-01-01
This is a compilation of abstracts of reports which summarize work conducted in the Planetary Geology Program. Each report reflects significant accomplishments within the area of the author's funded grant or contract.
Current Trends in Associate Degree Nursing Programs.
ERIC Educational Resources Information Center
Blackstone, Elaine Grant
This study was designed to ascertain current trends in associate degree nursing programs and to discover innovative ideas and techniques which could be applied to the existing program at Miami-Dade Community College (Florida). Data was compiled from interviews with representatives of ten associate degree nursing programs in six states. Information…
Program CALIB. [for computing noise levels for helicopter version of S-191 filter wheel spectrometer
NASA Technical Reports Server (NTRS)
Mendlowitz, M. A.
1973-01-01
The program CALIB, which was written to compute noise levels and average signal levels of aperture radiance for the helicopter version of the S-191 filter wheel spectrometer, is described. The program functions and input description are included, along with a compiled program listing.
Oleanna Math Program Smorgasbord (I).
ERIC Educational Resources Information Center
Coole, Walter A.
This packet is a compilation of short units and quick review assignments used in the Oleanna Math Program at Skagit Valley College (Washington). This math program is taught in an auto-tutorial learning laboratory situation with programmed materials. Each unit of study is contained on a 5" by 8" card, which describes performance…
Federal Programs of Assistance to American Indians.
ERIC Educational Resources Information Center
Jones, Richard S.
Comprehensive descriptions of all federal programs which specifically benefit American Indians are compiled in this document which utilizes information contributed by government agencies and departments in 1974. The format of each program includes: (1) the name, nature, and purpose of the program; (2) eligibility requirements; (3) how to apply…
Global Ground Motion Prediction Equations Program
Task 2: Compile and Critically Review GMPEs. Task 3: Select or Derive a Global Set of GMPEs. Task 5: Build a Database of ... Task 6: Design the Specifications to Compile a Global Database of Soil Classification. Update on PEER's Global GMPEs Project from a recent workshop in Turkey (posted June 11, 2012).
NASA Technical Reports Server (NTRS)
Shoultz, M. B.; Mcclurken, E. W., Jr.
1975-01-01
A compilation of NASA research efforts in the area of space environmental effects on materials and processes is presented. Topics considered are: (1) fluid mechanics and heat transfer; (2) crystal growth and containerless melts; (3) acoustics; (4) glass and ceramics; (5) electrophoresis; (6) welding; and (7) exobiology.
Electronic control circuits: A compilation
NASA Technical Reports Server (NTRS)
1973-01-01
A compilation of technical R and D information on circuits and modular subassemblies is presented as a part of a technology utilization program. Fundamental design principles and applications are given. Electronic control circuits discussed include: anti-noise circuit; ground protection device for bioinstrumentation; temperature compensation for operational amplifiers; hybrid gating capacitor; automatic signal range control; integrated clock-switching control; and precision voltage tolerance detector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haas, Nicholas Q; Gillen, Robert E; Karnowski, Thomas P
MathWorks' MATLAB is widely used in academia and industry for prototyping, data analysis, data processing, etc. Many users compile their programs using the MATLAB Compiler to run on workstations/computing clusters via the free MATLAB Compiler Runtime (MCR). The MCR facilitates the execution of code calling Application Programming Interface (API) functions from both base MATLAB and MATLAB toolboxes. In a Linux environment, a sizable number of third-party runtime dependencies (i.e. shared libraries) are necessary. Unfortunately, to the MATLAB community's knowledge, these dependencies are not documented, leaving system administrators and/or end-users to find/install the necessary libraries either from the runtime errors that result when they are missing or by inspecting the header information of Executable and Linkable Format (ELF) libraries of the MCR to determine which ones are missing from the system. To address these shortcomings, Docker images based on Community Enterprise Operating System (CentOS) 7, a derivative of Red Hat Enterprise Linux (RHEL) 7, containing recent (2015-2017) MCR releases and their dependencies were created. These images, along with a provided sample Docker Compose YAML script, can be used to create a simulated computing cluster where binaries created by the MATLAB Compiler can be executed using a sample Slurm Workload Manager script.
DOT National Transportation Integrated Search
1997-11-01
DOT uses a two-phase process for selecting and funding transportation : projects for the five discretionary programs we reviewed. In the first : phase, FHWA program staff in the field and headquarters compile and : evaluate the applications that stat...
Computer enhancement through interpretive techniques
NASA Technical Reports Server (NTRS)
Foster, G.; Spaanenburg, H. A. E.; Stumpf, W. E.
1972-01-01
The improvement in the usage of the digital computer through the use of the technique of interpretation rather than the compilation of higher ordered languages was investigated by studying the efficiency of coding and execution of programs written in FORTRAN, ALGOL, PL/I and COBOL. FORTRAN was selected as the high level language for examining programs which were compiled, and A Programming Language (APL) was chosen for the interpretive language. It is concluded that APL is competitive, not because it and the algorithms being executed are well written, but rather because the batch processing is less efficient than has been admitted. There is not a broad base of experience founded on trying different implementation strategies which have been targeted at open competition with traditional processing methods.
Source Reduction Assistance Grant Program Guidance for Applicants
The following FAQs were compiled to benefit prospective applicants seeking to apply for grants or cooperative agreement funding under the Environmental Protection Agency’s (EPA) Source Reduction Assistance (SRA) Grant Program.
Mathematical computer programs: A compilation
NASA Technical Reports Server (NTRS)
1972-01-01
Computer programs, routines, and subroutines for aiding engineers, scientists, and mathematicians in direct problem solving are presented. Also included is a group of items that affords the same users greater flexibility in the use of software.
The national land use data program of the US Geological Survey
NASA Technical Reports Server (NTRS)
Anderson, J. R.; Witmer, R. E.
1975-01-01
The Land Use Data and Analysis (LUDA) Program which provides a systematic and comprehensive collection and analysis of land use and land cover data on a nationwide basis is described. Maps are compiled at about 1:125,000 scale showing present land use/cover at Level II of a land use/cover classification system developed by the U.S. Geological Survey in conjunction with other Federal and state agencies and other users. For each of the land use/cover maps produced at 1:125,000 scale, overlays are also compiled showing Federal land ownership, river basins and subbasins, counties, and census county subdivisions. The program utilizes the advanced technology of the Special Mapping Center of the U.S. Geological Survey, high altitude NASA photographs, aerial photographs acquired for the USGS Topographic Division's mapping program, and LANDSAT data in complementary ways.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-02-01
The Electric Power Research Institute (EPRI) has been studying the feasibility of a Low Salinity Hydrothermal Demonstration Plant as part of its Geothermal Energy Program. The Heber area of the Imperial Valley was selected as one of the candidate geothermal reservoirs. Documentation of the environmental conditions presently existing in the Heber area is required for assessment of environmental impacts of future development. An environmental baseline data acquisition program to compile available data on the environment of the Heber area is reported. The program included a review of pertinent existing literature and interviews with academic, governmental and private entities, combined with field investigations and meteorological monitoring to collect primary data. Results of the data acquisition program are compiled in terms of three elements: the physical, the biological and the socioeconomic settings.
The BLAZE language: A parallel language for scientific programming
NASA Technical Reports Server (NTRS)
Mehrotra, P.; Vanrosendale, J.
1985-01-01
A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine-grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse-grained parallelism using machine-specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how this language would be used in typical scientific programming.
The Katydid system for compiling KEE applications to Ada
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Bock, Conrad; Feldman, Roy
1990-01-01
Components of a system known as Katydid are developed in an effort to compile knowledge-based systems developed in a multimechanism integrated environment (KEE) to Ada. The Katydid core is an Ada library supporting KEE object functionality, and the other elements include a rule compiler, a LISP-to-Ada translator, and a knowledge-base dumper. Katydid employs translation mechanisms that convert LISP knowledge structures and rules to Ada and utilizes basic prototypes of a run-time KEE object-structure library module for Ada. Preliminary results include the semiautomatic compilation of portions of a simple expert system to run in an Ada environment with the described algorithms. It is suggested that Ada can be employed for AI programming and implementation, and the Katydid system is being developed to include concurrency and synchronization mechanisms.
FX-87 performance measurements: data-flow implementation. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammel, R.T.; Gifford, D.K.
1988-11-01
This report documents a series of experiments performed to explore the thesis that the FX-87 effect system permits a compiler to schedule imperative programs (i.e., programs that may contain side-effects) for execution on a parallel computer. The authors analyze how much the FX-87 static effect system can improve the execution times of five benchmark programs on a parallel graph interpreter. Three of their benchmark programs do not use side-effects (factorial, fibonacci, and polynomial division) and thus did not have any effect-induced constraints. Their FX-87 performance was comparable to their performance in a purely functional language. Two of the benchmark programs use side effects (DNA sequence matching and Scheme interpretation) and the compiler was able to use effect information to reduce their execution times by factors of 1.7 to 5.4 when compared with sequential execution times. These results support the thesis that a static effect system is a powerful tool for compilation to multiprocessor computers. However, the graph interpreter used was based on unrealistic assumptions, and thus these results may not accurately reflect the performance of a practical FX-87 implementation. The results also suggest that conventional loop analysis would complement the FX-87 effect system.
Towards programming languages for genetic engineering of living cells
Pedersen, Michael; Phillips, Andrew
2009-01-01
Synthetic biology aims at producing novel biological systems to carry out some desired and well-defined functions. An ultimate dream is to design these systems at a high level of abstraction using engineering-based tools and programming languages, press a button, and have the design translated to DNA sequences that can be synthesized and put to work in living cells. We introduce such a programming language, which allows logical interactions between potentially undetermined proteins and genes to be expressed in a modular manner. Programs can be translated by a compiler into sequences of standard biological parts, a process that relies on logic programming and prototype databases that contain known biological parts and protein interactions. Programs can also be translated to reactions, allowing simulations to be carried out. While current limitations on available data prevent full use of the language in practical applications, the language can be used to develop formal models of synthetic systems, which are otherwise often presented by informal notations. The language can also serve as a concrete proposal on which future language designs can be discussed, and can help to guide the emerging standard of biological parts which so far has focused on biological, rather than logical, properties of parts. PMID:19369220
Abid, Abdulbasit
2013-03-01
This paper presents a thorough discussion of the proposed field-programmable gate array (FPGA) implementation for fringe pattern demodulation using the one-dimensional continuous wavelet transform (1D-CWT) algorithm. This algorithm is also known as wavelet transform profilometry. Initially, the 1D-CWT is programmed using the C programming language and compiled into VHDL using the ImpulseC tool. This VHDL code is implemented on the Altera Cyclone IV GX EP4CGX150DF31C7 FPGA. A fringe pattern image with a size of 512×512 pixels is presented to the FPGA, which processes the image using the 1D-CWT algorithm. The FPGA requires approximately 100 ms to process the image and produce a wrapped phase map. For performance comparison purposes, the 1D-CWT algorithm is programmed using the C language. The C code is then compiled using the Intel compiler version 13.0. The compiled code is run on a Dell Precision state-of-the-art workstation. The time required to process the fringe pattern image is approximately 1 s. To further reduce the execution time, the 1D-CWT is reprogrammed using the Intel Integrated Performance Primitives (IPP) Library Version 7.1. The execution time was reduced to approximately 650 ms. This confirms that at least a sixfold speedup was gained using the FPGA implementation over a state-of-the-art workstation executing a heavily optimized implementation of the 1D-CWT algorithm.
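For readers unfamiliar with the algorithm itself, here is a naive NumPy rendering of 1D-CWT fringe demodulation (wrapped-phase extraction along the wavelet ridge); it is a didactic sketch, not the paper's optimized C or FPGA implementation, and the Morlet parameters are illustrative.

```python
import numpy as np

def morlet_cwt_row(row, scales, w0=6.0):
    """Naive complex-Morlet 1D CWT of a single fringe-pattern row."""
    n = len(row)
    out = np.empty((len(scales), n), dtype=complex)
    for k, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s - 0.5 * (t / s) ** 2) / np.sqrt(s)
        # correlation with the wavelet == convolution with its reversed conjugate
        out[k] = np.convolve(row, np.conj(wavelet)[::-1], mode="same")
    return out

x = np.arange(512)
row = np.cos(2 * np.pi * 0.05 * x + 0.5 * np.sin(2 * np.pi * x / 512))
coeffs = morlet_cwt_row(row, scales=np.arange(4, 40))
ridge = np.abs(coeffs).argmax(axis=0)                 # best scale per pixel
wrapped = np.angle(coeffs[ridge, np.arange(len(x))])  # wrapped phase, one row
```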
Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...
2015-07-13
Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.
NASA Technical Reports Server (NTRS)
Egebrecht, R. A.; Thorbjornsen, A. R.
1967-01-01
Digital computer programs determine steady-state performance characteristics of active and passive linear circuits. The ac analysis program solves the basic circuit parameters. The compiler program solves these circuit parameters and in addition provides a more versatile program by allowing the user to perform mathematical and logical operations.
Effective Vectorization with OpenMP 4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, Joseph N.; Hernandez, Oscar R.; Lopez, Matthew Graham
This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how these are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple processors capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher-level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using the SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C++/C and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.
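As a concrete illustration of the directives discussed above, here is a minimal C++ sketch (not taken from the report) of OpenMP SIMD vectorization: the simd construct asserts to the compiler that the loop is safe to execute in SIMD lanes, and the reduction clause tells it how per-lane partial results combine.

```cpp
#include <cstddef>
#include <vector>

// A loop the compiler may decline to auto-vectorize on its own (e.g., if it
// cannot prove vectorization is profitable or safe). The OpenMP 'simd'
// construct asserts SIMD execution is safe; 'reduction' describes how the
// per-lane partial sums are combined. Assumes a.size() == b.size().
double dot(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    #pragma omp simd reduction(+:sum)
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += a[i] * b[i];
    return sum;
}
```

With GCC or Clang this compiles with -fopenmp, or with -fopenmp-simd to enable only the SIMD constructs without threading.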
Solving Integer Programs from Dependence and Synchronization Problems
1993-03-01
Solving Integer Programs from Dependence and Synchronization Problems. Jaspal Subhlok, March 1993, CMU-CS-93-130, School of Computer Science... the method is an exact and efficient way of solving integer programming problems arising in dependence and synchronization analysis of parallel programs... Keywords: exact dependence testing, integer programming, parallelizing compilers, parallel program analysis, synchronization analysis.
Computer Calculation of Fire Danger
William A. Main
1969-01-01
This paper describes a computer program that calculates National Fire Danger Rating indexes. Fuel moisture, buildup index, and drying factor are also available. The program is written in FORTRAN and is usable on even the smallest compiler.
NASA Astrophysics Data System (ADS)
Jarecka, D.; Arabas, S.; Fijalkowski, M.; Gaynor, A.
2012-04-01
The language of choice for numerical modelling in geoscience has long been Fortran. A choice of a particular language and coding paradigm comes with a different set of tradeoffs, such as that between performance, ease of use (and ease of abuse), code clarity, maintainability and reusability, and the availability of open source compilers, debugging tools, adequate external libraries and parallelisation mechanisms. The availability of trained personnel and the scale and activeness of the developer community is of importance as well. We present a short comparison study aimed at identification and quantification of these tradeoffs for a particular example of an object-oriented implementation of a parallel 2D-advection-equation solver in Python/NumPy, C++/Blitz++ and modern Fortran. The main angles of comparison are the complexity of implementation, the performance of various compilers or interpreters, and a characterisation of the "added value" gained by a particular choice of the language. The choice of the numerical problem is dictated by the aim to make the comparison useful and meaningful to geoscientists. Python is chosen as a language that traditionally is associated with ease of use, elegant syntax but limited performance. C++ is chosen for its traditional association with high performance but even higher complexity and syntax obscurity. Fortran is included in the comparison for its widespread use in geoscience, often attributed to its performance. We confront the validity of these traditional views. We point out how the usability of a particular language in geoscience depends on the characteristics of the language itself and the availability of pre-existing software libraries (e.g. NumPy, SciPy, PyNGL, PyNIO, MPI4Py for Python and Blitz++, Boost.Units, Boost.MPI for C++). Having in mind the limited complexity of the considered numerical problem, we present a tentative comparison of performance of the three implementations with different open source compilers including CPython and PyPy, Clang++ and GNU g++, and GNU gfortran.
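For readers unfamiliar with the numerical problem being benchmarked, a minimal C++ sketch of a first-order upwind step for the 1D advection equation follows; it is a simplification of the 2D solver compared in the study, and the names are illustrative.

```cpp
#include <cstddef>
#include <vector>

// One upwind Euler step for the 1D advection equation du/dt + c du/dx = 0
// with periodic boundaries; a 1D simplification of the 2D solver the study
// compares across Python/NumPy, C++/Blitz++ and Fortran.
// courant = c * dt / dx must satisfy 0 <= courant <= 1 for stability (c > 0).
void advectUpwind(std::vector<double>& u, double courant) {
    const std::size_t n = u.size();
    std::vector<double> next(n);
    for (std::size_t i = 0; i < n; ++i) {
        std::size_t prev = (i == 0) ? n - 1 : i - 1;
        next[i] = u[i] - courant * (u[i] - u[prev]);  // first-order upwind
    }
    u = next;
}
```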
A ROSE-based OpenMP 3.0 Research Compiler Supporting Multiple Runtime Libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, C; Quinlan, D; Panas, T
2010-01-25
OpenMP is a popular and evolving programming model for shared-memory platforms. It relies on compilers for optimal performance and to target modern hardware architectures. A variety of extensible and robust research compilers are key to OpenMP's sustainable success in the future. In this paper, we present our efforts to build an OpenMP 3.0 research compiler for C, C++, and Fortran using the ROSE source-to-source compiler framework. Our goal is to support OpenMP research for ourselves and others. We have extended ROSE's internal representation to handle all of the OpenMP 3.0 constructs and facilitate their manipulation. Since OpenMP research is often complicated by the tight coupling of the compiler translations and the runtime system, we present a set of rules to define a common OpenMP runtime library (XOMP) on top of multiple runtime libraries. These rules additionally define how to build a set of translations targeting XOMP. Our work demonstrates how to reuse OpenMP translations across different runtime libraries. This work simplifies OpenMP research by decoupling the problematic dependence between the compiler translations and the runtime libraries. We present an evaluation of our work by demonstrating an analysis tool for OpenMP correctness. We also show how XOMP can be defined using both GOMP and Omni, and present comparative performance results against other OpenMP compilers.
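To make the translation strategy concrete: a source-to-source OpenMP compiler typically outlines the body of a parallel region into a function and replaces the directive with a runtime call. The C++ sketch below mimics that shape; XOMP_parallel_start is a stand-in with an assumed signature (the paper defines XOMP, but the exact interface shown here is illustrative), and the serial body lets the example run as-is.

```cpp
#include <cstdio>

// Sketch of the translation a ROSE-style OpenMP compiler performs: the body
// of '#pragma omp parallel' is outlined into a function, and the directive
// becomes a call into a runtime-neutral layer (XOMP in the paper).
typedef void (*OutlinedFn)(void*);

// Stand-in: a real XOMP layer would forward to GOMP, Omni, etc., and spawn
// the thread team here instead of calling the region serially.
void XOMP_parallel_start(OutlinedFn fn, void* data) {
    fn(data);
}

struct SharedData { int n; };

// Outlined body of the original parallel region.
void outlined_region(void* raw) {
    SharedData* d = static_cast<SharedData*>(raw);
    std::printf("parallel region over %d items\n", d->n);
}

int main() {
    SharedData d{1000};
    // Original source:  #pragma omp parallel  { ... }
    XOMP_parallel_start(outlined_region, &d);
    return 0;
}
```

Decoupling the generated calls from any one runtime is what lets the same translation run on top of GOMP or Omni.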
Summer Splash. 1988 Wisconsin Summer Library Program Manual. Bulletin No. 8230.
ERIC Educational Resources Information Center
Lamb, Donald K.; And Others
A compilation of materials contributed by and developed with the cooperation of Wisconsin librarians and Ohio's 1987 summer reading program, this planning manual provides guidelines for planning and promoting summer programs for young people by librarians in the state of Wisconsin. The theme of the program, "Summer Splash," is intended…
DOT National Transportation Integrated Search
1973-12-01
The contents are: Appendix B - Detailed flow diagrams - new systems cost program; Appendix C and D - Typical input and output data - new system cost program; Appendix E - Compiled listings - highway transit cost program; Appendix F and G - Typical in...
Bellman's GAP--a language and compiler for dynamic programming in sequence analysis.
Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert
2013-03-01
Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman's GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. In Bellman's GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive to carefully hand-crafted implementations. This article introduces the Bellman's GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman's GAP as an implementation platform of 'real-world' bioinformatics tools. Bellman's GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics.
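For flavor, here is the kind of dynamic program that GAP-L expresses declaratively, written out by hand in C++ as the classic Nussinov base-pair maximization recurrence for RNA folding (an illustrative sketch, not code generated by Bellman's GAP; the minimal hairpin-length constraint is omitted for brevity).

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hand-coded Nussinov recurrence (maximum number of base pairs): the kind of
// dynamic program Bellman's GAP states declaratively as a tree grammar plus
// a scoring algebra, instead of explicit subscripts like those below.
bool pairs(char a, char b) {
    return (a == 'A' && b == 'U') || (a == 'U' && b == 'A') ||
           (a == 'G' && b == 'C') || (a == 'C' && b == 'G');
}

int nussinov(const std::string& s) {
    const int n = static_cast<int>(s.size());
    std::vector<std::vector<int>> m(n, std::vector<int>(n, 0));
    for (int span = 1; span < n; ++span) {
        for (int i = 0; i + span < n; ++i) {
            int j = i + span;
            int best = m[i][j - 1];                    // j unpaired
            for (int k = i; k < j; ++k)                // j pairs with k
                if (pairs(s[k], s[j]))
                    best = std::max(best,
                        (k > i ? m[i][k - 1] : 0) + m[k + 1][j - 1] + 1);
            m[i][j] = best;
        }
    }
    return n ? m[0][n - 1] : 0;
}
```

Subscript slips in recurrences like the one above are exactly the class of error the declarative tree-grammar style is designed to eliminate.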
NASA Astrophysics Data System (ADS)
Berdychowski, Piotr P.; Zabolotny, Wojciech M.
2010-09-01
The main goal of the C to VHDL compiler project is to make the FPGA platform more accessible for scientists and software developers. The FPGA platform offers the unique ability to configure the hardware to implement virtually any dedicated architecture, and modern devices provide a sufficient number of hardware resources to implement parallel execution platforms with complex processing units. All this makes the FPGA platform very attractive for those looking for an efficient heterogeneous computing environment. The current industry standard in the development of digital systems on the FPGA platform is based on HDLs. Although very effective and expressive in the hands of hardware development specialists, these languages require specific knowledge and experience, unreachable for most scientists and software programmers. The C to VHDL compiler project attempts to remedy that by creating an application that derives an initial VHDL description of a digital system (for further compilation and synthesis) from a purely algorithmic description in the C programming language. This idea itself is not new, and the C to VHDL compiler combines the best approaches from existing solutions developed over many previous years, with the introduction of some new unique improvements.
Modular implementation of a digital hardware design automation system
NASA Astrophysics Data System (ADS)
Masud, M.
An automation system based on AHPL (A Hardware Programming Language) was developed. The project may be divided into three distinct phases: (1) upgrading of AHPL to make it more universally applicable; (2) implementation of a compiler for the language; and (3) illustration of how the compiler may be used to support several phases of design activities. Several new features were added to AHPL. These include application-dependent parameters, multiple clocks, asynchronous results, functional registers and primitive functions. The new language, called Universal AHPL, has been defined rigorously. The compiler design is modular. The parsing is done by an automatic parser generated from the SLR(1) BNF grammar of the language. The compiler produces two data bases from the AHPL description of a circuit. The first one is a tabular representation of the circuit, and the second one is a detailed interconnection linked list. The two data bases provide a means to interface the compiler to application-dependent CAD systems.
State Compensatory Education Annual Report, 1982-83.
ERIC Educational Resources Information Center
Georgia State Dept. of Education, Atlanta. Office of Instructional Services.
This document compiles compensatory education program data submitted to the Georgia State Department by local school systems in their 1982-83 annual reports. The first section describes state administration of grant funds (i.e., appropriations bills, procedures for allocating funds, program plans, and program monitoring). Specifically mentioned…
Bibliography of Literature on Neuro-Linguistic Programming.
ERIC Educational Resources Information Center
McCormick, Donald W.
Two bibliographies on neurolinguistic programming are updates of an earlier literature review by the same compiler. The two lists contain citations of over 160 books, research reports, dissertations, journal articles, audio and video recordings, and research projects in progress on aspects of neurolinguistic programming. Appended notes suggest…
Vocational Instructional Program Advisory Committee Resource Guide.
ERIC Educational Resources Information Center
Rice, Eric; Buescher, Douglas A.
This guide is intended to provide assistance in developing, organizing, and operating vocational instructional program (VIP) advisory committees. It is designed to be useful for secondary or postsecondary programs that offer training for an occupation or cluster of occupations. The guide is a compilation of suggestions, illustrations, and…
Training Opportunities ...Access to Quality Education for a Brighter Future.
ERIC Educational Resources Information Center
Office of Education (DHEW), Washington, DC.
Compiled by the Spanish Speaking Program staff in its efforts to make educational opportunities available to Hispanic Americans, the directory provides information on 231 scholarships, fellowships, stipends, traineeships and other financial assistance programs. These programs are offered by Federal agencies, post secondary education institutions,…
ProjectQ: Compiling quantum programs for various backends
NASA Astrophysics Data System (ADS)
Haener, Thomas; Steiger, Damian S.; Troyer, Matthias
In order to control quantum computers beyond the current generation, a high level quantum programming language and optimizing compilers will be essential. Therefore, we have developed ProjectQ - an open source software framework to facilitate implementing and running quantum algorithms both in software and on actual quantum hardware. Here, we introduce the backends available in ProjectQ. This includes a high-performance simulator and emulator to test and debug quantum algorithms, tools for resource estimation, and interfaces to several small-scale quantum devices. We demonstrate the workings of the framework and show how easily it can be further extended to control upcoming quantum hardware.
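Independently of ProjectQ's actual (Python-based) interface, the heart of any state-vector simulator backend is the gate-application kernel. Below is a minimal C++ sketch, assuming little-endian qubit ordering; the names are illustrative and this is not ProjectQ's kernel.

```cpp
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

// Core state-vector update of a simulator backend: applying a single-qubit
// gate to qubit q pairs up amplitudes whose indices differ only in bit q.
using Amp = std::complex<double>;

void applyHadamard(std::vector<Amp>& state, int q) {
    const double s = 1.0 / std::sqrt(2.0);
    const std::size_t stride = std::size_t(1) << q;
    for (std::size_t i = 0; i < state.size(); i += 2 * stride)
        for (std::size_t j = i; j < i + stride; ++j) {
            Amp a = state[j], b = state[j + stride];
            state[j] = s * (a + b);
            state[j + stride] = s * (a - b);
        }
}

int main() {
    std::vector<Amp> state(4, Amp(0, 0));  // two qubits, |00>
    state[0] = 1.0;
    applyHadamard(state, 0);               // (|00> + |01>)/sqrt(2)
    for (const Amp& a : state)
        std::printf("%+.3f%+.3fi\n", a.real(), a.imag());
    return 0;
}
```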
SAN JUAN BAY ESTUARY PROGRAM IMPLEMENTATION REVIEW ATTACHMENTS
A compilation of attachments referenced in the San Juan Bay Estuary Program Implementation Review (2004). Materials include, entity reports, water and sediment quality action plans, progress reports, correspondence with local municipalities and Puerto Rican governmental agencies,...
NASA Technical Reports Server (NTRS)
Martensen, Anna L.; Butler, Ricky W.
1987-01-01
The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precise to five digits, within the limits of double-precision floating-point arithmetic. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
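The gate rules such a solver applies to independent basic events are simple to state: an AND gate multiplies input probabilities, an OR gate is the complement of all inputs being absent, and INVERT complements its input. A minimal C++ sketch of these rules follows (illustrative only; the NASA tool also handles EXCLUSIVE OR and M-of-N gates, and dependent events require more care).

```cpp
#include <cstdio>
#include <vector>

// Top-event probability for independent basic events, using the standard
// gate rules a fault-tree solver applies.
double andGate(const std::vector<double>& p) {
    double r = 1.0;
    for (double x : p) r *= x;          // all inputs must occur
    return r;
}

double orGate(const std::vector<double>& p) {
    double r = 1.0;
    for (double x : p) r *= (1.0 - x);  // complement of "none occur"
    return 1.0 - r;
}

double invertGate(double p) { return 1.0 - p; }

int main() {
    // Top = OR(AND(A, B), C) with P(A)=0.01, P(B)=0.02, P(C)=0.001
    double top = orGate({andGate({0.01, 0.02}), 0.001});
    std::printf("top event probability = %.6g\n", top);
    return 0;
}
```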
The BLAZE language - A parallel language for scientific programming
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Van Rosendale, John
1987-01-01
A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.
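As a rough modern-C++ analogue of the BLAZE idioms described above (array arithmetic plus an APL-style accumulation, with no explicit index manipulation), assuming equal-length inputs:

```cpp
#include <numeric>
#include <vector>

// C++17 analogue of "forall i: c[i] = a[i]*b[i]" followed by a +-reduction,
// fused into one call; the fine-grained parallelism is implicit in the
// expression rather than spelled out with index loops, as in BLAZE.
double sumOfProducts(const std::vector<double>& a,
                     const std::vector<double>& b) {
    return std::transform_reduce(a.begin(), a.end(), b.begin(), 0.0);
}
```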
1987-06-03
...United States Government (Ada Joint Program Office). AVF Control Number: AVF-VSR-8C.0787, 87-01-07-HAR. Ada® Compiler Validation Summary Report: Harris
1988 Washington State Program for Migrant Children's Education.
ERIC Educational Resources Information Center
de la Rosa, Raul
This comprehensive report on the Washington State program for migrant children's education was compiled by the state education department in order to comply with federal and state funding requirements. It is divided into four parts: (1) Federal Assistance Application; (2) Program Narrative; (3) Budget Information; and (4) Assurances. The program…
WinHPC System Programming | High-Performance Computing | NREL
Learn how to build and run an MPI program on the WinHPC system, including where the message passing interface header (mpi.h) and library (msmpi.lib) are located. To build from the command line, run... Start > Intel Software Development Tools > Intel C++ Compiler Professional... > C++ Build Environment for applications...
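A minimal MPI program of the kind those build instructions target might look as follows in C++ (a generic sketch; on WinHPC it would be compiled against mpi.h and linked with msmpi.lib).

```cpp
#include <cstdio>
#include <mpi.h>

// Minimal MPI program: initialize, query rank and communicator size,
// print, and finalize.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    std::printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```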
NASA Technical Reports Server (NTRS)
Mauldin, Lemuel E., III
1993-01-01
Travel Forecaster is menu-driven, easy-to-use computer program that plans, forecasts cost, and tracks actual vs. planned cost of business-related travel of division or branch of organization and compiles information into data base to aid travel planner. Ability of program to handle multiple trip entries makes it valuable time-saving device.
Computer-Assisted Instruction Guide.
ERIC Educational Resources Information Center
Entelek, Inc., Newburyport, MA.
Provided is a compilation of abstracts of currently available computer-assisted instructional (CAI) programs. The guide contains the specifications of all operational CAI programs that have come under the surveillance of ENTELEK's CAI Information Exchange since its establishment in 1965. A total of 226 CAI programs by 160 authors at 38 CAI centers…
ERIC Educational Resources Information Center
Rainer, John D., Ed.; Altshuler, Kenneth Z., Ed.
A compilation of presentations from a meeting on psychiatry and the deaf, the text includes the following discussions: background and history of the New York State mental health program for the deaf; an introduction to the program of the New York School for the Deaf; school psychiatric preventive programs; adjustment problems presented by a panel…
CAPSAS: Computer Assisted Program for the Selection of Appropriate Statistics.
ERIC Educational Resources Information Center
Shermis, Mark D.; Albert, Susan L.
A computer-assisted program has been developed for the selection of statistics or statistical techniques by both students and researchers. Based on Andrews, Klem, Davidson, O'Malley and Rodgers "A Guide for Selecting Statistical Techniques for Analyzing Social Science Data," this FORTRAN-compiled interactive computer program was…
10 CFR 602.5 - Epidemiology and Other Health Studies Financial Assistance Program.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., and use (including electromagnetic fields) in the United States and abroad; (6) Compilation... Financial Assistance Program. (a) DOE may issue under this part awards for research, education/training... (7) Other systems or activities enhancing these areas, as well as other program areas as may be...
Utilities for master source code distribution: MAX and Friends
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1988-01-01
MAX is a program for the manipulation of FORTRAN master source code (MSC). This is a technique by which one maintains one and only one master copy of a FORTRAN program under a program development system, which for MAX is assumed to be VAX/VMS. The master copy is not intended to be directly compiled. Instead, it must be pre-processed by MAX to produce compilable instances. These instances may correspond to different code versions (for example, double precision versus single precision), different machines (for example, IBM, CDC, Cray) or different operating systems (for example, VAX/VMS versus VAX/UNIX). The advantage of using a master source is more pronounced in complex application programs that are developed and maintained over many years and are to be transported and executed on several computer environments. The version lag problem that plagues many such programs is avoided by this approach. MAX is complemented by several auxiliary programs that perform nonessential functions. The ensemble is collectively known as MAX and Friends. All of these programs, including MAX, are executed as foreign VAX/VMS commands and can easily be hidden in customized VMS command procedures.
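The master-source idea generalizes beyond FORTRAN: one copy carries all variants, and a preprocessing pass selects an instance before compilation. Here is a small C++ sketch of the same pattern using the standard preprocessor (illustrative only; MAX uses its own directives, not #ifdef).

```cpp
#include <cstdio>

// One master copy carries both precision variants; the "instance" is chosen
// at preprocessing time, e.g. with -DDOUBLE_PRECISION on the compile line.
#ifdef DOUBLE_PRECISION
typedef double real_t;
#else
typedef float real_t;
#endif

int main() {
    real_t x = static_cast<real_t>(1.5);
    // varargs promote float to double, so one format string serves both
    std::printf("x = %f (sizeof real_t = %zu)\n",
                static_cast<double>(x), sizeof(real_t));
    return 0;
}
```

Keeping one master copy and deriving instances mechanically is precisely what avoids the version lag the abstract describes.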
Fluid-rock geochemical interaction for modelling calibration in geothermal exploration in Indonesia
NASA Astrophysics Data System (ADS)
Deon, Fiorenza; Barnhoorn, Auke; Lievens, Caroline; Ryannugroho, Riskiray; Imaro, Tulus; Bruhn, David; van der Meer, Freek; Hutami, Rizki; Sibarani, Besteba; Sule, Rachmat; Saptadij, Nenny; Hecker, Christoph; Appelt, Oona; Wilke, Franziska
2017-04-01
Indonesia, with its large but partially unexplored geothermal potential, is one of the most interesting and suitable places in the world to conduct geothermal exploration research. This study focuses on geothermal exploration based on fluid-rock geochemistry/geomechanics and aims to compile an overview of geochemical data and rock properties from important geothermal fields in Indonesia. The research carried out in the field and in the laboratory is performed in the framework of the GEOCAP cooperation (Geothermal Capacity Building program Indonesia-the Netherlands). The application of petrology and geochemistry contributes to a better understanding of areas where operating power plants exist, but also helps in the initial exploration stage of green areas. Because of their relevance and geological setting, geothermal fields in Java, Sulawesi and the sedimentary basin of central Sumatra have been chosen as focus areas of this study. Operators, universities and governmental agencies will benefit from this approach as it will be applied also to new green-field terrains. By comparing the characteristics of the fluids, the alteration petrology and the rock geochemistry, we also aim to contribute to an overview of the geochemistry of the important geothermal fields in Indonesia. At the same time, the rock petrology and fluid geochemistry will be used as input data to model the reservoir fluid composition, along with T-P parameters, with the geochemical workbench PHREEQC. The field and laboratory data are mandatory for both the implementation and validation of the model results.
A portable approach for PIC on emerging architectures
NASA Astrophysics Data System (ADS)
Decyk, Viktor
2016-03-01
A portable approach for designing Particle-in-Cell (PIC) algorithms on emerging exascale computers is based on the recognition that 3 distinct programming paradigms are needed. They are: low-level vector (SIMD) processing, middle-level shared-memory parallel programming, and high-level distributed-memory programming. In addition, there is a memory hierarchy associated with each level. Such algorithms can be initially developed using vectorizing compilers, OpenMP, and MPI. This is the approach recommended by Intel for the Phi processor. These algorithms can then be translated and possibly specialized to other programming models and languages, as needed. For example, the vector processing and shared memory programming might be done with CUDA instead of vectorizing compilers and OpenMP, but generally the algorithm itself is not greatly changed. The UCLA PICKSC web site at http://www.idre.ucla.edu/ contains example open source skeleton codes (mini-apps) illustrating each of these three programming models, individually and in combination. Fortran 2003 now supports abstract data types, and design patterns can be used to support a variety of implementations within the same code base. Fortran 2003 also supports interoperability with C, so that implementations in C languages are also easy to use. Finally, main codes can be translated into dynamic environments such as Python, while still taking advantage of high-performance compiled languages. Parallel languages are still evolving, with interesting developments in Co-Array Fortran, UPC, and OpenACC, among others, and these can also be supported within the same software architecture. Work supported by NSF and DOE Grants.
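A skeleton of the three-level decomposition described above, in C++ with MPI, OpenMP threads, and OpenMP SIMD (a sketch of the structure only, not the PICKSC codes; the block size and the trivial particle push are placeholders).

```cpp
#include <algorithm>
#include <cstdio>
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    // Level 1: distributed memory across nodes (MPI ranks).
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::vector<double> x(1 << 20, 1.0), v(1 << 20, 0.5);
    const double dt = 1e-3;

    // Level 2: shared-memory threads over particle blocks.
    #pragma omp parallel for
    for (long b = 0; b < (long)x.size(); b += 4096) {
        long end = std::min(b + 4096, (long)x.size());
        // Level 3: SIMD lanes over one block (placeholder particle push).
        #pragma omp simd
        for (long i = b; i < end; ++i)
            x[i] += dt * v[i];
    }

    // Boundary/field exchange between ranks (e.g., MPI_Sendrecv) would go
    // here in a real PIC step.
    MPI_Finalize();
    if (rank == 0) std::printf("done\n");
    return 0;
}
```

Each level also maps onto one tier of the memory hierarchy mentioned in the abstract: registers and vector lanes, shared caches/DRAM within a node, and the network between nodes.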
2012-2013 Federal Pell Grant Program End-of-Year Report
ERIC Educational Resources Information Center
Office of Postsecondary Education, US Department of Education, 2013
2013-01-01
The Federal Pell Grant End-of-Year Report presents primary aspects of Federal Pell Grant Program activity for the 2012-2013 award year. This presentation is a compilation of quantitative program data assembled to offer insights into the changes to the Title IV applicant universe and the Federal Pell Grant Program. The Federal Pell Grant…
ERIC Educational Resources Information Center
Westheimer, Miriam, Ed.
Begun in Israel in 1960, the HIPPY (Home Instruction for Parents of Preschool Youngsters) program is a family support, parent-focused, early childhood literacy program. This book compiles 17 evaluation studies of the program, from researchers and practitioners in 7 countries. The studies are organized around five themes: exploring theoretical…
2011-2012 Federal Pell Grant Program End-of-Year Report
ERIC Educational Resources Information Center
Office of Postsecondary Education, US Department of Education, 2012
2012-01-01
The Federal Pell Grant End-of-Year Report presents primary aspects of Federal Pell Grant Program activity for the 2011-2012 award year. This presentation is a compilation of quantitative program data assembled to offer insights into the changes to the Title IV applicant universe and the Federal Pell Grant Program. The Federal Pell Grant…
ERIC Educational Resources Information Center
Danaher, Joan; Armijo, Caroline; Kraus, Robert; Festa, Cathy
This directory describes approximately 300 discretionary projects addressing the early childhood provisions of the Individuals with Disabilities Education Act (IDEA). It was compiled from four volumes separately published by the ERIC/OSEP Special Project. The discretionary grants and contracts authorized by the 1997 Amendments to the IDEA are…
Industrial Automation Mechanic Model Curriculum Project. Final Report.
ERIC Educational Resources Information Center
Toledo Public Schools, OH.
This document describes a demonstration program that developed secondary level competency-based instructional materials for industrial automation mechanics. Program activities included task list compilation, instructional materials research, learning activity packet (LAP) development, construction of lab elements, system implementation,…
Beal, Jacob; Lu, Ting; Weiss, Ron
2011-01-01
The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high level specification is compiled, using a regulatory motif based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes (~ 50%) and latency of the optimized engineered gene networks. Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems.
Usability Issues in the Design of Novice Programming Systems,
1996-08-01
lists this as a design principle for novice programming environments. In traditional compiled languages, beginners are also confused by the need to...programming task external knowledge that might interfere with correct understanding of the language. Most beginner programming errors can be...language for text editing, but [Curtis 1988] found that a textual pseudocode and graphical flowcharts were both better than natural language in program
Suggested revisions to the annual highway safety work program in Virginia.
DOT National Transportation Integrated Search
1976-01-01
This paper describes some suggested revisions in the format of and method and procedures for compiling the Annual Highway Safety Work Program (AHSWP) required of the states by the National Highway Traffic Safety Administration (NHTSA). Prior to fisca...
Handbook for Citizenship Programs.
ERIC Educational Resources Information Center
Minnesota Literacy Council, St. Paul.
The handbook is a compilation of current instructional and legal materials for teachers and planners to use in developing citizenship programs. Sections address these topics: the citizenship/naturalization process (English waivers and exemptions, potential problems, and advantages and disadvantages of obtaining citizenship); applying for…
What Is a Programming Language?
ERIC Educational Resources Information Center
Wold, Allen
1983-01-01
Explains what a computer programming language is in general, the differences between machine language, assembler languages, and high-level languages, and the functions of compilers and interpreters. High-level languages mentioned in the article are: BASIC, FORTRAN, COBOL, PILOT, LOGO, LISP, and SMALLTALK. (EAO)
HAL/S-FC and HAL/S-360 compiler system program description
NASA Technical Reports Server (NTRS)
1976-01-01
The compiler is a large multi-phase design and can be broken into four phases: Phase 1 inputs the source language and does a syntactic and semantic analysis generating the source listing, a file of instructions in an internal format (HALMAT) and a collection of tables to be used in subsequent phases. Phase 1.5 massages the code produced by Phase 1, performing machine independent optimization. Phase 2 inputs the HALMAT produced by Phase 1 and outputs machine language object modules in a form suitable for the OS-360 or FCOS linkage editor. Phase 3 produces the SDF tables. The four phases described are written in XPL, a language specifically designed for compiler implementation. In addition to the compiler, there is a large library containing all the routines that can be explicitly called by the source language programmer plus a large collection of routines for implementing various facilities of the language.
Tectonic evaluation of the Nubian shield of Northeastern Sudan using thematic mapper imagery
NASA Technical Reports Server (NTRS)
1986-01-01
Bechtel is nearing completion of a one-year program that uses digitally enhanced LANDSAT Thematic Mapper (TM) data to compile the first comprehensive regional tectonic map of the Proterozoic Nubian Shield exposed in the northern Red Sea Hills of northeastern Sudan. The status of the significant objectives of this study is given. Pertinent published and unpublished geologic literature and maps of the northern Red Sea Hills were reviewed to establish the geologic framework of the region. Thematic Mapper imagery was processed for optimal base-map enhancements. Photo mosaics of enhanced images to serve as base maps for compilation of geologic information were completed. Interpretation of TM imagery to define and delineate structural and lithologic provinces was completed. Geologic information (petrologic and radiometric data) from the literature review was compiled onto base-map overlays. Evaluation of the tectonic evolution of the Nubian Shield based on the image interpretation and the compiled tectonic maps is continuing.
Use of a moss biomonitoring method to compile emission inventories for small-scale industries.
Varela, Z; Aboal, J R; Carballeira, A; Real, C; Fernández, J A
2014-06-30
We used a method of detecting small-scale pollution sources (DSSP) that involves measurement of the concentrations of elements in moss tissues, with the following aims: (i) to determine any common qualitative patterns of contaminant emissions for individual industrial sectors, (ii) to compare any such patterns with previously described patterns, and (iii) to compile an inventory of the metals and metalloids emitted by the industries considered. Cluster analysis revealed that there were no common patterns of emission associated with the industrial sectors, probably because of differences in production processes and in the types of fuel and raw materials. However, when these variables were shared by different factories, the concentrations of the elements in moss tissues enabled the factories to be grouped according to their emissions. We compiled a list of the metals and metalloids emitted by the factories under study and found that the DSSP method was satisfactory for this purpose in most cases (53 of 56). The method appears to be a useful tool for compiling contaminant inventories; it may also be useful for determining the efficacy of technical improvements aimed at reducing the industrial emission of contaminants and could be incorporated in environmental monitoring and control programmes.
CFAE: The Casebook. Aid-to-Education Programs of Leading Business Concerns.
ERIC Educational Resources Information Center
Council for Financial Aid to Education, New York, NY.
Details of the aid-to-education programs of leading companies are compiled, revealing profiles of corporate purposes and policies in educational support and the principal types of support mechanisms being used to reflect corporate interests. Ways in which diverse corporate interests can be accommodated by program structure are shown. The casebook…
US Directory of Foreign Language Education Programs.
ERIC Educational Resources Information Center
Grosse, Christine Uber
The preparation of a directory of foreign language education programs was a response to the lack of an information source for location or curricular content of programs in foreign language pedagogy, and followed the lead of other associations in the United States and abroad in compiling such information. Despite having developed guidelines for…
Reports of planetary geology and geophysics program, 1988
NASA Technical Reports Server (NTRS)
Holt, Henry E. (Editor)
1989-01-01
This is a compilation of abstracts of reports from Principal Investigators of NASA's Planetary Geology and Geophysics Program, Office of Space Science and Applications. The purpose is to document in summary form research work conducted in this program during 1988. Each report reflects significant accomplishments within the area of the author's funded grant or contract.
Guidelines for the Podiatrist in the School Health Program.
ERIC Educational Resources Information Center
Pigg, R. Morgan, Jr.
1978-01-01
These guidelines were compiled to provide a model for integrating the services of the podiatrist into the health program of the school. The guidelines are intended to enable the podiatrist to supplement or complement the services of other medical specialists involved in the school health program. The scope of the guidelines encompasses the…
NASA Technical Reports Server (NTRS)
1988-01-01
A compilation of papers presented at this conference is given. The science dealing with materials and fluids and with fundamental studies in physics and chemistry in a low gravity environment is examined. Program assessments are made along with directions for progress in the future use of the space shuttle program.
How To: Evaluate Education Programs
ERIC Educational Resources Information Center
Fink, Arlene; Kosecoff, Jacqueline
This book presents a compilation of 28 issues of the newsletter, "How To Evaluate Education Programs" from the first one published in September 1977 through the issue of December 1979 on the topic of evaluating educational programs. The subject is covered in the following chapters: (1) How to Choose a Test; (2) How to Rate and Compare…
Teaching Evaluation: A Student-Run Consulting Firm
ERIC Educational Resources Information Center
Cundiff, Nicole; Nadler, Joel; Scribner, Shauna
2011-01-01
Applied Research Consultants (ARC) is a graduate student run consulting firm that provides experience to students in evaluation and consultation. An overview of this program has been compiled in order to serve as a model of a graduate training practicum that could be applied to similar programs or aid in the development of such programs. Key…
Reports of Planetary Geology and Geophysics Program, 1986
NASA Technical Reports Server (NTRS)
1987-01-01
Abstracts compiled from reports from Principal Investigators of the NASA Planetary Geology and Geophysics Program, Office of Space Science and Applications are presented. The purpose is to document in summary form work conducted in this program during 1986. Each report reflects significant accomplishments within the area of the author's funded grant or contract.
Peer Tutoring: A Guide to Program Design. Research and Development Series No. 260.
ERIC Educational Resources Information Center
Ashley, William L.; And Others
This publication presents guidelines for planning, implementing, and evaluating a peer tutoring program within a vocational setting. Chapter 1 discusses benefits of peer tutoring and presents a compilation of guidelines, suggestions, and examples for planning, developing, and evaluating a peer tutoring program. Tasks in each area--program…
Vocational Career Guide for Connecticut. Revised Edition--1975.
ERIC Educational Resources Information Center
University Research Inst. of Connecticut, Inc., Wallingford.
A guide to career training programs below the baccalaureate level in Connecticut was compiled from a survey of all schools offering identifiable programs of formal education for careers. Intended as a tool to assist students and guidance counselors in learning about the schools and programs, the guide does not recommend any specific schools or…
Reports of planetary geology and geophysics program, 1987
NASA Technical Reports Server (NTRS)
1988-01-01
This is a compilation of abstracts of reports from Principal Investigators of NASA's Planetary Geology and Geophysics Program, Office of Space Science and Applications. The purpose is to document in summary form research work conducted in this program during 1987. Each report reflects significant accomplishments in the area of the author's funded grant or contract.
Program Manual for Estimating Use and Related Statistics on Developed Recreation Sites
Gary L. Tyre; Gene R. Welch
1972-01-01
This manual includes documentation of four computer programs and supporting subroutines for estimating use, visitor origin, patterns of use, and occupancy rates at developed recreation sites. The programs are written in Fortran IV and should be easily adapted to any computer arrangement having the capacity to compile this language.
ERIC Educational Resources Information Center
Denton, Jon J.; Davis, Trina J.; Capraro, Robert M.; Smith, Ben L.; Beason, Lynn; Graham, B. Diane; Strader, R. Arlen
2007-01-01
The purpose of this research was to determine whether particular biographic and academic characteristics would predict whether an applicant would matriculate into and successfully complete an online secondary teacher certification program for Texas public schools. Extensive biographic data on applicants were compiled into a program data base…
Implementation of a Compiler for the Functional Programming Language PHI.
1987-06-01
Chapter Three. In his acceptance speech for the 1977 ACM Turing Award, Backus criticized traditional programming languages and programming styles. He went...
NASA Astrophysics Data System (ADS)
Vukics, András
2012-06-01
C++QED is a versatile framework for simulating open quantum dynamics. It allows arbitrarily complex quantum systems to be built from elementary free subsystems and interactions, and their time evolution to be simulated with the available time-evolution drivers. Through this framework, we introduce a design which should be generic for high-level representations of composite quantum systems. It relies heavily on the object-oriented and generic programming paradigms on one hand, and on the other hand on compile-time algorithms, in particular C++ template-metaprogramming techniques. The core of the design is the data structure which represents the state vectors of composite quantum systems. This data structure models the multi-array concept. The use of template metaprogramming is not only crucial to the design, but with its use all computations pertaining to the layout of the simulated system can be shifted to compile time, hence cutting runtime.
Program summary
Program title: C++QED. Catalogue identifier: AELU_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELU_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: http://cpc.cs.qub.ac.uk/licence/aelu_v1_0.html. The C++QED package contains other software packages, Blitz, Boost and FLENS, all of which may be distributed freely but have individual license requirements; please see the individual packages for license conditions. No. of lines in distributed program, including test data, etc.: 597 974. No. of bytes in distributed program, including test data, etc.: 4 874 839. Distribution format: tar.gz. Programming language: C++. Computer: i386-i686, x86_64. Operating system: in principle cross-platform; as yet tested only on UNIX-like systems (including Mac OS X). RAM: the framework itself takes about 60 MB, which is fully shared. The additional memory taken by the program which defines the actual physical system (script) is typically less than 1 MB. The memory storing the actual data scales with the system dimension for state-vector manipulations, and with the square of the dimension for density-operator manipulations. This might easily be GBs, and often the memory of the machine limits the size of the simulated system. Classification: 4.3, 4.13, 6.2, 20. External routines: Boost C++ libraries (http://www.boost.org/), GNU Scientific Library (http://www.gnu.org/software/gsl/), Blitz++ (http://www.oonumerics.org/blitz/), Linear Algebra Package - Flexible Library for Efficient Numerical Solutions (http://flens.sourceforge.net/). Nature of problem: definition of (open) composite quantum systems out of elementary building blocks [1]; manipulation of such systems, with emphasis on dynamical simulations such as Master-equation evolution [2] and Monte Carlo wave-function simulation [3]. Solution method: Master equation, Monte Carlo wave-function method. Restrictions: total dimensionality of the system (Master equation: a few thousand; Monte Carlo wave-function trajectory: several million). Unusual features: because of the heavy use of compile-time algorithms, compilation of programs written in the framework may take a long time and much memory (up to several GBs). Additional comments: the framework is not a program, but provides and implements an application-programming interface for developing simulations in the indicated problem domain. Supplementary information: http://cppqed.sourceforge.net/. Running time: depending on the magnitude of the problem, can vary from a few seconds to weeks.
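To illustrate the flavor of shifting layout computation to compile time (not the actual C++QED API), here is a C++17 sketch in which the total Hilbert-space dimension of a composite system is computed entirely by the compiler.

```cpp
#include <cstddef>
#include <iostream>

// The total dimension of a composite system is the product of the subsystem
// dimensions; a fold expression lets the compiler evaluate it, so layout
// decisions that depend on it cost nothing at runtime.
template <std::size_t... Dims>
struct Composite {
    static constexpr std::size_t dimension = (Dims * ...);
};

int main() {
    // A two-level atom coupled to two cavity modes truncated at 16 photons.
    using System = Composite<2, 16, 16>;
    static_assert(System::dimension == 512, "computed at compile time");
    std::cout << System::dimension << '\n';
    return 0;
}
```

Compile with a C++17-capable compiler, e.g. g++ -std=c++17.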
Migration of legacy mumps applications to relational database servers.
O'Kane, K C
2001-07-01
An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating-system-independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry-standard, networked, relational database management servers (RDBMS), thus freeing Mumps applications from dependence upon vendor-specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.
ERIC Educational Resources Information Center
Danaher, Joan; Armijo, Caroline; Hipps, Cherie; Kraus, Robert
2004-01-01
This directory contains 262 discretionary projects addressing the early childhood provisions of the Individuals with Disabilities Education Act (IDEA). It was compiled from four volumes separately published by the ERIC/OSEP Special Project. The discretionary grants and contracts authorized by the 1997 Amendments to the IDEA are administered by the…
ONRASIA Scientific Information Bulletin, Volume 16, Number 1
1991-03-01
...be expressed naturally in an algebraic language such as Fortran, and hence the programs produced by these efforts... years developing vectorizing compilers for Hitachi... an iterative scheme to solve the problem, which is quite natural to do in... allows differential equations to be expressed in a natural mathematical syntax... on the plate, with θ=1 at the outside... to compile into efficient vectorizable...
Multiparadigm Design Environments
1992-01-01
following results: 1. New methods for programming in terms of conceptual models; 2. Design of object-oriented languages; 3. Compiler optimization and...experimented with object-based methods for programming directly in terms of conceptual models, object-oriented language design, computer program...expect these results to have a strong influence on future...
ERIC Educational Resources Information Center
Bitsko, Suzanne; And Others
A compilation and categorization of adult and child interests in the various educational activities and programs of the Brandywine School District (Michigan), the study has implications for improvement of the Brandywine adult education program. A lack of participation in the adult education program has created a need for revision. A questionnaire…
ERIC Educational Resources Information Center
Bandele, Samuel Oye; Adekunle, Adeyemi Suraju
2015-01-01
The study was conducted to design, develop, and test a C++ application program, CAP-QUAD, for solving quadratic equations in elementary schools in Nigeria. The package was developed in C++ using the object-oriented programming paradigm; another computer program utilized during the development process was the Dev-C++ compiler, which was used for…
On program restructuring, scheduling, and communication for parallel processor systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polychronopoulos, Constantine D.
1986-08-01
This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into a parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.
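Loop coalescing, one of the two restructuring techniques named above, flattens a loop nest into a single loop so that iterations form one pool for scheduling across processors. A C++ sketch of the transformation (illustrative, not Parafrase output):

```cpp
// Loop coalescing: a doubly nested loop is flattened into a single loop;
// the original indices are recovered from the single counter, and the
// n*m iterations can then be scheduled as one pool.
void scale(double* a, int n, int m, double s) {
    // Original nest:
    //   for (int i = 0; i < n; ++i)
    //     for (int j = 0; j < m; ++j)
    //       a[i * m + j] *= s;
    // Coalesced form:
    for (long k = 0; k < (long)n * m; ++k) {
        int i = (int)(k / m);   // recovered row index
        int j = (int)(k % m);   // recovered column index
        a[i * m + j] *= s;
    }
}
```

With a single iteration space, a dynamic scheduler can hand out chunks of k without worrying about uneven inner-loop bounds.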
NASA Technical Reports Server (NTRS)
Agrawal, Gagan; Sussman, Alan; Saltz, Joel
1993-01-01
Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion is described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented. The library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results to demonstrate the efficacy of our approach are presented. A multiblock Navier-Stokes solver template and a multigrid code were used in the experiments. Our experimental results show that our primitives have low runtime communication overheads. Further, the compiler-parallelized codes perform within 20 percent of the code parallelized by manually inserting calls to the runtime library.
OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation
NASA Astrophysics Data System (ADS)
Young-S., Luis E.; Muruganandam, Paulsamy; Adhikari, Sadhan K.; Lončar, Vladimir; Vudragović, Dušan; Balaž, Antun
2017-11-01
We present an Open Multi-Processing (OpenMP) version of Fortran 90 programs for solving the Gross-Pitaevskii (GP) equation for a Bose-Einstein condensate in one, two, and three spatial dimensions, optimized for use with GNU and Intel compilers. We use the split-step Crank-Nicolson algorithm for imaginary- and real-time propagation, which enables efficient calculation of stationary and non-stationary solutions, respectively. The present OpenMP programs are designed for computers with multi-core processors and optimized for compiling with both the commercially-licensed Intel Fortran and the popular free open-source GNU Fortran compiler. The programs are easy to use and are elaborated with helpful comments for the users. All input parameters are listed at the beginning of each program. Different output files provide physical quantities such as energy, chemical potential, root-mean-square sizes, densities, etc. We also present speedup test results for new versions of the programs. Program files doi: http://dx.doi.org/10.17632/y8zk3jgn84.2 Licensing provisions: Apache License 2.0 Programming language: OpenMP GNU and Intel Fortran 90. Computer: Any multi-core personal computer or workstation with the appropriate OpenMP-capable Fortran compiler installed. Number of processors used: All available CPU cores on the executing computer. Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1888; ibid. 204 (2016) 209. Does the new version supersede the previous version?: Not completely. It does supersede the previous Fortran programs from both references above, but not the OpenMP C programs from Comput. Phys. Commun. 204 (2016) 209. Nature of problem: The present Open Multi-Processing (OpenMP) Fortran programs, optimized for use with commercially-licensed Intel Fortran and free open-source GNU Fortran compilers, solve the time-dependent nonlinear partial differential (GP) equation for a trapped Bose-Einstein condensate in one (1d), two (2d), and three (3d) spatial dimensions for six different trap symmetries: axially and radially symmetric traps in 3d, circularly symmetric traps in 2d, fully isotropic (spherically symmetric) and fully anisotropic traps in 2d and 3d, as well as 1d traps, where no spatial symmetry is considered. Solution method: We employ the split-step Crank-Nicolson algorithm to discretize the time-dependent GP equation in space and time. The discretized equation is then solved by imaginary- or real-time propagation, employing adequately small space and time steps, to yield the solution of stationary and non-stationary problems, respectively. Reasons for the new version: Previously published Fortran programs [1,2] have now become popular tools [3] for solving the GP equation. These programs have been translated to the C programming language [4] and later extended to the more complex scenario of dipolar atoms [5]. Now virtually all computers have multi-core processors and some have motherboards with more than one physical central processing unit (CPU), which may increase the number of available CPU cores on a single computer to several tens. The C programs have been adapted to run very fast on such multi-core modern computers using general-purpose graphics processing units (GPGPU) with Nvidia CUDA and computer clusters using the Message Passing Interface (MPI) [6]. Nevertheless, previously developed Fortran programs are also commonly used for scientific computation, and most of them use a single CPU core at a time on modern multi-core laptops, desktops, and workstations.
Unless the Fortran programs are made aware of and capable of making efficient use of the available CPU cores, the solution of even a realistic dynamical 1d problem, not to mention the more complicated 2d and 3d problems, could be time consuming using the Fortran programs. Previously, we published auto-parallel Fortran programs [2] suitable for the Intel (but not GNU) compiler for solving the GP equation. Hence, the need for a fully OpenMP version of the Fortran programs to reduce the execution time cannot be overemphasized. To address this issue, we provide here such OpenMP Fortran programs, optimized for both the Intel and GNU Fortran compilers and capable of using all available CPU cores, which can significantly reduce the execution time. Summary of revisions: The previous Fortran programs [1] for solving the time-dependent GP equation in 1d, 2d, and 3d with different trap symmetries have been parallelized using the OpenMP interface to reduce the execution time on multi-core processors. Six different trap symmetries are considered, resulting in six programs for imaginary-time propagation and six for real-time propagation, for a total of 12 programs included in the BEC-GP-OMP-FOR software package. All input data (number of atoms, scattering length, harmonic oscillator trap length, trap anisotropy, etc.) are conveniently placed at the beginning of each program, as before [2]. The present programs introduce a new input parameter, designated Number_of_Threads, which defines the number of CPU cores of the processor to be used in the calculation. If one sets the value 0 for this parameter, all available CPU cores will be used. For the most efficient calculation it is advisable to leave one CPU core unused for the system's background jobs. For example, on a machine with 20 CPU cores, such as the one we used for testing, it is advisable to use up to 19 CPU cores. The total number of used CPU cores can also be divided among more than one job. For instance, one can run three simulations simultaneously using 10, 4, and 5 CPU cores, respectively, for a total of 19 used CPU cores on a 20-core computer. The Fortran source programs are located in the directory src, and can be compiled by the make command using the makefile in the root directory BEC-GP-OMP-FOR of the software package. Examples of the produced output files can be found in the directory output, although some large density files are omitted to save space. The programs calculate the values of the actually used dimensionless nonlinearities from the physical input parameters, where the input parameters correspond to the same nonlinearity values as in the previously published programs [1], so that the output files of the old and new programs can be directly compared. The output files are conveniently named such that their contents can be easily identified, following the naming convention introduced in Ref. [2]. For example, a file named <code>-out.txt, where <code> is the name of the individual program, is the general output file containing input data, time and space steps, nonlinearity, energy, and chemical potential; it was named fort.7 in the old Fortran version of the programs [1]. A file named <code>-den.txt is the output file with the condensate density, which had the names fort.3 and fort.4 in the old Fortran version [1] for the imaginary- and real-time propagation programs, respectively.
Other possible density outputs, such as the initial density, are commented out in the programs to have a simpler set of output files, but users can uncomment and re-enable them, if needed. In addition, there are output files for reduced (integrated) 1d and 2d densities for different programs. In the real-time programs there is also an output file reporting the dynamics of evolution of root-mean-square sizes after a perturbation is introduced. The supplied real-time programs solve the stationary GP equation, and then calculate the dynamics. As the imaginary-time programs are more accurate than the real-time programs for the solution of a stationary problem, one can first solve the stationary problem using the imaginary-time programs, adapt the real-time programs to read the pre-calculated wave function and then study the dynamics. In that case the parameter NSTP in the real-time programs should be set to zero and the space mesh and nonlinearity parameters should be identical in both programs. The reader is advised to consult our previous publication where a complete description of the output files is given [2]. A readme.txt file, included in the root directory, explains the procedure to compile and run the programs. We tested our programs on a workstation with two 10-core Intel Xeon E5-2650 v3 CPUs. The parameters used for testing are given in sample input files, provided in the corresponding directory together with the programs. In Table 1 we present wall-clock execution times for runs on 1, 6, and 19 CPU cores for programs compiled using Intel and GNU Fortran compilers. The corresponding columns "Intel speedup" and "GNU speedup" give the ratio of wall-clock execution times of runs on 1 and 19 CPU cores, and denote the actual measured speedup for 19 CPU cores. In all cases and for all numbers of CPU cores, although the GNU Fortran compiler gives excellent results, the Intel Fortran compiler turns out to be slightly faster. Note that during these tests we always ran only a single simulation on a workstation at a time, to avoid any possible interference issues. Therefore, the obtained wall-clock times are more reliable than the ones that could be measured with two or more jobs running simultaneously. We also studied the speedup of the programs as a function of the number of CPU cores used. The performance of the Intel and GNU Fortran compilers is illustrated in Fig. 1, where we plot the speedup and actual wall-clock times as functions of the number of CPU cores for 2d and 3d programs. We see that the speedup increases monotonically with the number of CPU cores in all cases and has large values (between 10 and 14 for 3d programs) for the maximal number of cores. This fully justifies the development of OpenMP programs, which enable much faster and more efficient solving of the GP equation. However, a slow saturation in the speedup with the further increase in the number of CPU cores is observed in all cases, as expected. The speedup tends to increase for programs in higher dimensions, as they become more complex and have to process more data. This is why the speedups of the supplied 2d and 3d programs are larger than those of 1d programs. Also, for a single program the speedup increases with the size of the spatial grid, i.e., with the number of spatial discretization points, since this increases the amount of calculations performed by the program. To demonstrate this, we tested the supplied real2d-th program and varied the number of spatial discretization points NX=NY from 20 to 1000. 
The measured speedup obtained when running this program on 19 CPU cores as a function of the number of discretization points is shown in Fig. 2. The speedup first increases rapidly with the number of discretization points and eventually saturates. Additional comments: Example inputs provided with the programs take less than 30 minutes to run on a workstation with two Intel Xeon E5-2650 v3 processors (2 QPI links, 10 CPU cores, 25 MB cache, 2.3 GHz).
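The parallelization pattern described above is easy to picture: in the split-step Crank-Nicolson scheme, the tridiagonal systems along one direction are mutually independent, so they can be distributed across threads. The published BEC-GP-OMP-FOR codes are Fortran 90; the sketch below is only an illustrative C++/OpenMP analogue (all names are hypothetical, not taken from the package), including a Number_of_Threads-style parameter where 0 means "use all available cores".

    #include <omp.h>
    #include <vector>

    // Thomas algorithm for one tridiagonal system (a: sub-, b: main, c: super-diagonal).
    static void thomas_solve(std::vector<double>& a, std::vector<double>& b,
                             std::vector<double>& c, std::vector<double>& d) {
        const int n = static_cast<int>(d.size());
        for (int i = 1; i < n; ++i) {               // forward elimination
            const double w = a[i] / b[i - 1];
            b[i] -= w * c[i - 1];
            d[i] -= w * d[i - 1];
        }
        d[n - 1] /= b[n - 1];                       // back substitution
        for (int i = n - 2; i >= 0; --i)
            d[i] = (d[i] - c[i] * d[i + 1]) / b[i];
    }

    // One Crank-Nicolson sweep along x of a 2d grid; rows are independent systems.
    void cn_sweep_x(std::vector<std::vector<double>>& psi, int num_threads) {
        if (num_threads > 0)
            omp_set_num_threads(num_threads);       // 0 = keep OpenMP's default (all cores)
        const int ny = static_cast<int>(psi.size());
        #pragma omp parallel for schedule(static)
        for (int j = 0; j < ny; ++j) {
            const int nx = static_cast<int>(psi[j].size());
            std::vector<double> a(nx, -0.5), b(nx, 2.0), c(nx, -0.5);  // toy coefficients
            thomas_solve(a, b, c, psi[j]);          // psi[j] holds the RHS and is overwritten
        }
    }

The speedup behavior reported above follows from this structure: more rows and more grid points mean more independent work per thread, so larger grids and higher dimensions amortize the threading overhead better.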
Retargeting of existing FORTRAN program and development of parallel compilers
NASA Technical Reports Server (NTRS)
Agrawal, Dharma P.
1988-01-01
The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The models and strategies used in the compiler development are: a flexible granularity model, which allows a compromise between two extreme granularity models; a communication model, which is capable of precisely describing interprocessor communication timings and patterns; a loop type detection strategy, which identifies different types of loops; a critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with associated communication costs; and a loop allocation strategy, which realizes optimal overlap between computation and communication in the system. Using these models, several sample routines of the AIR3D package are examined and tested. The automatically generated codes are highly parallelized to provide the maximum degree of parallelism, obtaining speedups on systems of up to 28 to 32 processors. A comparison of parallel codes for both the existing and the proposed communication models is performed, and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well toward completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.
1987-05-06
Rational. Rational Environment A_9_5_2. Rational Architecture (R1000 Model 200)... validation testing performed on the Rational Environment, A_9_5_2, using Version 1.8 of the Ada Compiler Validation Capability (ACVC). The Rational Environment is hosted on a Rational Architecture (R1000 Model 200) operating under Rational Environment, Release A_9_5_2. Programs processed by this
Accelerating semantic graph databases on commodity clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morari, Alessandro; Castellana, Vito G.; Haglin, David J.
We are developing a full software system for accelerating semantic graph databases on commodity clusters that scales to hundreds of nodes while maintaining constant query throughput. Our framework comprises a SPARQL-to-C++ compiler, a library of parallel graph methods, and a custom multithreaded runtime layer, which provides a Partitioned Global Address Space (PGAS) programming model with fork/join parallelism and automatic load balancing over commodity clusters. We present preliminary results for the compiler and for the runtime.
Energygrams: Brief descriptions of energy technology
NASA Astrophysics Data System (ADS)
Simpson, W. F., Jr.
This compilation of technical notes (called Energygrams) is published by the Technical Information Center. Energygrams are usually one-page illustrated bulletins describing DOE technology or data and telling how to obtain the technical reports or other material on which they are based. Frequently a personal contact is given who can provide program information in addition to the data found in the reports. The compilation is organized by subject categories, and, within each category, Energygrams are presented alphabetically by Energygram title.
Rapid Prototyping of Application Specific Signal Processors (RASSP)
1993-12-23
Compilers 2-9 - Cadre Teamwork 2-13 - CodeCenter (CenterLine) 2-15 - dbx/dbxtool (UNIX) 2-17 - Falcon (Mentor) 2-19 - FrameMaker (Frame Tech) 2-21 - gprof... UNIX C debuggers; Falcon, Mentor ECAD framework; FrameMaker, Frame Tech word processing; gcc, GNU C/C++ compiler; gprof, GNU software profiling tool... an organization can put its own documentation on-line using the BOLD Composer for FrameMaker. The AMPLE programming language is a C-like language used for
Cables and connectors: A compilation
NASA Technical Reports Server (NTRS)
1974-01-01
A compilation is presented that reflects the uses, adaptation, maintenance, and service innovations derived from problem solutions in the space R&D programs, both in-house and by NASA and AEC contractors. The data cover: (1) technology relevant to the employment of flat conductor cables and their adaptation to and within conventional systems, (2) connectors and various adaptations, and (3) maintenance and service technology, and shop hints useful in the installation and care of cables and connectors.
Neuro-Linguistic Programming: The New Eclectic Therapy.
ERIC Educational Resources Information Center
Betts, Nicoletta C.
Richard Bandler and John Grinder developed neuro-linguistic programming (NLP) after observing "the magical skills of potent psychotherapists" Frederick Perls, Virginia Satir, and Milton Erickson. They compiled the most effective techniques for building rapport, gathering data, and influencing change in psychotherapy, offering them only as…
BIBLIOGRAPHY OF TRAINING AIDS.
ERIC Educational Resources Information Center
MCKEONE, CHARLES J.
THIS COMPILATION OF INSTRUCTIONAL AIDS FOR USE IN AIR-CONDITIONING AND REFRIGERATION TRAINING PROGRAMS CONTAINS LISTS OF VISUAL AND AUDIOVISUAL TRAINING AIDS AND GUEST LECTURERS AVAILABLE FROM MEMBER COMPANIES OF THE AIR-CONDITIONING AND REFRIGERATION INSTITUTE AS AN INDUSTRY SERVICE TO SCHOOL OFFICIALS INTERESTED IN CONDUCTING SUCH PROGRAMS. THE…
The Application of Science and Technology to Public Programs.
ERIC Educational Resources Information Center
Feller, Irwin
Conference papers, recommendations, and discussion are compiled, focusing on the complex of problems associated with rapidly expanding urbanization and consequent rural dislocation. Topics exploring the problems included: air and water pollution; program planning and management; solid waste disposal; transportation; housing; crime control; health…
Guide to Financial Aid for American Indian Students.
ERIC Educational Resources Information Center
Thurber, Hanna J., Ed.; Thomason, Timothy C., Ed.
This directory compiles information on college financial aid for American Indian and Alaska Native students. Information is provided on approximately 175 programs exclusively for American Indian and Alaska Native students, including private scholarships and fellowships, school-specific programs and scholarships, state financial aid, tribal…
Student Rights and Discipline: Policies, Programs, and Procedures.
ERIC Educational Resources Information Center
Moody, Charles D., Ed.; And Others
This compilation of papers from the Program for Educational Opportunity conferences incorporates theoretical, empirical, legal and programmatic perspectives pertinent to the regulation of student behavior in the desegregated setting. The dual challenge of protecting students' rights and teaching socially responsible behavior is explored. The legal…
Institutions Offering Graduate Training in School Psychology: 1973-1974
ERIC Educational Resources Information Center
Bardon, Jack I.; Wenger, Ralph D.
1974-01-01
This compilation of graduate programs in school psychology from 180 institutions in the U.S. and Canada includes: (1) name and address of the institution; (2) responsible administrative unit; (3) degree(s) conferred; (4) type and quantity of financial assistance; and (5) program emphasis. (HMV)
A Comparison of State-Funded Pre-K Programs: Lessons for Indiana
ERIC Educational Resources Information Center
Chesnut, Colleen; Mosier, Gina; Sugimoto, Thomas; Ruddy, Anne-Maree
2017-01-01
In order to inform the Indiana State Board of Education's decision-making on Indiana's On My Way Pre-K Pilot program, researchers at the Center for Evaluation and Education Policy (CEEP) at Indiana University compiled existing data on ten states that have implemented pilot pre-Kindergarten (pre-K) programs and subsequently expanded these programs…
Federal Programs for the Retarded: A Review and Evaluation. Report to the President.
ERIC Educational Resources Information Center
President's Committee on Mental Retardation, Washington, DC.
Reports from 22 federal departments and agencies on their programs related to mental retardation have been compiled for submission to the President's Committee on Mental Retardation for review, analysis, and subsequent action. For each report, the overall mission is given, as well as unit identification, external programs (services or activities),…
ERIC Educational Resources Information Center
Milchus, Norman J.
The Wayne County Pre-Reading Program for Preventing Reading Failure is an individually, diagnostically prescribed, perceptual-cognitive-linguistic development program. The program utilizes the largest compilation of prescriptively coded, reading readiness materials to be assigned prior to and concurrent with first-year reading instruction. The…
Operations analysis (study 2.1). Program listing for the LOVES computer code
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1974-01-01
A listing of the LOVES computer program is presented. The program is coded partially in SIMSCRIPT and FORTRAN. This version of LOVES is compatible with both the CDC 7600 and the UNIVAC 1108 computers. The code has been compiled, loaded, and executed successfully on the EXEC 8 system for the UNIVAC 1108.
A data collection and processing procedure for evaluating a research program
Giuseppe Rensi; H. Dean Claxton
1972-01-01
A set of computer programs compiled for the information processing requirements of a model for evaluating research proposals is described. The programs serve to assemble and store information, periodically update it, and convert it to a form usable for decision-making. Guides for collecting and coding data are explained. The data-processing options available and...
Multiprocessor programming environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M.B.; Fornaro, R.
Programming tools and techniques have been well developed for traditional uniprocessor computer systems. The focus of this research project is on the development of a programming environment for a high speed real time heterogeneous multiprocessor system, with special emphasis on languages and compilers. The new tools and techniques will allow a smooth transition for programmers with experience only on single processor systems.
The Augusta College Humanities Program: Strengthening an Introductory Three-Course Sequence.
ERIC Educational Resources Information Center
American Association of State Colleges and Universities, Washington, DC.
Presented is a compilation of materials concerning the Augusta College Humanities Program in Augusta, Georgia, beginning with a brief description of the program and its background. In 1984, the college began a 2.5-year project to revitalize and strengthen its required sophomore level three course humanities sequence (Greece and Rome, the Middle…
Adult Basic and Secondary Education Program Statistics. Fiscal Year 1976.
ERIC Educational Resources Information Center
Cain, Sylvester H.; Whalen, Barbara A.
Reports submitted to the National Center for Education Statistics provided data for this compilation and tabulation of data on adult participants in U.S. educational programs in fiscal year 1976. In the summary section introducing the charts, it is noted that adult education programs funded under P.L. 91-230 served over 1.6 million persons--an…
Lean and Efficient Software: Whole-Program Optimization of Executables
2015-09-30
libraries. Many levels of library interfaces—where some libraries are dynamically linked and some are provided in binary form only—significantly limit...software at build time. The opportunity: Our objective in this project is to substantially improve the performance, size, and robustness of binary ...executables by using static and dynamic binary program analysis techniques to perform whole-program optimization directly on compiled programs
The PR2D (Place, Route in 2-Dimensions) automatic layout computer program handbook
NASA Technical Reports Server (NTRS)
Edge, T. M.
1978-01-01
Place, Route in 2-Dimensions is a standard cell automatic layout computer program for generating large scale integrated/metal oxide semiconductor arrays. The program was utilized successfully for a number of years in both government and private sectors but until now was undocumented. The compilation, loading, and execution of the program on a Sigma V CP-V operating system is described.
The Nippon Foundation / GEBCO Indian Ocean Bathymetric Compilation Project
NASA Astrophysics Data System (ADS)
Wigley, R. A.; Hassan, N.; Chowdhury, M. Z.; Ranaweera, R.; Sy, X. L.; Runghen, H.; Arndt, J. E.
2014-12-01
The Indian Ocean Bathymetric Compilation (IOBC) project, undertaken by Nippon Foundation / GEBCO Scholars, is focused on building a regional compilation of all publicly available bathymetric data within the Indian Ocean region from 30°N to 60°S and from 10°E to 147°E. One of the objectives of this project is the creation of a network of Nippon Foundation / GEBCO Scholars working together, drawn from the thirty Scholars from fourteen nations bordering the Indian Ocean who have graduated from the Postgraduate Certificate in Ocean Bathymetry (PCOB) training program at the University of New Hampshire. The IOBC project has provided students a working example during their course work and has been used as a basis for student projects during their visits to another laboratory at the end of their academic year. This multi-national, multi-disciplinary project team will continue to build on the skills gained during the PCOB program through additional training. The IOBC is being built using the methodology developed for the International Bathymetric Chart of the Southern Ocean (IBCSO) compilation (Arndt et al., 2013). These skills were transferred, through training workshops, to further support the ongoing development within the scholars' network. This capacity-building project is envisioned to connect other personnel from within all of the participating nations and organizations, resulting in additional capacity-building in this field of multi-resolution bathymetric grid generation in their home communities. An updated regional bathymetric map and grids of the Indian Ocean will be an invaluable tool for all fields of marine scientific research and resource management. In addition, it has implications for increased public safety by offering the best and most up-to-date depth data for modeling regional-scale oceanographic processes such as tsunami-wave propagation, among others.
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1994-01-01
The Simple Tool for Automated Reasoning program (STAR) is an interactive, interpreted programming language for the development and operation of artificial intelligence (AI) application systems. STAR provides an environment for integrating traditional AI symbolic processing with functions and data structures defined in compiled languages such as C, FORTRAN and PASCAL. This type of integration occurs in a number of AI applications including interpretation of numerical sensor data, construction of intelligent user interfaces to existing compiled software packages, and coupling AI techniques with numerical simulation techniques and control systems software. The STAR language was created as part of an AI project for the evaluation of imaging spectrometer data at NASA's Jet Propulsion Laboratory. Programming in STAR is similar to other symbolic processing languages such as LISP and CLIP. STAR includes seven primitive data types and associated operations for the manipulation of these structures. A semantic network is used to organize data in STAR, with capabilities for inheritance of values and generation of side effects. The AI knowledge base of STAR can be a simple repository of records or it can be a highly interdependent association of implicit and explicit components. The symbolic processing environment of STAR may be extended by linking the interpreter with functions defined in conventional compiled languages. These external routines interact with STAR through function calls in either direction, and through the exchange of references to data structures. The hybrid knowledge base may thus be accessed and processed in general by either side of the application. STAR is initially used to link externally compiled routines and data structures. It is then invoked to interpret the STAR rules and symbolic structures. In a typical interactive session, the user enters an expression to be evaluated, STAR parses the input, evaluates the expression, performs any file input/output required, and displays the results. The STAR interpreter is written in the C language for interactive execution. It has been implemented on a VAX 11/780 computer operating under VMS, and the UNIX version has been implemented on a Sun Microsystems 2/170 workstation. STAR has a memory requirement of approximately 200K of 8 bit bytes, excluding externally compiled functions and application-dependent symbolic definitions. This program was developed in 1985.
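The linking mechanism this abstract describes, interpreted code calling externally compiled routines registered with the interpreter, can be pictured with a small sketch. This is a generic illustration in C++, not STAR's actual C interface; the Interpreter, link, and call names are hypothetical.

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    using Value = double;                                   // toy stand-in for STAR's data types
    using ExternalFn = std::function<Value(const std::vector<Value>&)>;

    class Interpreter {
        std::map<std::string, ExternalFn> externals_;       // registry of compiled routines
    public:
        // Register an externally compiled function under a symbolic name.
        void link(const std::string& name, ExternalFn fn) {
            externals_[name] = std::move(fn);
        }
        // Dispatch from interpreted code to the compiled routine by name.
        Value call(const std::string& name, const std::vector<Value>& args) {
            return externals_.at(name)(args);
        }
    };

In this pattern, both sides can exchange references to shared structures, which is the essence of the hybrid knowledge base the abstract mentions.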
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...
2015-05-01
This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
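The recovery step, redirecting control to the start of the idempotent region where the error was detected, can be caricatured with setjmp/longjmp. This is a loose sketch under the assumption that the region really is idempotent (re-executing it is safe); the fault detector below is a stand-in for the paper's acoustic sensors and tail-DMR check, and all names are hypothetical.

    #include <csetjmp>

    static std::jmp_buf region_start;
    static int simulated_faults = 1;   // pretend exactly one transient fault occurs

    static bool error_detected() { return simulated_faults-- > 0; }   // stand-in detector
    static void idempotent_region_body() { /* reads inputs, writes only fresh outputs */ }

    void run_region() {
        setjmp(region_start);              // re-entry point for recovery
        idempotent_region_body();
        if (error_detected())
            longjmp(region_start, 1);      // redirect to region start; idempotence makes re-execution safe
    }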
NASA Technical Reports Server (NTRS)
Saltsman, J. F.
1994-01-01
TS-SRP/PACK is a set of computer programs for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the total strain version of the Strainrange Partitioning (TS-SRP). The user should be thoroughly familiar with the TS-SRP method before attempting to use any of these programs. The document for this program includes a theory manual as well as a detailed user's manual with a tutorial to guide the user in the proper use of TS-SRP. An extensive database has also been developed in a parallel effort. This database is an excellent source of high-temperature, creep-fatigue test data and can be used with other life-prediction methods as well. Five programs are included in TS-SRP/PACK along with the alloy database. The TABLE program is used to print the datasets, which are in NAMELIST format, in a reader friendly format. INDATA is used to create new datasets or add to existing ones. The FAIL program is used to characterize the failure behavior of an alloy as given by the constants in the strainrange-life relations used by the total strain version of SRP (TS-SRP) and the inelastic strainrange-based version of SRP. The program FLOW is used to characterize the flow behavior (the constitutive response) of an alloy as given by the constants in the flow equations used by TS-SRP. Finally, LIFE is used to predict the life of a specified cycle, using the constants characterizing failure and flow behavior determined by FAIL and FLOW. LIFE is written in interpretive BASIC to avoid compiling and linking every time the equation constants are changed. Four out of five programs in this package are written in FORTRAN 77 for IBM PC series and compatible computers running MS-DOS and are designed to read data using the NAMELIST format statement. The fifth is written in BASIC version 3.0 for IBM PC series and compatible computers running MS-DOS version 3.10. The executables require at least 239K of memory and DOS 3.1 or higher. To compile the source, a Lahey FORTRAN compiler is required. Source code modifications will be necessary if the compiler to be used does not support NAMELIST input. Probably the easiest revision to make is to use a list-directed READ statement. The standard distribution medium for this program is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. TS-SRP/PACK was developed in 1992.
Measurement of community empowerment in three community programs in Rapla (Estonia).
Kasmel, Anu; Andersen, Pernille Tanggaard
2011-03-01
Community empowerment approaches have been proven to be powerful tools for solving local health problems. However, the methods for measuring empowerment in the community remain unclear and open to dispute. This study aims to describe how a context-specific community empowerment measurement tool was developed and changes made to three health promotion programs in Rapla, Estonia. An empowerment expansion model was compiled and applied to three existing programs: Safe Community, Drug/HIV Prevention and Elderly Quality of Life. The consensus workshop method was used to create the measurement tool and collect data on the Organizational Domains of Community Empowerment (ODCE). The study demonstrated considerable increases in the ODCE among the community workgroup, which was initiated by community members and the municipality's decision-makers. The increase was within the workgroup, which had strong political and financial support on a national level but was not the community's priority. The program was initiated and implemented by the local community members, and continuous development still occurred, though at a reduced pace. The use of the empowerment expansion model has proven to be an applicable, relevant, simple and inexpensive tool for the evaluation of community empowerment.
Improving robustness and computational efficiency using modern C++
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paterno, M.; Kowalkowski, J.; Green, C.
2014-01-01
For nearly two decades, the C++ programming language has been the dominant programming language for experimental HEP. The publication of ISO/IEC 14882:2011, the current version of the international standard for the C++ programming language, makes available a variety of language and library facilities for improving the robustness, expressiveness, and computational efficiency of C++ code. However, much of the C++ written by the experimental HEP community does not take advantage of the features of the language to obtain these benefits, either due to lack of familiarity with these features or concern that these features must somehow be computationally inefficient. In this paper, we address some of the features of modern C++, and show how they can be used to make programs that are both robust and computationally efficient. We compare and contrast simple yet realistic examples of some common implementation patterns in C, currently-typical C++, and modern C++, and show (when necessary, down to the level of generated assembly language code) the quality of the executable code produced by recent C++ compilers, with the aim of allowing the HEP community to make informed decisions on the costs and benefits of the use of modern C++.
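A flavor of the kind of comparison the authors describe (illustrative only, not the paper's actual examples): the same reduction written against a raw pointer in C style and with a standard algorithm plus RAII ownership in modern C++. Compilers typically generate equivalent code for both, which is the paper's point about efficiency.

    #include <memory>
    #include <numeric>
    #include <vector>

    // C-style: manual indexing over a raw pointer; easy to get bounds wrong.
    double sum_c_style(const double* data, int n) {
        double s = 0.0;
        for (int i = 0; i < n; ++i) s += data[i];
        return s;
    }

    // Modern C++: iterator algorithm, no index bookkeeping, no overrun risk.
    double sum_modern(const std::vector<double>& data) {
        return std::accumulate(data.begin(), data.end(), 0.0);
    }

    int main() {
        // RAII ownership: no manual delete, no leak on early return.
        // (std::make_unique is C++14, a small addition on top of the 2011 standard.)
        auto buf = std::make_unique<std::vector<double>>(1000, 1.0);
        return sum_modern(*buf) == sum_c_style(buf->data(), static_cast<int>(buf->size())) ? 0 : 1;
    }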
Results of Ponseti Brasil Program: Multicentric Study in 1621 Feet: Preliminary Results.
Nogueira, Monica P; Queiroz, Ana C D B F; Melanda, Alessandro G; Tedesco, Ana P; Brandão, Antonio L G; Beling, Claudio; Violante, Francisco H; Brandão, Gilberto F; Ferreira, Laura F A; Brambila, Leandro S; Leite, Leopoldina M; Zabeu, Jose L; Kim, Jung H; Fernandes, Kalyana E; Arima, Marcia A S; Aguilar, Maria D P Q; Farias Filho, Orlando C D; Oliveira Filho, Oscar B D A; Pinho, Solange D S; Moulin, Paulo; Volpi, Reinaldo; Fox, Mark; Greenwald, Miles F; Lyle, Brandon; Morcuende, Jose A
The Ponseti method has been shown to be the most effective treatment for congenital clubfoot. The current challenge is to establish sustainable national clubfoot treatment programs that utilize the Ponseti method and integrate it within a nation's governmental health system. The Brazilian Ponseti Program (Programa Ponseti Brasil) has increased awareness of the utility of the Ponseti method and has trained >500 Brazilian orthopaedic surgeons in it. A group of 18 of those surgeons have been able to reproduce the Ponseti clubfoot treatment and compiled their initial results in a structured spreadsheet. The study compiled 1040 patients for a total of 1621 feet. The average follow-up time was 2.3 years, with an average correction time of approximately 3 months. Patients required an average of 6.40 casts to achieve correction. This study demonstrates that good initial correction rates are reproducible after training; of 1040 patients, only 1.4% required a posteromedial release. Level IV.
National briefing summaries: Nuclear fuel cycle and waste management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, K.J.; Bradley, D.J.; Fletcher, J.F.
Since 1976, the International Program Support Office (IPSO) at the Pacific Northwest Laboratory (PNL) has collected and compiled publicly available information concerning foreign and international radioactive waste management programs. This National Briefing Summaries is a printout of an electronic database that has been compiled and is maintained by the IPSO staff. The database contains current information concerning the radioactive waste management programs (with supporting information on nuclear power and the nuclear fuel cycle) of most of the nations (except eastern European countries) that now have or are contemplating nuclear power, and of the multinational agencies that are active in radioactive waste management. Information in this document is included for three additional countries (China, Mexico, and the USSR) compared to the prior issue. The database and this document were developed in response to needs of the US Department of Energy.
National briefing summaries: Nuclear fuel cycle and waste management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, K.J.; Lakey, L.T.; Silviera, D.J.
The National Briefing Summaries is a compilation of publicly available information concerning the nuclear fuel cycle and radioactive waste management strategies and programs of 21 nations, including the United States, and three international agencies that have publicized their activities in this field. It presents available highlight information with references that may be used by the reader for additional information. The information in this document is compiled primarily for use by the US Department of Energy and other US federal agencies and their contractors to provide summary information on radioactive waste management activities in other countries. This document provides managers and technical staff with an awareness of what is occurring in other countries with regard to strategies, activities, and facilities. The information may be useful in program planning to improve and benefit United States programs through foreign information exchange. Benefits may be derived through a number of foreign information exchange activities.
BEEC: An event generator for simulating the Bc meson production at an e+e- collider
NASA Astrophysics Data System (ADS)
Yang, Zhi; Wu, Xing-Gang; Wang, Xian-You
2013-12-01
The Bc meson is a doubly heavy quark-antiquark bound state that carries flavors explicitly, which provides a fruitful laboratory for testing potential models and understanding the weak decay mechanisms of heavy flavors. In view of the prospects for Bc physics at hadronic colliders such as the Tevatron and the LHC, Bc physics is attracting more and more attention. It has been shown that a high-luminosity e+e- collider running around the Z0 peak is also helpful for studying the properties of the Bc meson and has its own advantages. For this purpose, we have written an event generator for simulating Bc meson production through e+e- annihilation according to the relevant publications. We name it BEEC; it can generate the color-singlet S-wave and P-wave (cb¯)-quarkonium states together with the color-octet S-wave (cb¯)-quarkonium states. BEEC can also be adopted to generate similar charmonium and bottomonium states via the semi-exclusive channels e+ + e- → |(QQ¯)[n]> + Q + Q¯ with Q = b and c, respectively. To increase the simulation efficiency, we make the amplitude as compact as possible by using the improved trace technology. BEEC is a Fortran program written in a PYTHIA-compatible format and in a modular structure; one may conveniently apply it to various situations or experimental environments by building it with the GNU make utility. A method to improve the efficiency of generating unweighted events within the PYTHIA environment is proposed. Moreover, BEEC generates a standard Les Houches Event data file that contains useful information on the meson and its accompanying partons, which can be conveniently imported into PYTHIA for further hadronization and decay simulation. Catalogue identifier: AEQC_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQC_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 114868. No. of bytes in distributed program, including test data, etc.: 963939. Distribution format: tar.gz. Programming language: FORTRAN 77/90. Computer: Any computer with a Fortran compiler; the program has been tested with the GNU Fortran compiler and the Intel Fortran compiler. Operating system: UNIX, Linux and Windows. RAM: About 2.0 MB. Classification: 11.2. Nature of problem: Production of charmonium, (cb¯)-quarkonium and bottomonium via the e+e- annihilation channel around the Z0 peak. Solution method: The production of heavy (QQ¯')-quarkonium (Q,Q'=b,c) via e+e- annihilation is estimated by using the improved trace technology. The (QQ¯')-quarkonium in the color-singlet 1S-wave and 1P-wave states and the color-octet 1S-wave states has been studied within the framework of non-relativistic QCD. The code can generate weighted and unweighted events conveniently; in particular, the unweighted events are generated by using an improved hit-and-miss approach so as to improve the generating efficiency. Restrictions: The generator is aimed at the production of doubly heavy quarkonium through e+e- annihilation at the Z0 peak. The considered processes are those that are associated with two heavy quark jets, which could provide sizable quarkonium events around the Z0 peak. Running time: It depends on which option one chooses to match PYTHIA when generating the heavy quarkonium events.
Typically, for the production of the S-wave quarkonium states, if one sets IDPP=2 (unweighted events), it takes about 2 h on a 2.9 GHz AMD Athlon II X4 635 processor machine to generate 10^5 events; if one sets IDPP=3 (weighted events), it takes only ~16 min to generate 10^5 events. For the production of the P-wave quarkonium states, the time will be almost one hundred times longer than for the S-wave quarkonium.
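The hit-and-miss step mentioned above is, in its standard form, a simple acceptance-rejection: keep a weighted event with probability weight/w_max, so the surviving sample is unweighted. The sketch below shows only that textbook baseline (BEEC's "improved" variant is not reproduced here; the Event type and its field are hypothetical placeholders).

    #include <random>

    struct Event { double weight; /* kinematics omitted */ };

    // Accept a weighted event with probability weight / w_max; the accepted
    // sample is then unweighted. w_max must bound every weight that can occur,
    // otherwise the resulting distribution is distorted.
    bool keep_event(const Event& ev, double w_max, std::mt19937& rng) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        return u(rng) * w_max < ev.weight;
    }

The efficiency of this step is the average weight divided by w_max, which is why unweighted generation (IDPP=2) is so much slower than weighted generation (IDPP=3) in the timings quoted above.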
Optimization strategies for molecular dynamics programs on Cray computers and scalar work stations
NASA Astrophysics Data System (ADS)
Unekis, Michael J.; Rice, Betsy M.
1994-12-01
We present results of timing runs and different optimization strategies for a prototype molecular dynamics program that simulates shock waves in a two-dimensional (2-D) model of a reactive energetic solid. The performance of the program may be improved substantially by simple changes to the Fortran or by employing various vendor-supplied compiler optimizations. The optimum strategy varies among the machines used and will vary depending upon the details of the program. The effect of various compiler options and vendor-supplied subroutine calls is demonstrated. Comparison is made between two scalar workstations (IBM RS/6000 Model 370 and Model 530) and several Cray supercomputers (X-MP/48, Y-MP8/128, and C-90/16256). We find that for a scientific application program dominated by sequential, scalar statements, a relatively inexpensive high-end workstation such as the IBM RS/6000 RISC series will outperform single-processor performance of the Cray X-MP/48 and perform competitively with single-processor performance of the Y-MP8/128 and C-90/16256.
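One example of the "simple source changes" category such studies measure (illustrative only, not taken from the paper): interchanging a loop nest so the innermost loop walks memory with unit stride, which benefits both vector machines like the Crays and cache-based workstations like the RS/6000.

    #include <cstddef>
    #include <vector>

    // Column-walking inner loop: stride-nx accesses defeat vectorization and caching.
    void scale_strided(std::vector<double>& a, int nx, int ny, double c) {
        for (int i = 0; i < nx; ++i)
            for (int j = 0; j < ny; ++j)
                a[static_cast<std::size_t>(j) * nx + i] *= c;
    }

    // Interchanged loops: unit-stride inner accesses, identical result.
    void scale_unit_stride(std::vector<double>& a, int nx, int ny, double c) {
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i)
                a[static_cast<std::size_t>(j) * nx + i] *= c;
    }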
Compiler analysis for irregular problems in FORTRAN D
NASA Technical Reports Server (NTRS)
Vonhanxleden, Reinhard; Kennedy, Ken; Koelbel, Charles; Das, Raja; Saltz, Joel
1992-01-01
We developed a dataflow framework which provides a basis for rigorously defining strategies to make use of runtime preprocessing methods for distributed memory multiprocessors. In many programs, several loops access the same off-processor memory locations. Our runtime support gives us a mechanism for tracking and reusing copies of off-processor data. A key aspect of our compiler analysis strategy is to determine when it is safe to reuse copies of off-processor data. Another crucial function of the compiler analysis is to identify situations which allow runtime preprocessing overheads to be amortized. This dataflow analysis will make it possible to effectively use the results of interprocedural analysis in our efforts to reduce interprocessor communication and the need for runtime preprocessing.
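The runtime preprocessing referred to here follows the inspector/executor pattern: an inspector pass examines the indirection arrays once, records which referenced elements live off-processor, and the resulting communication schedule is reused by every loop that accesses the same data, for as long as the compiler's analysis can show the index array is unchanged. A minimal sketch of the inspector side (hypothetical names, not the paper's actual runtime interface):

    #include <vector>

    struct GatherSchedule {
        std::vector<int> off_proc_indices;   // global indices that must be fetched
    };

    // Inspector: scan the indirection array once; anything outside the locally
    // owned range [lo, hi) will need communication. The schedule remains valid,
    // and can be reused by later loops, until 'indices' is modified.
    GatherSchedule inspect(const std::vector<int>& indices, int lo, int hi) {
        GatherSchedule s;
        for (int g : indices)
            if (g < lo || g >= hi)
                s.off_proc_indices.push_back(g);
        return s;
    }

Deciding when a cached schedule, and the copies of off-processor data it gathered, may safely be reused is exactly the compiler analysis problem the abstract describes.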
Greninger, Mark L.; Klemperer, Simon L.; Nokleberg, Warren J.
1999-01-01
The accompanying directory structure contains a Geographic Information Systems (GIS) compilation of geophysical, geological, and tectonic data for the Circum-North Pacific. This area includes the Russian Far East, Alaska, the Canadian Cordillera, linking continental shelves, and adjacent oceans. This GIS compilation extends from 120°E to 115°W, and from 40°N to 80°N. This area encompasses: (1) to the south, the modern Pacific plate boundary of the Japan-Kuril and Aleutian subduction zones, the Queen Charlotte transform fault, and the Cascadia subduction zone; (2) to the north, the continent-ocean transition from the Eurasian and North American continents to the Arctic Ocean; (3) to the west, the diffuse Eurasian-North American plate boundary, including the probable Okhotsk plate; and (4) to the east, the Alaskan-Canadian Cordilleran fold belt. This compilation should be useful for: (1) studying the Mesozoic and Cenozoic collisional and accretionary tectonics that assembled the continental crust of this region; (2) studying the neotectonics of active and passive plate margins in this region; and (3) constructing and interpreting geophysical, geologic, and tectonic models of the region. Geographic Information Systems (GIS) programs provide powerful tools for managing and analyzing spatial databases. Geological applications include regional tectonics, geophysics, mineral and petroleum exploration, resource management, and land-use planning. This CD-ROM contains thematic layers of spatial data-sets for geology, gravity field, magnetic field, oceanic plates, overlap assemblages, seismology (earthquakes), tectonostratigraphic terranes, topography, and volcanoes. The GIS compilation can be viewed, manipulated, and plotted with commercial software (ArcView and ArcInfo) or through a freeware program (ArcExplorer) that can be downloaded from http://www.esri.com for both Unix and Windows computers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornung, Richard D.; Jones, Holger E.
The RAJA Performance Suite is designed to evaluate the performance of the RAJA performance portability library on a wide variety of important high performance computing (HPC) algorithmic kernels. These kernels assess compiler optimizations and various parallel programming model backends accessible through RAJA, such as OpenMP, CUDA, etc. The initial version of the suite contains 25 computational kernels, each of which appears in 6 variants: Baseline Sequential, RAJA Sequential, Baseline OpenMP, RAJA OpenMP, Baseline CUDA, RAJA CUDA. All variants of each kernel perform essentially the same mathematical operations, and the loop body code for each kernel is identical across all variants. There are a few kernels, such as those that contain reduction operations, that require CUDA-specific coding for their CUDA variants. The actual computer instructions executed, and how they run in parallel, differ depending on the parallel programming model backend used and which optimizations are performed by the compiler used to build the Performance Suite executable. The Suite will be used primarily by RAJA developers to perform regular assessments of RAJA performance across a range of hardware platforms and compilers as RAJA features are being developed. It will also be used by LLNL hardware and software vendor partners for defining requirements for future computing platform procurements and acceptance testing. In particular, the RAJA Performance Suite will be used for compiler acceptance testing of the upcoming CORAL Sierra machine (initial LLNL delivery expected in late 2017/early 2018) and the CORAL-2 procurement. The Suite will also be used to generate concise source code reproducers of compiler and runtime issues we uncover, so that we may provide them to the relevant vendors to be fixed.
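The variant structure is easiest to see on a toy kernel. The sketch below shows a daxpy loop body in Baseline Sequential form and in a RAJA form where only the execution policy changes; it assumes the public RAJA API (RAJA::forall, RAJA::RangeSegment, policy types) and is not taken from the suite itself.

    #include "RAJA/RAJA.hpp"

    void daxpy_baseline(double* y, const double* x, double a, int n) {
        for (int i = 0; i < n; ++i) y[i] += a * x[i];   // Baseline Sequential variant
    }

    void daxpy_raja(double* y, const double* x, double a, int n) {
        // RAJA Sequential variant: identical loop body, policy chosen by template.
        RAJA::forall<RAJA::seq_exec>(RAJA::RangeSegment(0, n),
                                     [=](int i) { y[i] += a * x[i]; });
        // Switching to OpenMP or CUDA is a policy swap, e.g.
        // RAJA::forall<RAJA::omp_parallel_for_exec>( ... same body ... );
    }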
These reviews and evaluations compiled by Pecos Management Services, Inc. encompass the current and future WIPP activities in the program areas of TRU waste characterization, transportation, and disposal.
Partial compilation and revision of basic data in the WATEQ programs
Nordstrom, D. Kirk; Valentine, S.D.; Ball, J.W.; Plummer, Niel; Jones, B.F.
1984-01-01
Several portions of the basic data in the WATEQ series of computer programs (WATEQ, WATEQF, WATEQ2, WATEQ3, and PHREEQE) are compiled. The density and dielectric constant of water and their temperature dependence are evaluated for the purpose of updating the Debye-Hückel solvent parameters in the activity coefficient equations. The standard state thermodynamic properties of the Fe2+ and Fe3+ aqueous ions are refined. The main portion of this report is a comprehensive listing of aluminum hydrolysis constants, aluminum fluoride, aluminum sulfate, calcium chloride, magnesium chloride, potassium sulfate and sodium sulfate stability constants, solubility product constants for gibbsite and amorphous aluminum hydroxide, and the standard electrode potentials for Fe(s)/Fe2+(aq) and Fe2+(aq)/Fe3+(aq). (USGS)
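For context, the Debye-Hückel solvent parameters mentioned here are the A and B coefficients of the extended Debye-Hückel activity coefficient equation, which in its textbook form (stated here for reference, not quoted from the report) reads:

    \log \gamma_i = -\frac{A\, z_i^2 \sqrt{I}}{1 + B\, a_i \sqrt{I}},
    \qquad
    I = \tfrac{1}{2}\sum_j m_j z_j^2

where \gamma_i is the activity coefficient of ion i, z_i its charge, a_i its ion-size parameter, and I the ionic strength. A and B are functions of the density and dielectric constant of water, which is why re-evaluating those properties and their temperature dependence, as the report does, updates the activity coefficient equations.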
1994-03-25
Technology Building 225, Room A266, Gaithersburg, Maryland 20899 U.S.A. Ada Validation Organization, Ada Joint Program Office, Director, Computer & Software, David R. Basel... Standards and Technology Building 225, Room A266, Gaithersburg, Maryland 20899 U.S.A. Ada Joint Program Office, Director, Computer & Software, David R. ... characters, a bar ("|") is written in the 16th position and the rest of the characters are not printed. The place of the definition, i.e., a line
Focus on Efficient Management.
ERIC Educational Resources Information Center
Kentucky State Dept. of Education, Frankfort. Office of Resource Management.
Compiled as a workshop handbook, this guide presents information to help food service program administrators comply with federal regulations and evaluate and upgrade their operations. Part I discusses requirements of the National School Lunch Program, focusing on the "offer versus serve" method of service enacted in 1976 to reduce waste…
The report is a DCIC compilation of current R and D programs that are supported by NASA, ARPA, AEC, NBS, Bureau of Mines, and National Science Foundation in the field of ceramics and related materials. (Author)
Competency-Based Adult High School Curriculum Project.
ERIC Educational Resources Information Center
Singer, Elizabeth
This compilation of program materials serves as an introduction to and overview of Florida's Brevard Community College's (BCC's) Competency-Based Adult High School Completion Project, which was conducted to teach administrators, counselors, and teachers how to organize and implement a competency-based adult education (CBAE) program; to critique…
Competency-Based Adult Education: Florida Model.
ERIC Educational Resources Information Center
Singer, Elizabeth
This compilation of program materials serves as an introduction to Florida's Brevard Community College's (BCC's) Competency-Based Adult High School Completion Project, a multi-year project designed to teach adult administrators, counselors, and teachers how to organize and implement a competency-based adult education (CBAE) program; to critique…
NASA Technical Reports Server (NTRS)
Kole, R. E.; Helmers, P. H.; Hotz, R. L.
1974-01-01
This is a reference document to be used in the process of getting HAL/S programs compiled and debugged on the IBM 360 computer. Topics ranging from operating system communication to the interpretation of debugging aids are discussed. Features of the HAL programming system that have specific System/360 dependencies are presented.
Building Program Models Incrementally from Informal Descriptions.
1979-10-01
specified at each step. Since the user controls the interaction, the user may determine the order in which information flows into PMB. Information is received... until only ten years ago the term "automatic programming" referred to the development of the assemblers, macro expanders, and compilers for these
A Computerised English Language Proofing Cloze Program.
ERIC Educational Resources Information Center
Coniam, David
1997-01-01
Describes a computer program that takes multiple-choice cloze passages and compiles them into proofreading exercises. Results reveal that such a computerized test type can be used to accurately measure the proficiency of students of English as a Second Language in Hong Kong. (14 references) (Author/CK)
POLLUTION PREVENTION CASE STUDIES COMPENDIUM - 2ND EDITION
This compendium summarizes a compilation of case studies in the area of pollution prevention. The compendium is divided into 3 sections, featuring 3 of the Pollution Prevention Branch's key programs. An overview of each program is provided at the beginning of each section of the c...
Materials for Secondary School Programs for the Educable Mentally Retarded Adolescent.
ERIC Educational Resources Information Center
Boston Univ., MA. New England Special Education Instructional Materials Center.
Compiled are materials related to work study programs for the educable mentally handicapped adolescent. Items listed include professional books, textbooks, resource aids, journals and articles, curriculum guides, instructional materials, and audiovisual aids. The materials are grouped according to academic areas (mathematics, science, social…
Minnesota Department of Education Agricultural Education Program Descriptions 01.0000-01.9095
ERIC Educational Resources Information Center
Minnesota Department of Education, 2004
2004-01-01
This document provides a brief compilation of descriptions of agricultural education programs linked to Career and Technical Education (CTE) initiative in Minnesota. Agriculture Exploration courses focus on the animal sciences, plant sciences, natural resource sciences, agricultural business and marketing, and leadership development. Agribusiness…
42 CFR 413.75 - Direct GME payments: General requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
...-based providers for the costs of approved residency programs in medicine, osteopathy, dentistry, and... Council for Graduate Medical Education (ACGME) as a fellowship program in geriatric medicine. (4) Is a... Urban Consumers as compiled by the Bureau of Labor Statistics. Emergency Medicare GME affiliated group...
42 CFR 413.75 - Direct GME payments: General requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
...-based providers for the costs of approved residency programs in medicine, osteopathy, dentistry, and... Council for Graduate Medical Education (ACGME) as a fellowship program in geriatric medicine. (4) Is a... Urban Consumers as compiled by the Bureau of Labor Statistics. Emergency Medicare GME affiliated group...
42 CFR 413.75 - Direct GME payments: General requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
...-based providers for the costs of approved residency programs in medicine, osteopathy, dentistry, and... Council for Graduate Medical Education (ACGME) as a fellowship program in geriatric medicine. (4) Is a... Urban Consumers as compiled by the Bureau of Labor Statistics. Emergency Medicare GME affiliated group...
ERIC Educational Resources Information Center
Eber, Ronald
This handbook has been compiled to aid concerned individuals and ecology groups in more adequately defining their goals, initiating good programs, and taking effective action. It examines the ways a group of working individuals can become involved in action programs for ecological change. Part 1 deals with organization, preliminary organizing, structuring,…
Moreno, Eliana M; Moriana, Juan Antonio
2016-08-09
There is now broad consensus regarding the importance of involving users in the process of implementing guidelines. Few studies, however, have addressed this issue, let alone the implementation of guidelines for common mental health disorders. The aim of this study is to compile and describe implementation strategies and resources related to common clinical mental health disorders targeted at service users. The literature was reviewed and resources for the implementation of clinical guidelines were compiled using the PRISMA model. A mixed qualitative and quantitative analysis was performed based on a series of categories developed ad hoc. A total of 263 items were included in the preliminary analysis and 64 implementation resources aimed at users were analysed in depth. A wide variety of types, sources and formats were identified, including guides (40%), websites (29%), videos and leaflets, as well as instruments for the implementation of strategies regarding information and education (64%), self-care, or users' assessment of service quality. The results reveal the need to establish clear criteria for assessing the quality of implementation materials in general and standardising systems to classify user-targeted strategies. The compilation and description of key elements of strategies and resources for users can be of interest in designing materials and specific actions for this target audience, as well as improving the implementation of clinical guidelines.
A package of Linux scripts for the parallelization of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Badal, Andreu; Sempau, Josep
2006-09-01
Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme of a MC simulation that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Secure Shell (ssh) protocol. Any Linux computer on the Internet that can be ssh-accessed by the user can be used as a clone. A key ingredient for the parallel calculation to be reliable is the availability of an independent string of random numbers for each CPU. Many generators, such as RANLUX, RANECU or the Mersenne Twister, can readily produce these strings by initializing them appropriately and, hence, they are suitable to be used with clonEasy. This work was primarily motivated by the need to find a straightforward way to parallelize PENELOPE, a code for MC simulation of radiation transport that (in its current 2005 version) employs the generator RANECU, which uses a combination of two multiplicative linear congruential generators (MLCGs). Thus, this paper is focused on this class of generators and, in particular, we briefly present an extension of RANECU that increases its period up to ~5×10^18, and we introduce seedsMLCG, a tool that provides the information necessary to initialize disjoint sequences of an MLCG to feed different CPUs. This program, in combination with clonEasy, makes it easy to run PENELOPE in parallel without requiring specific libraries or significant alterations of the sequential code. Program summary 1: Title of program: clonEasy Catalogue identifier: ADYD_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYD_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland Computer for which the program is designed and others in which it is operable: Any computer with a Unix style shell (bash), support for the Secure Shell protocol and a FORTRAN compiler Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1) Compilers: GNU FORTRAN g77 (Linux); g95 (Linux); Intel Fortran Compiler 7.1 (Linux) Programming language used: Linux shell (bash) script, FORTRAN 77 No. of bits in a word: 32 No. of lines in distributed program, including test data, etc.: 1916 No. of bytes in distributed program, including test data, etc.: 18 202 Distribution format: tar.gz Nature of the physical problem: There are many situations where a Monte Carlo simulation involves a huge amount of CPU time. The parallelization of such calculations is a simple way of obtaining a relatively low statistical uncertainty using a reasonable amount of time. Method of solution: The presented collection of Linux scripts and auxiliary FORTRAN programs implement Secure Shell-based communication between a "master" computer and a set of "clones". The aim of this communication is to execute a code that performs a Monte Carlo simulation on all the clones simultaneously. The code is unique, but each clone is fed with a different set of random seeds.
Hence, clonEasy effectively permits the parallelization of the calculation. Restrictions on the complexity of the program: clonEasy can only be used with programs that produce statistically independent results using the same code, but with a different sequence of random numbers. Users must choose the initialization values for the random number generator on each computer and combine the output from the different executions. A FORTRAN program to combine the final results is also provided. Typical running time: The execution time of each script largely depends on the number of computers that are used, the actions that are to be performed and, to a lesser extent, on the network connection bandwidth. Unusual features of the program: Any computer on the Internet with a Secure Shell client/server program installed can be used as a node of a virtual computer cluster for parallel calculations with the sequential source code. The simplicity of the parallelization scheme makes the use of this package a straightforward task, which does not require installing any additional libraries. Program summary 2: Title of program: seedsMLCG Catalogue identifier: ADYE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYE_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, Northern Ireland Computer for which the program is designed and others in which it is operable: Any computer with a FORTRAN compiler Operating systems under which the program has been tested: Linux (RedHat 8.0, SuSe 8.1, Debian Woody 3.1), MS Windows (2000, XP) Compilers: GNU FORTRAN g77 (Linux and Windows); g95 (Linux); Intel Fortran Compiler 7.1 (Linux); Compaq Visual Fortran 6.1 (Windows) Programming language used: FORTRAN 77 No. of bits in a word: 32 Memory required to execute with typical data: 500 kilobytes No. of lines in distributed program, including test data, etc.: 492 No. of bytes in distributed program, including test data, etc.: 5582 Distribution format: tar.gz Nature of the physical problem: Statistically independent results from different runs of a Monte Carlo code can be obtained using uncorrelated sequences of random numbers on each execution. Multiplicative linear congruential generators (MLCG), or other generators that are based on them such as RANECU, can be adapted to produce these sequences. Method of solution: For a given MLCG, the presented program calculates initialization values that produce disjoint, consecutive sequences of pseudo-random numbers. The calculated values initiate the generator in distant positions of the random number cycle and can be used, for instance, on a parallel simulation. The values are found using the formula S_J = (a^J S) MOD m, which gives the random value that will be generated after J iterations of the MLCG. Restrictions on the complexity of the program: The 32-bit length restriction for the integer variables in standard FORTRAN 77 limits the produced seeds to be separated by a distance smaller than 2^31 when the distance J is expressed as an integer value. The program allows the user to input the distance as a power of 10 for the purpose of efficiently splitting the sequence of generators with a very long period. Typical running time: The execution time depends on the parameters of the used MLCG and the distance between the generated seeds. The generation of 10^6 seeds separated 10^12 units in the sequential cycle, for one of the MLCGs found in the RANECU generator, takes 3 s on a 2.4 GHz Intel Pentium 4 using the g77 compiler.
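The seed-jumping idea behind seedsMLCG is compact enough to sketch here. The following Python fragment is a minimal illustration written for this compilation, not code from the package; the multiplier and modulus shown are the widely published parameters of RANECU's first MLCG. It reaches the state J iterations ahead in O(log J) time via modular exponentiation, which is what makes disjoint, widely separated seed sequences cheap to hand to each clone.

    # An MLCG updates its state as s -> (a*s) mod m, so after J steps the
    # state is (a**J * s) mod m; Python's built-in pow(a, J, m) computes
    # a**J mod m by fast modular exponentiation.
    def jump_ahead(seed, J, a=40014, m=2147483563):
        """Return the MLCG state J iterations after `seed`."""
        return (pow(a, J, m) * seed) % m

    # Disjoint starting seeds for 4 parallel clones, 10**12 steps apart.
    stride = 10**12
    seeds = [jump_ahead(1, k * stride) for k in range(4)]
    print(seeds)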
NASA Technical Reports Server (NTRS)
Rosenfeld, Arie; Hinkle, C. Ross; Epstein, Marc
2002-01-01
This STI Technical Memorandum (TM) summarizes a two-month project on feral hog management in Merritt Island National Wildlife Refuge (MINWR). For this project, feral hogs were marked and recaptured, with the help of local trappers, to estimate population size and habitat preferences. Habitat data included vegetation cover and Light Detection and Ranging (LIDAR) data for MINWR. In addition, an analysis was done of hunting records compiled by the Refuge and of hog-car accidents compiled by KSC Security.
A Code Generation Approach for Auto-Vectorization in the Spade Compiler
NASA Astrophysics Data System (ADS)
Wang, Huayong; Andrade, Henrique; Gedik, Buğra; Wu, Kun-Lung
We describe an auto-vectorization approach for the Spade stream processing programming language, comprising two ideas. First, we provide support for vectors as a primitive data type. Second, we provide a C++ library with architecture-specific implementations of a large number of pre-vectorized operations as the means to support language extensions. We evaluate our approach with several stream processing operators, contrasting Spade's auto-vectorization with the native auto-vectorization provided by the GNU gcc and Intel icc compilers.
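The gist of the approach can be mimicked outside Spade: treat a vector as a first-class value whose operators dispatch to pre-optimized kernels rather than to element-at-a-time loops. Below is a minimal Python sketch of that contrast, with NumPy standing in for the architecture-specific pre-vectorized library (the function names are invented for illustration).

    import numpy as np

    # Scalar formulation: one multiply-add at a time.
    def fma_scalar(a, b, c):
        return [a[i] * b[i] + c[i] for i in range(len(a))]

    # "Pre-vectorized" formulation: the whole operation is dispatched to
    # an optimized kernel (NumPy stands in for the architecture-specific
    # SIMD implementations a compiler like Spade would select).
    def fma_vector(a, b, c):
        return a * b + c

    a, b, c = (np.arange(4.0) for _ in range(3))
    assert fma_scalar(a, b, c) == list(fma_vector(a, b, c))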
NASA Technical Reports Server (NTRS)
Ledoux, F. N.
1973-01-01
A compilation of engineering design tests which were conducted in support of the Energetic Particle Satellite S-3, S-3A, and S-3b programs. The purpose for conducting the tests was to determine the adequacy and reliability of the Energetic Particles Series of satellites designs. The various tests consisted of: (1) moments of inertia, (2) functional reliability, (3) component and structural integrity, (4) initiators and explosives tests, and (5) acceptance tests.
A Language-Based Approach To Wireless Sensor Network Security
2014-03-06
[Table and figure residue; recoverable captions: Figure 1: SpartanRPC Memory Overhead (L) and Impact on Messaging (R); Figure 2: Scalaness/nesT Compilation and...] ...language for developing real WSN applications. This language, called Scalaness/nesT, extends Scala with staging features for executing programs on hubs...particular note here is the fact that cross-stage type safety of Scalaness source code ensures that compiled bytecode can be deployed to, and run on
1991-05-31
[Front-matter residue; recoverable fragments: benchmarks ... 220; Appendix G: Source code of the Aquarius Prolog compiler ... 224; Chapter I, Introduction: "You're given...] ...notation, a tool that is used throughout the compiler's implementation. Appendix F lists the source code of the C and Prolog benchmarks. Appendix G lists the...source code of the compiler. [Figure residue: conversion of standard-form Prolog to kernel Prolog via transformation and symbolic execution.]
Data Immersion for CCNY Undergraduate Summer Interns at the IEDA Geoinformatics Facility
NASA Astrophysics Data System (ADS)
Uribe, R.; Van Wert, T.; Alabi, T.
2016-12-01
National Science Foundation (NSF) funded programs that provide grants and resources to enhance undergraduate learning and provide a pathway to future career opportunities in the geosciences by increasing retention and broadening participation. In an increasingly digital world, geoinformatics, with its emphasis on large-scale data storage and accessibility, is a rapidly expanding field in the geosciences. The NSF-funded Interdisciplinary Earth Data Alliance (IEDA) - City College of New York (CCNY) summer internship program aims to provide diverse undergraduates from CCNY with data processing experience within the IEDA facility at Columbia University's Lamont-Doherty Earth Observatory (LDEO). CCNY interns worked alongside IEDA mentors and were immersed in the day-to-day operations of the IEDA facility. Skills necessary to work with geoscience data were developed throughout the internship, and participation with the broader cohort of Lamont summer interns was promoted. Summer lectures delivered by researchers at LDEO provided interns with cutting-edge geoscience content from experts across a wide range of fields in the Earth sciences. CCNY undergraduate interns undertook two data compilation projects. First, interns compiled LiDAR land elevation data to enhance the land-ocean base map used across IEDA map-based resources. For that, the interns downloaded and classified one- and three-meter resolution LiDAR topographic data from the USGS The National Map for the lower 48 states. Second, computer-derived regional and global seismic tomography models from the Incorporated Research Institutions for Seismology (IRIS) were compiled and processed for integration with GeoMapApp, a free mapping application developed at LDEO (www.geomapapp.org). Interns established a data processing workflow to extract tomographic depth slices from dozens of tomographic grids. Using Linux commands and shell scripts, the interns resampled and reformatted the native-format binary netCDF files and compared them to the published figures to check for consistency. The extracted tomographic slices will be included in GeoMapApp's user-friendly map-based interface. The IEDA-CCNY internship encouraged students to develop and build basic skills necessary for the rigors of graduate study and exposure to real-world geoscience careers.
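A depth-slice extraction workflow of the kind described might look like the following minimal Python sketch. It is an illustration only: the directory, file pattern, variable name "dvs" and depth coordinate are invented and are not the IRIS or GeoMapApp conventions, and xarray is assumed to be installed.

    import glob
    import xarray as xr  # assumed available; any netCDF reader would do

    # Hypothetical workflow: pull the 100 km depth slice out of every
    # tomographic model grid in a directory.
    for path in glob.glob("models/*.nc"):
        ds = xr.open_dataset(path)
        slice_100 = ds["dvs"].sel(depth=100, method="nearest")
        slice_100.to_netcdf(path.replace(".nc", "_100km.nc"))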
Conversion of HSPF Legacy Model to a Platform-Independent, Open-Source Language
NASA Astrophysics Data System (ADS)
Heaphy, R. T.; Burke, M. P.; Love, J. T.
2015-12-01
Since its initial development over 30 years ago, the Hydrologic Simulation Program - FORTRAN (HSPF) model has been used worldwide to support water quality planning and management. In the United States, HSPF receives widespread endorsement as a regulatory tool at all levels of government and is a core component of the EPA's Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) system, which was developed to support nationwide Total Maximum Daily Load (TMDL) analysis. However, the model's legacy code and data management systems are limited in their ability to integrate with modern software and hardware and to leverage parallel computing, which has left voids in optimization, pre-, and post-processing tools. Advances in technology and in our scientific understanding of environmental processes over the last 30 years mandate that upgrades be made to HSPF to allow it to evolve and continue to be a premier tool for water resource planners. This work aims to mitigate the challenges currently facing HSPF through two primary tasks: (1) convert the code to a modern, widely accepted, open-source language suited to high-performance computing; and (2) convert the model input and output files to a modern, widely accepted, open-source data model, library, and binary file format. Python was chosen as the new language for the code conversion. It is an interpreted, object-oriented language with dynamic semantics that has become one of the most popular open-source languages. While Python code execution can be slow compared to compiled, statically typed programming languages such as C and FORTRAN, the integration of Numba (a just-in-time specializing compiler) has allowed this challenge to be overcome. For the legacy model data management conversion, HDF5 was chosen to store the model input and output. The code conversion for HSPF's hydrologic and hydraulic modules has been completed. The converted code has been tested against HSPF's suite of "test" runs and has shown good agreement and similar execution times when using the Numba compiler. Continued verification of the accuracy of the converted code against more complex legacy applications, and improvement of execution times by incorporating an intelligent network change detection tool, is currently underway, and preliminary results will be presented.
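As a hedged illustration of the conversion strategy (not code from the HSPF project itself), the fragment below shows how a Numba just-in-time-compiled loop lets a FORTRAN-style time-stepping routine written in Python run at near-compiled speed; the toy storage-routing logic is invented for the example.

    import numpy as np
    from numba import njit  # just-in-time specializing compiler

    @njit  # compiled to machine code on first call
    def water_balance(precip, pet, capacity):
        """Toy storage-routing loop in the style of an HSPF module."""
        storage = 0.0
        runoff = np.empty_like(precip)
        for t in range(precip.size):
            storage += precip[t]
            storage = max(storage - pet[t], 0.0)      # evapotranspiration loss
            runoff[t] = max(storage - capacity, 0.0)  # spill above capacity
            storage -= runoff[t]
        return runoff

    rng = np.random.default_rng(0)
    flows = water_balance(rng.random(10**6), rng.random(10**6) * 0.5, 2.0)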
Creating the Action Model for High Risk Infant Follow Up Program in Iran.
Heidarzadeh, Mohammad; Jodiery, Behzad; Mirnia, Kayvan; Akrami, Forouzan; Hosseini, Mohammad Bagher; Heidarabadi, Seifollah; HabibeLahi, Abbas
2013-11-01
Intervention in early childhood development, as one of the social determinants of health, is important for reducing social gaps and inequity. In spite of the increasing development of neonatal intensive care wards and a decreasing neonatal mortality rate, there is no follow-up program in Iran. This study was carried out in 2012 to design a follow-up care program for high-risk infants, with the practical aim of creating an action model for the whole country. This qualitative study was done by the Neonatal Department of the Deputy of Public Health in cooperation with the Pediatrics Health Research Center of Tabriz University of Medical Sciences, Iran. After a study of international documents, consensus on a program adapted for Iran was reached by focus group discussion and an attended Delphi agreement technique. After compiling a primary draft that included evidence-based guidelines and an executive plan, 14 sessions including expert panels were held to finalize the program. After finalizing the program, the high-risk infant follow-up care service package was designed in 3 chapters: evidence-based clinical guidelines (eighteen main clinical guidelines and thirteen subsidiary clinical guidelines); an executive plan (6 general, 6 follow-up and 5 backup processes); and an education program, including general and special courses for caregivers and the follow-up team, and family education processes. We designed and finalized a follow-up care service package for high-risk infants. It seems to open a way to extend the program to the whole country.
Bellman’s GAP—a language and compiler for dynamic programming in sequence analysis
Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert
2013-01-01
Motivation: Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman’s GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. Results: In Bellman’s GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive to carefully hand-crafted implementations. This article introduces the Bellman’s GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman’s GAP as an implementation platform of ‘real-world’ bioinformatics tools. Availability: Bellman’s GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics. Contact: robert@techfak.uni-bielefeld.de Supplementary information: Supplementary data are available at Bioinformatics online PMID:23355290
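The grammar/algebra separation at the heart of Bellman's GAP can be imitated in miniature. In the Python sketch below (an analogy written for this compilation, not GAP-L), one staircase-climbing recurrence plays the role of the tree grammar, while two interchangeable algebras evaluate it either as a count of candidates or as a minimum cost:

    # "Grammar": step i is reached from step i-1 or step i-2.
    # "Algebra": (combine, choose, axiom) decides what is computed.
    def solve(n, costs, algebra):
        combine, choose, axiom = algebra
        table = [axiom] + [None] * n  # table[0] = empty climb
        for i in range(1, n + 1):
            candidates = [combine(table[i - 1], costs[i - 1])]
            if i >= 2:
                candidates.append(combine(table[i - 2], costs[i - 1]))
            table[i] = choose(candidates)
        return table[n]

    count_algebra = (lambda x, c: x, sum, 1)        # how many climbs exist
    mincost_algebra = (lambda x, c: x + c, min, 0)  # cheapest climb

    costs = [3, 1, 4, 1, 5]
    print(solve(5, costs, count_algebra))    # 8 distinct climbs
    print(solve(5, costs, mincost_algebra))  # cheapest total cost: 7

Swapping the algebra changes the meaning of the analysis without touching the recurrence, which is the re-use property the abstract describes.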
NASA historical data book. Volume 2: Programs and projects 1958-1968
NASA Technical Reports Server (NTRS)
Ezell, Linda Neuman
1988-01-01
This is Volume 2, Programs and Projects 1958-1968, of a multi-volume series providing a 20-year compilation of summary statistical and other data descriptive of NASA's programs in aeronautics and manned and unmanned spaceflight. This series is an important component of NASA published historical reference works, used by NASA personnel, managers, external researchers, and other government agencies.
ERIC Educational Resources Information Center
Wertheim, Sally H.; And Others
The purposes of the study are: (1) to provide a description of alternative programs within public high schools, (2) to compile a written history of these programs, (3) to provide information necessary to compare innovations in alternative schools within and without public school systems, and (4) to collect and disseminate information about…
NASA historical data book. Volume 3: Programs and projects 1969-1978
NASA Technical Reports Server (NTRS)
Ezell, Linda Neuman
1988-01-01
This is Volume 3, Programs and Projects 1969-1978, of a multi-volume series providing a 20-year compilation of summary statistical and other data descriptive of NASA's programs in aeronautics and manned and unmanned spaceflight. This series is an important component of NASA published historical reference works, used by NASA personnel, managers, external researchers, and other government agencies.
LBNL Laboratory Directed Research and Development Program FY2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, D.
2017-03-01
The Berkeley Lab Laboratory Directed Research and Development Program FY2016 report is compiled from annual reports submitted by principal investigators following the close of the fiscal year. This report describes the supported projects and summarizes their accomplishments. It constitutes a part of the LDRD program planning and documentation process that includes an annual planning cycle, project selection, implementation and review.
Preserving Old Ways the Modern Way: Red Crow Uses GIS, GPS to Document Traditional Knowledge
ERIC Educational Resources Information Center
Fat, Mary Weasel
2004-01-01
The article reports that Red Crow Community College has created a unique, one-year certificate program that will train students to compile and document Kainai traditional knowledge. The program, called the First Nations' Land Use Certificate Program, accepted its first 16 students in January 2004 at the college, which is located on the Blood Reserve…
Institute on Human Values in Medicine. Reports of the Institute Fellows. 1973-74.
ERIC Educational Resources Information Center
Society for Health and Human Values, Philadelphia, PA.
This document is a compilation of reports of persons involved in the fellowship program offered by the Institute of Health and Human Values. The fellowship program centers around recognition of a need to support faculty development so that appropriately trained people can be available for emerging programs that teach human values as part of health…
2014 Water Power Program Peer Review: Hydropower Technologies, Compiled Presentations (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This document represents a collection of all presentations given during the EERE Wind and Water Power Program's 2014 Hydropower Peer Review. The purpose of the meeting was to evaluate DOE-funded hydropower and marine and hydrokinetic R&D projects for their contribution to the mission and goals of the Water Power Program and to assess progress made against stated objectives.
Infrastructure for Rapid Development of Java GUI Programs
NASA Technical Reports Server (NTRS)
Jones, Jeremy; Hostetter, Carl F.; Wheeler, Philip
2006-01-01
The Java Application Shell (JAS) is a software framework that accelerates the development of Java graphical-user-interface (GUI) application programs by enabling the reuse of common, proven GUI elements, as distinguished from writing custom code for GUI elements. JAS is a software infrastructure upon which Java interactive application programs and graphical user interfaces (GUIs) for those programs can be built as sets of plug-ins. JAS provides an application-programming interface that is extensible by application-specific plug-ins that describe and encapsulate both specifications of a GUI and application-specific functionality tied to the specified GUI elements. The desired GUI elements are specified in Extensible Markup Language (XML) descriptions instead of in compiled code. JAS reads and interprets these descriptions, then creates and configures a corresponding GUI from a standard set of generic, reusable GUI elements. These elements are then attached (again, according to the XML descriptions) to application-specific compiled code and scripts. An application program constructed by use of JAS as its core can be extended by writing new plug-ins and replacing existing plug-ins. Thus, JAS solves many problems that Java programmers generally solve anew for each project, thereby reducing development and testing time.
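JAS itself is a Java framework, but the core pattern (GUI structure declared in XML, behavior supplied by plug-in code keyed to the declared elements) is easy to sketch. The Python/Tkinter fragment below is an analogy with invented element and attribute names, not the JAS schema:

    import tkinter as tk
    import xml.etree.ElementTree as ET

    SPEC = """<window title="Demo">
                <button label="Greet" action="greet"/>
                <button label="Quit"  action="quit"/>
              </window>"""

    # "Plug-in" code: behavior keyed to the actions named in the XML.
    def make_actions(root):
        return {"greet": lambda: print("hello"), "quit": root.destroy}

    root = tk.Tk()
    spec = ET.fromstring(SPEC)
    root.title(spec.get("title"))
    actions = make_actions(root)
    for elem in spec:  # build generic widgets from the description
        if elem.tag == "button":
            tk.Button(root, text=elem.get("label"),
                      command=actions[elem.get("action")]).pack()
    root.mainloop()

Replacing the XML or the action table changes the application without touching the generic widget-building loop, which mirrors the extensibility JAS claims.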
Mashburn, Shana L.; Winton, Kimberly T.
2010-01-01
This CD-ROM contains spatial datasets that describe natural and anthropogenic features and county-level estimates of agricultural pesticide use and pesticide data for surface-water, groundwater, and biological specimens in the state of Oklahoma. County-level estimates of pesticide use were compiled from the Pesticide National Synthesis Project of the U.S. Geological Survey, National Water-Quality Assessment Program. Pesticide data for surface water, groundwater, and biological specimens were compiled from the U.S. Geological Survey National Water Information System database. The spatial datasets that describe natural and manmade features were compiled from several agencies and contain information collected by the U.S. Geological Survey. The U.S. Geological Survey datasets were not collected specifically for this compilation, but were previously collected for projects with various objectives. The spatial datasets were created by different agencies from sources with varied quality. As a result, features common to multiple layers may not overlay exactly. Users should check the metadata to determine proper use of these spatial datasets. These data were not checked for accuracy or completeness. If a question of accuracy or completeness arises, the user should contact the originator cited in the metadata.
School-to-Work Transition for Handicapped Youth: Perspectives on Educational and Economic Trends.
ERIC Educational Resources Information Center
Repetto, Jeanne B., Ed.
This compilation of papers focuses on the economic and educational considerations required for planning transitional services for handicapped youth, and was developed from the second and third annual forums sponsored by the Transitional Programming for Handicapped Youth: Interdisciplinary Leadership Preparation Program at the University of…
BALANCER: A Computer Program for Balancing Chemical Equations.
ERIC Educational Resources Information Center
Jones, R. David; Schwab, A. Paul
1989-01-01
Describes the theory and operation of a computer program which was written to balance chemical equations. Software consists of a compiled file of 46K for use under MS-DOS 2.0 or later on IBM PC or compatible computers. Additional specifications of courseware and availability information are included. (Author/RT)
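The courseware itself is not reproduced here, but the standard theory behind such a program (balancing reduces to finding an integer null-space vector of the element-by-species matrix) can be sketched in a few lines of Python, assuming SymPy is installed and Python 3.9+ for math.lcm; the example balances H2 + O2 -> H2O.

    from math import lcm
    from sympy import Matrix

    # Rows are elements (H, O), columns are species (H2, O2, H2O);
    # products enter with negative counts, so a balanced equation is a
    # null-space vector of this matrix.
    A = Matrix([[2, 0, -2],   # hydrogen atoms per species
                [0, 2, -1]])  # oxygen atoms per species
    v = A.nullspace()[0]      # rational solution, e.g. [1, 1/2, 1]
    scale = lcm(*(int(x.q) for x in v))  # clear denominators
    coeffs = [int(x * scale) for x in v]
    print(coeffs)  # [2, 1, 2]  ->  2 H2 + O2 -> 2 H2O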
40 CFR 68.48 - Safety information.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 16 2013-07-01 2013-07-01 false Safety information. 68.48 Section 68...) CHEMICAL ACCIDENT PREVENTION PROVISIONS Program 2 Prevention Program § 68.48 Safety information. (a) The owner or operator shall compile and maintain the following up-to-date safety information related to the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cort, K. A.; Hostick, D. J.; Belzer, D. B.
This report compiles information and conclusions gathered as part of the “Modeling EERE Deployment Programs” project. The purpose of the project was to identify and characterize the modeling of deployment programs within the EERE Technology Development (TD) programs, address possible improvements to the modeling process, and note gaps in knowledge where future research is needed.
Early School Admissions Program: Staff Handbook. Revised Edition.
ERIC Educational Resources Information Center
Grant, Mabel; And Others
The descriptions and procedures in this handbook were developed and compiled at the request of staff members of the Early School Admissions Program. It was felt that specific information relating to the suggested use of classroom materials and equipment would assist in upgrading teaching techniques, planning cognitively based learning experiences,…
Bibliographic Instruction, Vermont Libraries. A Directory of Programs and Methods.
ERIC Educational Resources Information Center
Johnson State Coll., VT.
Compiled from survey forms distributed to bibliographic instruction librarians in academic and special libraries in the spring of 1987, this directory includes information on the bibliographic instruction programs and methods of 17 Vermont universities and colleges listed according to the following metropolitan areas: (1) Bennington (Southern…
COMPILATION OF SATURATED AND UNSATURATED ZONE MODELING SOFTWARE
The full report provides readers an overview of available ground-water modeling programs and related software. It is an update of EPA/600/R-93/118 and EPA/600/R-94/028, two previous reports from the same program at the International Ground Water Modeling Center (IGWMC) in Colora...
A Rather Intelligent Language Teacher.
ERIC Educational Resources Information Center
Cerri, Stefano; Breuker, Joost
1981-01-01
Characteristics of DART (Didactic Augmented Recursive Transition), an ATN-based system for writing intelligent computer-assisted instruction (ICAI) programs that is available on the PLATO system, are described. DART allows writing programs in an ATN dialect, compiling them into machine code for the PLATO system, and executing them as if the original…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-28
... participating in the WIC Program; increased emphasis on breastfeeding promotion and support; compiling and... the support and promotion of breastfeeding. WIC has historically promoted breastfeeding to all... promotion and support of breastfeeding as an integral element of WIC services and benefits. The specific...
34 CFR 601.10 - Preferred lender arrangement disclosures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Loan (FFEL) Program from any eligible lender the student selects; and (2) On such covered institution's... interest rates, or other terms and conditions or provisions of Title IV, HEA program loans or private... loyalty to compile the preferred lender list under paragraph (d) of this section without prejudice and for...
A DIRECTORY OF GRADUATE PROGRAMS IN ADULT EDUCATION, COMPILED AS OF JANUARY 1968.
ERIC Educational Resources Information Center
THOMAS, ALAN M., ED.
A DIRECTORY IS PRESENTED OF GRADUATE PROGRAMS IN ADULT EDUCATION (INTERPRETED TO INCLUDE AGRICULTURAL EXTENSION, RURAL AND URBAN LEADERSHIP TRAINING, LABOR EDUCATION, INDUSTRIAL TRAINING, COOPERATIVE EDUCATION, AND COMMUNITY DEVELOPMENT) IN CANADA, THE UNITED STATES, GREAT BRITAIN, AND THE COMMONWEALTH AT LARGE. THE DEGREES OR CERTIFICATES…
Cognitive and Neural Sciences Division 1990 Programs.
ERIC Educational Resources Information Center
Vaughan, Willard S., Jr., Ed.
Research and development efforts carried out under sponsorship of the Cognitive and Neural Sciences Division of the Office of Naval Research during fiscal year 1990 are described in this compilation of project description summaries. The Division's research is organized in three types of programs: (1) Cognitive Science (the human learner--cognitive…
Involving Volunteers in Your Advancement Programs. The Best of "CASE Currents."
ERIC Educational Resources Information Center
Smith, Virginia Carter, Ed.; Alberger, Patricia LaSalle, Ed.
A compilation of the best articles from "CASE Currents" on involving volunteers in institutional advancement programs is presented. Overall topics include: management of volunteers, working with trustees (volunteers at the top), benefits of participation for volunteers, and involving volunteers in fund raising, public relations, student…
Timber management and use-value assessment
Paul E. Sendak; Neil K. Huyler
1994-01-01
Describes timber management activity and estimates timber harvest from forest land enrolled in Vermont's Use Value Appraisal (UVA) Forest Land property tax program. Data were compiled from the mandatory management plans and annual conformance reports filed for each property enrolled in the Program. Overall, 31 percent of the UVA properties reported a commercial...
Ayn, Caitlyn; Robinson, Lynne; Nason, April; Lovas, John
2017-04-01
Professional communication skills have a significant impact on dental patient satisfaction and health outcomes. Communication skills training has been shown to improve the communication skills of dental students. Therefore, strengthening communication skills training in dental education shows promise for improving dental patient satisfaction and outcomes. The aim of this study was to facilitate the development of dental communication skills training through a scoping review with compilation of a list of considerations, design of an example curriculum, and consideration of barriers and facilitators to adoption of such training. A search to identify studies of communication skills training interventions and programs was conducted. Search queries were run in three databases using both text strings and controlled terms (MeSH), yielding 1,833 unique articles. Of these, 35 were full-text reviewed, and 17 were included in the final synthesis. Considerations presented in the articles were compiled into 15 considerations. These considerations were grouped into four themes: the value of communication skills training, the role of instructors, the importance of accounting for diversity, and the structure of communication skills training. An example curriculum reflective of these considerations is presented, and consideration of potential barriers and facilitators to implementation are discussed. Application and evaluation of these considerations are recommended in order to support and inform future communication skills training development.
Adapting GNU random forest program for Unix and Windows
NASA Astrophysics Data System (ADS)
Jirina, Marcel; Krayem, M. Said; Jirina, Marcel, Jr.
2013-10-01
The Random Forest is a well-known method, and also a program, for data clustering and classification. Unfortunately, the original Random Forest program is rather difficult to use. Here we describe a new version of this program, originally written in Fortran 77. The modified program, in Fortran 95, needs to be compiled only once, and information for different tasks is passed with the help of arguments. The program was tested with 24 data sets from the UCI MLR, and results are available on the net.
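The design change described (compile once, steer each task through arguments rather than editing and recompiling the source) is analogous to the following Python sketch; the option names are invented for illustration.

    import argparse

    # Build/install once; per-task settings arrive as arguments
    # instead of being edited into the source and recompiled.
    parser = argparse.ArgumentParser(description="random forest runner")
    parser.add_argument("data", help="training data file")
    parser.add_argument("--trees", type=int, default=500)
    parser.add_argument("--mtry", type=int, default=0,
                        help="variables tried per split (0 = default)")
    args = parser.parse_args(["train.csv", "--trees", "200"])  # demo argv
    print(f"growing {args.trees} trees on {args.data}")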
FORTRAN Programs for Aerodynamic Analyses on the Microvax/2000 CAD CAE Workstation
1988-09-01
file exists, you must compile the program by typing, FOR DUBLET [Return] The next step is to link the program by entering, LINK DUBLET [Return] The...files DUBLET.EXE and DUBLET.OBJ will now exist and you will be able to run the program. Running the Program To run the program, type DUBLET [Return...by entering 0.1 [Return] Now enter the number of intervals you desire the doublet distribution to have by entering 10 [Return] The screen should now
Computer programs: Operational and mathematical, a compilation
NASA Technical Reports Server (NTRS)
1973-01-01
Several computer programs which are available through the NASA Technology Utilization Program are outlined. Presented are: (1) Computer operational programs which can be applied to resolve procedural problems swiftly and accurately. (2) Mathematical applications for the resolution of problems encountered in numerous industries. Although the functions which these programs perform are not new and similar programs are available in many large computer center libraries, this collection may be of use to centers with limited systems libraries and for instructional purposes for new computer operators.
NASA Astrophysics Data System (ADS)
Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard
2017-07-01
Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. On the R side, users do not need to change existing code and may not even notice the extension. On the other hand, interfacing 64-bit compiled code efficiently is challenging. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.
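The R/spam interface cannot be reproduced here, but the underlying concern (handing a 64-bit integer vector across a foreign function interface without truncation) can be illustrated with Python's ctypes as an analogy; the compiled routine the pointer would be passed to is hypothetical.

    import ctypes
    import numpy as np

    # A vector holding values beyond the 32-bit range (or longer than
    # 2**31 - 1 elements) must cross the interface as 64-bit integers.
    vec = np.array([2**40, 2**41, 3], dtype=np.int64)

    # ctypes view of the buffer; c_int64 avoids the truncation a 32-bit
    # integer argument type would silently introduce.
    ptr = vec.ctypes.data_as(ctypes.POINTER(ctypes.c_int64))
    n = ctypes.c_int64(vec.size)
    # ptr and n would now be handed to a compiled routine, e.g. a
    # hypothetical mylib.sum64(ptr, n); here we just check the view.
    assert ptr[0] == 2**40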
AUTO_DERIV: Tool for automatic differentiation of a Fortran code
NASA Astrophysics Data System (ADS)
Stamatiadis, S.; Farantos, S. C.
2010-10-01
AUTO_DERIV is a module comprising a set of FORTRAN 95 procedures which can be used to calculate the first and second partial derivatives (mixed or not) of any continuous function with many independent variables. The mathematical function should be expressed as one or more FORTRAN 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the FORTRAN 95 language is extensively used to define the differentiation rules. Proper (standard complying) handling of floating-point exceptions is provided by using the IEEE_EXCEPTIONS intrinsic module (Technical Report 15580, incorporated in FORTRAN 2003). New version program summary: Program title: AUTO_DERIV Catalogue identifier: ADLS_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADLS_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2963 No. of bytes in distributed program, including test data, etc.: 10 314 Distribution format: tar.gz Programming language: Fortran 95 + (optionally) TR-15580 (Floating-point exception handling) Computer: all platforms with a Fortran 95 compiler Operating system: Linux, Windows, MacOS Classification: 4.12, 6.2 Catalogue identifier of previous version: ADLS_v1_0 Journal reference of previous version: Comput. Phys. Comm. 127 (2000) 343 Does the new version supersede the previous version?: Yes Nature of problem: The need to calculate accurate derivatives of a multivariate function frequently arises in computational physics and chemistry. The most versatile approach to evaluate them by a computer, automatically and to machine precision, is via user-defined types and operator overloading. AUTO_DERIV is a Fortran 95 implementation of them, designed to evaluate the first and second derivatives of a function of many variables. Solution method: The mathematical rules for differentiation of sums, products, quotients and elementary functions, in conjunction with the chain rule for compound functions, are applied. The function should be expressed as one or more Fortran 77/90/95 procedures. A new type of variables is defined and the overloading mechanism of functions and operators provided by the Fortran 95 language is extensively used to implement the differentiation rules. Reasons for new version: The new version supports Fortran 95, handles properly the floating-point exceptions, and is faster due to internal reorganization. All discovered bugs are fixed. Summary of revisions: The code was rewritten extensively to benefit from features introduced in Fortran 95. Additionally, there was a major internal reorganization of the code, resulting in faster execution. The user interface described in the original paper was not changed. The values that the user must or should specify before compilation (essentially, the number of independent variables) were moved into the ad_types module. There were many minor bug fixes. One important bug was found and fixed; the code did not handle correctly the overloading of ** in a**λ when a = 0. The case of division by zero and the discontinuity of the function at the requested point are indicated by standard IEEE exceptions (IEEE_DIVIDE_BY_ZERO and IEEE_INVALID, respectively).
If the compiler does not support IEEE exceptions, a module with the appropriate name is provided, imitating the behavior of the 'standard' module in the sense that it raises the corresponding exceptions. It is up to the compiler (through certain flags probably) to detect them. Restrictions: None imposed by the program. There are certain limitations that may appear mostly due to the specific implementation chosen in the user code. They can always be overcome by recoding parts of the routines developed by the user or by modifying AUTO_DERIV according to specific instructions given in [1]. The common restrictions of available memory and the capabilities of the compiler are the same as the original version. Additional comments: The program has been tested using the following compilers: Intel ifort, GNU gfortran, NAGWare f95, g95. Running time: The typical running time for the program depends on the compiler and the complexity of the differentiated function. A rough estimate is that AUTO_DERIV is ten times slower than the evaluation of the analytical ('by hand') function value and derivatives (if they are available). References:S. Stamatiadis, R. Prosmiti, S.C. Farantos, AUTO_DERIV: tool for automatic differentiation of a Fortran code, Comput. Phys. Comm. 127 (2000) 343.
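The operator-overloading mechanism that AUTO_DERIV exploits in FORTRAN 95 can be demonstrated in miniature. The Python class below is a first-derivative-only sketch of the same idea (values paired with derivatives, arithmetic overloaded with the differentiation rules), written for this compilation and not a translation of the package:

    import math

    class Dual:
        """Forward-mode AD: (value, derivative) pairs."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,            # product rule
                        self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def sin(x):  # chain rule for an elementary function
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    x = Dual(1.5, 1.0)      # seed dx/dx = 1
    f = 3 * x * x + sin(x)  # f(x) = 3x^2 + sin(x)
    print(f.val, f.der)     # f(1.5) and f'(1.5) = 6x + cos(x)

Existing code that evaluates f keeps working unchanged once x carries the new type, which is exactly the property that lets AUTO_DERIV differentiate unmodified user procedures.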
Water Quality Standards Handbook
The Water Quality Standards Handbook is a compilation of the EPA's water quality standards (WQS) program guidance including recommendations for states, authorized tribes, and territories in reviewing, revising, and implementing WQS.
Multithreaded transactions in scientific computing. The Growth06_v2 program
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2009-07-01
Writing a concurrent program can be more difficult than writing a sequential program. The programmer needs to think about synchronization, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents a new version of the GROWTHGr and GROWTH06 programs. New version program summary: Program title: GROWTH06_v2 Catalogue identifier: ADVL_v2_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 65 255 No. of bytes in distributed program, including test data, etc.: 865 985 Distribution format: tar.gz Programming language: Object Pascal Computer: Pentium-based PC Operating system: Windows 9x, XP, NT, Vista RAM: more than 1 MB Classification: 4.3, 7.2, 6.2, 8, 14 Catalogue identifier of previous version: ADVL_v2_0 Journal reference of previous version: Comput. Phys. Comm. 175 (2006) 678 Does the new version supersede the previous version?: Yes Nature of problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory. Solution method: Epitaxial growth of thin films is modelled by a set of non-linear differential equations [1]. The Runge-Kutta method with adaptive stepsize control was used for solving the initial value problem for the non-linear differential equations [2]. Reasons for new version: According to users' suggestions, the functionality of the program has been improved. Moreover, new use cases have been added which make the handling of the program easier and more efficient than in the previous versions [3]. Summary of revisions: The design pattern (see Fig. 2 of Ref. [3]) has been modified according to the scheme shown in Fig. 1. A graphical user interface (GUI) for the program has been reconstructed. Fig. 2 presents a hybrid diagram of a GUI that shows how onscreen objects connect to use cases. The program has been compiled with English/USA regional and language options. Note: the figures mentioned above are contained in the program distribution file. Unusual features: The program is distributed in the form of the source project GROWTH06_v2.dpr with associated files, and should be compiled using Borland Delphi compilers version 6 or later (including Borland Developer Studio 2006 and CodeGear compilers for Delphi). Additional comments: Two figures are included in the program distribution file. These are captioned 'Static classes model for the Transaction design pattern' and 'A model of a window that shows how onscreen objects connect to use cases'. Running time: The typical running time is machine and user-parameters dependent. References: [1] A. Daniluk, Comput. Phys. Comm. 170 (2005) 265. [2] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling, Numerical Recipes in Pascal: The Art of Scientific Computing, first ed., Cambridge University Press, 1989. [3] M. Brzuszek, A. Daniluk, Comput. Phys. Comm. 175 (2006) 678.
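The numerical core mentioned above, adaptive-stepsize Runge-Kutta integration of non-linear growth equations, is standard; as a stand-in (not the program's actual equations), a minimal SciPy version on a toy logistic growth model looks like this:

    from scipy.integrate import solve_ivp

    # Toy stand-in for the coupled growth ODEs: logistic coverage growth.
    def rhs(t, theta, rate=1.0, capacity=1.0):
        return rate * theta * (capacity - theta)

    # RK45 adapts its stepsize automatically to meet the tolerances,
    # much as the Numerical Recipes routine cited in Ref. [2] does.
    sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=[0.01],
                    method="RK45", rtol=1e-8, atol=1e-10)
    print(sol.t.size, sol.y[0, -1])  # accepted steps, final coverage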
NASA Astrophysics Data System (ADS)
Wang, G.; Wang, D.; Zhou, W.; Chen, M.; Zhao, T.
2018-04-01
The research and compilation of the new-century edition of the National Huge Atlas of the People's Republic of China is a special basic work project of the Ministry of Science and Technology of the People's Republic of China; its principal component is the research and compilation of the National Geomatics Atlas of the People's Republic of China. The National Geomatics Atlas of China consists of four map groups and a place-name index. The four map groups are the nationwide thematic map group, the provincial fundamental geographical map group, the land-cover map group and the city map group. The city map group is an important component of the National Geomatics Atlas of China and mainly shows the process of urbanization in China. This paper, drawing on the design and compilation of 39 city-wide maps, briefly introduces mapping-area research and scale design, the mapping technical route, content selection and cartographic generalization, symbol design and map visualization.
A Computer for Low Context-Switch Time
1990-03-01
Results: To find out how an implementation performs, we use a set of programs that make up a simulation system. These programs compile C language programs ...have worse relative context-switch performance: the time needed to switch contexts has not decreased as much as the time to run programs. Much of...this study is: How seriously is throughput performance impaired by this approach to computer architecture? Reasonable estimates are possible only
NASA Technical Reports Server (NTRS)
Boytos, Matthew A.; Norbury, John W.
1992-01-01
The authors of this paper have provided a set of ready-to-run FORTRAN programs that should be useful in the field of theoretical nuclear physics. The purpose of this document is to provide a simple synopsis of the programs and their use. A separate section is devoted to each program set and includes: abstract; files; compiling, linking, and running; obtaining results; and a tutorial.
Compile-Time Partitioning and Scheduling of Parallel Programs. Extended Summary,
1986-01-01
[OCR residue from the report cover and a resolution test chart; recoverable fragments: Stanford Univ., CA, Computer Systems Lab, V. Sarkar et al., 1986, unclassified. Reference fragments: J. A. et al., "Parallel Processing: A Smart Compiler and a Dumb Machine", SIGPLAN Notices 19, 6 (June 1984); 8. Gajski, D. D., Padua, D. K. & Kuck, D]
ERIC Educational Resources Information Center
Hawkins, Mary
The National Center for Missing and Exploited Children (NCMEC) compiled this guide for schools, community groups, and individuals who are choosing programs that teach personal safety to children. A task force of eight other organizations contributed to the guide. The guide defines child victimization as sexual abuse and assault, abduction,…
International nuclear fuel cycle fact book. Revision 6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmon, K.M.; Lakey, L.T.; Leigh, I.W.
1986-01-01
The International Fuel Cycle Fact Book has been compiled in an effort to provide (1) an overview of worldwide nuclear power and fuel cycle programs and (2) current data concerning fuel cycle and waste management facilities, R and D programs and key personnel. Additional information on each country's program is available in the International Source Book: Nuclear Fuel Cycle Research and Development, PNL-2478, Rev. 2.
1989-08-01
Programming Languages Used: AUTOCAD Command, AUTOLISP. Type of Commercial Program Used: CAD. Specific Commercial Program Used: AUTOCAD. Version: 1.0...collection which the system can directly translate into printed reports. This eliminates the need for filling out data collection forms and the manual compiling of
NASA Technical Reports Server (NTRS)
Engle, H. A.; Christensen, D. L.
1975-01-01
The development and application of educational programs to improve public awareness of the space shuttle/space lab capabilities are reported. Special efforts were made to: identify the potential user, identify and analyze space education programs, plan methods for user involvement, develop techniques and programs to encourage new users, and compile follow-on ideas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2014-02-01
This document represents a collection of all presentations given during the EERE Wind and Water Power Program's 2014 Marine and Hydrokinetic Peer Review. The purpose of the meeting was to evaluate DOE-funded hydropower and marine and hydrokinetic R&D projects for their contribution to the mission and goals of the Water Power Program and to assess progress made against stated objectives.
First NASA Aviation Safety Program Weather Accident Prevention Project Annual Review
NASA Technical Reports Server (NTRS)
Colantonio, Ron
2000-01-01
The goal of this Annual Review was to present NASA plans and accomplishments that will impact the national aviation safety goal. NASA's WxAP Project focuses on developing the following products: (1) Aviation Weather Information (AWIN) technologies (displays, sensors, pilot decision tools, communication links, etc.); (2) Electronic Pilot Reporting (E-PIREPS) technologies; (3) Enhanced weather products with associated hazard metrics; (4) Forward looking turbulence sensor technologies (radar, lidar, etc.); (5) Turbulence mitigation control system designs. Attendees included personnel from various NASA Centers, FAA, National Weather Service, DoD, airlines, aircraft and pilot associations, industry, aircraft manufacturers and academia. Attendees participated in discussion sessions aimed at collecting aviation user community feedback on NASA plans and R&D activities. This CD is a compilation of most of the presentations given at this Review.
Reaching kids: partnering with preschools and schools to improve children's health.
2009-11-01
As part of its continuing mission to serve trustees and staff of health foundations and corporate giving programs, Grantmakers In Health (GIH) convened a group of grantmakers and education experts on May 27, 2009, for an informative discussion about ways in which preschools and schools are working to improve outcomes related to children's health. The Issue Dialogue Reaching Kids: Partnering with Preschools and Schools to Improve Children's Health synthesized the latest research on health-related issues affecting children's educational outcomes. It also provided illustrative examples of foundation-driven initiatives aimed at promoting collaborations between the health and education sectors to improve children's health and development outcomes. This Issue Brief summarizes background materials compiled for the meeting and highlights key themes and findings that emerged from the day's discussion among meeting participants.
LLNL electro-optical mine detection program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, C.; Aimonetti, W.; Barth, M.
1994-09-30
Under funding from the Advanced Research Projects Agency (ARPA) and the US Marine Corps (USMC), Lawrence Livermore National Laboratory (LLNL) has directed a program aimed at improving detection capabilities against buried mines and munitions. The program has provided a national test facility for buried mines in arid environments, compiled and distributed an extensive data base of infrared (IR), ground penetrating radar (GPR), and other measurements made at that site, served as a host for other organizations wishing to make measurements, made considerable progress in the use of ground penetrating radar for mine detection, and worked on the difficult problem of sensor fusion as applied to buried mine detection. While the majority of our effort has been concentrated on the buried mine problem, LLNL has worked with the U.S.M.C. on surface mine problems as well, providing data and analysis to support the COBRA (Coastal Battlefield Reconnaissance and Analysis) program. The original aim of the experimental aspect of the program was the utilization of multiband infrared approaches for the detection of buried mines. Later the work was extended to a multisensor investigation, including sensors other than infrared imagers. After an early series of measurements, it was determined that further progress would require a larger test facility in a natural environment, so the Buried Object Test Facility (BOTF) was constructed at the Nevada Test Site. After extensive testing, with sensors spanning the electromagnetic spectrum from the near ultraviolet to radio frequencies, possible paths for improvement were: improved spatial resolution providing better ground texture discrimination; analysis which involves more complicated spatial queueing and filtering; additional IR bands using imaging spectroscopy; the use of additional sensors other than IR and the use of data fusion techniques with multi-sensor data; and utilizing time dependent observables like temperature.
NASA Astrophysics Data System (ADS)
Halder, P.; Chakraborty, A.; Deb Roy, P.; Das, H. S.
2014-09-01
In this paper, we report the development of a Java application for the superposition T-matrix code, JaSTA (Java Superposition T-matrix App), to study the light scattering properties of aggregate structures. It has been developed using Netbeans 7.1.2, a Java integrated development environment (IDE). JaSTA uses the double precision superposition code for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). It consists of a graphical user interface (GUI) at the front end and a database of related data at the back end. Together, the interactive GUI and the database package enable a user to set the respective input parameters (namely, wavelength, complex refractive indices, grain size, etc.) and study the related optical properties of cosmic dust (namely, extinction, polarization, etc.) instantly, i.e., with zero computational time. This increases the efficiency of the user. The database of JaSTA currently covers a few sets of input parameters, with a plan to create a large database in the future. This application also has an option where users can compile and run the scattering code directly for aggregates in the GUI environment. JaSTA aims to provide convenient and quicker data analysis of optical properties which can be used in different fields like planetary science, atmospheric science, nano science, etc. The current version of this software is developed for the Linux and Windows platforms to study the light scattering properties of small aggregates; it will be extended to larger aggregates using parallel codes in the future. Catalogue identifier: AETB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 571570 No. of bytes in distributed program, including test data, etc.: 120226886 Distribution format: tar.gz Programming language: Java, Fortran 95. Computer: Any Windows or Linux system capable of hosting a Java runtime environment, Java 3D and a Fortran 95 compiler; developed on a 2.40 GHz Intel Core i3. Operating system: Any Windows or Linux system capable of hosting a Java runtime environment, Java 3D and a Fortran 95 compiler. RAM: Ranging from a few Mbytes to several Gbytes, depending on the input parameters. Classification: 1.3. External routines: jfreechart-1.0.14 [1] (free plotting library for Java), j3d-jre-1.5.2 [2] (3D visualization). Nature of problem: Optical properties of cosmic dust aggregates. Solution method: Java application based on Mackowski and Mishchenko's superposition T-matrix code. Restrictions: The program is designed for single processor systems. Additional comments: The distribution file for this program is over 120 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: Ranging from a few minutes to several hours, depending on the input parameters. References: [1] http://www.jfree.org/index.html [2] https://java3d.java.net/
Exercises to Accompany Mathematics 301. Curriculum Support Series.
ERIC Educational Resources Information Center
Manitoba Dept. of Education, Winnipeg.
These sample problems, exercises, questions, and projects were compiled to supplement the guide for the Manitoba course Mathematics 301 in order to assist teachers in implementing the program. Arranged according to the modules of the course guide, they are coded to the objectives of the program. Review exercises follow either the subtopics within…
Academic Peer Instruction: Reference and Training Manual (with Answers)
ERIC Educational Resources Information Center
Zaritsky, Joyce; Toce, Andi
2013-01-01
This manual consists of an introduction to our Academic Peer Instruction (API) program at LaGuardia Community College, a compilation of the materials we have developed and use for training of our tutors (with answers), and a bibliography. API is based on an internationally recognized peer tutoring program, Supplemental Instruction. (Contains 6…
An Aid to Comprehensive Planning for Migrant Programs.
ERIC Educational Resources Information Center
Smith, Mona, Comp.
Designed as a guide for all personnel involved in migrant projects, this pamphlet is a compilation of references derived from presentations made at the New York State Migrant Program Directors Conference held at Victor, New York, November 29-December 1, 1972. A short description of agency services and a list of sources for further information…
1993 at a Glance: Executive Summaries of Reports from the Office of Research and Evaluation.
ERIC Educational Resources Information Center
Austin Independent School District, TX. Office of Research and Evaluation.
This compilation contains executive summaries of 13 program evaluations conducted by the Office of Research and Evaluation of the Austin Independent School District (AISD) (Texas), as well as short summary reports on 3 programs. The following summaries are included: (1) "1991-92 Dropout Report"; (2) "Faculty/Staff Recruitment…
A Source Book for Taxation: Myths and Realities.
ERIC Educational Resources Information Center
Hellman, Mary A.
This sourcebook is one of two supplementary materials for a newspaper course about taxes and tax reform. Program ideas and sources of related resources compiled in the sourcebook are designed to help civic and group leaders and educators plan educational community programs based on the course topics. Section one describes ways in which the program…
Science and Success: Clinical Services and Contraceptive Access
ERIC Educational Resources Information Center
Alford, Sue; Huberman, Barbara
2009-01-01
Despite recent declines in teen pregnancy, U.S. teen birth and sexually transmitted infection (STI) rates remain among the highest in the western world. Given the need to focus limited prevention resources on effective programs, Advocates for Youth undertook exhaustive reviews of existing research to compile a list of the programs proven effective…
Our Man-Made Environment. A Collection of Experiences, Resources and Suggested Activities.
ERIC Educational Resources Information Center
Group for Environmental Education, Philadelphia, PA.
This collection of activities, experiences, and resources focuses on the man-made environment. The activities and resources were compiled to facilitate a program based upon the teacher's and student's own living experiences in their own environment. The goals of the program are to develop the individual's awareness of his environment and…
Head Start Home-Based Resource Directory.
ERIC Educational Resources Information Center
Trans-Management Systems, Inc.
A revision of the 1989 publication, this directory was compiled in order to help parents and professionals involved with Head Start home-based programming in meeting the needs of young children and families. The directory lists a broad range of guides and resources on topics related to Head Start home-based programs. Each listing provides the…
Parallel Performance of a Combustion Chemistry Simulation
Skinner, Gregg; Eigenmann, Rudolf
1995-01-01
We used a description of a combustion simulation's mathematical and computational methods to develop a version for parallel execution. The result was a reasonable performance improvement on small numbers of processors. We applied several important programming techniques, which we describe, in optimizing the application. This work has implications for programming languages, compiler design, and software engineering.
Profiling under UNIX by patching
NASA Technical Reports Server (NTRS)
Bishop, Matt
1986-01-01
Profiling under UNIX is done by inserting counters into programs before compilation, during compilation, or during the assembly phase. A fourth type of profiling involves monitoring the execution of a program and gathering relevant statistics during the run. This method and an implementation of it are examined, and its advantages and disadvantages are discussed.
Animal Health Technicians: A Survey of Program Graduates and of Veterinarians.
ERIC Educational Resources Information Center
Barsaleau, Richard B.; Walters, Henry R.
This document compiles the reports of two surveys conducted by Cosumnes River College to determine the status of graduates of its Animal Health Technician program, and to assess the acceptance and use of such paraprofessionals by area veterinarians. Information concerning type of employment, state certification, salaries, types of duties, length…
Benchmark Lisp And Ada Programs
NASA Technical Reports Server (NTRS)
Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.
1992-01-01
Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests the efficiency with which a computer executes routines in each language. Available for any computer equipped with a validated Ada compiler and/or a Common Lisp system.
Federal Assistance for Programs Serving the Handicapped.
ERIC Educational Resources Information Center
Office of Human Development (DHEW), Washington, DC. Office for Handicapped Individuals.
Presented is information on approximately 100 Federal programs designed to assist the handicapped and/or people working with or for them. It is explained that the compilation was excerpted from the 1975 Catalog of Federal Domestic Assistance, gathered from material provided by the Library of Congress, and augmented by a survey of Federal agencies…
Industrial Arts Technology Bibliography; An Annotated Reference for Librarians.
ERIC Educational Resources Information Center
New York State Education Dept., Albany. Bureau of Secondary Curriculum Development.
This compilation is designed to assist librarians in selecting books for supplementing the expanding program of industrial arts education. The books were selected for the major subject areas of a broad industrial arts program, on the basis of reflected interest of students, content, format, and readability. The format and coding used in the…
Hawaii Integrated Biofuels Research Program: Final Subcontract Report, Phase III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-05-01
This report is a compilation of studies done to develop an integrated set of strategies for the production of energy from renewable resources in Hawaii. Because of the close coordination between this program and other ongoing DOE research, the work will have broad-based applicability to the entire United States.
Yes, You Can Have a Health Service in a Community College.
ERIC Educational Resources Information Center
Busky, Henry F.
The health services program at Prince George's Community College is oriented toward preventative and educational services as well as referral and the treatment of minor illnesses and injuries. This compilation of statements of intent, forms, and brief descriptive essays covers various aspects of the program. The qualities of a college health…
Adventure Program Risk Management Report: 1998 Edition. Narratives and Data from 1991-1997.
ERIC Educational Resources Information Center
Leemon, Drew, Ed.; Schimelpfenig, Tod, Ed.; Gray, Sky, Ed.; Tarter, Shana, Ed.; Williamson, Jed, Ed.
The Wilderness Risk Managers Committee (WRMC), a consortium of outdoor schools and organizations, works toward better understanding and management of risks in the wilderness. Among other activities, the WRMC gathers data on incidents and accidents from member organizations and other wilderness-based programs. This book compiles incident data for…
Portfolios Are Replacing Qualifying Exams as a Step on the Road to Dissertations
ERIC Educational Resources Information Center
Wasley, Paula
2008-01-01
This article reports that some graduate programs are switching from comprehensive qualifying exams to portfolios compiled by doctoral candidates. Five years ago the graduate program at the University of Kansas' history department was like many others--filled with small cohorts of anxious, fearful procrastinators. Doctoral students were taking an…
An Inventory of Cocurricular Drama Programs in the Secondary Schools of Jefferson County, Kentucky.
ERIC Educational Resources Information Center
Hoover, Nancy Roahrig
In order to compile an inventory of secondary school cocurricular dramatics programs in the Jefferson County, Kentucky, public schools, eleven principals, eighteen teachers, and eighty students were randomly selected from thirteen high schools, five junior high schools, and five middle schools. Respondents completed questionnaires concerning the…
csa2sac—A program for computing discharge from continuous slope-area stage data
Wiele, Stephen M.
2015-12-17
In addition to csa2sac, the SAC7 program is required. It is the same as the original SAC program, except that it is compiled for 64-bit Windows operating systems and has a slightly different command line input. It is available online (http://water.usgs.gov/software/SAC/) as part of the SACGUI installation program. The program name, “SAC7.exe,” is coded into csa2sac, and must not be changed.
Functional Programming with C++ Template Metaprograms
NASA Astrophysics Data System (ADS)
Porkoláb, Zoltán
Template metaprogramming is an emerging new direction of generative programming. With clever definitions of templates we can force the C++ compiler to execute algorithms at compilation time. Among the application areas of template metaprograms are expression templates, static interface checking, code optimization with adaptation, language embedding and active libraries. However, as template metaprogramming was not an original design goal, the C++ language is not capable of elegant expression of metaprograms. The complicated syntax leads to the creation of code that is hard to write, understand and maintain. Although template metaprogramming has a strong relationship with functional programming, this is not reflected in the language syntax and existing libraries. In this paper we give a short introduction to C++ templates and the basics of template metaprogramming. We highlight the role of template metaprograms and some important and widely used idioms. We give an overview of the possible application areas as well as debugging and profiling techniques. We suggest a pure functional style programming interface for C++ template metaprograms in the form of embedded Haskell code which is transformed to standard-compliant C++ source.
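To make the compile-time execution the abstract describes concrete, here is a minimal sketch (not taken from the paper): the compiler computes a factorial by recursive template instantiation, with an explicit specialization serving as the base case, so the value is a constant folded into the binary.

#include <cstdio>

// The compiler evaluates Factorial<N> by recursively instantiating templates.
template <unsigned N>
struct Factorial {
    static const unsigned long value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {                      // specialization stops the recursion,
    static const unsigned long value = 1;  // playing the role of a base case
};

int main() {
    // Using the result as an array bound proves it is a compile-time constant.
    char proof[Factorial<4>::value];       // 24 bytes
    std::printf("5! computed by the compiler: %lu (sizeof proof = %zu)\n",
                Factorial<5>::value, sizeof(proof));
}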
From Sea to Sea: Canada's Three Oceans of Biodiversity
Archambault, Philippe; Snelgrove, Paul V. R.; Fisher, Jonathan A. D.; Gagnon, Jean-Marc; Garbary, David J.; Harvey, Michel; Kenchington, Ellen L.; Lesage, Véronique; Levesque, Mélanie; Lovejoy, Connie; Mackas, David L.; McKindsey, Christopher W.; Nelson, John R.; Pepin, Pierre; Piché, Laurence; Poulin, Michel
2010-01-01
Evaluating and understanding biodiversity in marine ecosystems are both necessary and challenging for conservation. This paper compiles and summarizes current knowledge of the diversity of marine taxa in Canada's three oceans while recognizing that this compilation is incomplete and will change in the future. That Canada has the longest coastline in the world and incorporates distinctly different biogeographic provinces and ecoregions (e.g., temperate through ice-covered areas) constrains this analysis. The taxonomic groups presented here include microbes, phytoplankton, macroalgae, zooplankton, benthic infauna, fishes, and marine mammals. The minimum number of species or taxa compiled here is 15,988 for the three Canadian oceans. However, this number clearly underestimates in several ways the total number of taxa present. First, there are significant gaps in the published literature. Second, the diversity of many habitats has not been compiled for all taxonomic groups (e.g., intertidal rocky shores, deep sea), and data compilations are based on short-term, directed research programs or longer-term monitoring activities with limited spatial resolution. Third, the biodiversity of large organisms is well known, but this is not true of smaller organisms. Finally, the greatest constraint on this summary is the willingness and capacity of those who collected the data to make it available to those interested in biodiversity meta-analyses. Confirmation of identities and intercomparison of studies are also constrained by the disturbing rate of decline in the number of taxonomists and systematists specializing on marine taxa in Canada. This decline is mostly the result of retirements of current specialists and a lack of training and employment opportunities for new ones. Considering the difficulties encountered in compiling an overview of biogeographic data and the diversity of species or taxa in Canada's three oceans, this synthesis is intended to serve as a biodiversity baseline for a new program on marine biodiversity, the Canadian Healthy Ocean Network. A major effort needs to be undertaken to establish a complete baseline of Canadian marine biodiversity of all taxonomic groups, especially if we are to understand and conserve this part of Canada's natural heritage. PMID:20824204
From sea to sea: Canada's three oceans of biodiversity.
Archambault, Philippe; Snelgrove, Paul V R; Fisher, Jonathan A D; Gagnon, Jean-Marc; Garbary, David J; Harvey, Michel; Kenchington, Ellen L; Lesage, Véronique; Levesque, Mélanie; Lovejoy, Connie; Mackas, David L; McKindsey, Christopher W; Nelson, John R; Pepin, Pierre; Piché, Laurence; Poulin, Michel
2010-08-31
Evaluating and understanding biodiversity in marine ecosystems are both necessary and challenging for conservation. This paper compiles and summarizes current knowledge of the diversity of marine taxa in Canada's three oceans while recognizing that this compilation is incomplete and will change in the future. That Canada has the longest coastline in the world and incorporates distinctly different biogeographic provinces and ecoregions (e.g., temperate through ice-covered areas) constrains this analysis. The taxonomic groups presented here include microbes, phytoplankton, macroalgae, zooplankton, benthic infauna, fishes, and marine mammals. The minimum number of species or taxa compiled here is 15,988 for the three Canadian oceans. However, this number clearly underestimates in several ways the total number of taxa present. First, there are significant gaps in the published literature. Second, the diversity of many habitats has not been compiled for all taxonomic groups (e.g., intertidal rocky shores, deep sea), and data compilations are based on short-term, directed research programs or longer-term monitoring activities with limited spatial resolution. Third, the biodiversity of large organisms is well known, but this is not true of smaller organisms. Finally, the greatest constraint on this summary is the willingness and capacity of those who collected the data to make it available to those interested in biodiversity meta-analyses. Confirmation of identities and intercomparison of studies are also constrained by the disturbing rate of decline in the number of taxonomists and systematists specializing on marine taxa in Canada. This decline is mostly the result of retirements of current specialists and a lack of training and employment opportunities for new ones. Considering the difficulties encountered in compiling an overview of biogeographic data and the diversity of species or taxa in Canada's three oceans, this synthesis is intended to serve as a biodiversity baseline for a new program on marine biodiversity, the Canadian Healthy Ocean Network. A major effort needs to be undertaken to establish a complete baseline of Canadian marine biodiversity of all taxonomic groups, especially if we are to understand and conserve this part of Canada's natural heritage.
Patient adherence to prescribed antimicrobial drug dosing regimens.
Vrijens, Bernard; Urquhart, John
2005-05-01
The aim of this article is to review current knowledge about the clinical impact of patients' variable adherence to prescribed anti-infective drug dosing regimens, with the aim of renewing interest and exploration of this important but largely neglected area of therapeutics. Central to the estimation of a patient's adherence to a prescribed drug regimen is a reliably compiled drug dosing history. Electronic monitoring methods have emerged as the virtual 'gold standard' for compiling drug dosing histories in ambulatory patients. Reliably compiled drug dosing histories are consistently downwardly skewed, with varying degrees of under-dosing. In particular, the consideration of time intervals between protease inhibitor doses has revealed that ambulatory patients' variable execution of prescribed dosing regimens is a leading source of variance in viral response. Such analyses reveal the need for a new discipline, called pharmionics, which is the study of how ambulatory patients use prescription drugs. Properly analysed, reliable data on the time-course of patients' actual intake of prescription drugs can eliminate a major source of unallocated variance in drug responses, including the non-response that occurs and is easily misinterpreted when a patient's complete non-execution of a prescribed drug regimen is unrecognized clinically. As such, reliable compilation of ambulatory patients' drug dosing histories has the promise of being a key step in reducing unallocated variance in drug response and in improving the informational yield of clinical trials. It is also the basis for sound, measurement-guided steps taken to improve a patient's execution of a prescribed dosing regimen.
CROSSER - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, CROSSER, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), can be used independently of one another. CROSSER can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CROSSER calculates the point at which the reliability of a k-out-of-n system equals the common reliability of the n components. It is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and enters the information requested. The program does not screen out incorrect inputs, so the user must take care to enter them correctly. Once all input has been entered, the program calculates and lists the result, along with the number of iterations of Newton's method required to calculate the answer within the given error. The CROSSER program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CROSSER was developed in 1988.
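As an illustration of what CROSSER computes, the sketch below is an independent reconstruction, not NASA's C source, and its function names are invented. It finds the nontrivial crossover point where the k-out-of-n system reliability R(p) = sum_{i=k..n} C(n,i) p^i (1-p)^(n-i) equals the common component reliability p, using Newton's method as the abstract mentions; it assumes 1 < k < n so that an interior crossing exists.

#include <cmath>
#include <cstdio>

// Tail of the binomial CDF: P(X >= k) for X ~ Binomial(n, p).
double systemReliability(int n, int k, double p) {
    double sum = 0.0;
    for (int i = k; i <= n; ++i) {
        // log C(n,i) via lgamma avoids overflow for large n
        double logC = std::lgamma(n + 1.0) - std::lgamma(i + 1.0) - std::lgamma(n - i + 1.0);
        sum += std::exp(logC + i * std::log(p) + (n - i) * std::log(1.0 - p));
    }
    return sum;
}

// d/dp P(X >= k) = n * C(n-1, k-1) * p^(k-1) * (1-p)^(n-k)
double systemReliabilityDeriv(int n, int k, double p) {
    double logC = std::lgamma((double)n) - std::lgamma((double)k) - std::lgamma((double)(n - k + 1));
    return n * std::exp(logC + (k - 1) * std::log(p) + (n - k) * std::log(1.0 - p));
}

// Newton's method for the nontrivial fixed point R(p) = p, 0 < p < 1.
double crosser(int n, int k, double tol = 1e-12, int maxIter = 100) {
    double p = 0.5;                        // interior starting guess
    for (int it = 0; it < maxIter; ++it) {
        double f  = systemReliability(n, k, p) - p;
        double fp = systemReliabilityDeriv(n, k, p) - 1.0;
        double step = f / fp;
        p -= step;
        if (p <= 0.0 || p >= 1.0) p = 0.5; // keep the iterate inside (0,1)
        if (std::fabs(step) < tol) break;
    }
    return p;
}

int main() {
    // Example: 2-out-of-3 majority system; the crossover is exactly 0.5.
    std::printf("crossover for 2-of-3: %.10f\n", crosser(3, 2));
}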
High performance concrete bridges
DOT National Transportation Integrated Search
2000-08-01
This compilation of FHWA reports focuses on high performance concrete bridges. High performance concrete is described as concrete with enhanced durability and strength characteristics. Under the Strategic Highway Research Program (SHRP), more than 40...
Computer programs: Mechanical and structural design criteria: A compilation
NASA Technical Reports Server (NTRS)
1973-01-01
Computerized design criteria for turbomachinery and the constraints imposed by very high rotational fields are presented along with a variety of computerized design criteria of interest to structural designers.
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in the development of a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers and was then converted for use on parallel computers using the conventional message-passing technique, although we have not been able to compile the code with the present version of HPF compilers.
1988-03-28
International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0, IBM 4381 under MVS/XA, host and target. Completion... Ada Joint Program Office (AJPO)... International Business Machines Corporation... in the compiler listed in this declaration. I declare that International Business Machines Corporation is the owner of record of the object code of...
Reducing software security risk through an integrated approach
NASA Technical Reports Server (NTRS)
Gilliam, D.; Powell, J.; Kelly, J.; Bishop, M.
2001-01-01
The fourth quarter delivery, FY'01, for this RTOP is a Property-Based Testing (PBT) 'Tester's Assistant' (TA). The TA tool is to be used to check compiled and pre-compiled code for potential security weaknesses that could be exploited by hackers. The TA Instrumenter, implemented mostly in C++ (with a small part in Java), parses two types of files: Java and TASPEC. Security properties to be checked are written in TASPEC. The Instrumenter is used in conjunction with the Tester's Assistant Specification (TASPEC) execution monitor to verify the security properties of a given program.
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
The DET/MPS programs model and simulate the Direct Energy Transfer and Multimission Spacecraft Modular Power System in order to aid both in design and in analysis of orbital energy balance. Typically, the DET power system has the solar array connected directly to the spacecraft bus, and the central building block of MPS is the Standard Power Regulator Unit. DET/MPS allows a minute-by-minute simulation of the power system's performance as it responds to various orbital parameters, focusing its output on solar array output and battery characteristics. While this package is limited in terms of orbital mechanics, it is sufficient to calculate eclipse and solar array data for circular or non-circular orbits. DET/MPS can be adjusted to run one or sequential orbits up to about one week of simulated time. These programs have been used on a variety of Goddard Space Flight Center spacecraft projects. DET/MPS is written in FORTRAN 77 with some VAX-type extensions. Any FORTRAN 77 compiler that includes VAX extensions should be able to compile and run the program with little or no modification. The compiler must at least support free-form (or tab-delineated) source format and 'DO WHILE ... END DO' control structures. DET/MPS is available for three platforms: GSC-13374, for DEC VAX series computers running VMS, is available in DEC VAX Backup format on a 9-track 1600 BPI tape (standard distribution) or TK50 tape cartridge; GSC-13443, for UNIX-based computers, is available on a .25 inch streaming magnetic tape cartridge in UNIX tar format; and GSC-13444, for Macintosh computers running A/UX with either the NKR FORTRAN or AbSoft MacFORTRAN II compilers, is available on a 3.5 inch 800K Macintosh format diskette. Source code and test data are supplied. The UNIX version of DET requires 90K of main memory for execution. DET/MPS was developed in 1990. A/UX and Macintosh are registered trademarks of Apple Computer, Inc. VMS, DEC VAX and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories.
Rubus: A compiler for seamless and extensible parallelism.
Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources, so the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, and for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.
Rubus: A compiler for seamless and extensible parallelism
Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor
2017-01-01
Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special purpose processing unit called the Graphic Processing Unit (GPU), originally designed for 2D/3D games, is now available for general purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot utilize the parallelism available on multi-core processors efficiently. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, the code written in these languages is difficult to understand, debug and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources, so the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyses and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores, and for a matrix multiplication benchmark an average execution speedup of 84 times has been achieved on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758
Users manual for program NYQUIST: Liquid rocket nyquist plots developed for use on a PC computer
NASA Astrophysics Data System (ADS)
Armstrong, Wilbur C.
1992-06-01
The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the NYQUIST code, and the capability to modify the input on line has been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into unequal segments going to different (or the same) engines. The program handles the following element types: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines. The code is too large to compile as one program using Microsoft FORTRAN 5; therefore, it was broken into two segments, NYQUIST1.FOR and NYQUIST2.FOR, which are compiled separately and then linked together. The final run code is not too large (approximately 344,000 bytes).
Users manual for program NYQUIST: Liquid rocket nyquist plots developed for use on a PC computer
NASA Technical Reports Server (NTRS)
Armstrong, Wilbur C.
1992-01-01
The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the NYQUIST code, and the capability to modify the input on line has been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into unequal segments going to different (or the same) engines. The program handles the following element types: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines. The code is too large to compile as one program using Microsoft FORTRAN 5; therefore, it was broken into two segments, NYQUIST1.FOR and NYQUIST2.FOR, which are compiled separately and then linked together. The final run code is not too large (approximately 344,000 bytes).
Wiele, Stephen M.; Brasher, Anne M.D.; Miller, Matthew P.; May, Jason T.; Carpenter, Kurt D.
2012-01-01
The U.S. Geological Survey's National Water-Quality Assessment (NAWQA) Program was established by Congress in 1991 to collect long-term, nationally consistent information on the quality of the Nation's streams and groundwater. The NAWQA Program utilizes interdisciplinary and dynamic studies that link the chemical and physical conditions of streams (such as flow and habitat) with ecosystem health and the biologic condition of algae, aquatic invertebrates, and fish communities. This report presents metrics derived from NAWQA data and the U.S. Geological Survey streamgaging network for sampling sites in the Western United States, as well as associated chemical, habitat, and streamflow properties. The metrics characterize the conditions of algae, aquatic invertebrates, and fish. In addition, we have compiled climate records and basin characteristics related to the NAWQA sampling sites. The calculated metrics and compiled data can be used to analyze ecohydrologic trends over time.
NASA Technical Reports Server (NTRS)
Luz, P. L.; Rice, T.
1998-01-01
This technical memorandum reports on the mirror material properties compiled by NASA Marshall Space Flight Center (MSFC) from April 1996 to June 1997 for preliminary design in the Next Generation Space Telescope (NGST) study. The NGST study began in February 1996, when the Program Development Directorate at NASA MSFC studied the feasibility of the NGST and developed the pre-phase A program for it. After finishing some initial studies and concept development work on the NGST, MSFC's Program Development Directorate handed this work to the Observatory Projects Office at MSFC and then to NASA Goddard Space Flight Center (GSFC). This technical memorandum was written by MSFC's Preliminary Design Office and Materials and Processes Laboratory for the NGST Optical Telescope Assembly (OTA) team, in support of NASA GSFC. It contains material properties for 9 mirror substrate materials, using information from at least 6 industrial suppliers, 16 textbooks, 44 technical papers, and 130 technical abstracts.
Pick_sw: a program for interactive picking of S-wave data, version 2.00
Ellefsen, Karl J.
2002-01-01
Program pick_sw is used to interactively pick travel times from S-wave data. It is assumed that the data are collected using 2 shots of opposite polarity at each shot location. The traces must be in either the SEG-2 format or the SU format. The program is written in the IDL and C programming languages, and the program is executed under the Windows operating system. (The program may also execute under other operating systems like UNIX if the C language functions are re-compiled).
Transportation technology and methodology reports
DOT National Transportation Integrated Search
1999-12-22
This Internet site sponsored by the Office of Highway Policy Information provides links to a compilation of PDF reports on transportation technology and methodology. Reports include "FHWA Statistical Programs;" "Nonresponse in Household Travel Survey...
Schedulers with load-store queue awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.
2017-02-07
In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.
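A hedged sketch of the idea in this abstract follows; the data structures, queue capacity, and latencies are illustrative assumptions, not the patented design. A compile-time list scheduler models LSQ occupancy, retires modeled entries as cycles advance, and stalls the issue of a memory instruction whenever the modeled queue would be full.

#include <cstdio>
#include <vector>

struct Instr { const char* name; bool isMem; int latency; };

// Drop modeled LSQ entries that have retired by 'cycle'.
static void retire(std::vector<int>& retireCycles, int cycle) {
    std::vector<int> live;
    for (int r : retireCycles)
        if (r > cycle) live.push_back(r);
    retireCycles.swap(live);
}

int main() {
    const int LSQ_CAPACITY = 2;     // assumed hardware queue size
    std::vector<Instr> order = {
        {"load A", true, 3}, {"load B", true, 3},
        {"load C", true, 3}, {"add",   false, 1},
    };
    std::vector<int> retireCycles;  // cycles at which modeled entries free up

    int cycle = 0;
    for (const Instr& in : order) {
        if (in.isMem) {
            retire(retireCycles, cycle);
            while ((int)retireCycles.size() >= LSQ_CAPACITY) {
                ++cycle;            // stall issue: the modeled LSQ is full
                retire(retireCycles, cycle);
            }
            retireCycles.push_back(cycle + in.latency);
        }
        std::printf("cycle %d: issue %s\n", cycle, in.name);
        ++cycle;                    // one issue slot per cycle
    }
}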
Schedulers with load-store queue awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Tong; Eichenberger, Alexandre E.; Jacob, Arpith C.
2017-01-24
In one embodiment, a computer-implemented method includes tracking a size of a load-store queue (LSQ) during compile time of a program. The size of the LSQ is time-varying and indicates how many memory access instructions of the program are on the LSQ. The method further includes scheduling, by a computer processor, a plurality of memory access instructions of the program based on the size of the LSQ.
United States Air Force Statistical Digest, Fiscal Year 1960. Fifteenth Edition
1960-09-30
USAF CIVILIAN EMPLOYEES IN SALARIED AND WAGE BOARD GROUPS EMPLOYED UNDER MILITARY ASSISTANCE PROGRAM (MAP), AT END OF QUARTER - FY (previous year)... provide summary data on all aspects of the Military Assistance Program administered by the Air Force. The data were compiled from progress reports... Military Assistance. MAP AIRCRAFT - Aircraft in foreign countries provided by the USAF under the Military Assistance Program. AIRCRAFT ATTRITION - Aircraft...
Architecture Adaptive Computing Environment
NASA Technical Reports Server (NTRS)
Dorband, John E.
2006-01-01
Architecture Adaptive Computing Environment (aCe) is a software system that includes a language, compiler, and run-time library for parallel computing. aCe was developed to enable programmers to write programs, more easily than was previously possible, for a variety of parallel computing architectures. Heretofore, it has been perceived to be difficult to write parallel programs for parallel computers and more difficult to port the programs to different parallel computing architectures. In contrast, aCe is supportable on all high-performance computing architectures; currently, it is supported on Linux clusters. aCe uses parallel programming constructs that facilitate the writing of parallel programs. Such constructs were used in single-instruction/multiple-data (SIMD) programming languages of the 1980s, including Parallel Pascal, Parallel Forth, C*, *LISP, and MasPar MPL. In aCe, these constructs are extended and implemented for both SIMD and multiple-instruction/multiple-data (MIMD) architectures. Two new constructs incorporated in aCe are those of (1) scalar and virtual variables and (2) pre-computed paths. The scalar-and-virtual-variables construct increases flexibility in optimizing memory utilization in various architectures. The pre-computed-paths construct enables the compiler to pre-compute part of a communication operation once, rather than computing it every time the communication operation is performed.
Research on computer systems benchmarking
NASA Technical Reports Server (NTRS)
Smith, Alan Jay (Principal Investigator)
1996-01-01
This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was studied in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are summarized in more detail in this report, along with smaller efforts supported by this grant.
Compiler-Directed File Layout Optimization for Hierarchical Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut
File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.
Compiler-Directed File Layout Optimization for Hierarchical Storage Systems
Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...
2013-01-01
File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.
The Science on Saturday Program at Princeton Plasma Physics Laboratory
NASA Astrophysics Data System (ADS)
Bretz, N.; Lamarche, P.; Lagin, L.; Ritter, C.; Carroll, D. L.
1996-11-01
The Science on Saturday Program at Princeton Plasma Physics Laboratory consists of a series of Saturday morning lectures on various topics in science by scientists, engineers, educators, and others with an interesting story. This program has been in existence for over twelve years and has been advertised to and primarily aimed at the high school level. Topics ranging from superconductivity to computer animation and from gorilla conservation to pharmaceutical design have been covered. Lecturers from the staffs of Princeton, Rutgers, AT&T, Bristol-Myers Squibb, and many others have participated. Speakers have included Nobel prize winners, astronauts, industrialists, educators, engineers, and science writers. Typically, there are eight to ten lectures starting in January. A mailing list has been compiled for schools, science teachers, libraries, and museums in the Princeton area. For the past two years AT&T has sponsored buses for Trenton area students to come to these lectures, and an effort has been made to publicize the program to these students. The series has been very popular, frequently overfilling the 300-seat PPPL auditorium. As a result, the lectures are videotaped and broadcast to a large-screen TV for remote viewing. Lecturers are encouraged to interact with the audience, and ample time is provided for questions.
[Activities of voivodeship occupational medicine centers in workplace health promotion in 2008].
Goszczyńska, Eliza
2010-01-01
The paper aims to present the activities of the largest Voivodeship Occupational Medicine Centers (VOMCs) in Poland in the area of workplace health promotion in 2008. It was compiled on the basis of written reports on these activities sent by the Centers to the Polish National Center for Workplace Health Promotion, Nofer Institute of Occupational Medicine, Łódź. Their analysis shows a greatly varied level of engagement in, and understanding of, health promotion, from simple single actions (in the field of health education and screening) to long-running programs that include various ways of reaching the people the programs are addressed to. In 2008, there were 78 such programs in the country; the most popular were those focused on occupational voice disorders and tobacco smoke. VOMCs perceive external factors, such as employers' unfavorable or indifferent attitudes towards promoting the health of their employees and financial constraints, as the most common obstacles to undertaking activities in the field of workplace health promotion. At the same time, they link achievements in this field mostly with their own activities, including effective cooperation with various partners and their well-qualified and experienced employees.
Read buffer optimizations to support compiler-assisted multiple instruction retry
NASA Technical Reports Server (NTRS)
Alewine, N. J.; Fuchs, W. K.; Hwu, W. M.
1993-01-01
Multiple instruction retry is a recovery mechanism for transient processor faults. We previously developed a compiler-assisted approach to multiple instruction retry in which a read buffer of size 2N (where N represents the maximum instruction rollback distance) was used to resolve some data hazards while the compiler resolved the remaining hazards. The compiler-assisted scheme was shown to reduce the performance overhead and/or hardware complexity normally associated with hardware-only retry schemes. This paper examines the size and design of the read buffer. We establish a practical lower bound and average size requirement for the read buffer by modifying the scheme to save only the data required for rollback. The study measures the effect on the performance of a DECstation 3100 running ten application programs using six read buffer configurations with varying read buffer sizes. Two alternative configurations are shown to be the most efficient, and which one depends on whether split-cycle-saves are assumed. Up to a 55 percent read buffer size reduction is achievable, with an average reduction of 39 percent, given the most efficient read buffer configuration and a variety of applications.
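One plausible shape of the read-buffer mechanism, sketched here for illustration only (the paper's actual hardware design differs in detail): values returned by loads are recorded during normal execution so that, after a fault is detected, the last instructions can be re-executed with loads replayed from the buffer rather than from memory, guaranteeing the rolled-back execution sees the same data.

#include <cstdint>
#include <cstdio>
#include <deque>

class ReadBuffer {
    std::deque<uint64_t> saved;  // values returned by the most recent loads
    size_t capacity;             // 2N in the hardware-only scheme
    size_t replayPos = 0;
public:
    explicit ReadBuffer(size_t n) : capacity(2 * n) {}

    void record(uint64_t value) {              // normal execution: log a load
        if (saved.size() == capacity) saved.pop_front();
        saved.push_back(value);
    }
    void startReplay(size_t loads) {           // roll back across 'loads' loads
        replayPos = saved.size() - loads;
    }
    uint64_t replay() { return saved[replayPos++]; }  // re-execution reuses value
};

int main() {
    ReadBuffer rb(4);            // maximum rollback distance N = 4
    rb.record(10); rb.record(20); rb.record(30);
    rb.startReplay(2);           // fault detected; re-run the last two loads
    std::printf("replayed: %llu %llu\n",
                (unsigned long long)rb.replay(), (unsigned long long)rb.replay());
}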
Compiling knowledge-based systems from KEE to Ada
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Bock, Conrad; Feldman, Roy
1990-01-01
The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications; most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBS developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights on Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in the early work, and inspiration for future system development.
Preparing, Submitting, and Tracking a Grant Application
Information compiled by NCI's Epidemiology and Genomics Research Program to help investigators learn more about NIH and NCI information and policies related to writing and submitting new, resubmission, late, and renewal grant applications.
Small Airplane Certification Compliance Program
DOT National Transportation Integrated Search
1997-01-02
This advisory circular (AC) provides a compilation of historically acceptable means of compliance to specifically selected sections of Part 23 of the Federal Aviation Regulations that have become burdensome for small low performance airplanes to show...
2006 Oregon traffic crash summary
DOT National Transportation Integrated Search
2007-06-01
The Crash Analysis and Reporting Unit compiles data for reported motor vehicle traffic crashes occurring : on city streets, county roads and state highways. The data supports various local, county and state traffic : safety programs, engineering and ...
2007 Oregon traffic crash summary
DOT National Transportation Integrated Search
2008-07-01
The Crash Analysis and Reporting Unit compiles data for reported motor vehicle traffic crashes occurring : on city streets, county roads and state highways. The data supports various local, county and state traffic : safety programs, engineering and ...
New Chemicals Program under TSCA Chemical Categories Document
The categories included in this compilation represent chemicals for which sufficient assessment experience has been accumulated so that hazard concerns and testing recommendations vary little from chemical to chemical within the category.
36 CFR § 705.8 - Agreements modifying the terms of this part.
Code of Federal Regulations, 2013 CFR
2013-07-01
... REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN... phonorecords of transmission programs of regularly scheduled newscasts or on-the-spot coverage of news events...
36 CFR 705.8 - Agreements modifying the terms of this part.
Code of Federal Regulations, 2014 CFR
2014-07-01
... REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN... phonorecords of transmission programs of regularly scheduled newscasts or on-the-spot coverage of news events...
36 CFR 705.8 - Agreements modifying the terms of this part.
Code of Federal Regulations, 2012 CFR
2012-07-01
... REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN... phonorecords of transmission programs of regularly scheduled newscasts or on-the-spot coverage of news events...
36 CFR 705.8 - Agreements modifying the terms of this part.
Code of Federal Regulations, 2010 CFR
2010-07-01
... REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN... phonorecords of transmission programs of regularly scheduled newscasts or on-the-spot coverage of news events...
36 CFR 705.8 - Agreements modifying the terms of this part.
Code of Federal Regulations, 2011 CFR
2011-07-01
... REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN... phonorecords of transmission programs of regularly scheduled newscasts or on-the-spot coverage of news events...
2005 Oregon traffic crash summary
DOT National Transportation Integrated Search
2006-06-01
The Crash Analysis and Reporting Unit compiles data for reported motor vehicle traffic crashes occurring : on city streets, county roads and state highways. The data supports various local, county and state traffic : safety programs, engineering and ...
NASA Astrophysics Data System (ADS)
Niwa, M.; Alves, N. C.; Caetano, A. O.; Andrade, N. S. O.
2012-01-01
The recent advent of commercial launch and re-entry activities, promoting the expansion of human access to space for tourism and hypersonic travel in the already complex environment of global space activities, has added difficulties to the development of a harmonized framework of international safety rules. In the present work, with the purpose of providing some complementary elements for global safety rule development, the certification-related activities conducted in the Brazilian space program are described and discussed, focusing mainly on the criterion for compiling a certification basis. The results suggest that composing a certification basis with the preferential use of internationally recognized standards, as is the case with ISO standards, can be a first step toward the development of an international safety regulation for commercial space activities.
Approaching mathematical model of the immune network based DNA Strand Displacement system.
Mardian, Rizki; Sekiyama, Kosuke; Fukuda, Toshio
2013-12-01
One of the biggest obstacles in molecular programming is that there is still no direct method to compile an existing mathematical model into biochemical reactions in order to solve a computational problem. In this paper, the implementation of a DNA Strand Displacement system based on nature-inspired computation is observed. Using the Immune Network Theory and a Chemical Reaction Network, the compilation of DNA-based operations is defined and the formulation of its mathematical model is derived. Furthermore, the implementation of this system is compared with a conventional implementation using silicon-based programming. From the obtained results, we can see a positive correlation between the two. One possible application of this DNA-based model is a decision-making scheme for an intelligent computer or molecular robot. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
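Since the paper builds on the Chemical Reaction Network formalism, a minimal mass-action simulation may help fix ideas. The reaction and all constants below are illustrative, not taken from the paper: a single bimolecular reaction A + B -> C integrated with explicit Euler steps.

#include <cstdio>

// Minimal mass-action ODE simulation of a chemical reaction network, the
// intermediate formalism DNA Strand Displacement systems are typically
// compiled from. Reaction: A + B -> C with assumed rate constant k.
int main() {
    double a = 1.0, b = 0.8, c = 0.0;  // initial concentrations (arbitrary units)
    const double k = 2.0;              // assumed rate constant
    const double dt = 1e-4;            // explicit Euler time step
    for (int step = 0; step <= 50000; ++step) {
        double flux = k * a * b;       // mass-action rate of A + B -> C
        a -= flux * dt;
        b -= flux * dt;
        c += flux * dt;
        if (step % 10000 == 0)
            std::printf("t=%.1f  A=%.4f  B=%.4f  C=%.4f\n", step * dt, a, b, c);
    }
}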
The numerical solution of ordinary differential equations by the Taylor series method
NASA Technical Reports Server (NTRS)
Silver, A. H.; Sullivan, E.
1973-01-01
A programming implementation of the Taylor series method is presented for solving ordinary differential equations. The compiler is written in PL/1, and the target language is FORTRAN IV. The reduction of a differential system to rational form is described along with the procedures required for automatic numerical integration. The Taylor method is compared with two other methods for a number of differential equations. Algorithms using the Taylor method to find the zeroes of a given differential equation and to evaluate partial derivatives are presented. An annotated listing of the PL/1 program which performs the reduction and code generation is given. Listings of the FORTRAN routines used by the Taylor series method are included along with a compilation of all the recurrence formulas used to generate the Taylor coefficients for non-rational functions.
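The heart of such a system is recurrence-based generation of Taylor coefficients. As a hedged illustration in Python (the report's actual implementation is the PL/1-to-FORTRAN compiler described above), the sketch below advances y' = y**2 with y(0) = 1, whose coefficient recurrence follows from a Cauchy product, and compares the result with the exact solution 1/(1 - t).

def taylor_coeffs(y0, order):
    """Taylor coefficients of the solution of y' = y**2 around the current point."""
    c = [y0]
    for k in range(order):
        # Differentiating the series gives (k+1)*c[k+1] = sum_{i+j=k} c[i]*c[j]
        conv = sum(c[i] * c[k - i] for i in range(k + 1))
        c.append(conv / (k + 1))
    return c

def step(y0, h, order=12):
    """One integration step: evaluate the truncated Taylor series at h."""
    return sum(ck * h**k for k, ck in enumerate(taylor_coeffs(y0, order)))

y, t, h = 1.0, 0.0, 0.05
for _ in range(10):
    y, t = step(y, h), t + h
print(t, y, 1.0 / (1.0 - t))   # computed vs. exact solution of y' = y**2, y(0) = 1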
Optimization technique of wavefront coding system based on ZEMAX externally compiled programs
NASA Astrophysics Data System (ADS)
Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua
2016-10-01
In wavefront coding, a technique for athermalizing infrared imaging systems, the design of the phase plate is the key to system performance. This paper applies ZEMAX externally compiled programs to the optimization of the phase mask within the normal optical design process: an evaluation function for the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and optimization speed is improved by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the computing power of the mathematical software to find the optimal parameters of the phase mask; convergence is accelerated with a genetic algorithm (GA); and the dynamic data exchange (DDE) interface between ZEMAX and the mathematical software provides high-speed data exchange. The optimization of a rotationally symmetric phase mask and of a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask and up to 10 times with the cubic phase mask, the MTF becomes markedly more consistent, and the optimized system operates between -40° and 60°. Results show that this optimization method, thanks to its externally compiled functions and DDE, makes it more convenient to define unconventional optimization goals and to rapidly optimize optical systems with special properties, which is of considerable significance for the optimization of unconventional optical systems.
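The merit function can be pictured with a toy model. The hedged Python sketch below (a 1-D scalar pupil with NumPy, not the paper's ZEMAX/DDE machinery; the defocus set and mask strengths are illustrative assumptions) scores a cubic phase mask by how little its MTF varies across defocus, the "consistency of MTF" criterion described above, and does a crude grid search in place of the genetic algorithm.

import numpy as np

x = np.linspace(-1.0, 1.0, 256)        # normalized 1-D pupil coordinate

def mtf(alpha, psi):
    """MTF of a pupil with cubic phase alpha*x**3 and defocus psi*x**2."""
    pupil = np.exp(1j * (alpha * x**3 + psi * x**2))
    otf = np.correlate(pupil, pupil, mode="full")   # pupil autocorrelation
    return np.abs(otf) / np.abs(otf).max()

def merit(alpha, defocus_set=(0.0, 5.0, 10.0)):
    """Smaller is better: total variance of the MTF over the defocus set."""
    curves = np.array([mtf(alpha, psi) for psi in defocus_set])
    return curves.var(axis=0).sum()

alphas = np.linspace(0.0, 60.0, 121)    # crude grid search standing in for the GA
best = min(alphas, key=merit)
print("cubic-mask strength giving the most defocus-invariant MTF:", best)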
Evaluation of the FIR Example using Xilinx Vivado High-Level Synthesis Compiler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zheming; Finkel, Hal; Yoshii, Kazutomo
Compared to central processing units (CPUs) and graphics processing units (GPUs), field programmable gate arrays (FPGAs) have major advantages in reconfigurability and performance achieved per watt. The FPGA development flow has been augmented with a high-level synthesis (HLS) flow that can convert programs written in a high-level programming language to a hardware description language (HDL). Using high-level programming languages such as C, C++, and OpenCL for FPGA-based development could allow software developers, who have little FPGA knowledge, to take advantage of FPGA-based application acceleration. This improves developer productivity and makes FPGA-based acceleration accessible to hardware and software developers. The Xilinx Vivado HLS compiler is a high-level synthesis tool that enables C, C++ and SystemC specifications to be directly targeted into Xilinx FPGAs without the need to create RTL manually. The white paper [1] published recently by Xilinx uses a finite impulse response (FIR) example to demonstrate the variable-precision features in the Vivado HLS compiler and the resource and power benefits of converting floating point to fixed point for a design. To get a better understanding of variable-precision features in terms of resource usage and performance, this report presents the experimental results of evaluating the FIR example using Vivado HLS 2017.1 and a Kintex Ultrascale FPGA. In addition, we evaluated the half-precision floating-point data type against the double-precision and single-precision data types and present the detailed results.
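The float-to-fixed conversion that the white paper credits with resource and power savings can be emulated in software. In the hedged Python sketch below (the tap values and Q-format width are assumptions; Vivado HLS would express this with its arbitrary-precision C types in hardware), quantizing the FIR input to a 10-bit fraction shows the kind of error the narrower type introduces.

import random

TAPS = [-2, 8, 14, 8, -2]     # assumed integer-valued FIR coefficients
FRAC_BITS = 10                # assumed fixed-point fraction width (Q-format)

def fir(x, coeffs):
    """Direct-form FIR: y[n] = sum_k c[k] * x[n-k]."""
    return [sum(c * x[n - k] for k, c in enumerate(coeffs) if n - k >= 0)
            for n in range(len(x))]

def quantize(v):
    """Round a sample to the nearest representable fixed-point value."""
    return round(v * (1 << FRAC_BITS)) / (1 << FRAC_BITS)

random.seed(1)
signal = [random.uniform(-1.0, 1.0) for _ in range(64)]
reference = fir(signal, TAPS)                       # double-precision result
fixed = fir([quantize(v) for v in signal], TAPS)    # fixed-point-input result
print("max abs quantization error:",
      max(abs(a - b) for a, b in zip(reference, fixed)))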
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiZio, S.M.
Various state regulatory agencies have expressed a need for networking with information gatherers/researchers to produce a concise compilation of primary information so that the basis for regulatory standards can be scientifically referenced. California has instituted several programs to retrieve primary information, generate primary information through research, and generate unique regulatory standards by integrating the primary literature and the products of research. This paper describes these programs.
ERIC Educational Resources Information Center
Rosser, Stephen R.; Denton, Jon J.
The development and documentation of procedures to conduct a comprehensive follow-up survey in addition to the compilation of the perceptions of recent graduates regarding the quality of their preparation for teaching were the goals of this investigation. The sample consisted of 196 1973-74 graduates of teacher preparation programs under the aegis…
Compile-Time Schedulability Analysis of Communicating Concurrent Programs
2006-06-28
The processes synchronize via read and write operations on the FIFO channels; these operations have been implemented with the help of semaphores. Front-matter excerpts list sections on synchronous dataflow and Boolean dataflow models, a synchronous dataflow model's topology matrix and repetition vector, and systems described by concurrent programs.
ERIC Educational Resources Information Center
Texas A and M Univ., College Station. Sea Grant Coll. Program.
This bibliography features a compilation of textbooks, curricular materials, and other marine education resource materials developed by individual Sea Grant programs throughout the United States. The listing is intended to be used as a tool for teachers and other individuals interested in helping students explore and understand our oceans and…
ERIC Educational Resources Information Center
Winandy, Donald H.; Marsh, Robert
This is a comprehensive, up-dated directory of professional personnel of state higher education governing or coordinating agencies and state commissions for the administration of certain federal programs relating to higher education. The directory, compiled from questionnaires, originated from a need expressed by many persons in the field for…
ERIC Educational Resources Information Center
Zimmerman, Enid, Ed.
This book is a compilation of year-long thematic curriculum units developed and taught by teachers participating in the third Indiana University Artistically Talented Program (ATP). Units for artistically gifted and talented students, grade 4-12, are developed along guidelines which require that they: focus on complex ideas; use themes as…
VTAE Facts, February 1994. Wisconsin Board of Vocational, Technical, and Adult Education.
ERIC Educational Resources Information Center
Wisconsin State Board of Vocational, Technical, and Adult Education, Madison.
Compiled by the Wisconsin Board of Vocational, Technical, and Adult Education (VTAE), this fact book presents information on enrollments, financing, programs, and staffing in the state's VTAE programs from 1983 to 1993. Enrollment data is presented in three tables: 1983-93 VTAE headcount enrollment by aid category; 1983-93 full-time equivalent…
ERIC Educational Resources Information Center
Nelson, Kevin
This publication highlights national and regional foundations that are most likely to fund colleges and universities to perform activities similar to those undertaken by the Office of University Partnerships' Community Outreach Partnership Center Program (COPC) of the U.S. Department of Housing and Urban Development. The COPC Program provides…
Certain Characteristics of iSchools Compared to Other LIS Programs
ERIC Educational Resources Information Center
Wedgeworth, Robert
2013-01-01
This dissertation compares 17 iSchools and 36 other LIS schools that offer the ALA-accredited Master's degree program according to certain characteristics. The study compiles quantitative and qualitative data on 32 variables and sub-variables drawn from the schools' web sites, ALISE 2010 Statistical Report, and Elsevier's SCOPUS…
Emetic and Electric Shock Alcohol Aversion Therapy: Six- and Twelve-Month Follow-Up.
ERIC Educational Resources Information Center
Cannon, Dale S.; Baker, Timothy B.
1981-01-01
Follow-up data are presented for 6- and 12-months on male alcoholics (N=20) who received either a multifaceted inpatient alcoholism treatment program alone (controls) or emetic or shock aversion therapy in addition to that program. Both emetic and control subjects compiled more days of abstinence than shock subjects. (Author)
ERIC Educational Resources Information Center
Maryland State Dept. of Education, Baltimore.
The 1973 statewide (Maryland) educational accountability plan, for which this report was compiled, called for the development and establishment of statewide and local goals in reading, writing, and mathematics; a comprehensive and uniform statewide testing program; procedures for collecting data on student, home, community, and school…
ERIC Educational Resources Information Center
Seppanen, Loretta
Each year, the Washington State Board for Community and Technical Colleges (SBCTC) compiles data on educational and job related outcomes for graduates of vocational preparation programs. The automated data matching procedure examines state unemployment insurance and benefits records, public post-secondary enrollments, U.S. Armed Forces…
A comparison of two rough mill cutting models
Steven Ruddell; Henry Huber; Powsiri Klinkhachorn
1990-01-01
A comparison of lumber yield using the Automated Lumber Processing System (ALPS) Cutting Program and the Optimal Furniture Cutting Program (OFCP) was conducted on eight cutting bills. No.1 Common grade hard maple data files were compiled using a board database collected and used by the USDA Forest Service's Forest Products Laboratory to develop standard hardwood...
ERIC Educational Resources Information Center
Smith, Jack E.
An educational needs assessment of the seven-community service area of Los Angeles Harbor College (California) was conducted to identify sources of demographic information, and to analyze and compile this information to provide a resource for both college and community committees in drafting plans for an expanded outreach program. A four-point…
Designing Day Care: A Resource Manual for Development of Child Care Services.
ERIC Educational Resources Information Center
Jones, Jacquelyn O.
Compiled to promote the development of high quality, affordable, and accessible day care programs in West Tennessee, this manual helps prospective child caregivers decide which kind of day care to operate and describes start-up steps and program operation. Section 1 focuses on five basic questions of potential caregivers: (1) Which type of child…
Manufacturing Methods and Technology (MMT) project execution report
NASA Astrophysics Data System (ADS)
Swim, P. A.
1982-10-01
This document is a summary compilation of the manufacturing methods and technology program project status reports (RCS DRCMT-301) submitted to IBEA by DARCOM major Army subcommands and project managers. Each page of the computerized section lists project number, title, status, funding, and projected completion date. Summary pages give information relating to the overall DARCOM program.
Digest of Adult Education Statistics--1998.
ERIC Educational Resources Information Center
Elliott, Barbara G.
Information on literacy programs for adults in the United States was compiled from the annual statistical performance reports states submit to the U.S. Department of Education at the end of each program year (PY). Nearly 27 percent of adults had not earned a high school diploma or its equivalent. In PY 1991, the nation's adult education (AE)…
ERIC Educational Resources Information Center
National Education Association, Washington, DC. Center for Human Relations.
This publication is a compilation of speeches, seminar summaries, and participant reactions and recommendations from the Ninth Annual NEA-CHR Conference printed in both English and Spanish. The conference was designed to present the concept of cultural pluralism and to suggest ways of implementing this concept in instructional programs. The…
Research reports: 1985 NASA/ASEE Summer Faculty Fellowship Program
NASA Technical Reports Server (NTRS)
Karr, G. R. (Editor); Osborn, T. L. (Editor); Dozier, J. B. (Editor); Freeman, L. M. (Editor)
1986-01-01
A compilation of 40 technical reports on research conducted by participants in the 1985 NASA/ASEE Summer Faculty Fellowship Program at Marshall Space Flight Center (MSFC) is given. Weibull density functions, reliability analysis, directional solidification, space stations, jet stream, fracture mechanics, composite materials, orbital maneuvering vehicles, stellar winds and gamma ray bursts are among the topics discussed.
A research program to reduce the interior noise in general aviation aircraft, index and summary
NASA Technical Reports Server (NTRS)
Morgan, L.; Jackson, K.; Roskam, J.
1985-01-01
This report is an index of the published works from NASA Grant NSG 1301, entitled A Research Program to Reduce the Interior Noise in General Aviation Aircraft. Included are a list of all published reports and papers, a compilation of test specimen characteristics, and summaries of each published work.
ERIC Educational Resources Information Center
Nichols, Joe D.; Soe, Kyaw
2013-01-01
This qualitative examination of preservice teachers' experiences as they volunteered for a literacy program for immigrant students was compiled over the 2010-2011 academic year. The data sources for this project consisted of 90 written journal reflections analyzed by both researchers to develop thematic categories of the participants' comments and…
ERIC Educational Resources Information Center
Tao, Fumiyo; And Others
This volume contains technical and supporting materials that supplement Volume I, which describes upward mobility programs for disadvantaged and dislocated workers in the service sector. Appendix A is a detailed description of the project methodology, including data collection methods and information on data compilation, processing, and analysis.…
Research flight software engineering and MUST, an integrated system of support tools
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Foudriat, E. C.; Will, R. W.
1977-01-01
Consideration is given to software development to support NASA flight research. The Multipurpose User-Oriented Software Technology (MUST) program, designed to integrate digital systems into flight research, is discussed. Particular attention is given to the program's special interactive user interface, subroutine library, assemblers, compiler, automatic documentation tools, and test and simulation subsystems.
Creating the Action Model for High Risk Infant Follow Up Program in Iran
Heidarzadeh, Mohammad; Jodiery, Behzad; Mirnia, Kayvan; Akrami, Forouzan; Hosseini, Mohammad Bagher; Heidarabadi, Seifollah; HabibeLahi, Abbas
2013-01-01
Abstract Background Intervention in early childhood development, as one of the social determinants of health, is important for reducing social gaps and inequity. In spite of the increasing development of neonatal intensive care wards and a decreasing neonatal mortality rate, there is no follow-up program in Iran. This study was carried out in 2012 to design a high-risk infant follow-up care program, with the practical aim of creating an action model for the whole country. Methods This qualitative study was done by the Neonatal Department of the Deputy of Public Health in cooperation with the Pediatrics Health Research Center of Tabriz University of Medical Sciences, Iran. After a study of international documents, consensus on a program adapted for Iran was reached through focus group discussions and an attended Delphi agreement technique. After compiling a primary draft that included evidence-based guidelines and an executive plan, 14 expert panel sessions were held to finalize the program. Results After finalization, a high-risk infant follow-up care service package was designed in 3 chapters: evidence-based clinical guidelines (eighteen main clinical guidelines and thirteen subsidiary clinical guidelines); an executive plan (6 general, 6 follow-up, and 5 backup processes); and an education program, comprising general and special courses for caregivers and the follow-up team, and family education processes. Conclusion We designed and finalized a high-risk infant follow-up care service package, which appears to open the way to extending it to the whole country. PMID:26171344
Domain Specific Language Support for Exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellor-Crummey, John
A multi-institutional project known as D-TEC (short for “Domain-specific Technology for Exascale Computing”) set out to explore technologies to support the construction of Domain Specific Languages (DSLs) to map application programs to exascale architectures. DSLs employ automated code transformation to shift the burden of delivering portable performance from application programmers to compilers. Two chief properties contribute: DSLs permit expression at a high level of abstraction so that a programmer’s intent is clear to a compiler, and DSL implementations encapsulate human domain-specific optimization knowledge so that a compiler can be smart enough to achieve good results on specific hardware. Domain specificity is what makes these properties possible in a programming language. If leveraging domain specificity is the key to keeping exascale software tractable, a corollary is that many different DSLs will be needed to encompass the full range of exascale computing applications; moreover, a single application may well need to use several different DSLs in conjunction. As a result, developing a general toolkit for building domain-specific languages was a key goal for the D-TEC project. Different aspects of the D-TEC research portfolio were the focus of work at each of the partner institutions in the multi-institutional project. D-TEC research and development work at Rice University focused on three principal topics: understanding how to automate the tuning of code for complex architectures, research and development of the Rosebud DSL engine, and compiler technology to support complex execution platforms. This report provides a summary of the research and development work on the D-TEC project at Rice University.
A new version of the CADNA library for estimating round-off error propagation in Fortran programs
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie; Lamotte, Jean-Luc
2010-11-01
The CADNA library enables one to estimate, using a probabilistic approach, round-off error propagation in any simulation program. CADNA provides new numerical types, the so-called stochastic types, on which round-off errors can be estimated. Furthermore, CADNA contains the definition of arithmetic and relational operators which are overloaded for stochastic variables and the definition of mathematical functions which can be used with stochastic arguments. On 64-bit processors, depending on the rounding mode chosen, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs. Therefore the CADNA library has been improved to enable the numerical validation of programs on 64-bit processors.

New version program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 28 488
No. of bytes in distributed program, including test data, etc.: 463 778
Distribution format: tar.gz
Programming language: Fortran (a C++ version of this program is available in the Library as AEGQ_v1_0)
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 6.5
Catalogue identifier of previous version: AEAT_v1_0
Journal reference of previous version: Comput. Phys. Commun. 178 (2008) 933
Does the new version supersede the previous version?: Yes
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1-3] implements Discrete Stochastic Arithmetic [4,5], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Reasons for new version: On 64-bit processors, the mathematical library associated with the GNU Fortran compiler may provide incorrect results or generate severe bugs with rounding towards -∞ and +∞, on which the random rounding mode is based. Therefore a particular definition of mathematical functions for stochastic arguments has been included in the CADNA library to enable its use with the GNU Fortran compiler on 64-bit processors.
Summary of revisions: If CADNA is used on a 64-bit processor with the GNU Fortran compiler, mathematical functions are computed with rounding to the nearest; otherwise they are computed with the random rounding mode. It must be pointed out that knowledge of the accuracy of the stochastic argument of a mathematical function is never lost.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used; only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Additional comments: In the library archive, users are advised to read the INSTALL file first. The doc directory contains a user guide named ug.cadna.pdf which shows how to control the numerical accuracy of a program using CADNA, provides installation instructions and describes test runs. The source code, which is located in the src directory, consists of one assembly language file (cadna_rounding.s) and eighteen Fortran language files. cadna_rounding.s is a symbolic link to the assembly file corresponding to the processor and the Fortran compiler used. This assembly file contains routines which are frequently called in the CADNA Fortran files to change the rounding mode. The Fortran language files contain the definition of the stochastic types on which the control of accuracy can be performed, CADNA-specific functions (for instance to enable or disable the detection of numerical instabilities), the definition of arithmetic and relational operators which are overloaded for stochastic variables, and the definition of mathematical functions which can be used with stochastic arguments. The examples directory contains seven test runs which illustrate the use of the CADNA library and the benefits of Discrete Stochastic Arithmetic.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
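The principle behind Discrete Stochastic Arithmetic can be mimicked outside Fortran. The hedged Python sketch below is a toy emulation, not the CADNA implementation (real random rounding flips the hardware rounding mode rather than multiplying by 1 ± one ulp): it runs a perturbed computation three times and estimates how many significant digits the samples share.

import math, random

def perturb(v, ulp=2**-52):
    """Emulate random rounding: nudge a result up or down by about one ulp."""
    return v * (1.0 + random.choice((-1.0, 1.0)) * ulp)

def computation(n=100000):
    """Harmonic sum minus log n (tends to Euler's constant), every op perturbed."""
    s = 0.0
    for i in range(1, n + 1):
        s = perturb(s + perturb(1.0 / i))
    return perturb(s - perturb(math.log(n)))

samples = [computation() for _ in range(3)]
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
# digits common to the samples estimate the exact significant digits
digits = math.log10(abs(mean) / spread) if spread else 15.0
print(samples)
print("estimated exact significant digits:", round(digits, 1))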
Mechanics of Textile Composites Conference
NASA Technical Reports Server (NTRS)
Poe, Clarence C. (Editor); Harris, Charles E. (Editor)
1995-01-01
This document is a compilation of papers presented at the Mechanics of Textile Composites Conference in Hampton, Virginia, December 6-8, 1994. This conference was the culmination of a 3-year program that was initiated by NASA late in 1990 to develop mechanics of textile composites in support of the NASA Advance Composites Technology Program (ACT). The goal of the program was to develop mathematical models of textile preform materials and test methods to facilitate structural analysis and design. Participants in the program were from NASA, academia, and industry.
NASA Astrophysics Data System (ADS)
Kondayya, Gundra; Shukla, Alok
2012-03-01
The Pariser-Parr-Pople (P-P-P) model Hamiltonian is frequently employed to study the electronic structure and optical properties of π-conjugated systems. In this paper we describe a Fortran 90 computer program which uses the P-P-P model Hamiltonian to solve the Hartree-Fock (HF) equation for infinitely long, one-dimensional, periodic, π-electron systems. The code is capable of computing the band structure, as well as the linear optical absorption spectrum, using the tight-binding and the HF methods. Furthermore, using our program the user can solve the HF equation in the presence of a finite external electric field, thereby allowing the simulation of gated systems. We apply our code to compute various properties of polymers such as trans-polyacetylene, poly-para-phenylene, and armchair and zigzag graphene nanoribbons, in the infinite length limit.

Program summary
Program title: ppp_bulk.x
Catalogue identifier: AEKW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 87 464
No. of bytes in distributed program, including test data, etc.: 2 046 933
Distribution format: tar.gz
Programming language: Fortran 90
Computer: PCs and workstations
Operating system: Linux. Code was developed and tested on various recent versions of 64-bit Fedora, including Fedora 14 (kernel version 2.6.35.12-90).
Classification: 7.3
External routines: This program needs to link with LAPACK/BLAS libraries compiled with the same compiler as the program. For the Intel Fortran Compiler we used the ACML library version 4.4.0, while for the gfortran compiler we used the libraries supplied with the Fedora distribution.
Nature of problem: The electronic structure of one-dimensional periodic π-conjugated systems is an intense area of research at present because of the tremendous interest in the physics of conjugated polymers and graphene nanoribbons. The computer program described in this paper provides an efficient way of solving the Hartree-Fock equations for such systems within the P-P-P model. In addition to the Bloch orbitals, band structure, and the density of states, the program can also compute quantities such as the linear absorption spectrum and the electro-absorption spectrum of these systems.
Solution method: For a one-dimensional periodic π-conjugated system lying in the xy-plane, the single-particle Bloch orbitals are expressed as linear combinations of p-orbitals of individual atoms. Then, using various parameters defining the P-P-P Hamiltonian, the Hartree-Fock equations are set up as a matrix eigenvalue problem in k-space. Thereby, its solutions are obtained in a self-consistent manner, using the iterative diagonalization technique at several k points. The band structure and the corresponding Bloch orbitals thus obtained are used to perform a variety of calculations such as the density of states, linear optical absorption spectrum, electro-absorption spectrum, etc.
Running time: Most of the examples provided take only a few seconds to run. For a large system, however, depending on the system size, the run time may be a few minutes to a few hours.
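As a minimal, hedged illustration of the band-structure piece only (a single π orbital per unit cell with nearest-neighbour hopping, far simpler than the self-consistent P-P-P Hartree-Fock problem the program solves; the hopping integral is a typical textbook value, not a parameter of ppp_bulk.x):

import math

t = 2.4   # assumed nearest-neighbour hopping integral in eV
a = 1.0   # lattice constant in arbitrary units

def band(k):
    """Tight-binding dispersion E(k) = -2*t*cos(k*a) for a 1-D chain."""
    return -2.0 * t * math.cos(k * a)

nk = 8
for i in range(nk + 1):                     # sample the first Brillouin zone
    k = -math.pi / a + i * 2.0 * math.pi / (a * nk)
    print(f"k = {k:+.3f}   E = {band(k):+.3f} eV")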
An IBM 370 assembly language program verifier
NASA Technical Reports Server (NTRS)
Maurer, W. D.
1977-01-01
The paper describes a program written in SNOBOL which verifies the correctness of programs written in assembly language for the IBM 360 and 370 series of computers. The motivation for using assembly language as a source language for a program verifier was the realization that many errors in programs are caused by misunderstanding or ignorance of the characteristics of specific computers. The proof of correctness of a program written in assembly language must take these characteristics into account. The program has been compiled and is currently running at the Center for Academic and Administrative Computing of The George Washington University.
Development a computer codes to couple PWR-GALE output and PC-CREAM input
NASA Astrophysics Data System (ADS)
Kuntjoro, S.; Budi Setiawan, M.; Nursinta Adi, W.; Deswandri; Sunaryo, G. R.
2018-02-01
Radionuclide dispersion analysis is an important part of reactor safety analysis. From it, the doses received by radiation workers and by communities around a nuclear reactor can be obtained. Radionuclide dispersion analysis under normal operating conditions is carried out with the PC-CREAM code, which requires input data such as the source term and the population distribution. The source term is derived from the output of another program, PWR-GALE, and the population distribution data must be written in a specific format. Compiling PC-CREAM inputs manually requires great care, since it involves large amounts of data in fixed formats, and manual compilation often introduces errors. To minimize such errors, this work created a coupling program between PWR-GALE output and PC-CREAM input, together with a program for writing population distribution data in the required format. The programming was done in Python, which has the advantages of being multiplatform, object-oriented, and interactive. The result is software that couples the source-term data and writes the population distribution data, so that PC-CREAM inputs can be prepared easily, in the desired format, and without formatting errors.
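The abstract does not reproduce the file formats, so the Python sketch below shows only the shape of the coupling idea; the file names, the two-column source-term layout, the fixed-width output, and the unit conversion are all assumptions for illustration, not the actual PWR-GALE or PC-CREAM formats.

def read_gale_source_term(path):
    """Parse assumed 'nuclide  release' pairs from a PWR-GALE output listing."""
    terms = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                nuclide, value = parts
                terms[nuclide] = float(value)     # release rate in Ci/yr (assumed)
    return terms

def write_cream_input(terms, path):
    """Write the source term in a fixed-width layout for the dose code to ingest."""
    with open(path, "w") as f:
        for nuclide, ci_per_yr in sorted(terms.items()):
            bq_per_s = ci_per_yr * 3.7e10 / 3.1536e7   # Ci/yr -> Bq/s
            f.write(f"{nuclide:<8s}{bq_per_s:12.4E}\n")

write_cream_input(read_gale_source_term("pwr_gale.out"), "pc_cream.inp")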
Balcázar, Héctor; Alvarado, Matilde; Hollen, Mary Luna; Gonzalez-Cruz, Yanira; Pedregón, Verónica
2005-07-01
In 2001, the National Heart, Lung, and Blood Institute partnered with the National Council of La Raza to conduct a pilot test of its community-based outreach program Salud Para Su Corazón (Health for Your Heart), which aims to reduce the burden of morbidity and mortality associated with cardiovascular disease among Latinos. The effectiveness of promotores de salud (community health workers) in improving heart-healthy behaviors among Latino families participating in the pilot program at seven sites was evaluated. Data on the characteristics of the promotores in the Salud Para Su Corazón program were compiled. Promotores collected data on family risk factors, health habits, referrals and screenings, information sharing, and program satisfaction from 223 participating Latino families (320 individual family members) through questionnaires. Paired t tests and chi-square tests were used to measure pretest-posttest differences among program participants. Results demonstrated the effectiveness of the promotora model in improving heart-healthy behaviors, promoting community referrals and screenings, enhancing information sharing beyond families, and satisfying participants' expectations of the program. The main outcome of interest was the change in heart-healthy behaviors among families. The community outreach model worked well in the seven pilot programs because of the successes of the promotores and the support of the community-based organizations. Successes stemmed in part from the train-the-trainer approach. Promotoria, as implemented in this program, has the potential to be integrated with a medical model of patient care for primary, secondary, and tertiary prevention.
KWOC (Key-Word-Out-of-Context) Index of US Nuclear Regulatory Commission Regulatory Guide Series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, S.D.
1990-04-01
To meet the objectives of the program funded by the Department of Energy (DOE)-Nuclear Energy (NE) Technology Support Programs, the Performance Assurance Project Office (PAPO) administers a Performance Assurance Information Program that collects, compiles, and distributes program-related information, reports, and publications for the benefit of the DOE-NE program participants. The "KWOC Index of US Nuclear Regulatory Commission Regulatory Guide Series" is prepared as an aid in searching for specific topics in the US Nuclear Regulatory Commission Regulatory Guide Series.
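The mechanics of a KWOC index are simple enough to sketch. In the illustrative Python fragment below (the stopword list and the two sample titles are assumptions, not drawn from the actual index), every significant word of a title becomes a sort key printed out of context, with the full title retained alongside it:

STOPWORDS = {"of", "the", "and", "for", "a", "an", "in", "on", "to", "us"}

def kwoc(titles):
    """Build (KEYWORD, full title) pairs for every non-stopword in each title."""
    index = []
    for title in titles:
        for word in title.split():
            key = word.strip(",.").lower()
            if key not in STOPWORDS:
                index.append((key.upper(), title))
    return sorted(index)

guides = ["Quality Assurance Program Requirements",            # sample titles only
          "Instrumentation for Water-Cooled Nuclear Power Plants"]
for keyword, context in kwoc(guides):
    print(f"{keyword:<18s}{context}")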
Mount, D W; Conrad, B
1986-01-01
We have previously described programs for a variety of types of sequence analysis (1-4). These programs have now been integrated into a single package. They are written in the standard C programming language and run on virtually any computer system with a C compiler, such as the IBM/PC and other computers running under the MS/DOS and UNIX operating systems. The programs are widely distributed and may be obtained from the authors as described below. PMID:3753780
NASA Technical Reports Server (NTRS)
Vaughn, Charles R.
1993-01-01
This Technical Memorandum is a user's manual with additional program documentation for the computer program PREROWS2.EXE. PREROWS2 works with data collected by an ocean wave spectrometer that uses radar (ROWS) as an active remote sensor. The original ROWS data acquisition subsystem was replaced with a PC in 1990. PREROWS2.EXE is a compiled QuickBasic 4.5 program that unpacks the recorded data, displays various variables, and provides for copying blocks of data from the original 8mm tape to a PC file.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kartsaklis, Christos; Hernandez, Oscar R
Interrogating the structure of a program for patterns of interest is attractive to the broader spectrum of software engineering. The very approach by which a pattern is constructed remains a concern for the source code mining community. This paper presents a pattern programming model, for the C and Fortran programming languages, using a compiler directives approach. We discuss our specification, called HERCULES/PL, through a number of examples, show how different patterns can be constructed, and present some preliminary results.
Electron/proton spectrometer certification documentation analyses
NASA Technical Reports Server (NTRS)
Gleeson, P.
1972-01-01
A compilation of analyses generated during the development of the electron-proton spectrometer for the Skylab program is presented. The data documents the analyses required by the electron-proton spectrometer verification plan. The verification plan was generated to satisfy the ancillary hardware requirements of the Apollo Applications program. The certification of the spectrometer requires that various tests, inspections, and analyses be documented, approved, and accepted by reliability and quality control personnel of the spectrometer development program.
A communication link between the GIM data base and a general application program
NASA Technical Reports Server (NTRS)
Argo, W. V.
1972-01-01
Utilizing the extract verb of GIM causes the requested information to be extracted from the GIM data base and written onto tape. When the GIM extract completes, a FORTRAN program is then compiled and executed. This program reads the tape generated by GIM, then formats and prints the extracted data on the line printer. When an end of file on the extracted tape is encountered, the job is terminated.
DOT National Transportation Integrated Search
2009-08-01
This is a guide compiled by the staff of the Iowa Department of Transportation to help local governments, organizations and individuals with preliminary searches for funding assistance from the DOT. Programs that fit more than one grouping are listed...
2003 Oregon traffic crash summary
DOT National Transportation Integrated Search
2004-10-01
The Crash Analysis and Reporting Unit compiles data for reported motor vehicle traffic crashes occurring on city streets, county roads and state highways. The data supports various local, county and state traffic safety programs, engineering and plan...
Region 9 Tribal Grant Program - Project Officer and Tribal Contact Information Map Service
This compilation of geospatial data is for the purpose of managing and communicating information about current EPA project officers, tribal contacts, and tribal grants, both internally and with external stakeholders.
Guidelines for preparation of state water-use estimates for 2000
Kenny, Joan F.
2004-01-01
This report describes the water-use categories and data elements required for the 2000 national water-use compilation conducted by the U.S. Geological Survey (USGS) as part of its National Water Use Information Program. It identifies sources of water-use information, guidelines for estimating water use, and required documentation for preparation of the national compilation by State for the United States, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands. The data are published in USGS Circular 1268, Estimated Use of Water in the United States in 2000. USGS has published circulars on estimated use of water in the United States at 5-year intervals since 1950. As part of this USGS program to document water use on a national scale for the year 2000, all States prepare estimates of water withdrawals for public supply, industrial, irrigation, and thermoelectric power generation water uses at the county level. All States prepare estimates of domestic use and population served by public supply at least at the State level. All States provide estimates of irrigated acres by irrigation system type (sprinkler, surface, or microirrigation) at the county level. County-level estimates of withdrawals for mining, livestock, and aquaculture uses are compiled by selected States that comprised the largest percentage of national use in 1995 for these categories, and are optional for other States. Ground-water withdrawals for public-supply, industrial, and irrigation use are aggregated by principal aquifer or aquifer system, as identified by the USGS Office of Ground Water. Some categories and data elements that were mandatory in previous compilations are optional for the 2000 compilation, in response to budget considerations at the State level. Optional categories are commercial, hydroelectric, and wastewater treatment. Estimation of deliveries from public supply to domestic, commercial, industrial, and thermoelectric uses, consumptive use for any category, and irrigation conveyance loss are optional data elements. Aggregation of data by the eight-digit hydrologic cataloging unit is optional. Water-use data compiled by the States are stored in the USGS Aggregated Water-Use Data System (AWUDS). This database is designed to store both mandatory and optional data elements. AWUDS contains several routines that can be used for quality assurance and quality control of the data, and also produces tables of water-use data compiled for 1985, 1990, 1995, and 2000. These water-use data are used by USGS, other agencies, organizations, academic institutions, and the public for research, water-management decisions, trend analysis, and forecasting.
Vu, Michelle; White, Annesha; Kelley, Virginia P; Hopper, Jennifer Kuca; Liu, Cathy
2016-07-01
The Affordable Care Act (ACA) healthcare reforms, centered on achieving the Centers for Medicare & Medicaid Services (CMS) Triple Aim goals of improving patient care quality and satisfaction, improving population health, and reducing costs, have led to increasing partnerships between hospitals and insurance companies and the implementation of employee wellness programs. Hospitals and insurance companies have opted to partner to distribute the risk and resources and increase coordination of care. To examine the ACA's impact on the health and wellness programs that have resulted from the joint ventures of hospitals and health plans based on the published literature. We conducted a review of the literature to identify successful mergers and best practices of health and wellness programs. Articles published between January 2007 and January 2015 were compiled from various search engines, using the search terms "corporate," "health and wellness program," "health plan," "insurance plan," "hospital," "joint venture," and "vertical merger." Publications that described consolidations or wellness programs not tied to health insurance plans were excluded. Noteworthy characteristics of these programs were summarized and tabulated. A total of 44 eligible articles were included in the analysis. The findings showed that despite rising healthcare costs, joint ventures prevent hospitals from trading-off quality and services for cost reductions. Administrators believed that partnering would allow the companies to meet ACA standards for improving clinical outcomes at reduced costs. Before the implementation of the ACA, some employers had wellness programs, but these were not standardized and did not need to produce measurable results. The ACA encouraged improvement of employee wellness programs by providing funding for expanded health services and by mandating quality care. Successful workplace health and wellness programs have varying components, but all include monetary incentives and documented outcomes. The concurrent growth of hospital health plans (especially those emerging from vertical mergers and partnerships) and wellness programs in the United States provides a unique opportunity for employees and patient populations to promote wellness and achieve the Triple Aim goals as initiated by CMS.
Swan: A tool for porting CUDA programs to OpenCL
NASA Astrophysics Data System (ADS)
Harvey, M. J.; De Fabritiis, G.
2011-04-01
The use of modern, high-performance graphical processing units (GPUs) for acceleration of scientific computation has been widely reported. The majority of this work has used the CUDA programming model supported exclusively by GPUs manufactured by NVIDIA. An industry standardisation effort has recently produced the OpenCL specification for GPU programming. This offers the benefits of hardware independence and reduced dependence on proprietary tool-chains. Here we describe a source-to-source translation tool, "Swan", for facilitating the conversion of an existing CUDA code to use the OpenCL model, as a means to aid programmers experienced with CUDA in evaluating OpenCL and alternative hardware. While the performance of equivalent OpenCL and CUDA code on fixed hardware should be comparable, we find that a real-world CUDA application ported to OpenCL exhibits an overall 50% increase in runtime, a reduction in performance attributable to the immaturity of contemporary compilers. The ported application is shown to have platform independence, running on both NVIDIA and AMD GPUs without modification. We conclude that OpenCL is a viable platform for developing portable GPU applications but that the more mature CUDA tools continue to provide best performance.

Program summary
Program title: Swan
Catalogue identifier: AEIH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public License version 2
No. of lines in distributed program, including test data, etc.: 17 736
No. of bytes in distributed program, including test data, etc.: 131 177
Distribution format: tar.gz
Programming language: C
Computer: PC
Operating system: Linux
RAM: 256 Mbytes
Classification: 6.5
External routines: NVIDIA CUDA, OpenCL
Nature of problem: Graphical Processing Units (GPUs) from NVIDIA are preferentially programmed with the proprietary CUDA programming toolkit. An alternative programming model promoted as an industry standard, OpenCL, provides similar capabilities to CUDA and is also supported on non-NVIDIA hardware (including multicore x86 CPUs, AMD GPUs and IBM Cell processors). The adaptation of a program from CUDA to OpenCL is relatively straightforward but laborious. The Swan tool facilitates this conversion.
Solution method: Swan performs a translation of CUDA kernel source code into an OpenCL equivalent. It also generates the C source code for entry point functions, simplifying kernel invocation from the host program. A concise host-side API abstracts the CUDA and OpenCL APIs. A program adapted to use Swan has no dependency on the CUDA compiler for the host-side program. The converted program may be built for either CUDA or OpenCL, with the selection made at compile time.
Restrictions: No support for CUDA C++ features.
Running time: Nominal
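A hedged Python sketch conveys the flavor of the source-to-source rewriting such a tool performs (a handful of standard CUDA-to-OpenCL token correspondences; the real Swan tool also parses kernel signatures, inserts address-space qualifiers, and generates host-side entry points):

import re

CUDA_TO_OPENCL = [
    (r"\b__global__\b",       "__kernel"),
    (r"\b__shared__\b",       "__local"),
    (r"\b__syncthreads\(\)",  "barrier(CLK_LOCAL_MEM_FENCE)"),
    (r"\bthreadIdx\.x\b",     "get_local_id(0)"),
    (r"\bblockIdx\.x\b",      "get_group_id(0)"),
    (r"\bblockDim\.x\b",      "get_local_size(0)"),
]

def translate(cuda_src):
    """Apply the token substitutions to a CUDA kernel body."""
    out = cuda_src
    for pattern, replacement in CUDA_TO_OPENCL:
        out = re.sub(pattern, replacement, out)
    return out

kernel = """__global__ void scale(float *v, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    v[i] = s * v[i];
}"""
print(translate(kernel))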
1990-04-23
Host/target configuration: CSC-developed Ada Real-Time Operating System (ARTOS) for bare machine environments (target), ACW 1.1I0; memory size: 4MB. Test method: testing of the MC Ada V1.2.beta / Concurrent Computer Corporation compiler and the CSC-developed Ada Real-Time Operating System (ARTOS) for bare machine environments. Subject terms: Ada programming language, Ada...
Interface for the documentation and compilation of a library of computer models in physiology.
Summers, R. L.; Montani, J. P.
1994-01-01
A software interface for the documentation and compilation of a library of computer models in physiology was developed. The interface is an interactive program built within a word processing template in order to provide ease and flexibility of documentation. A model editor within the interface directs the model builder as to standardized requirements for incorporating models into the library and provides the user with an index to the levels of documentation. The interface and accompanying library are intended to facilitate model development, preservation and distribution and will be available for public use. PMID:7950046
Microgravity science and applications bibliography, 1984 revision
NASA Technical Reports Server (NTRS)
Pentecost, E.
1984-01-01
A compilation is presented of Government reports, contractor reports, conference proceedings, and journal articles dealing with flight experiments that utilize a low-gravity environment to elucidate and control various processes, or with ground-based activities that provide supporting research. Subdivisions include six major categories: (1) Electronic Materials; (2) Metals, Alloys, and Composites; (3) Fluid Dynamics and Transports; (4) Biotechnology; (5) Glasses and Ceramics; and (6) Combustion. Also included are publications from the European, Soviet, and Japanese MSA programs. In addition, there is a list of patents and appendices providing a compilation of anonymously authored reports and a cross-reference index.
Preventing Run-Time Bugs at Compile-Time Using Advanced C++
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neswold, Richard
When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.
1988-03-28
International Business Machines Corporation, IBM Development System for the Ada Language, Version 2.1.0; host: IBM 4381 under VM/HPO; target: IBM 4381 under MVS/XA. The declaration, submitted to the Ada Joint Program Office (AJPO), attests implementation of Standard ANSI/MIL-STD-1815A in the compiler listed and declares that International Business Machines Corporation is the owner of record.
Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning
NASA Technical Reports Server (NTRS)
Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri
1991-01-01
Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on iPSC/860 to demonstrate the usefulness of our methods.
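The copy-reuse mechanism can be pictured as a cache keyed by global index, as in the illustrative Python sketch below (real inspector/executor schedules batch the misses into vectorized messages rather than fetching one element at a time):

class OffProcessorCache:
    """Buffer copies of off-processor values so later loops can reuse them."""

    def __init__(self, fetch):
        self.fetch = fetch      # function: global index -> remote value
        self.copies = {}        # global index -> locally buffered copy

    def gather(self, indices):
        """Fetch only the off-processor values not already buffered."""
        for i in indices:
            if i not in self.copies:
                self.copies[i] = self.fetch(i)
        return [self.copies[i] for i in indices]

    def invalidate(self, indices):
        """Drop copies whose remote locations may have been modified."""
        for i in indices:
            self.copies.pop(i, None)

remote = {i: 0.5 * i for i in range(100)}          # stand-in for remote memory
cache = OffProcessorCache(lambda i: remote[i])
cache.gather([3, 7, 9])     # first loop: three fetches
cache.gather([7, 9])        # later loop: served entirely from stored copies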
Simulation and analysis of support hardware for multiple instruction rollback
NASA Technical Reports Server (NTRS)
Alewine, Neil J.
1992-01-01
Recently, a compiler-assisted approach to multiple instruction retry was developed. In this scheme, a read buffer of size 2N, where N represents the maximum instruction rollback distance, is used to resolve one type of data hazard. This hardware support helps to reduce code growth, compilation time, and some of the performance impacts associated with hazard resolution. The 2N read buffer size requirement of the compiler-assisted approach is worst case, assuring data redundancy for all data required but also providing some unnecessary redundancy. By adding extra bits in the operand field for source 1 and source 2 it becomes possible to design the read buffer to save only those values required, thus reducing the read buffer size requirement. This study measures the effect on performance of a DECstation 3100 running 10 application programs using 6 read buffer configurations at varying read buffer sizes.
User guide for MODPATH version 6 - A particle-tracking model for MODFLOW
Pollock, David W.
2012-01-01
MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
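The semianalytical scheme is easy to state in one dimension: with the velocity interpolated linearly between the two cell faces, a particle's path inside the cell has a closed form, so its exit time and position require no numerical integration. The Python sketch below is a hedged 1-D illustration of that idea (positive velocities and an invented cell geometry; the analogous expressions apply per coordinate direction):

import math

def exit_time(x1, x2, v1, v2, xp):
    """Time for a particle at xp to reach face x2 (v1, v2 > 0 assumed)."""
    A = (v2 - v1) / (x2 - x1)       # linear velocity gradient across the cell
    vp = v1 + A * (xp - x1)         # interpolated velocity at the particle
    if abs(A) < 1e-12:
        return (x2 - xp) / vp       # uniform flow: ordinary kinematics
    return math.log(v2 / vp) / A    # closed-form exit time

def position(x1, xp, v1, A, t):
    """Closed-form particle position at time t inside the cell."""
    vp = v1 + A * (xp - x1)
    if abs(A) < 1e-12:
        return xp + vp * t
    return x1 + (vp * math.exp(A * t) - v1) / A

t_exit = exit_time(0.0, 10.0, 1.0, 2.0, 0.0)   # 10 m cell, accelerating flow
print("time to cross the cell:", t_exit)        # ln(2)/0.1, about 6.93
print("position at exit:", position(0.0, 0.0, 1.0, 0.1, t_exit))  # about 10.0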
ERIC Educational Resources Information Center
Hardin, Julia P., Ed.; Moulden, Richard G., Ed.
This compilation of over 40 lesson plans on various topics in law related education was written by classroom teachers from around the United States who had participated in the fifth of an annual series called Special Programs in Citizenship Education (SPICE)--weeklong institutes devoted to learning about different cultures and laws. Called SPICE V…
ERIC Educational Resources Information Center
Purdue Univ., Lafayette, IN. Office of Manpower Studies.
To assess the need for technology programs in the Kokomo, Indiana area, background data concerning Kokomo and the surrounding counties (population projections for the region, the educational level of adults living in the region, and the number and size of firms located in the area) were compiled. The researchers formulated projected…
ERIC Educational Resources Information Center
Wisconsin State Dept. of Industry, Labor and Human Relations, Madison.
This technical assistance guide was developed to consolidate a statewide understanding of the effort to systematize the delivery of employment and training programs through the local formation of job centers in Wisconsin, and to provide a compilation, drawn from 20 local models, that explains how the programs are delivered. The guide is organized…
Follow-Up Evaluation Project. From July 1, 1981 to June 30, 1983. Final Report.
ERIC Educational Resources Information Center
Santa Fe Community Coll., Gainesville, FL.
A project was undertaken to revise a model competency-based trade and industrial education program that had been developed in an earlier project for use in Florida schools. During the follow-up evaluation, the project staff compiled task listings for each of the following trade and industrial education program areas: automotive;…
ERIC Educational Resources Information Center
Chan, Ha Yin
A compilation of transcripts of 100 bilingual English/Chinese broadcast lessons for workers in the garment industry is presented. The lessons are part of the New York Chinatown Manpower Project's Workplace Literacy Program. With the support of the Sino Radio Broadcast Corporation, the lessons are broadcast daily in the morning and again after the…
ERIC Educational Resources Information Center
Khattab, Mohammad Salih
This report reviews the status of early childhood education (ECE) programs in UNICEF's Middle East and North Africa region. The report compiles information about ECE programs in 18 countries based on a questionnaire sent to UNICEF country offices and other sources. The introduction sets out the economic and social rationales for investing in early…
ERIC Educational Resources Information Center
Mississippi-Alabama Sea Grant Consortium, Ocean Springs, MS.
This bibliography was published as a result of a cooperative education effort of the United States Sea Grant programs and the staff of the Living Seas pavilion presented by United Technologies at EPCOT Center in Orlando, Florida. It is a compilation of the textbooks, curriculum materials, and other marine education resource materials developed by…
Summary Report on NRL Participation in the Microwave Landing System Program.
1980-08-19
shifters were measured and statistically analyzed. Several research contracts for promising phased array techniques were awarded to industrial contractors...program was written for compiling statistical data on the measurements, which reads out insertion phase characteristics and standard deviation...GLOSSARY OF TERMS ALPA Airline Pilots’ Association ATA Air Transport Association AWA Australasian Wireless Amalgamated AWOP All-weather Operations
ERIC Educational Resources Information Center
Anderson, Nancy; And Others
This is one of a set of five handbooks compiled by the Northwest Regional Educational Laboratory that describes the processes for planning and operating a total experience-based career education (EBCE) program. Processes and material are those developed by the original EBCE model--Community Experience in Career Education (CE)2. The area of…
NASA Technical Reports Server (NTRS)
Loftin, Richard B.
1987-01-01
Turbo Prolog is a recently available, compiled version of the programming language Prolog. Turbo Prolog is designed to provide not only a Prolog compiler, but also a program development environment for the IBM Personal Computer family. An evaluation of Turbo Prolog was made, comparing its features to other versions of Prolog and to the family of languages commonly used in artificial intelligence (AI) research and development. Three programs were employed to determine the execution speed of Turbo Prolog applied to various problems. The results of this evaluation demonstrated that Turbo Prolog can perform much better than many commonly employed AI languages for numerically intensive problems and can equal the speed of development languages such as OPS5+ and CLIPS, running on the IBM PC. Applications for which Turbo Prolog is best suited include those which (1) lend themselves naturally to backward-chaining approaches, (2) require extensive use of mathematics, (3) contain few rules, (4) seek to make use of the window/color graphics capabilities of the IBM PC, and (5) require linkage to programs in other languages to form a complete executable image.
Analogy Mapping Development for Learning Programming
NASA Astrophysics Data System (ADS)
Sukamto, R. A.; Prabawa, H. W.; Kurniawati, S.
2017-02-01
Programming is an important skill for computer science students, yet many computer science students in Indonesia currently lack programming skills and information technology knowledge. This is at odds with the implementation of the ASEAN Economic Community (AEC) at the end of 2015, which calls for qualified workers. This study aimed to strengthen programming skills by mapping program code to visual analogies that serve as learning media. The developed media were based on state-machine and compiler principles and were implemented for the C programming language. The state of every basic programming construct was successfully rendered as an analogy visualization.
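The abstract does not spell out the mapping rules, but the underlying idea can be sketched as a small state machine over simplified C source: each recognized construct drives the machine to a state whose analogy is displayed. Everything in the Python sketch below, the construct patterns and the analogy texts alike, is an illustrative assumption rather than the study's actual media.

import re

# Assumed analogy texts, one per machine state.
ANALOGIES = {
    "declaration": "a labeled box that can hold one value",
    "assignment":  "putting a new value into the box",
    "if":          "a fork in the road chosen by a condition",
    "loop":        "walking the same path until a sign says stop",
    "output":      "showing the box's contents through a window",
}

# Assumed recognizers for basic C constructs, checked in priority order.
PATTERNS = [
    ("loop",        re.compile(r"^\s*(for|while)\b")),
    ("if",          re.compile(r"^\s*if\b")),
    ("output",      re.compile(r"^\s*printf\b")),
    ("declaration", re.compile(r"^\s*(int|float|char|double)\b")),
    ("assignment",  re.compile(r"^\s*\w+\s*=")),
]

def map_to_analogies(c_source):
    """One pass over the source: each line moves the machine into the state of
    the first construct pattern that matches, and that state's analogy is emitted."""
    for line in c_source.splitlines():
        for state, pattern in PATTERNS:
            if pattern.match(line):
                yield line.strip(), ANALOGIES[state]
                break

code = """int total = 0;
for (int i = 0; i < 10; i++)
    total = total + i;
printf("%d", total);"""
for stmt, analogy in map_to_analogies(code):
    print(f"{stmt!r:40} -> {analogy}")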
An evaluation of accessibility and content of microsurgery fellowship websites.
Hu, Jiayi; Zhen, Meng; Olteanu, Cristina; Avram, Ronen
2016-01-01
Websites for residency and fellowship programs serve as effective educational and recruitment tools. This study evaluated the accessibility and content of fellowship websites commonly used by microsurgery applicants for career development. A list of one-year microsurgery fellowship websites (MFWs) was compiled by visiting the centralized American Society for Reconstructive Microsurgery (ASRM) website, followed by an extensive 'Google' search in October 2015. The accessibility of MFWs was assessed, and website content regarding key recruitment and education variables was comprehensively reviewed. Website content was correlated with program characteristics using t-tests and ANOVA (two-tailed; P<0.05 was considered statistically significant). A list of 53 eligible programs was compiled. Only 15 of 51 (29%) ASRM program links were functional. On average, the combined content from the ASRM website and individual MFWs covered 2.91 of 6 recruitment variables and 1.32 of 6 education variables. The majority of programs listed 'eligibility criteria' (87%) and 'general information' (87%); 'evaluation criteria' were most poorly reported (4%). The recruitment score was higher for United States programs than for international counterparts (51% versus 33%; P=0.02), and higher in programs that focus on 'extremity' versus 'breast' (58% versus 37%; P=0.0028). Education scores did not differ by location, program size, subspecialty of focus, or participation in the Microsurgery Match process. Information regarding recruitment and education on most MFWs is scarce. Academic institutions should keep website content up to date and comprehensive to better assist candidates in the application process.
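The scoring approach lends itself to a short sketch: each program website receives the fraction of predefined variables it lists, and group means are compared with a two-tailed independent-samples t-test. The Python sketch below is illustrative only; the variable names and scores are invented, not the study's data, and only the t-test mirrors the described analysis.

from scipy.stats import ttest_ind

# Assumed recruitment variables; the study's exact six are not listed here.
RECRUITMENT_VARS = ["eligibility criteria", "general information",
                    "application process", "evaluation criteria",
                    "salary", "contact information"]

def recruitment_score(site_content):
    """Fraction of the recruitment variables present in a website's content."""
    return sum(v in site_content for v in RECRUITMENT_VARS) / len(RECRUITMENT_VARS)

example_site = "eligibility criteria ... general information ... contact information"
print(recruitment_score(example_site))  # 0.5 -> three of six variables found

# Hypothetical per-program scores for two groups (US vs. international).
us_scores   = [0.67, 0.50, 0.83, 0.33, 0.50, 0.67]
intl_scores = [0.33, 0.17, 0.50, 0.33, 0.33]

# Two-tailed independent-samples t-test, as in the study's analysis.
t_stat, p_value = ttest_ind(us_scores, intl_scores)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}  (significant if P < 0.05)")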
2009 Oregon traffic crash summary
DOT National Transportation Integrated Search
2010-09-01
The Crash Analysis and Reporting Unit compiles data and publishes statistics for reported motor vehicle traffic crashes per ORS 802.050(2) and 802.220(6). The data supports various local, county and state traffic safety programs, engineering and ...
2008 Oregon traffic crash summary
DOT National Transportation Integrated Search
2009-09-01
The Crash Analysis and Reporting Unit compiles data and publishes statistics for reported motor vehicle traffic crashes per ORS 802.050(2) and 802.220(6). The data supports various local, county and state traffic safety programs, engineering and ...
2010 Oregon traffic crash summary
DOT National Transportation Integrated Search
2011-08-01
The Crash Analysis and Reporting Unit compiles data and publishes statistics for reported motor vehicle traffic crashes per ORS 802.050(2) and 802.220(6). The data supports various local, county and state traffic safety programs, engineering and ...
Compilation of Steady State Automotive Engine Test Data
DOT National Transportation Integrated Search
1978-09-01
Experimental data were obtained in dynamometer tests of automotive engines used in the United States. The objective of this program is to obtain engine performance data for determining fuel consumption and emissions (carbon monoxide, hydrocarbons, an...