Science.gov

Sample records for implementation compilation optimization

  1. A systolic array optimizing compiler

    SciTech Connect

    Lam, M.S.

    1988-01-01

    This book documents the research and results of the compiler technology developed for the Warp machine. A major challenge in the development of Warp was to build an optimizing compiler for the machine. This book describes a compiler that shields most of the difficulty from the user and generates very efficient code. Several new optimizations are described and evaluated. The research described confirms that compilers play a valuable role in the development, usage and effectiveness of novel high-performance architectures.

  2. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of over 100 ADS optimization choices, such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function, and variable metric methods. Default choices for the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by an LALR(1) grammar, and the SOL compiler's parser was generated automatically from that grammar with a parser generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis ensures that the compiler recognizes all legal SOL programs, can recover from and correct many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.
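
    The flavor of problem SOL states declaratively (an objective over design variables subject to constraints, handed to an optimizer such as Sequential Quadratic Programming) can be sketched in modern terms with Python and SciPy. The toy bar-sizing problem, its numbers, and the variable names below are invented for illustration; this is an analogue of the workflow, not SOL syntax or ADS itself.

    ```python
    # Toy sizing problem of the kind SOL states declaratively: minimize the
    # mass of a bar subject to a stress constraint.  Illustrative analogue
    # only; SOL compiles its specification to FORTRAN and calls ADS.
    from scipy.optimize import minimize

    LOAD = 1.0e4               # applied load [N]          (invented values)
    SIGMA_MAX = 2.5e8          # allowable stress [Pa]
    RHO, LENGTH = 2700.0, 2.0  # density [kg/m^3], length [m]

    def weight(x):             # objective: mass of the bar
        return RHO * LENGTH * x[0]

    def stress_margin(x):      # constraint in ">= 0" form: sigma <= sigma_max
        return SIGMA_MAX - LOAD / x[0]

    result = minimize(weight, x0=[1e-3],
                      method="SLSQP",         # sequential quadratic programming,
                      bounds=[(1e-6, 1e-1)],  # one of the strategies ADS offers
                      constraints=[{"type": "ineq", "fun": stress_margin}])
    print(result.x, weight(result.x))
    ```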

  3. Design and implementation of a quantum compiler

    NASA Astrophysics Data System (ADS)

    Metodi, Tzvetan S.; Gasster, Samuel D.

    2010-04-01

    We present a compiler for programming quantum architectures based on the Quantum Random Access Machine (QRAM) model. The QRAM model consists of a classical subsystem responsible for generating the quantum operations that are executed on a quantum subsystem. The compiler can also be applied to trade studies for optimizing the reliability and latency of quantum programs and to determine the required error correction resources. We use the Bacon-Shor [9, 1, 3] quantum error correcting code as an example quantum program that can be processed and analyzed by the compiler.

  4. A Language for Specifying Compiler Optimizations for Generic Software

    SciTech Connect

    Willcock, Jeremiah J.

    2007-01-01

    Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization because they are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.

  5. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code, using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
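
    To make the architecture-independent, source-level category concrete, the sketch below shows two classic transformations of that kind, constant folding and loop-invariant code motion, written out by hand as a before/after pair in Python. The function and variable names are illustrative.

    ```python
    # Before: "s * 1" is foldable and "scale * offset" is loop-invariant.
    def smooth_before(samples, scale, offset):
        out = []
        for s in samples:
            out.append(s * 1 + scale * offset)
        return out

    # After source-level, architecture-independent optimization:
    def smooth_after(samples, scale, offset):
        bias = scale * offset          # loop-invariant code motion
        out = []
        for s in samples:
            out.append(s + bias)       # constant folding removed "* 1"
        return out
    ```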

  6. Implementation of a Compiler for the Functional Programming Language PHI.

    DTIC Science & Technology

    1987-06-01

    ... the authors think this should facilitate the understanding of both concept and implementation. The front-end of the compiler implements machine-independent ... The ... PHI compiler is shown in Figure 1.1. The front-end, containing the scanner (lexical analyzer) and parser (syntactic analyzer), is essentially responsible ...

  7. Resource efficient gadgets for compiling adiabatic quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; O'Gorman, Bryan; Aspuru-Guzik, Alán

    2013-11-01

    We develop a resource efficient method by which the ground state of an arbitrary k-local optimization Hamiltonian can be encoded as the ground state of a (k-1)-local optimization Hamiltonian. This result is important because adiabatic quantum algorithms are often most easily formulated using many-body interactions, but experimentally available interactions are generally 2-body. In this context, the efficiency of a reduction gadget is measured by the number of ancilla qubits required as well as the amount of control precision needed to implement the resulting Hamiltonian. First, we optimize methods of applying these gadgets to obtain 2-local Hamiltonians using the least possible number of ancilla qubits. Next, we show a novel reduction gadget which minimizes control precision and a heuristic which uses this gadget to compile 3-local problems with a significant reduction in control precision. Finally, we present numerics which indicate a substantial decrease in the resources required to implement randomly generated 3-body optimization Hamiltonians when compared to other methods in the literature.
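
    One standard Boolean-variable reduction (not necessarily the specific gadget introduced in this paper, which works at the Hamiltonian level and targets control precision) replaces the product x1*x2 by an ancilla y plus a penalty that vanishes exactly when y = x1*x2. The short Python check below verifies by brute force that the resulting 2-local expression reproduces a 3-local term; the penalty weight M is an arbitrary illustrative value.

    ```python
    # Brute-force check of a 3-local -> 2-local reduction with one ancilla.
    from itertools import product

    def penalty(x1, x2, y):
        # Zero iff y == x1 * x2 for Boolean inputs, otherwise >= 1.
        return x1 * x2 - 2 * (x1 + x2) * y + 3 * y

    M = 10  # penalty weight; must dominate the energy scale of the toy term
    for x1, x2, x3 in product((0, 1), repeat=3):
        original = x1 * x2 * x3                     # 3-local term
        reduced = min(y * x3 + M * penalty(x1, x2, y) for y in (0, 1))
        assert reduced == original, (x1, x2, x3)
    print("2-local reduction matches the 3-local term on all assignments")
    ```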

  8. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  9. Compiler-Driven Performance Optimization and Tuning for Multicore Architectures

    DTIC Science & Technology

    2015-04-10

    Report: Compiler-Driven Performance Optimization and Tuning for Multicore Architectures. The widespread emergence of multicore processors as the computing ... applications have enjoyed the free ride of performance improvement with each new processor generation. The reality today is that existing and new ... applications must be changed to make them multi-threaded if they are to experience any performance benefits from newer generations of processors. ...

  10. Final Project Report: A Polyhedral Transformation Framework for Compiler Optimization

    SciTech Connect

    Sadayappan, Ponnuswamy; Rountev, Atanas

    2015-06-15

    The project developed the polyhedral compiler transformation module PolyOpt/Fortran in the ROSE compiler framework. PolyOpt/Fortran performs automated transformation of affine loop nests within FORTRAN programs for enhanced data locality and parallel execution. A FORTRAN version of the Polybench library was also developed by the project. A third development was a dynamic analysis approach to gauge vectorization potential within loops of programs; software (DDVec) for automated instrumentation and dynamic analysis of programs was developed.
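
    Loop tiling is one of the locality transformations a polyhedral tool such as PolyOpt applies automatically to affine loop nests; the sketch below writes a tiled traversal out by hand, in Python rather than FORTRAN, purely to illustrate the shape of the transformation. The tile size and the transpose example are arbitrary.

    ```python
    import numpy as np

    def transpose_tiled(a, tile=64):
        """Blocked transpose: each (tile x tile) block is read and written
        while it is still resident in cache, the effect loop tiling aims for."""
        n, m = a.shape
        out = np.empty((m, n), dtype=a.dtype)
        for ii in range(0, n, tile):
            for jj in range(0, m, tile):
                out[jj:jj + tile, ii:ii + tile] = a[ii:ii + tile, jj:jj + tile].T
        return out

    a = np.arange(12.0).reshape(3, 4)
    assert np.array_equal(transpose_tiled(a, tile=2), a.T)
    ```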

  11. Compiler Optimization Pass Visualization: The Procedural Abstraction Case

    ERIC Educational Resources Information Center

    Schaeckeler, Stefan; Shang, Weijia; Davis, Ruth

    2009-01-01

    There is an active research community concentrating on visualizations of algorithms taught in CS1 and CS2 courses. These visualizations can help students to create concrete visual images of the algorithms and their underlying concepts. Not only "fundamental algorithms" can be visualized, but also algorithms used in compilers. Visualizations that…

  12. An Optimizing Compiler for Petascale I/O on Leadership-Class Architectures

    SciTech Connect

    Kandemir, Mahmut Taylan; Choudhary, Alok; Thakur, Rajeev

    2014-03-01

    In high-performance computing (HPC), parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our DOE project explored automated instrumentation and compiler support for I/O intensive applications. Our project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology that targets I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions. Two new sections in this report, compared to the previous report, are IOGenie and SSD/NVM-specific optimizations.
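
    The top layer of the I/O stack described above is what application code actually touches. As a minimal illustration of that layer, the sketch below writes a chunked dataset through HDF5's Python binding (h5py); in an HPC deployment the same call would be served underneath by MPI-IO middleware and a parallel file system. The file name, dataset name, and chunk shape are illustrative.

    ```python
    # High-level I/O library layer: the chunk layout chosen here shapes the
    # access pattern the lower layers (MPI-IO, PVFS/Lustre) eventually see.
    import h5py
    import numpy as np

    field = np.random.rand(1024, 1024)
    with h5py.File("checkpoint.h5", "w") as f:
        dset = f.create_dataset("pressure", data=field, chunks=(128, 128))
        dset.attrs["step"] = 42
    ```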

  13. Compiling Planning into Quantum Optimization Problems: A Comparative Study

    DTIC Science & Technology

    2015-06-07

    ... become available: quantum annealing. Quantum annealing is one of the most accessible quantum algorithms for a computer science audience not versed ... in quantum computing because of its close ties to classical optimization algorithms such as simulated annealing. While large-scale universal quantum ... devices designed to run only this type of quantum algorithm. Other types of quantum algorithms are known that take on quite a different form, and are ...
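
    The tie to simulated annealing mentioned in the fragment above is easy to make concrete: quantum annealers accept problems in QUBO form, and the same form can be attacked classically. The sketch below runs plain simulated annealing on a two-variable toy QUBO; the problem matrix, cooling schedule, and constants are invented for illustration.

    ```python
    # Classical simulated annealing on a toy QUBO (the problem form quantum
    # annealers also accept).
    import math, random

    Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}   # invented 2-variable QUBO

    def energy(x):
        return sum(c * x[i] * x[j] for (i, j), c in Q.items())

    def anneal(n_vars=2, steps=2000, t0=2.0):
        x = [random.randint(0, 1) for _ in range(n_vars)]
        cur_e = energy(x)
        best, best_e = x[:], cur_e
        for k in range(steps):
            t = t0 * (1 - k / steps) + 1e-3        # linear cooling schedule
            i = random.randrange(n_vars)
            x[i] ^= 1                              # propose a single bit flip
            new_e = energy(x)
            if new_e <= cur_e or random.random() < math.exp((cur_e - new_e) / t):
                cur_e = new_e                      # accept the move
                if cur_e < best_e:
                    best, best_e = x[:], cur_e
            else:
                x[i] ^= 1                          # reject: undo the flip
        return best, best_e

    print(anneal())
    ```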

  14. Optimization guide for programs compiled under IBM FORTRAN H (OPT=2)

    NASA Technical Reports Server (NTRS)

    Smith, D. M.; Dobyns, A. H.; Marsh, H. M.

    1977-01-01

    Guidelines are given to provide the programmer with various techniques for optimizing programs when the FORTRAN IV H compiler is used with OPT=2. Subroutines and programs are described in the appendices along with a timing summary of all the examples given in the manual.

  15. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    Wavefront coding is a means of athermalization for infrared imaging systems, and the design of the phase plate is the key to system performance. This paper applies the externally compiled programs of ZEMAX to the optimization of the phase mask within the normal optical design process: the evaluation function of the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and the speed of optimization is improved by introducing mathematical software. The user writes an external program that computes the evaluation function, drawing on the computing power of the mathematical software to find the optimal parameters of the phase mask and accelerating convergence through a genetic algorithm (GA); a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software is then used for high-speed data exchange. The optimization of a rotationally symmetric phase mask and a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask, while the system with the cubic phase mask reaches about a 10-fold increase; the MTF consistency improves markedly; and the optimized system operates over a temperature range of -40° to 60°. Results show that this optimization method makes it more convenient to define unconventional optimization goals and to quickly optimize optical systems with special properties, owing to the externally compiled function and DDE, which is of particular significance for the optimization of unconventional optical systems.
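
    The optimization loop described above (a genetic algorithm searching phase-mask coefficients, scored by an externally computed MTF-consistency merit function) can be sketched as follows. The function mtf_consistency is a placeholder for the value the external program would obtain from ZEMAX over DDE, and every parameter range and constant is invented for illustration.

    ```python
    # GA loop over phase-mask coefficients with a placeholder merit function.
    import random

    def mtf_consistency(coeffs):
        # Placeholder: in the real workflow this score comes from ZEMAX via DDE.
        return -sum((c - 0.5) ** 2 for c in coeffs)

    def genetic_search(n_coeffs=3, pop_size=20, generations=50):
        pop = [[random.uniform(-1, 1) for _ in range(n_coeffs)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=mtf_consistency, reverse=True)
            parents = pop[:pop_size // 2]
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_coeffs)
                child = a[:cut] + b[cut:]                    # one-point crossover
                if random.random() < 0.2:                    # mutation
                    child[random.randrange(n_coeffs)] += random.gauss(0, 0.1)
                children.append(child)
            pop = parents + children
        return max(pop, key=mtf_consistency)

    print(genetic_search())
    ```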

  16. Multiprocessors and runtime compilation

    NASA Technical Reports Server (NTRS)

    Saltz, Joel; Berryman, Harry; Wu, Janet

    1990-01-01

    Runtime preprocessing plays a major role in many efficient algorithms in computer science, as well as playing an important role in exploiting multiprocessor architectures. Examples are given that elucidate the importance of runtime preprocessing and show how these optimizations can be integrated into compilers. To support the arguments, transformations implemented in prototype multiprocessor compilers are described and benchmarks from the iPSC2/860, the CM-2, and the Encore Multimax/320 are presented.
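
    Runtime preprocessing of the kind described here is commonly organized as an inspector/executor pair: an inspector examines the irregular access pattern once, and an executor reuses the precomputed schedule on every time step. The single-process Python sketch below illustrates the idea only; it is not code from the prototype compilers, and the sorting-based "schedule" is a simplification of a real communication schedule.

    ```python
    import numpy as np

    def inspector(index_array):
        """Run once: reorder irregular accesses so the executor can gather
        them in a cache-friendly (or communication-friendly) order."""
        order = np.argsort(index_array)
        return order, index_array[order]

    def executor(x, y, schedule, n_steps=10):
        order, sorted_idx = schedule
        for _ in range(n_steps):          # schedule is reused every step
            y[order] += x[sorted_idx]     # gather in the precomputed order
        return y

    idx = np.array([7, 2, 2, 5, 0, 7])
    x = np.arange(8.0)
    y = np.zeros(len(idx))
    print(executor(x, y, inspector(idx), n_steps=1))
    ```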

  17. Kokkos GPU Compiler

    SciTech Connect

    Moss, Nicholas

    2016-07-15

    The Kokkos Clang compiler is a version of the Clang C++ compiler that has been modified to perform targeted code generation for Kokkos constructs, with the goal of generating highly optimized code and providing semantic (domain) awareness of these constructs, such as parallel for and parallel reduce, throughout the compilation toolchain. This approach is taken to explore the possibilities of exposing the developer’s intentions to the underlying compiler infrastructure (e.g. optimization and analysis passes within the middle stages of the compiler) instead of relying solely on the restricted capabilities of C++ template metaprogramming. To date our activities have focused on correct GPU code generation, and we have not yet focused on improving overall performance. The compiler is implemented by recognizing specific (syntactic) Kokkos constructs in order to bypass normal template expansion mechanisms and instead use the semantic knowledge of Kokkos to directly generate code in the compiler’s intermediate representation (IR), which is then translated into an NVIDIA-centric GPU program and supporting runtime calls. In addition, capturing and maintaining the higher-level semantics of Kokkos directly within the lower levels of the compiler has the potential to significantly improve the compiler’s ability to communicate with the developer in the terms of their original programming model/semantics.

  18. An Optimizing Compiler for Petascale I/O on Leadership Class Architectures

    SciTech Connect

    Choudhary, Alok; Kandemir, Mahmut

    2015-03-18

    In high-performance computing systems, parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our project explored automated instrumentation and compiler support for I/O intensive applications. Our project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology that targets I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions.

  19. Compiler Optimizations as a Countermeasure against Side-Channel Analysis in MSP430-Based Devices

    PubMed Central

    Malagón, Pedro; de Goyeneche, Juan-Mariano; Zapater, Marina; Moya, José M.; Banković, Zorana

    2012-01-01

    Ambient Intelligence (AmI) requires devices everywhere: dynamic and massively distributed networks of low-cost nodes that, among other data, manage private information or control restricted operations. The MSP430, a 16-bit microcontroller, is used in WSN platforms such as the TelosB. Physical access to devices cannot be restricted, so attackers consider them a target for malicious attacks aimed at obtaining access to the network. Side-channel analysis (SCA) easily exploits leakages from the execution of encryption algorithms that are dependent on critical data to guess the key value. In this paper we present an evaluation framework that facilitates the analysis of the effects of compiler and backend optimizations on the resistance against statistical SCA. We propose an optimization-based software countermeasure that can be used in current low-cost devices to radically increase resistance against statistical SCA, analyzed with the new framework. PMID:22969383

  20. Compiler optimizations as a countermeasure against side-channel analysis in MSP430-based devices.

    PubMed

    Malagón, Pedro; de Goyeneche, Juan-Mariano; Zapater, Marina; Moya, José M; Banković, Zorana

    2012-01-01

    Ambient Intelligence (AmI) requires devices everywhere: dynamic and massively distributed networks of low-cost nodes that, among other data, manage private information or control restricted operations. The MSP430, a 16-bit microcontroller, is used in WSN platforms such as the TelosB. Physical access to devices cannot be restricted, so attackers consider them a target for malicious attacks aimed at obtaining access to the network. Side-channel analysis (SCA) easily exploits leakages from the execution of encryption algorithms that are dependent on critical data to guess the key value. In this paper we present an evaluation framework that facilitates the analysis of the effects of compiler and backend optimizations on the resistance against statistical SCA. We propose an optimization-based software countermeasure that can be used in current low-cost devices to radically increase resistance against statistical SCA, analyzed with the new framework.

  1. Schedule optimization study implementation plan

    SciTech Connect

    Not Available

    1993-11-01

    This Implementation Plan is intended to provide a basis for improvements in the conduct of the Environmental Restoration (ER) Program at Hanford. The Plan is based on the findings of the Schedule Optimization Study (SOS) team which was convened for two weeks in September 1992 at the request of the U.S. Department of Energy (DOE) Richland Operations Office (RL). The need for the study arose out of a schedule dispute regarding the submission of the 1100-EM-1 Operable Unit (OU) Remedial Investigation/Feasibility Study (RI/FS) Work Plan. The SOS team was comprised of independent professionals from other federal agencies and the private sector experienced in environmental restoration within the federal system. The objective of the team was to examine reasons for the lengthy RI/FS process and recommend ways to expedite it. The SOS team issued their Final Report in December 1992. The report found the most serious impediments to cleanup relate to a series of management and policy issues which are within the control of the three parties managing and monitoring Hanford -- the DOE, U.S. Environmental Protection Agency (EPA), and the State of Washington Department of Ecology (Ecology). The SOS Report identified the following eight cross-cutting issues as the root of major impediments to the Hanford Site cleanup. Each of these eight issues is quoted from the SOS Report followed by a brief, general description of the proposed approach being developed.

  2. Implementation of the Altair optimization processes

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm J.; Véran, Jean-Pierre

    2003-02-01

    Altair is the adaptive optics system developed by NRC Canada for the Gemini North Telescope. Altair uses modal control and a quad-cell based Shack-Hartmann wavefront sensor. In order for Altair to adapt to changes in the observing conditions, two optimizers are activated when the AO loop is closed. These optimizers are the modal gain optimizer (MGO) and the centroid gain optimizer (CGO). This paper discusses the implementation and timing results of these optimizers.

  3. OptQC v1.3: An (updated) optimized parallel quantum compiler

    NASA Astrophysics Data System (ADS)

    Loke, T.; Wang, J. B.

    2016-10-01

    We present a revised version of the OptQC program of Loke et al. (2014) [1]. We have removed the simulated annealing process in favour of a descending random walk. We have also introduced a new method for iteratively generating permutation matrices during the random walk process, providing a reduced total cost for implementing the quantum circuit. Lastly, we have also added a synchronization mechanism between threads, giving quicker convergence to more optimal solutions.

  4. Design and Implementation of a Basic Cross-Compiler and Virtual Memory Management System for the TI-59 Programmable Calculator.

    DTIC Science & Technology

    1983-06-01

    ... previously stated requirements to construct the framework for a software solution. It is during this phase of design that many of the most critical ... the linker would have to be deferred until the compiler was formalized and in the implementation phase of design. The second problem involved ... memory limit was encountered. At this point a segmentation occurred. The memory limits were reset and the combining process continued until another ...

  5. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package.

  6. Quantum control implemented as combinatorial optimization.

    PubMed

    Strohecker, Traci; Rabitz, Herschel

    2010-01-15

    Optimal control theory provides a general means for designing controls to manipulate quantum phenomena. Traditional implementation requires solving coupled nonlinear equations to obtain the optimal control solution, whereas this work introduces a combinatorial quantum control (CQC) algorithm to avoid this complexity. The CQC technique uses a predetermined toolkit of small time step propagators in conjunction with combinatorial optimization to identify a proper sequence for the toolkit members. Results indicate that the CQC technique exhibits invariance of search effort to the number of system states and very favorable scaling upon comparison to a standard gradient algorithm, taking into consideration that CQC is easily parallelizable.

  7. A Data Model for Compiling Heterogeneous Bathymetric Soundings, and its Implementation in the North Atlantic

    NASA Astrophysics Data System (ADS)

    Hell, B.; Jakobsson, M.; Macnab, R.; Mayer, L. A.

    2006-12-01

    The North Atlantic is arguably the best mapped ocean in the world, with a huge quantity of inconsistent sounding data featuring a tremendous variability in accuracy, resolution and density. Therefore it is an ideal test area for data compilation techniques. For the compilation of a new Digital Bathymetric Model (DBM) of the North Atlantic, a combination of a GIS and a spatial database is used for data storage, verification and processing. A data model has been developed that can flexibly accommodate all kinds of raw and processed data, resulting in a data warehouse schema with metadata (describing data acquisition and quality) as separate data dimensions. Future work will involve data quality analysis based on metadata information and cross-survey checks, development of algorithms for merging and gridding heterogeneous sounding data, research on variable grids for bathymetric data, the treatment of error propagation through the gridding process and the production of a high-resolution (approx. 500 m) DBM accompanied by a confidence model. The proposed International Bathymetric Chart of the North Atlantic (IBCNA) is an undertaking to assemble and to rationalize all available bathymetric observations from the Atlantic Ocean and adjacent seas north of the Equator and south of 64°N into a consistent DBM. Neither of today's most commonly-used large scale models -- GEBCO (based upon digitized contours derived from single beam echo sounding measurements) and ETOPO2 (satellite altimetry combined with single beam echo soundings) -- incorporates the large amount of recent multibeam echo sounding data, and there is a need for a more up-to-date DBM. This could serve a broad variety of scientific and technical purposes such as geological investigations, future survey and field operation planning, oceanographic modeling, deep ocean tsunami propagation research, habitat mapping and biodiversity studies and evaluating the long-term effect of sea level change on coastal areas. In

  8. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    ... Corporation, Order No. GY28-6800-5 (December 1971). [IBM72] FORTRAN IV (H) Compiler Program Logic Manual, IBM Corporation, Order No. GH28-6642-5 ... RADC plans and executes research, development, test and selected acquisition programs in support of Command, Control, Communications and ...

  9. Read buffer optimizations to support compiler-assisted multiple instruction retry

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Fuchs, W. K.; Hwu, W. M.

    1993-01-01

    Multiple instruction retry is a recovery mechanism for transient processor faults. We previously developed a compiler-assisted approach to multiple instruction retry in which a read buffer of size 2N (where N represents the maximum instruction rollback distance) was used to resolve some data hazards while the compiler resolved the remaining hazards. The compiler-assisted scheme was shown to reduce the performance overhead and/or hardware complexity normally associated with hardware-only retry schemes. This paper examines the size and design of the read buffer. We establish a practical lower bound and average size requirement for the read buffer by modifying the scheme to save only the data required for rollback. The study measures the effect on the performance of a DECstation 3100 running ten application programs using six read buffer configurations with varying read buffer sizes. Two alternative configurations are shown to be the most efficient, and which one is preferable depends on whether split-cycle-saves are assumed. Up to a 55 percent read buffer size reduction is achievable, with an average reduction of 39 percent, given the most efficient read buffer configuration and a variety of applications.
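
    The bookkeeping behind rollback can be pictured with a toy model: keep the values that the last N retired instructions destroyed so that machine state can be restored on a fault. The Python sketch below is a history-buffer-style simplification of that idea, not the read-buffer hardware design evaluated in the paper, and the register names are illustrative.

    ```python
    # Toy rollback log bounded by the maximum rollback distance N.
    from collections import deque

    class RollbackLog:
        def __init__(self, rollback_distance):
            self.log = deque(maxlen=rollback_distance)  # last N old values only

        def write(self, regs, reg, value):
            self.log.append((reg, regs[reg]))   # save only what rollback needs
            regs[reg] = value

        def rollback(self, regs, distance):
            assert distance <= len(self.log)
            for _ in range(distance):
                reg, old = self.log.pop()
                regs[reg] = old

    regs = {"r1": 0, "r2": 0}
    log = RollbackLog(rollback_distance=2)
    log.write(regs, "r1", 5)
    log.write(regs, "r2", 7)
    log.rollback(regs, 2)
    assert regs == {"r1": 0, "r2": 0}
    ```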

  10. Implementing the optimal provision of ecosystem services.

    PubMed

    Polasky, Stephen; Lewis, David J; Plantinga, Andrew J; Nelson, Erik

    2014-04-29

    Many ecosystem services are public goods whose provision depends on the spatial pattern of land use. The pattern of land use is often determined by the decisions of multiple private landowners. Increasing the provision of ecosystem services, though beneficial for society as a whole, may be costly to private landowners. A regulator interested in providing incentives to landowners for increased provision of ecosystem services often lacks complete information on landowners' costs. The combination of spatially dependent benefits and asymmetric cost information means that the optimal provision of ecosystem services cannot be achieved using standard regulatory or payment for ecosystem services approaches. Here we show that an auction that sets payments between landowners and the regulator for the increased value of ecosystem services with conservation provides incentives for landowners to truthfully reveal cost information, and allows the regulator to implement the optimal provision of ecosystem services, even in the case with spatially dependent benefits and asymmetric information.

  11. Implementing the optimal provision of ecosystem services

    PubMed Central

    Polasky, Stephen; Lewis, David J.; Plantinga, Andrew J.; Nelson, Erik

    2014-01-01

    Many ecosystem services are public goods whose provision depends on the spatial pattern of land use. The pattern of land use is often determined by the decisions of multiple private landowners. Increasing the provision of ecosystem services, though beneficial for society as a whole, may be costly to private landowners. A regulator interested in providing incentives to landowners for increased provision of ecosystem services often lacks complete information on landowners’ costs. The combination of spatially dependent benefits and asymmetric cost information means that the optimal provision of ecosystem services cannot be achieved using standard regulatory or payment for ecosystem services approaches. Here we show that an auction that sets payments between landowners and the regulator for the increased value of ecosystem services with conservation provides incentives for landowners to truthfully reveal cost information, and allows the regulator to implement the optimal provision of ecosystem services, even in the case with spatially dependent benefits and asymmetric information. PMID:24722635

  12. Compiler blockability of dense matrix factorizations.

    SciTech Connect

    Carr, S.; Lehoucq, R. B.; Mathematics and Computer Science; Michigan Technological Univ.

    1997-09-01

    The goal of the LAPACK project is to provide efficient and portable software for dense numerical linear algebra computations. By recasting many of the fundamental dense matrix computations in terms of calls to an efficient implementation of the BLAS (Basic Linear Algebra Subprograms), the LAPACK project has, in large part, achieved its goal. Unfortunately, the efficient implementation of the BLAS often results in machine-specific code that is not portable across multiple architectures without a significant loss in performance or a significant effort to reoptimize it. This article examines whether most of the hand optimizations performed on matrix factorization codes are unnecessary because they can (and should) be performed by the compiler. We believe that it is better for the programmer to express algorithms in a machine-independent form and allow the compiler to handle the machine-dependent details. This gives the algorithms portability across architectures and removes the error-prone, expensive and tedious process of hand optimization. Although there currently exist no production compilers that can perform all the loop transformations discussed in this article, a description of current research in compiler technology is provided that will prove beneficial to the numerical linear algebra community. We show that the Cholesky and LU factorizations may be optimized automatically by a compiler to be as efficient as the same hand-optimized versions found in LAPACK. We also show that the QR factorization may be optimized by the compiler to perform comparably with the hand-optimized LAPACK version on modest matrix sizes. Our approach allows us to conclude that with the advent of the compiler optimizations discussed in this article, matrix factorizations may be efficiently implemented in a BLAS-less form.
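
    The blocking the article argues a compiler can derive automatically looks like the following when written out by hand: a right-looking blocked Cholesky factorization whose inner operations are exactly the matrix-matrix kernels an efficient BLAS provides. The NumPy/SciPy sketch is illustrative of the transformation, not of LAPACK's actual code.

    ```python
    # Hand-blocked right-looking Cholesky (lower triangular) for illustration.
    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    def blocked_cholesky(a, nb=64):
        a = a.copy()
        n = a.shape[0]
        for k in range(0, n, nb):
            ke = min(k + nb, n)
            a[k:ke, k:ke] = cholesky(a[k:ke, k:ke], lower=True)  # factor block
            if ke < n:
                # panel: A[ke:, k:ke] <- A[ke:, k:ke] * L_kk^{-T} (triangular solve)
                a[ke:, k:ke] = solve_triangular(a[k:ke, k:ke],
                                                a[ke:, k:ke].T, lower=True).T
                # trailing update: a matrix-matrix (BLAS-3) operation
                a[ke:, ke:] -= a[ke:, k:ke] @ a[ke:, k:ke].T
        return np.tril(a)

    rng = np.random.default_rng(0)
    m = rng.standard_normal((200, 200))
    spd = m @ m.T + 200 * np.eye(200)         # well-conditioned SPD test matrix
    L = blocked_cholesky(spd, nb=32)
    assert np.allclose(L @ L.T, spd)
    ```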

  13. HOPE: Just-in-time Python compiler for astrophysical computations

    NASA Astrophysics Data System (ADS)

    Akeret, Joel; Gamper, Lukas; Amara, Adam; Refregier, Alexandre

    2014-11-01

    HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of compiled implementation.
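
    A minimal usage sketch is shown below, assuming the decorator is exposed as hope.jit (as in the package's documented examples) and that the function body stays within the Python subset HOPE supports; the function itself and its arguments are invented.

    ```python
    # Decorating a plain numerical function so HOPE translates it to C++
    # at its first call.
    import numpy as np
    import hope

    @hope.jit
    def weighted_square(r, amplitude, offset):
        # simple elementwise arithmetic on a NumPy array
        return amplitude * r * r + offset

    r = np.linspace(0.1, 10.0, 1000)
    profile = weighted_square(r, 1.0, 0.5)
    ```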

  14. A Mathematical Approach for Compiling and Optimizing Hardware Implementations of DSP Transforms

    DTIC Science & Technology

    2010-08-01

    ... across multiple transforms, datatypes, and design goals, and its results show that Spiral is able to automatically provide a wide tradeoff between cost (e.g. ...) ... a wide range of algorithmic and datapath options and frees the designer from the difficult process of manually performing algorithmic and datapath ...

  15. Implementation and Performance Issues in Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Braun, Robert; Gage, Peter; Kroo, Ilan; Sobieski, Ian

    1996-01-01

    Collaborative optimization is a multidisciplinary design architecture that is well-suited to large-scale multidisciplinary optimization problems. This paper compares this approach with other architectures, examines the details of the formulation, and some aspects of its performance. A particular version of the architecture is proposed to better accommodate the occurrence of multiple feasible regions. The use of system level inequality constraints is shown to increase the convergence rate. A series of simple test problems, demonstrated to challenge related optimization architectures, is successfully solved with collaborative optimization.

  16. An Advanced Compiler Designed for a VLIW DSP for Sensors-Based Systems

    PubMed Central

    Yang, Xu; He, Hu

    2012-01-01

    The VLIW architecture can be exploited to greatly enhance instruction level parallelism, thus it can provide computation power and energy efficiency advantages, which satisfies the requirements of future sensor-based systems. However, as VLIW codes are mainly compiled statically, the performance of a VLIW processor is dominated by the behavior of its compiler. In this paper, we present an advanced compiler designed for a VLIW DSP named Magnolia, which will be used in sensor-based systems. This compiler is based on the Open64 compiler. We have implemented several advanced optimization techniques in the compiler, and fulfilled the O3 level optimization. Benchmarks from the DSPstone test suite are used to verify the compiler. Results show that the code generated by our compiler can make the performance of Magnolia match that of the current state-of-the-art DSP processors. PMID:22666040

  17. An advanced compiler designed for a VLIW DSP for sensors-based systems.

    PubMed

    Yang, Xu; He, Hu

    2012-01-01

    The VLIW architecture can be exploited to greatly enhance instruction level parallelism, thus it can provide computation power and energy efficiency advantages, which satisfies the requirements of future sensor-based systems. However, as VLIW codes are mainly compiled statically, the performance of a VLIW processor is dominated by the behavior of its compiler. In this paper, we present an advanced compiler designed for a VLIW DSP named Magnolia, which will be used in sensor-based systems. This compiler is based on the Open64 compiler. We have implemented several advanced optimization techniques in the compiler, and fulfilled the O3 level optimization. Benchmarks from the DSPstone test suite are used to verify the compiler. Results show that the code generated by our compiler can make the performance of Magnolia match that of the current state-of-the-art DSP processors.

  18. Feedback Implementation of Zermelo's Optimal Control by Sugeno Approximation

    NASA Technical Reports Server (NTRS)

    Clifton, C.; Homaifax, A.; Bikdash, M.

    1997-01-01

    This paper proposes an approach to implement optimal control laws of nonlinear systems in real time. Our methodology does not require solving two-point boundary value problems online and may not require it off-line either. The optimal control law is learned using the original Sugeno controller (OSC) from a family of optimal trajectories. We compare the trajectories generated by the OSC and the trajectories yielded by the optimal feedback control law when applied to Zermelo's ship steering problem.

  19. Optimizing Cancer Care Delivery through Implementation Science

    PubMed Central

    Adesoye, Taiwo; Greenberg, Caprice C.; Neuman, Heather B.

    2016-01-01

    The 2013 Institute of Medicine report investigating cancer care concluded that the cancer care delivery system is in crisis due to an increased demand for care, increasing complexity of treatment, decreasing work force, and rising costs. Engaging patients and incorporating evidence-based care into routine clinical practice are essential components of a high-quality cancer delivery system. However, a gap currently exists between the identification of beneficial research findings and the application in clinical practice. Implementation research strives to address this gap. In this review, we discuss key components of high-quality implementation research. We then apply these concepts to a current cancer care delivery challenge in women’s health, specifically the implementation of a surgery decision aid for women newly diagnosed with breast cancer. PMID:26858933

  20. Optimal Implementations for Reliable Circadian Clocks

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yoshihiko; Arita, Masanori

    2014-09-01

    Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.

  1. Financing and funding health care: Optimal policy and political implementability.

    PubMed

    Nuscheler, Robert; Roeder, Kerstin

    2015-07-01

    Health care financing and funding are usually analyzed in isolation. This paper combines the corresponding strands of the literature and thereby advances our understanding of the important interaction between them. We investigate the impact of three modes of health care financing, namely, optimal income taxation, proportional income taxation, and insurance premiums, on optimal provider payment and on the political implementability of optimal policies under majority voting. Considering a standard multi-task agency framework we show that optimal health care policies will generally differ across financing regimes when the health authority has redistributive concerns. We show that health care financing also has a bearing on the political implementability of optimal health care policies. Our results demonstrate that an isolated analysis of (optimal) provider payment rests on very strong assumptions regarding both the financing of health care and the redistributive preferences of the health authority.

  2. Parallel optimization algorithms and their implementation in VLSI design

    NASA Technical Reports Server (NTRS)

    Lee, G.; Feeley, J. J.

    1991-01-01

    Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.

  3. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S.

    1990-01-01

    This book presents a completely new approach to the problem of parallelizing compilation for systolic arrays. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler that can generate efficient parallel code for complete LINPACK routines. The book begins by analyzing the architectural strengths of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  4. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.

    PubMed

    Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly

    2015-09-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.

  5. Optimization of an optically implemented on-board FDMA demultiplexer

    NASA Technical Reports Server (NTRS)

    Fargnoli, J.; Riddle, L.

    1991-01-01

    Performance of a 30 GHz frequency division multiple access (FDMA) uplink to a processing satellite is modelled for the case where the onboard demultiplexer is implemented optically. Included in the performance model are the effects of adjacent channel interference, intersymbol interference, and spurious signals associated with the optical implementation. Demultiplexer parameters are optimized to provide the minimum bit error probability at a given bandwidth efficiency when filtered QPSK modulation is employed.

  6. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. By using a higher level of abstraction and a High-Level Synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation and results of the created tools.
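
    The kind of algorithmic description such a compiler accepts can be illustrated with a small, hardware-friendly routine written in plain Python; the example below is generic (the abstract does not specify the tool's accepted subset or syntax), as noted in the comments.

    ```python
    # A 4-tap moving-average filter as a plain Python generator: the sort of
    # algorithmic description a Python-to-VHDL compiler could map to a shift
    # register and an adder tree.  Generic illustration, not the tool's syntax.
    def moving_average_4(samples):
        window = [0, 0, 0, 0]
        for s in samples:
            window = window[1:] + [s]     # shift register of the last 4 samples
            yield sum(window) // 4        # integer output, hardware-friendly

    print(list(moving_average_4([4, 8, 12, 16, 20])))
    ```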

  7. The Specification of Source-to-source Transformations for the Compile-time Optimization of Parallel Object-oriented Scientific Applications

    SciTech Connect

    Quinlan, D; Kowarschik, M

    2001-06-05

    The performance of object-oriented applications in scientific computing often suffers from the inefficient use of high-level abstractions provided by underlying libraries. Since these library abstractions are not part of the programming language itself, there is no compiler mechanism to respect their semantics and thus to perform appropriate optimizations, e.g., array semantics within object-oriented array class libraries, which permit parallel optimizations inconceivable to the serial compiler. We have presented the ROSE infrastructure as a tool for automatically generating library-specific preprocessors. These preprocessors can perform semantics-based source-to-source transformations of the application in order to introduce high-level code optimizations. In this paper we outline the design of ROSE and focus on the discussion of various approaches for specifying and processing complex source code transformations. These techniques are intended to be as easy and intuitive as possible for ROSE users, i.e., for the designers of the library-specific preprocessors.
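
    The semantics-based optimization such preprocessors can introduce is easy to picture with array-class expressions: evaluated operator by operator, each overloaded operation allocates a temporary array, whereas knowledge of array semantics lets the expression be rewritten to reuse one buffer (and, in a compiled setting, to fuse the passes into a single loop). The NumPy sketch below is schematic, not ROSE output.

    ```python
    import numpy as np

    a, b, c = (np.random.rand(1_000_000) for _ in range(3))

    # Naive library evaluation: each overloaded operator allocates a temporary.
    tmp1 = a + b
    tmp2 = tmp1 * 2.0
    result = tmp2 + c

    # What a semantics-aware source-to-source rewrite can emit instead:
    # one reused buffer, no temporaries (a compiler could also fuse the passes).
    fused = np.empty_like(a)
    np.add(a, b, out=fused)
    np.multiply(fused, 2.0, out=fused)
    np.add(fused, c, out=fused)
    assert np.allclose(result, fused)
    ```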

  8. HAL/S-FC compiler system specifications

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This document specifies the informational interfaces within the HAL/S-FC compiler, and between the compiler and the external environment. This Compiler System Specification is for the HAL/S-FC compiler and its associated run time facilities which implement the full HAL/S language. The HAL/S-FC compiler is designed to operate stand-alone on any compatible IBM 360/370 computer and within the Software Development Laboratory (SDL) at NASA/JSC, Houston, Texas.

  9. All-Optical Implementation of the Ant Colony Optimization Algorithm

    PubMed Central

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-01-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098
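
    The pheromone mechanism described above maps directly onto the classical ant colony optimization algorithm. The short Python sketch below runs sequential "ants" over a small weighted graph, reinforcing edges of short routes and letting pheromone evaporate; the graph, colony size, and constants are invented for illustration.

    ```python
    # Minimal ant colony optimization for a shortest path on a toy graph.
    import random

    GRAPH = {"A": [("B", 1.0), ("C", 4.0)],   # node -> [(neighbor, length)]
             "B": [("C", 1.0), ("D", 5.0)],
             "C": [("D", 1.0)],
             "D": []}
    START, GOAL = "A", "D"
    pheromone = {(u, v): 1.0 for u in GRAPH for v, _ in GRAPH[u]}

    def walk():
        node, path, length = START, [START], 0.0
        while node != GOAL:
            edges = GRAPH[node]
            weights = [pheromone[(node, v)] / w for v, w in edges]  # prefer short,
            r, acc = random.random() * sum(weights), 0.0            # well-marked edges
            for (v, w), wt in zip(edges, weights):
                acc += wt
                if r <= acc:
                    node, length = v, length + w
                    path.append(v)
                    break
        return path, length

    best = None
    for _ in range(200):                       # colony of sequential ants
        path, length = walk()
        if best is None or length < best[1]:
            best = (path, length)
        for u, v in zip(path, path[1:]):       # evaporate, then deposit more
            pheromone[(u, v)] = 0.9 * pheromone[(u, v)] + 1.0 / length
    print(best)
    ```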

  10. All-Optical Implementation of the Ant Colony Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-05-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems.

  11. Implementing size-optimal discrete neural networks require analog circuitry

    SciTech Connect

    Beiu, V.

    1998-12-01

    This paper starts by overviewing results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions, the authors show that implementing Boolean functions can be done using neurons having an identity transfer function. Because in this case the size of the network is minimized, it follows that size-optimal solutions for implementing Boolean functions can be obtained using analog circuitry. Conclusions and several comments on the required precision end the paper.

  12. Implementation and Optimization of Image Processing Algorithms on Embedded GPU

    NASA Astrophysics Data System (ADS)

    Singhal, Nitin; Yoo, Jin Woo; Choi, Ho Yeol; Park, In Kyu

    In this paper, we analyze the key factors underlying the implementation, evaluation, and optimization of image processing and computer vision algorithms on embedded GPU using OpenGL ES 2.0 shader model. First, we present the characteristics of the embedded GPU and its inherent advantage when compared to embedded CPU. Additionally, we propose techniques to achieve increased performance with optimized shader design. To show the effectiveness of the proposed techniques, we employ cartoon-style non-photorealistic rendering (NPR), speeded-up robust feature (SURF) detection, and stereo matching as our example algorithms. Performance is evaluated in terms of the execution time and speed-up achieved in comparison with the implementation on embedded CPU.

  13. Implementation of generalized optimality criteria in a multidisciplinary environment

    NASA Technical Reports Server (NTRS)

    Canfield, R. A.; Venkayya, V. B.

    1989-01-01

    A generalized optimality criterion method consisting of a dual problem solver combined with a compound scaling algorithm was implemented in the multidisciplinary design tool, ASTROS. This method enables, for the first time in a production design tool, the determination of a minimum weight design using thousands of independent structural design variables while simultaneously considering constraints on response quantities in several disciplines. Even for moderately large examples, the computational efficiency is improved significantly relative to the conventional approach.

  14. Optimal control in NMR spectroscopy: numerical implementation in SIMPSON.

    PubMed

    Tosner, Zdenek; Vosegaard, Thomas; Kehlet, Cindie; Khaneja, Navin; Glaser, Steffen J; Nielsen, Niels Chr

    2009-04-01

    We present the implementation of optimal control into the open source simulation package SIMPSON for development and optimization of nuclear magnetic resonance experiments for a wide range of applications, including liquid- and solid-state NMR, magnetic resonance imaging, quantum computation, and combinations between NMR and other spectroscopies. Optimal control enables efficient optimization of NMR experiments in terms of amplitudes, phases, offsets etc. for hundreds-to-thousands of pulses to fully exploit the experimentally available high degree of freedom in pulse sequences to combat variations/limitations in experimental or spin system parameters or design experiments with specific properties typically not covered as easily by standard design procedures. This facilitates straightforward optimization of experiments under consideration of rf and static field inhomogeneities, limitations in available or desired rf field strengths (e.g., for reduction of sample heating), spread in resonance offsets or coupling parameters, variations in spin systems etc. to meet the actual experimental conditions as close as possible. The paper provides a brief account on the relevant theory and in particular the computational interface relevant for optimization of state-to-state transfer (on the density operator level) and the effective Hamiltonian on the level of propagators along with several representative examples within liquid- and solid-state NMR spectroscopy.

  15. Optimal clinical implementation of the Siemens virtual wedge.

    PubMed

    Walker, C P; Richmond, N D; Lambert, G D

    2003-01-01

    Installation of a modern high-energy Siemens Primus linear accelerator at the Northern Centre for Cancer Treatment (NCCT) provided the opportunity to investigate the optimal clinical implementation of the Siemens virtual wedge filter. Previously published work has concentrated on the production of virtual wedge angles at 15 degrees, 30 degrees, 45 degrees, and 60 degrees as replacements for the Siemens hard wedges of the same nominal angles. However, treatment plan optimization of the dose distribution can be achieved with the Primus, as its control software permits the selection of any virtual wedge angle from 15 degrees to 60 degrees in increments of 1 degrees. The same result can also be produced from a combination of open and 60 degrees wedged fields. Helax-TMS models both of these modes of virtual wedge delivery by the wedge angle and the wedge fraction methods respectively. This paper describes results of timing studies in the planning of optimized patient dose distributions by both methods and in the subsequent treatment delivery procedures. Employment of the wedge fraction method results in the delivery of small numbers of monitor units to the beam's central axis; therefore, wedge profile stability and delivered dose with low numbers of monitor units were also investigated. The wedge fraction was proven to be the most efficient method when the time taken for both planning and treatment delivery were taken into consideration, and is now used exclusively for virtual wedge treatment delivery in Newcastle. It has also been shown that there are no unfavorable dosimetric consequences from its practical implementation.

  16. Designing a stencil compiler for the Connection Machine model CM-5

    SciTech Connect

    Brickner, R.G.; Holian, K.; Thiagarajan, B.; Johnsson, S.L. |

    1994-12-31

    In this paper the authors present the design of a stencil compiler for the Connection Machine system CM-5. The stencil compiler will optimize the data motion between processing nodes, minimize the data motion within a node, and minimize the data motion between registers and local memory in a node. The compiler will natively support two-dimensional stencils, but stencils in three dimensions will be automatically decomposed. Lower-dimensional stencils are treated as degenerate stencils. The compiler will be integrated as part of the CM Fortran programming system. Much of the compiler code will be adapted from the CM-2/200 stencil compiler, which is part of the Connection Machine Scientific Software Library (CMSSL) Release 3.1 for the CM-2/200, and the compiler will be available as part of CMSSL for the CM-5. In addition to setting down design considerations, they report on the implementation status of the stencil compiler. In particular, they discuss optimization strategies, the status of code conversion from the CM-2/200 to the CM-5 architecture, and the measured performance of prototype target code of the kind the compiler will generate.

  17. Optimized evaporation technique for leachate treatment: Small scale implementation.

    PubMed

    Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz

    2016-04-01

    This paper introduces an optimized evaporation technique for leachate treatment. For this purpose, and in order to study the feasibility and measure the effectiveness of forced evaporation, three cuboidal steel tubs were designed and implemented. The first, a control tub, was installed at ground level to monitor natural evaporation. The second and third tubs, the models under investigation, were installed at ground level (equipped tub 1) and above ground level (equipped tub 2), respectively, and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped tubs was much higher than at the control tub. It was accelerated five times in the winter period, with the evaporation rate increasing from 0.37 mm/day to 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times, increasing from 3.06 mm/day to 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively under either electric or solar energy supply, and will accelerate the evaporation rate by three to five times regardless of the season temperature.

  18. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Compiler organization is discussed, including overall compiler structure, internal data transfer, compiler development, and code optimization. The user, system, and SDL interfaces are described, along with compiler system requirements. The run-time software support package and the restrictions and dependencies of the HAL/S-FC system are also considered.

  19. Ada Compiler Validation Summary Report: Certificate Number: 910121I1. 11124 TeleSoft, TeleGen2 Ada Cross Development System, Version 4.1, for VAX/VMS to 68k, MicroVAX 3800(Host) to Motorola MVME 133A-20 (MC68020) (Target).

    DTIC Science & Technology

    1991-02-11

    can be used with the compiler or the optimizer (OPTIMIZE). Using the /SQUEEZE qualifier during compilation causes the intermediate forms to be ... implementation-dependent characteristics: Interface (assembly, Fortran, Pascal, and C); List and Page (in the context of source/error compiler ...)

  20. Python based high-level synthesis compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard

    2014-11-01

    This paper presents a Python-based high-level synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. An FPGA combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation, and first results of the created Python-based compiler.
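
    The kind of algorithmic Python description such a flow accepts can be sketched as follows. The example is a small FIR filter whose inner loop is fully unrollable, the sort of construct an HLS compiler can map to parallel hardware; the function names and coding style below are illustrative assumptions, not the actual input language of the compiler described here.

        def fir4(taps=(1, 3, 3, 1)):
            """Return a 4-tap FIR filter; the inner multiply-accumulate loop has a
            fixed trip count, so an HLS compiler could unroll it onto parallel DSP
            slices and pipeline successive samples."""
            state = [0] * len(taps)

            def step(sample):
                state.insert(0, sample)
                state.pop()
                return sum(c * x for c, x in zip(taps, state))

            return step

        filt = fir4()
        print([filt(s) for s in (1, 0, 0, 0, 0)])   # impulse response -> [1, 3, 3, 1, 0]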

  1. Optimal control of ICU patient discharge: from theory to implementation.

    PubMed

    Mallor, Fermín; Azcárate, Cristina; Barado, Julio

    2015-09-01

    This paper deals with the management of scarce health care resources. We consider a control problem in which the objective is to minimize the rate of patient rejection due to service saturation. The scope of decisions is limited, in terms both of the amount of resources to be used, which are supposed to be fixed, and of the patient arrival pattern, which is assumed to be uncontrollable. This means that the only potential areas of control are speed or completeness of service. By means of queuing theory and optimization techniques, we provide a theoretical solution expressed in terms of service rates. In order to make this theoretical analysis useful for the effective control of the healthcare system, however, further steps in the analysis of the solution are required: physicians need flexible and medically-meaningful operative rules for shortening patient length of service to the degree needed to give the service rates dictated by the theoretical analysis. The main contribution of this paper is to discuss how the theoretical solutions can be transformed into effective management rules to guide doctors' decisions. The study examines three types of rules based on intuitive interpretations of the theoretical solution. Rules are evaluated through implementation in a simulation model. We compare the service rates provided by the different policies with those dictated by the theoretical solution. Probabilistic analysis is also included to support rule validity. An Intensive Care Unit is used to illustrate this control problem. The study focuses on the Markovian case before moving on to consider more realistic LoS distributions (Weibull, Lognormal and Phase-type distribution).
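
    The flavour of the queueing analysis can be conveyed with a minimal sketch, assuming the ICU behaves as an Erlang loss system (all beds busy means rejection): the service rate is swept upwards until the rejection probability falls below a target. The bed count, arrival rate, and target below are illustrative numbers, not figures from the paper, and the paper's actual model and management rules are considerably richer.

        def erlang_b(offered_load, beds):
            """Blocking (rejection) probability of an M/G/c/c loss system with
            offered load a = arrival_rate / service_rate and c beds (Erlang B)."""
            b = 1.0
            for k in range(1, beds + 1):
                b = offered_load * b / (k + offered_load * b)
            return b

        def min_service_rate(arrival_rate, beds, target_rejection):
            """Smallest service rate (1 / mean length of stay) keeping the rejection
            probability below the target, found by a simple upward sweep."""
            mu = arrival_rate / beds            # start from a heavily loaded unit
            while erlang_b(arrival_rate / mu, beds) > target_rejection:
                mu *= 1.01                      # shorten the mean stay by 1%
            return mu

        lam, beds = 1.2, 12                     # illustrative: patients/day, ICU beds
        mu = min_service_rate(lam, beds, target_rejection=0.05)
        print(f"required service rate {mu:.3f}/day -> mean stay {1 / mu:.1f} days")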

  2. Testing-Based Compiler Validation for Synchronous Languages

    NASA Technical Reports Server (NTRS)

    Garoche, Pierre-Loic; Howar, Falk; Kahsai, Temesghen; Thirioux, Xavier

    2014-01-01

    In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test-suite with high behavioral coverage and geared towards discovery of faults for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.

  3. Compiler-assisted static checkpoint insertion

    NASA Technical Reports Server (NTRS)

    Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.

    1992-01-01

    This paper describes a compiler-assisted approach for static checkpoint insertion. Instead of fixing the checkpoint location before program execution, a compiler-enhanced polling mechanism is utilized to maintain both the desired checkpoint intervals and reproducible checkpoint locations. The technique has been implemented in a GNU CC compiler for Sun 3 and Sun 4 (Sparc) processors. Experiments demonstrate that the approach provides stable checkpoint intervals and reproducible checkpoint placements with performance overhead comparable to a previously presented compiler-assisted dynamic scheme (CATCH) utilizing the system clock.
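
    The polling idea can be sketched in a few lines: the compiler inserts a poll call at fixed program points (for example, loop back-edges), and a checkpoint is written only at such a poll site once the desired interval has elapsed, which keeps checkpoint locations reproducible while the interval stays close to its target. This Python stand-in only illustrates the mechanism; the paper's implementation lives inside GNU CC and checkpoints full process state.

        import pickle
        import time

        CHECKPOINT_INTERVAL = 0.5              # target seconds between checkpoints
        _last_checkpoint = time.monotonic()

        def poll(state, path="checkpoint.pkl"):
            """Poll site such as a compiler would insert at a loop back-edge: the
            checkpoint is taken only here, so its program location is reproducible."""
            global _last_checkpoint
            now = time.monotonic()
            if now - _last_checkpoint >= CHECKPOINT_INTERVAL:
                with open(path, "wb") as f:
                    pickle.dump(state, f)
                _last_checkpoint = now

        def long_computation(n):
            total = 0
            for i in range(n):
                total += i * i
                poll({"i": i, "total": total})  # inserted poll point
            return total

        print(long_computation(2_000_000))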

  4. NONMEM version III implementation on a VAX 9000: a DCL procedure for single-step execution and the unrealized advantage of a vectorizing FORTRAN compiler.

    PubMed

    Vielhaber, J P; Kuhlman, J V; Barrett, J S

    1993-06-01

    There is great interest within the FDA, academia, and the pharmaceutical industry to provide more detailed information about the time course of drug concentration and effect in subjects receiving a drug as part of their overall therapy. Advocates of this effort expect the eventual goal of these endeavors to provide labeling which reflects the experience of drug administration to the entire population of potential recipients. The set of techniques which have been thus far applied to this task has been defined as population approach methodologies. While a consensus view on the usefulness of these techniques is not likely to be formed in the near future, most pharmaceutical companies or individuals who provide kinetic/dynamic support for drug development programs are investigating population approach methods. A major setback in this investigation has been the shortage of computational tools to analyze population data. One such algorithm, NONMEM, supplied by the NONMEM Project Group of the University of California, San Francisco has been widely used and remains the most accessible computational tool to date. The program is distributed to users as FORTRAN 77 source code with instructions for platform customization. Given the memory and compiler requirements of this algorithm and the intensive matrix manipulation required for run convergence and parameter estimation, this program's performance is largely determined by the platform and the FORTRAN compiler used to create the NONMEM executable. Benchmark testing on a VAX 9000 with Digital's FORTRAN (v. 1.2) compiler suggests that this is an acceptable platform. Due to excessive branching within the loops of the NONMEM source code, the vector processing capabilities of the KV900-AA vector processor actually decrease performance. A DCL procedure is given to provide single step execution of this algorithm.

  5. Automatic OPC repair flow: optimized implementation of the repair recipe

    NASA Astrophysics Data System (ADS)

    Bahnas, Mohamed; Al-Imam, Mohamed; Word, James

    2007-10-01

    Virtual manufacturing, enabled by rapid, accurate, full-chip simulation, is a main pillar in achieving successful mask tape-out in cutting-edge low-k1 lithography. It facilitates detecting printing failures before a costly and time-consuming mask tape-out and wafer print occur. The role of the OPC verification step is critical in the early production phases of a new process development, since various layout patterns are suspected of failing or causing performance degradation and in turn need to be accurately flagged and fed back to the OPC engineer for further learning and enhancement of the OPC recipe. In the advanced phases of process development there is a much lower probability of detecting failures, but the OPC verification step still acts as the last line of defense for the whole implemented RET work. In a recent publication the optimum approach to responding to these detected failures was addressed, and a solution was proposed to repair these defects with an automated methodology fully integrated with, and compatible with, the main RET/OPC flow. In this paper the authors present further work and optimizations of this repair flow. An automated analysis methodology for the root causes of the defects, and a classification covering all possible causes, will be discussed. This automated analysis approach includes all the learning from previously highlighted causes as well as any new discoveries. Next, according to the automated pre-classification of the defects, the appropriate OPC repair approach (i.e., OPC knob) can easily be selected for each classified defect location, instead of applying all approaches at all locations. This helps cut down the runtime of the OPC repair processing and reduces the number of iterations needed to reach zero defects. An output report of the existing causes of defects and how the tool handled them will be generated. The report will help with further learning

  6. Implementation and Optimization of an Inverse Photoemission Spectroscopy Setup

    NASA Astrophysics Data System (ADS)

    Gina, Ervin

    Inverse photoemission spectroscopy (IPES) is utilized for determining the unoccupied electron states of materials. It is a complementary technique to the widely used photoemission spectroscopy (PES) as it analyzes what PES cannot, the states above the Fermi energy. This method is essential to investigating the structure of a solid and its states. IPES has a broad range of uses and is only recently being utilized. This thesis describes the setup, calibration and operation of an IPES experiment. The IPES setup consists of an electron gun which emits electrons towards a sample, where photons are released, which are measured in isochromat mode via a photon detector of a set energy bandwidth. By varying the electron energy at the source, a spectrum of the unoccupied density of states can be obtained. Since IPES is not commonly commercially available the design consists of many custom made components. The photon detector operates as a bandpass filter with a mixture of acetone/argon and a CaF2 window setting the cutoff energies. The counter electronics consist of a pre-amplifier, amplifier and analyzer to detect the count rate at each energy level above the Fermi energy. Along with designing the hardware components, a Labview program was written to capture and log the data for further analysis. The software features several operating modes including automated scanning which allows the user to enter the desired scan parameters and the program will scan the sample accordingly. Also implemented in the program is the control of various external components such as the electron gun and high voltage power supply. The new setup was tested for different gas mixtures and an optimum ratio was determined. Subsequently, IPES scans of several sample materials were performed for testing and optimization. A scan of Au was utilized for the determination of the Fermi edge energy and for comparison to literature spectra. The Fermi edge energy was then used in a measurement of indium tin

  7. Design and Laboratory Implementation of Autonomous Optimal Motion Planning for Non-Holonomic Planetary Rovers

    DTIC Science & Technology

    2012-12-01

    Thesis by Travis K. Bateman, December 2012, on the design and laboratory implementation of autonomous optimal motion planning for non-holonomic planetary rovers. ... The optimal trajectories were implemented at the Control and Optimization Laboratories with a TRAXXAS remote-controlled vehicle modified ...

  8. An implementable algorithm for the optimal design centering, tolerancing, and tuning problem

    SciTech Connect

    Polak, E.

    1982-05-01

    An implementable master algorithm for solving optimal design centering, tolerancing, and tuning problems is presented. This master algorithm decomposes the original nondifferentiable optimization problem into a sequence of ordinary nonlinear programming problems. The master algorithm generates sequences with accumulation points that are feasible and satisfy a new optimality condition, which is shown to be stronger than the one previously used for these problems.

  9. Livermore Compiler Analysis Loop Suite

    SciTech Connect

    Hornung, R. D.

    2013-03-01

    LCALS is designed to evaluate compiler optimizations and the performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate the floating-point performance of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assessing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of the 24 loop kernels from the older Livermore Loop suites, plus various others representative of loops found in current production application codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, and which loops are run and their lengths. It generates timing statistics for analyzing and comparing variants of individual loops. Also, it is easy to add loops to the suite as desired.
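
    The configurable-framework idea (register kernels, control sample counts and lengths, report timing statistics) can be mimicked with a short harness. The sketch below is a Python stand-in under those assumptions; LCALS itself is a C++ suite, and the single "hydro"-style kernel shown is only a loose paraphrase of Livermore Loop 1.

        import statistics
        import time

        KERNELS = {}

        def kernel(name):
            """Register a loop kernel with the timing harness."""
            def wrap(func):
                KERNELS[name] = func
                return func
            return wrap

        @kernel("hydro")
        def hydro(n, q=0.5, r=1.5, t=2.0):
            x = [0.0] * n
            y = [float(k) for k in range(n)]
            for k in range(n - 11):
                x[k] = q + y[k] * (r * y[k + 10] + t * y[k + 11])
            return x

        def run_suite(length=20_000, samples=10):
            """Time each registered kernel `samples` times and report statistics."""
            for name, func in KERNELS.items():
                times = []
                for _ in range(samples):
                    start = time.perf_counter()
                    func(length)
                    times.append(time.perf_counter() - start)
                print(f"{name}: mean {statistics.mean(times):.4f} s, "
                      f"stdev {statistics.stdev(times):.4f} s")

        run_suite()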

  10. Spacelab user implementation assessment study. Volume 2: Concept optimization

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The integration and checkout activities of Spacelab payloads consist of two major sets of tasks: support functions, and test and operations. The support functions are definitized, and the optimized approach for the accomplishment of these functions is delineated. Comparable data are presented for the test and operations activities.

  11. Design and Experimental Implementation of Optimal Spacecraft Antenna Slews

    DTIC Science & Technology

    2013-12-01

    any spacecraft antenna configuration. Various software suites were used to perform thorough validation and verification of the Newton-Euler formulation developed herein. The antenna model was then utilized to solve an optimal control problem for a geostationary ...

  12. Array-Pattern-Match Compiler for Opportunistic Data Analysis

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A computer program has been written to facilitate real-time sifting of scientific data as they are acquired to find data patterns deemed to warrant further analysis. The patterns in question are of a type denoted array patterns, which are specified by nested parenthetical expressions. [One example of an array pattern is ((>3) 0 (not=1)): this pattern matches a vector of at least three elements, the first of which exceeds 3, the second of which is 0, and the third of which does not equal 1.] This program accepts a high-level description of a static array pattern and compiles a highly optimized and compact program to determine whether any given instance of any data array matches that pattern. The compiler implemented by this program is independent of the target language, so that as new languages are used to write code that processes scientific data, they can easily be adapted to this compiler. This program runs on a variety of different computing platforms. It must be run in conjunction with any one of a number of Lisp compilers that are available commercially or as shareware.
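
    The pattern semantics quoted above can be illustrated with a small interpreter. Note that the program described here compiles a pattern into target-language code, whereas the sketch below merely interprets an equivalent pattern written in Python syntax; the tuple encoding of the operators is an assumption made for the example.

        import operator

        OPS = {">": operator.gt, "<": operator.lt, "=": operator.eq, "not=": operator.ne}

        def compile_pattern(pattern):
            """Turn a pattern such as ((">", 3), 0, ("not=", 1)) into a predicate over
            vectors (an interpreter sketch, not emitted target-language code)."""
            checks = []
            for element in pattern:
                if isinstance(element, tuple):
                    op, value = element
                    checks.append(lambda x, op=OPS[op], value=value: op(x, value))
                else:
                    checks.append(lambda x, value=element: x == value)

            def matches(vector):
                # at least as many elements as the pattern, leading ones as specified
                return len(vector) >= len(checks) and all(
                    check(x) for check, x in zip(checks, vector))

            return matches

        # The pattern ((>3) 0 (not=1)) from the abstract, written in Python syntax:
        match = compile_pattern(((">", 3), 0, ("not=", 1)))
        print(match([5, 0, 2, 7]))   # True
        print(match([2, 0, 2]))      # False: the first element does not exceed 3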

  13. Endgame implementations for the Efficient Global Optimization (EGO) algorithm

    NASA Astrophysics Data System (ADS)

    Southall, Hugh L.; O'Donnell, Teresa H.; Kaanta, Bryan

    2009-05-01

    Efficient Global Optimization (EGO) is a competent evolutionary algorithm which can be useful for problems with expensive cost functions [1,2,3,4,5]. The goal is to find the global minimum using as few function evaluations as possible. Our research indicates that EGO requires far fewer evaluations than genetic algorithms (GAs). However, neither algorithm always drills down to the absolute minimum; therefore, the addition of a final local search technique is indicated. In this paper, we introduce three "endgame" techniques. The techniques can improve optimization efficiency (fewer cost function evaluations) and, if required, they can provide very accurate estimates of the global minimum. We also report results using a different cost function than the one previously used [2,3].
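
    The generic "endgame" idea, polishing the best point returned by the global stage with a cheap local search, can be sketched as follows. The random sampling below is only a stand-in for EGO or a GA, the Rastrigin cost function is an arbitrary illustrative choice, and the Nelder-Mead polish is not one of the three specific techniques introduced in the paper.

        import numpy as np
        from scipy.optimize import minimize

        def cost(x):
            """Example multimodal cost function (2-D Rastrigin)."""
            x = np.asarray(x)
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        # Stand-in for the global stage (EGO or a GA would go here): keep the best
        # point from a modest random sample of the search box.
        rng = np.random.default_rng(0)
        candidates = rng.uniform(-5.12, 5.12, size=(200, 2))
        best = min(candidates, key=cost)

        # "Endgame": a cheap local search started from the global stage's best point.
        result = minimize(cost, best, method="Nelder-Mead",
                          options={"xatol": 1e-8, "fatol": 1e-8})
        print("global-stage best cost:", float(cost(best)))
        print("after local endgame:", float(result.fun), "at", result.x)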

  14. Mechanical systems: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation of several mechanized systems is presented. The articles are contained in three sections: robotics; industrial mechanical systems, including several on linear and rotary systems; and mechanical control systems, such as brakes and clutches.

  15. Analytical techniques: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.

  16. Uranium Location Database Compilation

    EPA Pesticide Factsheets

    EPA has compiled mine location information from federal, state, and Tribal agencies into a single database as part of its investigation into the potential environmental hazards of wastes from abandoned uranium mines in the western United States.

  17. Development and Implementation of Practical Optimal LES Models

    DTIC Science & Technology

    2007-03-31

    moment equation simulations using QNA were quite accurate for small time intervals and displayed unphysical behavior only after long simulation times ... flux model for the optimization of the variance of the model. After long simulation times, the results from the dynamic models exhibit inaccurate large ... 2 r f_8 + f'_7 + 5 r f_10 + r^2 f'_10 = 0 (156); r^2 f'_12 + 7 r f_12 + f'_11 = 0 (157). After the imposition of the continuity constraint, RI and RQ have ...

  18. Configuring artificial neural networks to implement function optimization

    NASA Astrophysics Data System (ADS)

    Sundaram, Ramakrishnan

    2002-04-01

    Threshold binary networks of the discrete Hopfield type lead to efficient retrieval of the regularized least-squares (LS) solution in certain inverse problem formulations. Partitions of these networks are identified based on the forms of representation of the data. The objective criterion is optimized using sequential and parallel updates on these partitions. The algorithms consist of minimizing a suboptimal objective criterion in the currently active partition. Once a local minimum is attained, an inactive partition is chosen to continue the minimization. This strategy is especially effective when substantial data must be processed by resources that are constrained in either space or available bandwidth.
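
    The partitioned update strategy can be illustrated with a minimal sketch: a discrete Hopfield-type network with symmetric weights is relaxed by sequential threshold updates, with two partitions of units activated in turn. The random quadratic energy and the even split into two partitions are assumptions of the example; the paper derives its partitions from the data representation of a specific inverse problem.

        import numpy as np

        def energy(W, b, s):
            return -0.5 * s @ W @ s - b @ s

        def sequential_update(W, b, s, partition):
            """One pass of asynchronous threshold updates over one partition; with
            symmetric W and zero diagonal each update can only lower the energy."""
            for i in partition:
                s[i] = 1.0 if W[i] @ s + b[i] > 0 else -1.0
            return s

        rng = np.random.default_rng(1)
        n = 16
        W = rng.standard_normal((n, n))
        W = (W + W.T) / 2
        np.fill_diagonal(W, 0.0)
        b = rng.standard_normal(n)
        s = rng.choice([-1.0, 1.0], size=n)

        partitions = [range(0, n // 2), range(n // 2, n)]   # two partitions of the units
        for sweep in range(5):
            for part in partitions:                         # activate partitions in turn
                s = sequential_update(W, b, s, part)
            print(f"sweep {sweep}: energy {energy(W, b, s):.3f}")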

  19. Implementing size-optimal discrete neural networks requires analog circuitry

    SciTech Connect

    Beiu, V.

    1998-03-01

    Neural networks (NNs) have been experimentally shown to be quite effective in many applications. This success has led researchers to undertake a rigorous analysis of the mathematical properties that enable them to perform so well. It has generated two directions of research: (i) to find existence/constructive proofs for what is now known as the universal approximation problem; (ii) to find tight bounds on the size needed by the approximation problem (or some particular cases). The paper will focus on both aspects, for the particular case when the functions to be implemented are Boolean.

  20. HAL/S-FC and HAL/S-360 compiler system program description

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The compiler is a large multi-phase design and can be broken into four phases: Phase 1 inputs the source language and does a syntactic and semantic analysis generating the source listing, a file of instructions in an internal format (HALMAT) and a collection of tables to be used in subsequent phases. Phase 1.5 massages the code produced by Phase 1, performing machine independent optimization. Phase 2 inputs the HALMAT produced by Phase 1 and outputs machine language object modules in a form suitable for the OS-360 or FCOS linkage editor. Phase 3 produces the SDF tables. The four phases described are written in XPL, a language specifically designed for compiler implementation. In addition to the compiler, there is a large library containing all the routines that can be explicitly called by the source language programmer plus a large collection of routines for implementing various facilities of the language.

  1. Experimental implementation of an adiabatic quantum optimization algorithm

    NASA Astrophysics Data System (ADS)

    Steffen, Matthias; van Dam, Wim; Hogg, Tad; Breyta, Greg; Chuang, Isaac

    2003-03-01

    A novel quantum algorithm using adiabatic evolution was recently presented by Ed Farhi [1] and Tad Hogg [2]. This algorithm represents a remarkable discovery because it offers new insights into the usefulness of quantum resources. An experimental demonstration of an adiabatic algorithm has remained beyond reach because it requires an experimentally accessible Hamiltonian which encodes the problem and which must also be smoothly varied over time. We present tools to overcome these difficulties by discretizing the algorithm and extending average Hamiltonian techniques [3]. We used these techniques in the first experimental demonstration of an adiabatic optimization algorithm: solving an instance of the MAXCUT problem using three qubits and nuclear magnetic resonance techniques. We show that there exists an optimal run-time of the algorithm which can be predicted using a previously developed decoherence model. [1] E. Farhi et al., quant-ph/0001106 (2000) [2] T. Hogg, PRA, 61, 052311 (2000) [3] W. Rhim, A. Pines, J. Waugh, PRL, 24,218 (1970)

  2. A controller based on Optimal Type-2 Fuzzy Logic: systematic design, optimization and real-time implementation.

    PubMed

    Fayek, H M; Elamvazuthi, I; Perumal, N; Venkatesh, B

    2014-09-01

    A computationally-efficient systematic procedure to design an Optimal Type-2 Fuzzy Logic Controller (OT2FLC) is proposed. The main scheme is to optimize the gains of the controller using Particle Swarm Optimization (PSO), then optimize only two parameters per type-2 membership function using a Genetic Algorithm (GA). The proposed OT2FLC was implemented in real-time to control the position of a DC servomotor, which is part of a robotic arm. The performance judgments were carried out based on the Integral Absolute Error (IAE), as well as the computational cost. Various type-2 defuzzification methods were investigated in real-time. A comparative analysis with an Optimal Type-1 Fuzzy Logic Controller (OT1FLC) and a PI controller demonstrated OT2FLC's superiority, which is evident in handling uncertainty and imprecision induced in the system by means of noise and disturbances.

  3. Local structural modeling for implementation of optimal active damping

    NASA Astrophysics Data System (ADS)

    Blaurock, Carl A.; Miller, David W.

    1993-09-01

    Local controllers are good candidates for active control of flexible structures. Local control generally consists of low order, frequency benign compensators using collocated hardware. Positive real compensators and plant transfer functions ensure that stability margins and performance robustness are high. The typical design consists of an experimentally chosen gain on a fixed form controller such as rate feedback. The resulting compensator performs some combination of damping (dissipating energy) and structural modification (changing the energy flow paths). Recent research into structural impedance matching has shown how to optimize dissipation based on the local behavior of the structure. This paper investigates the possibility of improving performance by influencing global energy flow, using local controllers designed using a global performance metric.

  4. Selected photographic techniques, a compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A selection has been made of methods, devices, and techniques developed in the field of photography during implementation of space and nuclear research projects. These items include many adaptations, variations, and modifications to standard hardware and practice, and should prove interesting to both amateur and professional photographers and photographic technicians. This compilation is divided into two sections. The first section presents techniques and devices that have been found useful in making photolab work simpler, more productive, and higher in quality. Section two deals with modifications to and special applications for existing photographic equipment.

  5. Fault-Tree Compiler Program

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.

  6. Evaluation of a multicore-optimized implementation for tomographic reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far.

  7. An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments

    NASA Astrophysics Data System (ADS)

    Kang, Yuncheol; Prabhu, Vittal

    The science of preventing youth problems has significantly advanced in developing evidence-based prevention program (EBP) by using randomized clinical trials. Effective EBP can reduce delinquency, aggression, violence, bullying and substance abuse among youth. Unfortunately the outcomes of EBP implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose Markov Decision Process (MDP) for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.
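
    The MDP idea can be made concrete with a toy model: states are implementation-fidelity levels, actions are "no support" versus "coaching", and value iteration yields a policy that says when coaching pays off. All transition probabilities, rewards, and the discount factor below are invented purely for illustration and are not taken from the paper.

        import numpy as np

        # P[a, s, s']: transition probabilities; states 0/1/2 = low/medium/high fidelity,
        # actions 0/1 = no support / coaching.  Numbers are illustrative only.
        P = np.array([
            [[0.8, 0.2, 0.0],      # no support: fidelity tends to drift down
             [0.4, 0.5, 0.1],
             [0.1, 0.4, 0.5]],
            [[0.4, 0.5, 0.1],      # coaching: fidelity tends to rise
             [0.1, 0.5, 0.4],
             [0.0, 0.2, 0.8]],
        ])
        R = np.array([[0.0, 1.0, 2.0],     # no support: outcome value grows with fidelity
                      [-0.5, 0.5, 1.5]])   # coaching: same value minus a coaching cost
        gamma = 0.95

        V = np.zeros(3)
        for _ in range(500):                               # value iteration
            Q = R + gamma * np.einsum("ast,t->as", P, V)   # Q[a, s]
            V = Q.max(axis=0)
        policy = Q.argmax(axis=0)
        print("optimal value per state:", np.round(V, 2))
        print("optimal action per state (0 = no support, 1 = coaching):", policy)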

  8. Compiler Acceptance Criteria Guidebook

    DTIC Science & Technology

    1977-05-01

    Programs generated by a compiler, coupled with the expected life of the compiled program (number of times to be used), can often make this aspect ... of concern are: CPU time per statement or program; core usage; I/O access time; wait or dead time; disk storage; tape drive mounts. In large ... left to the specification agency's discretion. A prime example is the OS 370 P1eP system. ... Level of Expertise: another often neglected cost item is ...

  9. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer that is precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
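
    The core calculation such a tool performs can be sketched for the simplest case: a tree of AND, OR, and INVERT gates over independent basic events, each appearing only once. The EXCLUSIVE OR and M OF N gates, repeated events (which require cut-set methods), and the sensitivity sweep are omitted from this sketch.

        from math import prod

        def evaluate(node, basic):
            """Probability of a fault-tree node, assuming independent basic events that
            each appear only once.  Nodes are ("AND", ...), ("OR", ...), ("INVERT", child)
            or basic-event names."""
            if isinstance(node, str):
                return basic[node]
            gate, *children = node
            p = [evaluate(child, basic) for child in children]
            if gate == "AND":
                return prod(p)
            if gate == "OR":
                return 1.0 - prod(1.0 - q for q in p)
            if gate == "INVERT":
                return 1.0 - p[0]
            raise ValueError(f"unsupported gate {gate!r}")

        # Top event: pump fails AND (power fails OR operator error).
        tree = ("AND", "pump", ("OR", "power", "operator"))
        basic = {"pump": 1e-3, "power": 2e-4, "operator": 5e-4}
        print(f"top event probability: {evaluate(tree, basic):.3e}")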

  10. FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar

    NASA Astrophysics Data System (ADS)

    Azim, Noor ul; Jun, Wang

    2016-11-01

    Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about target parameters such as range, speed, and direction in the field of radar communication. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms which are used to improve target detection and range resolution and to estimate the speed of a target. Firstly, these algorithms are simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into an implementation on hardware using a Xilinx FPGA. The chosen FPGA is the Xilinx Virtex-6 (XC6LVX75T). For the hardware implementation, pipeline optimization is adopted and other factors are considered for resource optimization in the process of implementation. The algorithms on which this work focuses for improving target detection, range resolution, and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.
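
    Pulse compression, the heart of the processing chain described above, reduces to correlating the received data with the known transmitted chirp. The numpy sketch below shows the idea at baseband with an arbitrary delay and noise level; an FPGA implementation would perform the same correlation as fast (FFT-based) convolution, and the waveform parameters are illustrative, not those of the paper.

        import numpy as np

        fs, T, B = 40e6, 10e-6, 5e6                  # sample rate, pulse width, sweep bandwidth
        t = np.arange(int(fs * T)) / fs
        chirp = np.exp(1j * np.pi * (B / T) * t**2)  # baseband LFM pulse (0 -> B sweep)

        # Received signal: the pulse delayed to a 400-sample range bin, plus noise.
        rng = np.random.default_rng(0)
        rx = np.zeros(2048, dtype=complex)
        rx[400:400 + chirp.size] += chirp
        rx += 0.5 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

        # Pulse compression = matched filtering (correlation with the known pulse).
        compressed = np.correlate(rx, chirp, mode="valid")
        print("detected range bin:", int(np.argmax(np.abs(compressed))))   # ~400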

  11. Metallurgical processing: A compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The items in this compilation, all relating to metallurgical processing, are presented in two sections. The first section includes processes which are general in scope and applicable to a variety of metals or alloys. The second describes the processes that concern specific metals and their alloys.

  12. Implementation of pattern-specific illumination pupil optimization on Step & Scan systems

    NASA Astrophysics Data System (ADS)

    Engelen, Andre; Socha, Robert J.; Hendrickx, Eric; Scheepers, Wieger; Nowak, Frank; Van Dam, Marco; Liebchen, Armin; Faas, Denis A.

    2004-05-01

    Step & Scan systems are pushed towards low-k1 applications. Contrast enhancement techniques are crucial for successful implementation of these applications in a production environment. An NA/sigma/illumination-mode optimizer and a contrast-based optimization algorithm are implemented in LithoCruiser in order to optimize the illumination setting and illumination pupil for a specific repetitive pattern. Calculated illumination pupils have been realized using Diffractive Optical Elements (DOE), which are supported by ASML's AERIAL II illuminator. The qualification of the illumination pupil is done using inline metrology on the ASML Step & Scan system. This paper describes the process of pattern-specific illumination optimization for a given mask. Multiple examples will be used to demonstrate the advantage of using non-standard illumination pupils.

  13. Optical implementations of the optimal phase-covariant quantum cloning machine

    SciTech Connect

    Fiurasek, Jaromir

    2003-05-01

    We propose two simple implementations of the optimal symmetric 1 → 2 phase-covariant cloning machine for qubits. The first scheme is designed for qubits encoded into polarization states of photons and it involves a mixing of two photons on an unbalanced beam splitter. This scheme is probabilistic and the cloning succeeds with the probability 1/3. In the second setup, the qubits are represented by the states of Rydberg atoms and the cloning is accomplished by the resonant interaction of the atoms with a microwave field confined in a high-Q cavity. This latter approach allows for deterministic implementation of the optimal cloning transformation.

  14. Teleportation scheme implementing the universal optimal quantum cloning machine and the universal NOT gate.

    PubMed

    Ricci, M; Sciarrino, F; Sias, C; De Martini, F

    2004-01-30

    By a significant modification of the standard protocol of quantum state teleportation, two processes "forbidden" by quantum mechanics in their exact form, the universal NOT gate and the universal optimal quantum cloning machine, have been implemented contextually and optimally by a fully linear method. In particular, the first experimental demonstration of the tele-UNOT gate, a novel quantum information protocol, has been reported. The experimental results are found in full agreement with theory.

  15. Optimization of Optical Systems Using Genetic Algorithms: a Comparison Among Different Implementations of The Algorithm

    NASA Astrophysics Data System (ADS)

    López-Medina, Mario E.; Vázquez-Montiel, Sergio; Herrera-Vázquez, Joel

    2008-04-01

    Genetic algorithms (GAs) are a global optimization method that we use in the optimization stage of optical system design. In the case of optical design and optimization, the efficiency and convergence speed of GAs are related to the merit function, the crossover operator, and the mutation operator. In this study we present a comparison between several genetic algorithm implementations using different optical systems, such as an achromatic cemented doublet, an air-spaced doublet, and telescopes. We perform the comparison varying the type of design parameters and the number of parameters to be optimized. We also implement the GAs using discrete parameters encoded as binary chains and continuous parameters encoded as real numbers in the chromosome, analyzing the differences in the time taken to find the solution and the precision of the results between discrete and continuous parameters. Additionally, we use different merit functions to optimize the same optical system. We present the obtained results in tables, graphics, and a detailed example, and from the comparison we conclude which is the best way to implement GAs for optical system design and optimization. The programs developed for this work were written in the C programming language, and OSLO was used for the simulation of the optical systems.

  16. An implementation of particle swarm optimization to evaluate optimal under-voltage load shedding in competitive electricity markets

    NASA Astrophysics Data System (ADS)

    Hosseini-Bioki, M. M.; Rashidinejad, M.; Abdollahi, A.

    2013-11-01

    Load shedding is a crucial issue in power systems, especially in a restructured electricity environment. Market-driven load shedding in reregulated power systems, associated with security as well as reliability, is investigated in this paper. A technoeconomic multi-objective function is introduced to reveal an optimal load shedding scheme considering maximum social welfare. The proposed optimization problem includes maximum GENCOs' and loads' profits as well as the maximum loadability limit under normal and contingency conditions. Particle swarm optimization (PSO), as a heuristic optimization technique, is utilized to find an optimal load shedding scheme. In a market-driven structure, generators offer their bidding blocks while the dispatchable loads bid their price-responsive demands. An independent system operator (ISO) derives a market clearing price (MCP) while rescheduling the amount of generating power in both pre-contingency and post-contingency conditions. The proposed methodology is developed on a 3-bus system and then applied to a modified IEEE 30-bus test system. The obtained results show the effectiveness of the proposed methodology in implementing optimal load shedding that satisfies social welfare while maintaining the voltage stability margin (VSM) through technoeconomic analyses.
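
    A bare-bones PSO loop conveys how the heuristic searches for a shedding scheme. The objective below is a deliberately simple penalty-form stand-in (three buses, a fictitious 50 MW relief requirement, made-up costs); the paper's technoeconomic multi-objective function, market clearing, and network constraints are not modelled here.

        import numpy as np

        def pso(objective, lower, upper, n_particles=30, iters=200,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal particle swarm optimizer (minimization)."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lower, float), np.asarray(upper, float)
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_val = np.apply_along_axis(objective, 1, x)
            gbest = pbest[pbest_val.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                val = np.apply_along_axis(objective, 1, x)
                improved = val < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], val[improved]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, pbest_val.min()

        def shed_cost(s):
            """Cost of shedding s MW at three buses, with a penalty if the total
            relief falls short of a fictitious 50 MW requirement."""
            shortfall = max(0.0, 50.0 - s.sum())
            return s @ np.array([1.0, 1.2, 0.8]) + 100.0 * shortfall

        best, cost = pso(shed_cost, lower=[0, 0, 0], upper=[40, 40, 40])
        print("load shed per bus (MW):", np.round(best, 2), " cost:", round(float(cost), 2))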

  17. Parallel Incremental Compilation

    DTIC Science & Technology

    1990-06-01

    [Seshadri et al., 1988] are taking a more wholistic approach to concurrency in compilation. In their model, the program is divided up along natural ... symbol table references. Deleted and inserted references are put on the symbol's reference list. Special care must be taken to maintain the invariants of ... design as well. The primary design principle is the top-down division of the translation task into a number of much simpler, mutually sequential ...

  18. Valve technology: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A technical compilation on the types, applications and modifications to certain valves is presented. Data cover the following: (1) valves that feature automatic response to stimuli (thermal, electrical, fluid pressure, etc.), (2) modified valves changed by redesign of components to increase initial design effectiveness or give the item versatility beyond its basic design capability, and (3) special purpose valves with limited application as presented, but lending themselves to other uses with minor changes.

  19. Metallurgy: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A compilation on the technical uses of various metallurgical processes is presented. Descriptions are given of the mechanical properties of various alloys, ranging from TAZ-813 at 2200 F to investment cast alloy 718 at -320 F. Methods are also described for analyzing some of the constituents of various alloys from optical properties of carbide precipitates in Rene 41 to X-ray spectrographic analysis of the manganese content of high chromium steels.

  20. An optimized ultrasound digital beamformer with dynamic focusing implemented on FPGA.

    PubMed

    Almekkawy, Mohamed; Xu, Jingwei; Chirala, Mohan

    2014-01-01

    We present a resource-optimized dynamic digital beamformer for an ultrasound system based on a field-programmable gate array (FPGA). A comprehensive 64-channel receive beamformer with full dynamic focusing is embedded in the Altera Arria V FPGA chip. To improve spatial and contrast resolution, full dynamic beamforming is implemented by a novel, resource-optimized method in which the delay summation is realized through a bulk (coarse) delay and a fractional (fine) delay. The sampling frequency is 40 MHz, and the beamformer includes a 240 MHz polyphase filter that enhances the temporal resolution of the system while relaxing the analog-to-digital converter (ADC) bandwidth requirement. The results indicate that our 64-channel dynamic beamformer architecture is amenable to a low-power FPGA-based implementation in a portable ultrasound system.
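
    The split of each focusing delay into a bulk and a fractional part can be illustrated with a delay-and-sum sketch in numpy, where the fine delay is realized by linear interpolation. This is only an algorithmic illustration under simplified assumptions (known geometric delays, a synthetic Gaussian echo); the paper's FPGA design implements the fine delay with a 240 MHz polyphase filter rather than software interpolation.

        import numpy as np

        def frac_delay(x, d):
            """Delay signal x by d samples: integer (bulk) part as a shift, fractional
            (fine) part by linear interpolation between neighbouring samples."""
            bulk, frac = int(np.floor(d)), d - np.floor(d)
            return (1 - frac) * np.roll(x, bulk) + frac * np.roll(x, bulk + 1)

        def echo(tt):
            """Synthetic Gaussian echo centred at sample 200."""
            return np.exp(-((tt - 200) / 6.0) ** 2)

        # The same echo reaches 8 channels with channel-dependent, non-integer delays.
        rng = np.random.default_rng(0)
        n_ch, n_samp = 8, 512
        t = np.arange(n_samp, dtype=float)
        delays = np.linspace(0.0, 7.3, n_ch)                  # in samples
        channels = np.array([echo(t - d) for d in delays])
        channels += 0.2 * rng.standard_normal(channels.shape)

        # Delay-and-sum: re-delay each channel so the echoes align, then average.
        aligned = np.array([frac_delay(ch, delays.max() - d)
                            for ch, d in zip(channels, delays)])
        beamformed = aligned.mean(axis=0)
        print("echo peak at sample:", int(np.argmax(beamformed)))   # ~207 (200 + max delay)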

  1. HAL/S-360 compiler test activity report

    NASA Technical Reports Server (NTRS)

    Helmers, C. T.

    1974-01-01

    The levels of testing employed in verifying the HAL/S-360 compiler were as follows: (1) typical applications program case testing; (2) functional testing of the compiler system and its generated code; and (3) machine oriented testing of compiler implementation on operational computers. Details of the initial test plan and subsequent adaptation are reported, along with complete test results for each phase which examined the production of object codes for every possible source statement.

  2. The Study of Cross-layer Optimization for Wireless Rechargeable Sensor Networks Implemented in Coal Mines

    PubMed Central

    Ding, Xu; Shi, Lei; Han, Jianghong; Lu, Jingting

    2016-01-01

    Wireless sensor networks deployed in coal mines could help companies provide workers working in coal mines with more qualified working conditions. With the underground information collected by sensor nodes at hand, the underground working conditions could be evaluated more precisely. However, sensor nodes may tend to malfunction due to their limited energy supply. In this paper, we study the cross-layer optimization problem for wireless rechargeable sensor networks implemented in coal mines, of which the energy could be replenished through the newly-brewed wireless energy transfer technique. The main results of this article are two-fold: firstly, we obtain the optimal relay nodes’ placement according to the minimum overall energy consumption criterion through the Lagrange dual problem and KKT conditions; secondly, the optimal strategies for recharging locomotives and wireless sensor networks are acquired by solving a cross-layer optimization problem. The cyclic nature of these strategies is also manifested through simulations in this paper. PMID:26828500

  3. The Study of Cross-layer Optimization for Wireless Rechargeable Sensor Networks Implemented in Coal Mines.

    PubMed

    Ding, Xu; Shi, Lei; Han, Jianghong; Lu, Jingting

    2016-01-28

    Wireless sensor networks deployed in coal mines could help companies provide workers working in coal mines with more qualified working conditions. With the underground information collected by sensor nodes at hand, the underground working conditions could be evaluated more precisely. However, sensor nodes may tend to malfunction due to their limited energy supply. In this paper, we study the cross-layer optimization problem for wireless rechargeable sensor networks implemented in coal mines, of which the energy could be replenished through the newly-brewed wireless energy transfer technique. The main results of this article are two-fold: firstly, we obtain the optimal relay nodes' placement according to the minimum overall energy consumption criterion through the Lagrange dual problem and KKT conditions; secondly, the optimal strategies for recharging locomotives and wireless sensor networks are acquired by solving a cross-layer optimization problem. The cyclic nature of these strategies is also manifested through simulations in this paper.

  4. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
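
    The underlying idea, measure several implementations of the same function over a range of inputs and then dispatch each call to whichever was fastest for a similar input, can be sketched in ordinary Python. The three sum-of-squares implementations and the profiled-size dispatch rule are illustrative assumptions; the patent concerns implementation selection on a massively parallel machine across multiple input dimensions.

        import bisect
        import time

        def sum_squares_loop(n):
            total = 0
            for i in range(n):
                total += i * i
            return total

        def sum_squares_builtin(n):
            return sum(i * i for i in range(n))

        def sum_squares_closed_form(n):
            return (n - 1) * n * (2 * n - 1) // 6

        IMPLEMENTATIONS = [sum_squares_loop, sum_squares_builtin, sum_squares_closed_form]

        def time_one(func, n):
            start = time.perf_counter()
            func(n)
            return time.perf_counter() - start

        def profile(sizes, repeats=5):
            """Collect performance data: the fastest implementation per input size."""
            return [(n, min(IMPLEMENTATIONS,
                            key=lambda f: min(time_one(f, n) for _ in range(repeats))))
                    for n in sizes]

        def make_dispatcher(table):
            """Dispatch to the implementation that won for the smallest profiled size
            at or above n (falling back to the largest profiled size)."""
            sizes = [n for n, _ in table]
            def dispatch(n):
                idx = min(bisect.bisect_left(sizes, n), len(sizes) - 1)
                return table[idx][1](n)
            return dispatch

        fast_sum_squares = make_dispatcher(profile([10, 1_000, 100_000]))
        print(fast_sum_squares(50_000))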

  5. Fault-Tree Compiler

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    Fault-Tree Compiler (FTC) program, is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.

  6. Proof-Carrying Code with Correct Compilers

    NASA Technical Reports Server (NTRS)

    Appel, Andrew W.

    2009-01-01

    In the late 1990s, proof-carrying code was able to produce machine-checkable safety proofs for machine-language programs even though (1) it was impractical to prove correctness properties of source programs and (2) it was impractical to prove correctness of compilers. But now it is practical to prove some correctness properties of source programs, and it is practical to prove correctness of optimizing compilers. We can produce more expressive proof-carrying code, that can guarantee correctness properties for machine code and not just safety. We will construct program logics for source languages, prove them sound w.r.t. the operational semantics of the input language for a proved-correct compiler, and then use these logics as a basis for proving the soundness of static analyses.

  7. Atomic mass compilation 2012

    SciTech Connect

    Pfeiffer, B.; Venkataramaniah, K.; Czok, U.; Scheidenberger, C.

    2014-03-15

    Atomic mass reflects the total binding energy of all nucleons in an atomic nucleus. Compilations and evaluations of atomic masses and derived quantities, such as neutron or proton separation energies, are indispensable tools for research and applications. In the last decade, the field has evolved rapidly after the advent of new production and measuring techniques for stable and unstable nuclei, resulting in substantial improvements in the body of data and its precision. Here, we present a compilation of atomic masses comprising the data from the 2003 evaluation as well as the results of newly performed measurements. The relevant literature in refereed journals and reports, as far as available, was scanned for the period beginning in 2003 up to and including April 2012. Overall, 5750 new data points have been collected. Recommended values for the relative atomic masses have been derived, and a comparison with the 2003 Atomic Mass Evaluation has been performed. This work has been carried out in collaboration with and as a contribution to the European Nuclear Structure and Decay Data Network of Evaluations.

  8. Optimizing local protocols for implementing bipartite nonlocal unitary gates using prior entanglement and classical communication

    SciTech Connect

    Cohen, Scott M.

    2010-06-15

    We present a method of optimizing recently designed protocols for implementing an arbitrary nonlocal unitary gate acting on a bipartite system. These protocols use only local operations and classical communication with the assistance of entanglement, and they are deterministic while also being 'one-shot', in that they use only one copy of an entangled resource state. The optimization minimizes the amount of entanglement needed, and also the amount of classical communication, and it is often the case that less of each of these resources is needed than with an alternative protocol using two-way teleportation.

  9. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression

    PubMed Central

    Jacob, J. Augustin; Kumar, N. Senthil

    2015-01-01

    A novel optimal structure for implementing the 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency is considered for implementation in FPGA. The proposed method proves that the fewest resources are utilized for the integer set that has shorter bit values. The optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than other integer sets in terms of resource utilization and power dissipation. PMID:26601120

  10. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression.

    PubMed

    Jacob, J Augustin; Kumar, N Senthil

    2015-01-01

    A novel optimal structure for implementing the 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency is considered for implementation in FPGA. The proposed method proves that the fewest resources are utilized for the integer set that has shorter bit values. The optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than other integer sets in terms of resource utilization and power dissipation.

  11. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  12. Implementation of a multiblock sensitivity analysis method in numerical aerodynamic shape optimization

    NASA Technical Reports Server (NTRS)

    Lacasse, James M.

    1995-01-01

    A multiblock sensitivity analysis method is applied in a numerical aerodynamic shape optimization technique. The Sensitivity Analysis Domain Decomposition (SADD) scheme, which is implemented in this study, was developed to reduce the computer memory requirements resulting from the aerodynamic sensitivity analysis equations. Discrete sensitivity analysis offers the ability to compute quasi-analytical derivatives in a more efficient manner than traditional finite-difference methods, which tend to be computationally expensive and prone to inaccuracies. The direct optimization procedure couples CFD analysis based on the two-dimensional thin-layer Navier-Stokes equations with a gradient-based numerical optimization technique. The linking mechanism is the sensitivity equation derived from the CFD discretized flow equations, recast in adjoint form, and solved using direct matrix inversion techniques. This investigation is performed to demonstrate an aerodynamic shape optimization technique on a multiblock domain and its applicability to complex geometries. The objectives are accomplished by shape optimizing two aerodynamic configurations. First, the shape optimization of a transonic airfoil is performed to investigate the behavior of the method in highly nonlinear flows and the effect of different grid blocking strategies on the procedure. Secondly, shape optimization of a two-element configuration in subsonic flow is completed. Cases are presented for this configuration to demonstrate the effect of simultaneously reshaping interfering elements. The aerodynamic shape optimization is shown to produce supercritical type airfoils in the transonic flow from an initially symmetric airfoil. Multiblocking affects the path of optimization while providing similar results at the conclusion. Simultaneous reshaping of elements is shown to be more effective than individual element reshaping due to the inclusion of mutual interference effects.

  13. Minimum Flying Qualities. Volume 3. Program CC’s Implementation of the Human Optimal Control Model

    DTIC Science & Technology

    1990-01-01

    Only scanned front matter is legible in this record (WRDC-TR-89-3125, Volume III, Minimum Flying Qualities, Volume III: Program CC's Implementation of the Human Optimal Control Model), including the publication-approval statement of the Control Dynamics Branch, Flight Control Division.

  14. Galileo Outreach Compilation

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This NASA JPL (Jet Propulsion Laboratory) video production is a compilation of the best short movies and computer simulation/animations of the Galileo spacecraft's journey to Jupiter. A limited number of actual shots are presented of Jupiter and its natural satellites. Most of the video is comprised of computer animations of the spacecraft's trajectory, encounters with the Galilean satellites Io, Europa and Ganymede, as well as their atmospheric and surface structures. Computer animations of plasma wave observations of Ganymede's magnetosphere, a surface gravity map of Io, the Galileo/Io flyby, the Galileo space probe orbit insertion around Jupiter, and actual shots of Jupiter's Great Red Spot are presented. Panoramic views of our Earth (from orbit) and moon (from orbit) as seen from Galileo as well as actual footage of the Space Shuttle/Galileo liftoff and Galileo's space probe separation are also included.

  15. MPEG-2/4 Low-Complexity Advanced Audio Coding Optimization and Implementation on DSP

    NASA Astrophysics Data System (ADS)

    Wu, Bing-Fei; Huang, Hao-Yu; Chen, Yen-Lin; Peng, Hsin-Yuan; Huang, Jia-Hsiung

    This study presents several optimization approaches for the MPEG-2/4 Audio Advanced Coding (AAC) Low Complexity (LC) encoding and decoding processes. Considering the power consumption and the peripherals required for consumer electronics, this study adopts the TI OMAP5912 platform for portable devices. An important optimization issue for implementing AAC codec on embedded and mobile devices is to reduce computational complexity and memory consumption. Due to power saving issues, most embedded and mobile systems can only provide very limited computational power and memory resources for the coding process. As a result, modifying and simplifying only one or two blocks is insufficient for optimizing the AAC encoder and enabling it to work well on embedded systems. It is therefore necessary to enhance the computational efficiency of other important modules in the encoding algorithm. This study focuses on optimizing the Temporal Noise Shaping (TNS), Mid/Side (M/S) Stereo, Modified Discrete Cosine Transform (MDCT) and Inverse Quantization (IQ) modules in the encoder and decoder. Furthermore, we also propose an efficient memory reduction approach that provides a satisfactory balance between the reduction of memory usage and the expansion of the encoded files. In the proposed design, both the AAC encoder and decoder are built with fixed-point arithmetic operations and implemented on a DSP processor combined with an ARM-core for peripheral controlling. Experimental results demonstrate that the proposed AAC codec is computationally effective, has low memory consumption, and is suitable for low-cost embedded and mobile applications.
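
    Among the modules named above, the MDCT is the easiest to state compactly. The sketch below gives the textbook O(N^2) floating-point definition of the forward MDCT as a reference point; the paper's contribution is a fast fixed-point realization of this and the other modules, which is not reproduced here.

      import numpy as np

      def mdct(x):
          """Direct MDCT of a 2N-sample block (reference definition only):
          X[k] = sum_n x[n] * cos(pi/N * (n + 0.5 + N/2) * (k + 0.5))."""
          N = len(x) // 2
          n = np.arange(2 * N)
          k = np.arange(N)[:, None]
          return np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5)) @ x

      x = np.random.randn(16)      # one 2N-sample block, N = 8
      X = mdct(x)                  # N MDCT coefficients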

  16. Optimization and implementation of the integer wavelet transform for image coding.

    PubMed

    Grangetto, Marco; Magli, Enrico; Martina, Maurizio; Olmo, Gabriella

    2002-01-01

    This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The results lead to IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite-precision representation of the lifting coefficients on the compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa while keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity.
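
    To make the lifting idea concrete, the sketch below implements one level of the reversible LeGall 5/3 integer wavelet transform (the filter used in JPEG 2000 lossless coding) with periodic boundary extension; the paper's optimized factorizations and finite-precision analysis are not reproduced here.

      import numpy as np

      def iwt53_forward(x):
          """One level of the LeGall 5/3 integer wavelet transform via lifting."""
          x = np.asarray(x, dtype=np.int64)
          even, odd = x[0::2].copy(), x[1::2].copy()
          # predict step: high-pass coefficients
          odd -= np.floor((even + np.roll(even, -1)) / 2).astype(np.int64)
          # update step: low-pass coefficients
          even += np.floor((odd + np.roll(odd, 1) + 2) / 4).astype(np.int64)
          return even, odd

      def iwt53_inverse(even, odd):
          even = even - np.floor((odd + np.roll(odd, 1) + 2) / 4).astype(np.int64)
          odd = odd + np.floor((even + np.roll(even, -1)) / 2).astype(np.int64)
          x = np.empty(even.size + odd.size, dtype=np.int64)
          x[0::2], x[1::2] = even, odd
          return x

      x = np.random.randint(0, 256, size=16)
      lo, hi = iwt53_forward(x)
      assert np.array_equal(iwt53_inverse(lo, hi), x)   # perfect reconstruction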

  17. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    SciTech Connect

    Tian, Zhen E-mail: Xun.Jia@UTSouthwestern.edu Folkerts, Michael; Tan, Jun; Jia, Xun E-mail: Xun.Jia@UTSouthwestern.edu Jiang, Steve B. E-mail: Xun.Jia@UTSouthwestern.edu; Peng, Fei

    2015-06-15

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU's relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., cases with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors' group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors' method, the sparse DDC matrix is first stored on a CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on the CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is
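
    For readers unfamiliar with the step-size rule named above, the sketch below shows a plain projected Barzilai-Borwein gradient iteration on a toy quadratic; the master-problem objective, the subspace step scheme, and the GPU data layout from the paper are not reproduced, and all names and parameters are illustrative.

      import numpy as np

      def barzilai_borwein(grad, x0, iters=100, alpha0=1e-3, proj=lambda x: x):
          """Projected gradient iteration with the Barzilai-Borwein step size
          alpha = (s.s)/(s.y), where s and y are successive iterate and gradient
          differences.  Sketch only; convergence is not guaranteed in general."""
          x_prev, g_prev = x0, grad(x0)
          x = proj(x_prev - alpha0 * g_prev)
          for _ in range(iters):
              g = grad(x)
              s, y = x - x_prev, g - g_prev
              denom = s @ y
              alpha = (s @ s) / denom if abs(denom) > 1e-12 else alpha0
              x_prev, g_prev = x, g
              x = proj(x - alpha * g)
          return x

      # toy quadratic 0.5 x^T Q x - b^T x with a nonnegativity projection
      Q = np.array([[3.0, 0.5], [0.5, 2.0]])
      b = np.array([1.0, 1.0])
      x_opt = barzilai_borwein(lambda x: Q @ x - b, np.zeros(2),
                               proj=lambda x: np.maximum(x, 0.0))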

  18. Trident: An FPGA Compiler Framework for Floating-Point Algorithms.

    SciTech Connect

    Tripp J. L.; Peterson, K. D.; Poznanovic, J. D.; Ahrens, C. M.; Gokhale, M.

    2005-01-01

    Trident is a compiler for floating point algorithms written in C, producing circuits in reconfigurable logic that exploit the parallelism available in the input description. Trident automatically extracts parallelism and pipelines loop bodies using conventional compiler optimizations and scheduling techniques. Trident also provides an open framework for experimentation, analysis, and optimization of floating point algorithms on FPGAs and the flexibility to easily integrate custom floating point libraries.

  19. A Portable Compiler for the Language C

    DTIC Science & Technology

    1975-05-01

    ...optimize as desired; this solution is more likely to be acceptable as a compilation technique. A third solution will be advocated in this paper. (The remainder of this scanned record is illegible.)

  20. Voyager Outreach Compilation

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This NASA JPL (Jet Propulsion Laboratory) video presents a collection of the best videos that have been published of the Voyager mission. Computer animation/simulations comprise the largest portion of the video and include outer planetary magnetic fields, outer planetary lunar surfaces, and the Voyager spacecraft trajectory. Voyager visited the four outer planets: Jupiter, Saturn, Uranus, and Neptune. The video contains some live shots of Jupiter (actual), the Earth's moon (from orbit), Saturn (actual), Neptune (actual) and Uranus (actual), but is mainly comprised of computer animations of these planets and their moons. Some of the individual short videos that are compiled are entitled: The Solar System; Voyage to the Outer Planets; A Tour of the Solar System; and the Neptune Encounter. Computerized simulations of Viewing Neptune from Triton, Diving over Neptune to Meet Triton, and Catching Triton in its Retrograde Orbit are included. Several animations of Neptune's atmosphere, rotation and weather features as well as significant discussion of the planet's natural satellites are also presented.

  1. Voyager Outreach Compilation

    NASA Astrophysics Data System (ADS)

    1998-09-01

    This NASA JPL (Jet Propulsion Laboratory) video presents a collection of the best videos that have been published of the Voyager mission. Computer animation/simulations comprise the largest portion of the video and include outer planetary magnetic fields, outer planetary lunar surfaces, and the Voyager spacecraft trajectory. Voyager visited the four outer planets: Jupiter, Saturn, Uranus, and Neptune. The video contains some live shots of Jupiter (actual), the Earth's moon (from orbit), Saturn (actual), Neptune (actual) and Uranus (actual), but is mainly comprised of computer animations of these planets and their moons. Some of the individual short videos that are compiled are entitled: The Solar System; Voyage to the Outer Planets; A Tour of the Solar System; and the Neptune Encounter. Computerized simulations of Viewing Neptune from Triton, Diving over Neptune to Meet Triton, and Catching Triton in its Retrograde Orbit are included. Several animations of Neptune's atmosphere, rotation and weather features as well as significant discussion of the planet's natural satellites are also presented.

  2. Implementation of an ANCF beam finite element for dynamic response optimization of elastic manipulators

    NASA Astrophysics Data System (ADS)

    Vohar, B.; Kegl, M.; Ren, Z.

    2008-12-01

    Theoretical and practical aspects of an absolute nodal coordinate formulation (ANCF) beam finite element implementation are considered in the context of dynamic transient response optimization of elastic manipulators. The proposed implementation is based on the introduction of new nodal degrees of freedom, which is achieved by an adequate nonlinear mapping between the original and new degrees of freedom. This approach preserves the mechanical properties of the ANCF beam, but converts it into a conventional finite element so that its nodal degrees of freedom are initially always equal to zero and never depend explicitly on the design variables. Consequently, the sensitivity analysis formulas can be derived in the usual manner, except that the introduced nonlinear mapping has to be taken into account. Moreover, the adjusted element can also be incorporated into general finite element analysis and optimization software in the conventional way. The introduced design variables are related to the cross-section of the beam, to the shape of the (possibly) skeletal structure of the manipulator and to the drive functions. The layered cross-section approach and the design element technique are utilized to parameterize the shape of individual elements and the whole structure. A family of implicit time integration methods is adopted for the response and sensitivity analysis. Based on this assumption, the corresponding sensitivity formulas are derived. Two numerical examples illustrate the performance of the proposed element implementation.

  3. The Optimization of Automatically Generated Compilers.

    DTIC Science & Technology

    1987-01-01

    ...record, having one named field for each attribute. During the parse, the structure tree nodes are dynamically allocated and strung together according to... (a span of illegible OCR output follows) ...computation (at TWA runtime) of context information to determine that this visit sequence can actually be used. Moreover, the dynamic nature of this decision

  4. Optimal Pain Assessment in Pediatric Rehabilitation: Implementation of a Nursing Guideline.

    PubMed

    Kingsnorth, Shauna; Joachimides, Nick; Krog, Kim; Davies, Barbara; Higuchi, Kathryn Smith

    2015-12-01

    In Ontario, Canada, the Registered Nurses' Association promotes a Best Practice Spotlight Organization initiative to enhance evidence-based practice. Qualifying organizations are required to implement strategies, evaluate outcomes, and sustain practices aligned with nursing clinical practice guidelines. This study reports on the development and evaluation of a multifaceted implementation strategy to support adoption of a nursing clinical practice guideline on the assessment and management of acute pain in a pediatric rehabilitation and complex continuing care hospital. Multiple approaches were employed to influence behavior, attitudes, and awareness around optimal pain practice (e.g., instructional resources, electronic reminders, audits, and feedback). Four measures were introduced to assess pain in communicating and noncommunicating children as part of a campaign to treat pain as the fifth vital sign. A prospective repeated measures design examined survey and audit data to assess practice aligned with the guideline. The Knowledge and Attitudes Survey (KNAS) was adapted to ensure relevance to the local practice setting and was assessed before and after nurses' participation in three education modules. Audit data included client demographics and pain scores assessed annually over a 3-year window. A final sample of 69 nurses (78% response rate) provided pre-/post-survey data. A total of 108 pediatric surgical clients (younger than 19 years) contributed audit data across the three collection cycles. Significant improvements in nurses' knowledge, attitudes, and behaviors related to optimal pain care for children with disabilities were noted following adoption of the pain clinical practice guideline. Targeted guideline implementation strategies are central to supporting optimal pain practice.

  5. Implementation of natural frequency analysis and optimality criterion design. [computer technique for structural analysis

    NASA Technical Reports Server (NTRS)

    Levy, R.; Chai, K.

    1978-01-01

    A description is presented of an effective optimality criterion computer design approach for member size selection to improve frequency characteristics for moderately large structure models. It is shown that the implementation of the simultaneous iteration method within a natural frequency structural design optimization provides a method which is more efficient in isolating the lowest natural frequency modes than the frequently applied Stodola method. Additional computational advantages are derived by using previously converged eigenvectors at the start of the iterations during the second and the following design cycles. Vectors with random components can be used at the first design cycle, which, in relation to the entire computer time for the design program, results in only a moderate computational penalty.
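
    For context, simultaneous (subspace) iteration extracts the lowest few modes of the generalized eigenproblem K·phi = lambda·M·phi by repeatedly applying inverse iteration to a block of vectors and performing a Rayleigh-Ritz reduction. The sketch below is a generic textbook version, with random starting vectors on the first call and the option of reusing previously converged vectors later, as the abstract suggests; it is not the program described in the report.

      import numpy as np
      from scipy.linalg import eigh

      def simultaneous_iteration(K, M, p, iters=50, X0=None, seed=0):
          """Block inverse (simultaneous) iteration for the p lowest modes of
          K phi = lambda M phi (K, M symmetric positive definite)."""
          rng = np.random.default_rng(seed)
          X = rng.standard_normal((K.shape[0], p)) if X0 is None else X0.copy()
          for _ in range(iters):
              Y = np.linalg.solve(K, M @ X)          # inverse iteration on the block
              Kr, Mr = Y.T @ K @ Y, Y.T @ M @ Y      # Rayleigh-Ritz reduction
              lam, V = eigh(Kr, Mr)                  # small generalized eigenproblem
              X = Y @ V                              # improved, M-orthonormal subspace
          return lam, X

      # toy 2-DOF example; natural frequencies are sqrt(lam) / (2*pi)
      K = np.array([[400.0, -200.0], [-200.0, 200.0]])
      M = np.diag([2.0, 1.0])
      lam, modes = simultaneous_iteration(K, M, p=2)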

  6. Implementation of transmission functions for an optimized three-terminal quantum dot heat engine

    NASA Astrophysics Data System (ADS)

    Schiegg, Christian H.; Dzierzawa, Michael; Eckern, Ulrich

    2017-03-01

    We consider two modifications of a recently proposed three-terminal quantum dot heat engine. First, we investigate the necessity of the thermalization assumption, namely that electrons are always thermalized by inelastic processes when traveling across the cavity where the heat is supplied. Second, we analyze various arrangements of tunneling-coupled quantum dots in order to implement a transmission function that is superior to the Lorentzian transmission function of a single quantum dot. We show that the maximum power of the heat engine can be improved by about a factor of two, even for a small number of dots, by choosing an optimal structure.
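
    The sketch below hints at why coupling several dots helps: in the wide-band limit, the transmission through a chain of N tunnel-coupled dots computed from the retarded Green's function reduces to a Lorentzian for N = 1 and becomes flatter and more box-like as N grows. The tight-binding parameters are illustrative assumptions, not the arrangements optimized in the paper.

      import numpy as np

      def chain_transmission(E, N, eps=0.0, t=1.0, gamma=0.5):
          """Transmission T(E) = Gamma_L * Gamma_R * |G_1N|^2 for a chain of N dots
          with on-site energy eps, hopping t, and lead broadening gamma (wide-band
          limit).  N = 1 recovers the single-dot Lorentzian."""
          H = (np.diag(np.full(N, eps))
               + np.diag(np.full(N - 1, t), 1)
               + np.diag(np.full(N - 1, t), -1))
          sigma = np.zeros((N, N), dtype=complex)
          sigma[0, 0] += -0.5j * gamma          # left lead self-energy
          sigma[-1, -1] += -0.5j * gamma        # right lead self-energy
          G = np.linalg.inv(E * np.eye(N) - H - sigma)
          return gamma * gamma * abs(G[0, -1]) ** 2

      energies = np.linspace(-3.0, 3.0, 601)
      lorentzian = [chain_transmission(E, N=1) for E in energies]
      flatter = [chain_transmission(E, N=4) for E in energies]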

  7. Implementation of transmission functions for an optimized three-terminal quantum dot heat engine.

    PubMed

    Schiegg, Christian H; Dzierzawa, Michael; Eckern, Ulrich

    2017-03-01

    We consider two modifications of a recently proposed three-terminal quantum dot heat engine. First, we investigate the necessity of the thermalization assumption, namely that electrons are always thermalized by inelastic processes when traveling across the cavity where the heat is supplied. Second, we analyze various arrangements of tunneling-coupled quantum dots in order to implement a transmission function that is superior to the Lorentzian transmission function of a single quantum dot. We show that the maximum power of the heat engine can be improved by about a factor of two, even for a small number of dots, by choosing an optimal structure.

  8. The Optimize Heart Failure Care Program: Initial lessons from global implementation.

    PubMed

    Cowie, Martin R; Lopatin, Yuri M; Saldarriaga, Clara; Fonseca, Cândida; Sim, David; Magaña, Jose Antonio; Albuquerque, Denilson; Trivi, Marcelo; Moncada, Gustavo; González Castillo, Baldomero A; Sánchez, Mario Osvaldo Speranza; Chung, Edward

    2017-02-12

    Hospitalization for heart failure (HF) places a major burden on healthcare services worldwide, and is a strong predictor of increased mortality especially in the first three months after discharge. Though undesirable, hospitalization is an opportunity to optimize HF therapy and advise clinicians and patients about the importance of continued adherence to HF medication and regular monitoring. The Optimize Heart Failure Care Program (www.optimize-hf.com), which has been implemented in 45 countries, is designed to improve outcomes following HF hospitalization through inexpensive initiatives to improve prescription of appropriate drug therapies, patient education and engagement, and post-discharge planning. It includes best practice clinical protocols for local adaptation, pre- and post-discharge checklists, and 'My HF Passport', a printed and smart phone application to improve patient understanding of HF and encourage involvement in care and treatment adherence. Early experience of the Program suggests that factors leading to successful implementation include support from HF specialists or 'local leaders', regular educational meetings for participating healthcare professionals, multidisciplinary collaboration, and full integration of pre- and post-hospital discharge checklists across care services. The Program is helping to raise awareness of HF and generate useful data on current practice. It is showing how good evidence-based care can be achieved through the use of simple clinician and patient-focused tools. Preliminary results suggest that optimization of HF pharmacological therapy is achievable through the Program, with little new investment. Further data collection will lead to a greater understanding of the impact of the Program on HF care and key indicators of success.

  9. Sequential Principal Component Analysis -An Optimal and Hardware-Implementable Transform for Image Compression

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.

    2009-01-01

    This paper presents the JPL-developed Sequential Principal Component Analysis (SPCA) algorithm for feature extraction / image compression, based on a "dominant-term selection" unsupervised learning technique that requires an order of magnitude less computation and has a simpler architecture than state-of-the-art gradient-descent techniques. This algorithm is inherently amenable to a compact, low-power and high-speed VLSI hardware embodiment. The paper compares the lossless image compression performance of JPL's SPCA algorithm with the state-of-the-art JPEG2000, widely used due to its simplified hardware implementability. JPEG2000 is not an optimal data compression technique because its transform characteristics are fixed regardless of the data structure. On the other hand, the conventional Principal Component Analysis based transform (PCA-transform) is a data-dependent-structure transform. However, it is not easy to implement PCA in compact VLSI hardware, due to its high computational and architectural complexity. In contrast, JPL's "dominant-term selection" SPCA algorithm allows, for the first time, a compact, low-power hardware implementation of the powerful PCA algorithm. This paper presents a direct comparison of JPL's SPCA versus JPEG2000, incorporating Huffman and arithmetic coding for completeness of the data compression operation. The simulation results show that JPL's SPCA algorithm is superior, as an optimal data-dependent transform, to the state-of-the-art JPEG2000. When implemented in hardware, this technique is projected to be ideally suited to future NASA missions for autonomous on-board image data processing to improve the bandwidth of communication.
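
    As a rough illustration of extracting principal components one at a time, the sketch below uses plain power iteration with deflation; this is a generic substitute for, not a reproduction of, the JPL dominant-term selection learning rule.

      import numpy as np

      def sequential_pca(X, n_components, iters=200, seed=0):
          """Extract principal components sequentially: power iteration finds the
          dominant direction of the covariance, which is then deflated before the
          next component is sought."""
          rng = np.random.default_rng(seed)
          Xc = X - X.mean(axis=0)
          C = Xc.T @ Xc / (Xc.shape[0] - 1)
          components = []
          for _ in range(n_components):
              w = rng.standard_normal(C.shape[0])
              for _ in range(iters):
                  w = C @ w
                  w /= np.linalg.norm(w)
              components.append(w)
              C = C - np.outer(w, w) * (w @ C @ w)   # deflate the found direction
          return np.array(components)

      data = np.random.randn(200, 8)
      pcs = sequential_pca(data, n_components=3)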

  10. Development and implementation of rotorcraft preliminary design methodology using multidisciplinary design optimization

    NASA Astrophysics Data System (ADS)

    Khalid, Adeel Syed

    Rotorcraft's evolution has lagged behind that of fixed-wing aircraft. One of the reasons for this gap is the absence of a formal methodology to accomplish a complete conceptual and preliminary design. Traditional rotorcraft methodologies are not only time consuming and expensive but also yield sub-optimal designs. Rotorcraft design is an excellent example of a multidisciplinary complex environment where several interdependent disciplines are involved. A formal framework is developed and implemented in this research for preliminary rotorcraft design using IPPD methodology. The design methodology consists of the product and process development cycles. In the product development loop, all the technical aspects of design are considered including the vehicle engineering, dynamic analysis, stability and control, aerodynamic performance, propulsion, transmission design, weight and balance, noise analysis and economic analysis. The design loop starts with a detailed analysis of requirements. A baseline is selected and upgrade targets are identified depending on the mission requirements. An Overall Evaluation Criterion (OEC) is developed that is used to measure the goodness of the design or to compare the design with competitors. The requirements analysis and baseline upgrade targets lead to the initial sizing and performance estimation of the new design. The digital information is then passed to disciplinary experts. This is where the detailed disciplinary analyses are performed. Information is transferred from one discipline to another as the design loop is iterated. To coordinate all the disciplines in the product development cycle, Multidisciplinary Design Optimization (MDO) techniques e.g. All At Once (AAO) and Collaborative Optimization (CO) are suggested. The methodology is implemented on a Light Turbine Training Helicopter (LTTH) design. Detailed disciplinary analyses are integrated through a common platform for efficient and centralized transfer of design

  11. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  12. Artificial immune algorithm implementation for optimized multi-axis sculptured surface CNC machining

    NASA Astrophysics Data System (ADS)

    Fountas, N. A.; Kechagias, J. D.; Vaxevanidis, N. M.

    2016-11-01

    This paper presents the results obtained by the implementation of an artificial immune algorithm to optimize standard multi-axis tool-paths applied to machining free-form surfaces. The investigation of its applicability was based on a full factorial experimental design with the two additional axes for tool inclination as independent variables, whilst a multi-objective response was formulated by taking into consideration surface deviation and tool-path time, objectives assessed directly from the computer-aided manufacturing environment. A standard sculptured part was developed from scratch considering its benchmark specifications, and a cutting-edge surface machining tool-path was applied to study the effects of the pattern formulated when dynamically inclining a toroidal end-mill and guiding it along the feed direction under fixed lead and tilt inclination angles. The results obtained from the series of experiments were used to create the fitness function that the algorithm sequentially evaluated. It was found that the artificial immune algorithm employed is able to attain optimal values for the inclination angles, thus easing the complexity of this manufacturing process and ensuring full potential in multi-axis machining modelling operations for producing enhanced CNC manufacturing programs. Results suggested that the proposed algorithm implementation may reduce the mean experimental objective value to 51.5%.

  13. Optimization models and techniques for implementation and pricing of electricity markets

    NASA Astrophysics Data System (ADS)

    Madrigal Martinez, Marcelino

    Vertically integrated electric power systems extensively use optimization models and solution techniques to guide their optimal operation and planning. The advent of electric power system restructuring has created needs for new optimization tools and for revising the tools inherited from the vertical integration era for the market environment. This thesis presents further developments in the use of optimization models and techniques for the implementation and pricing of primary electricity markets. New models, solution approaches, and price-setting alternatives are proposed. Three different modeling groups are studied. The first modeling group considers simplified continuous and discrete models for power pool auctions driven by central cost minimization. The direct solution of the dual problems, and the use of a branch-and-bound algorithm to solve the primal, allow the effects of disequilibrium and of different price-setting alternatives on the existence of multiple solutions to be identified. It is shown that particular pricing rules worsen the conflict of interest that arises when multiple solutions exist under disequilibrium. A price-setting alternative based on dual variables is shown to diminish such conflict. The second modeling group considers the unit commitment problem. An interior-point/cutting-plane method is proposed for the solution of the dual problem. The new method has better convergence characteristics and does not suffer from the parameter-tuning drawback of previous methods. The robustness characteristics of the interior-point/cutting-plane method, combined with a non-uniform price-setting alternative, show that the conflict of interest is diminished when multiple near-optimal solutions exist. The non-uniform price-setting alternative is compared to a classic average pricing rule. The last modeling group concerns a new type of linear network-constrained clearing-system model for daily markets for power and spinning reserve. A new model and
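
    As background for the pricing discussion, the sketch below clears a single-period power pool by merit order and sets a uniform price at the marginal accepted offer; it is a deliberately simplified illustration and does not reflect the thesis's models, network constraints, or non-uniform pricing schemes.

      def clear_market(offers, demand):
          """Merit-order clearing sketch: offers are (quantity_MW, price_per_MWh)
          pairs; returns the dispatched quantities and the uniform clearing price
          (the price of the marginal accepted offer)."""
          dispatch, remaining, price = [], demand, 0.0
          for qty, p in sorted(offers, key=lambda o: o[1]):   # cheapest first
              take = min(qty, remaining)
              if take > 0:
                  dispatch.append((take, p))
                  price = p
              remaining -= take
              if remaining <= 0:
                  break
          return dispatch, price

      offers = [(50.0, 20.0), (30.0, 35.0), (40.0, 25.0)]   # (MW, $/MWh)
      dispatch, price = clear_market(offers, demand=70.0)    # price -> 25.0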

  14. Computer implementation of analysis and optimization procedures for control-structure interaction problems

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Park, K. C.

    1990-01-01

    Implementation aspects of control-structure interaction analysis and optimization by the staggered use of single-discipline analysis modules are discussed. The single-discipline modules include structural analysis, controller synthesis and optimization. The software modularity is maintained by employing a partitioned control-structure interaction analysis procedure, thus avoiding the need for embedding the single-discipline modules into a monolithic program. A software testbed has been constructed as a stand-alone analysis and optimization program and tested for its versatility and software modularity by applying it to the dynamic analysis and preliminary design of a prototype Earth Pointing Satellite. Experience with the in-core testbed program so far demonstrates that the testbed is efficient, preserves software modularity, and enables the analyst to choose a different set of algorithms, control strategies and design parameters via user software interfaces. Thus, the present software architecture is recommended for adoption by control-structure interaction analysts as a preliminary analysis and design tool.

  15. Formulation for a practical implementation of electromagnetic induction coils optimized using stream functions

    NASA Astrophysics Data System (ADS)

    Reed, Mark A.; Scott, Waymond R.

    2016-05-01

    Continuous-wave (CW) electromagnetic induction (EMI) systems used for subsurface sensing typically employ separate transmit and receive coils placed in close proximity. The closeness of the coils is desirable for both packaging and object pinpointing; however, the coils must have as little mutual coupling as possible. Otherwise, the signal from the transmit coil will couple into the receive coil, making target detection difficult or impossible. Additionally, mineralized soil can be a significant problem when attempting to detect small amounts of metal because the soil effectively couples the transmit and receive coils. Optimization of wire coils to improve their performance is difficult but can be made possible through a stream-function representation and the use of partially convex forms. Examples of such methods have been presented previously, but these methods did not account for certain practical issues with coil implementation. In this paper, the power constraint introduced into the optimization routine is modified so that it does not penalize areas of high current. It does this by representing the coils as plates carrying surface currents and adjusting the sheet resistance to be inversely proportional to the current, which is a good approximation for a wire-wound coil. Example coils are then optimized for minimum mutual coupling, maximum sensitivity, and minimum soil response at a given height with both the earlier, constant sheet resistance and the new representation. The two sets of coils are compared both to each other and other common coil types to show the method's viability.

  16. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
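
    The thread view described above can be captured in a few lines: each decimated output is produced by its own finite convolution, so only the needed outputs are ever computed. The Python sketch below checks this against the usual filter-then-downsample reference; it illustrates the idea only and says nothing about the FPGA tap-allocation strategy in the paper.

      import numpy as np

      def decimating_fir_threads(x, h, M):
          """Decimate-by-M FIR computed one output at a time; each output k is an
          independent finite convolution (one 'thread')."""
          N = len(h)
          n_out = (len(x) - N) // M + 1
          y = np.empty(n_out)
          for k in range(n_out):                 # each k is one computational thread
              segment = x[k * M : k * M + N]
              y[k] = segment @ h[::-1]           # finite convolution for output k
          return y

      # reference check against full filtering followed by downsampling
      x = np.random.randn(64)
      h = np.ones(8) / 8.0
      y = decimating_fir_threads(x, h, M=4)
      y_ref = np.convolve(x, h, mode="valid")[::4]
      assert np.allclose(y, y_ref[:len(y)])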

  17. MOBILE. A MOBIDIC COBOL COMPILER

    DTIC Science & Technology

    ...formats; (3) Data design table (DDT); (4) Run 8 table formats; (5) Macro instructions and related table formats; (6) COBOL compiler output listings; (7) Qualification task in Run 1.3 and a description of the Data Name List (DNLA).

  18. Welding and joining: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation is presented of NASA-developed technology in welding and joining. Topics discussed include welding equipment, techniques in welding, general bonding, joining techniques, and clamps and holding fixtures.

  19. Final report: Compiled MPI. Cost-Effective Exascale Application Development

    SciTech Connect

    Gropp, William Douglas

    2015-12-21

    This is the final report on Compiled MPI: Cost-Effective Exascale Application Development, and summarizes the results under this project. The project investigated runtime environments that improve the performance of MPI (Message-Passing Interface) programs; work at Illinois in the last period of this project looked at optimizing data accesses expressed with MPI datatypes.

  20. Quantum compiling with low overhead

    NASA Astrophysics Data System (ADS)

    Duclos-Cianci, Guillaume; Poulin, David

    2014-03-01

    I will present a scheme to compile complex quantum gates that uses significantly fewer resources than existing schemes. In standard fault-tolerant protocols, a magic state is distilled from noisy resources, and copies of this magic state are then assembled to produce complex gates using the Solovay-Kitaev theorem or variants thereof. In our approach, we instead directly distill magic states associated to complex gates from noisy resources, leading to a reduction of the compiling overhead of several orders of magnitude.

  1. Verified Separate Compilation for C

    DTIC Science & Technology

    2015-06-01

    ...such as register spilling, that introduce compiler-managed (private) memory regions into function stack frames, and C's stack-allocated addressable local variables... (The remainder of this scanned record is table-of-contents residue, referencing a chapter on the CompCert memory model.)

  2. Learning from colleagues about healthcare IT implementation and optimization: lessons from a medical informatics listserv.

    PubMed

    Adams, Martha B; Kaplan, Bonnie; Sobko, Heather J; Kuziemsky, Craig; Ravvaz, Kourosh; Koppel, Ross

    2015-01-01

    Communication among medical informatics communities can suffer from fragmentation across multiple forums, disciplines, and subdisciplines; variation among journals, vocabularies and ontologies; cost and distance. Online communities help overcome these obstacles, but may become onerous when listservs are flooded with cross-postings. Rich and relevant content may be ignored. The American Medical Informatics Association successfully addressed these problems when it created a virtual meeting place by merging the membership of four working groups into a single listserv known as the "Implementation and Optimization Forum." A communication explosion ensued, with thousands of interchanges, hundreds of topics, commentaries from "notables," neophytes, and students--many from different disciplines, countries, traditions. We discuss the listserv's creation, illustrate its benefits, and examine its lessons for others. We use examples from the lively, creative, deep, and occasionally conflicting discussions of user experiences--interchanges about medication reconciliation, open source strategies, nursing, ethics, system integration, and patient photos in the EMR--all enhancing knowledge, collegiality, and collaboration.

  3. Onboard optimized hardware implementation of JPEG-LS encoder based on FPGA

    NASA Astrophysics Data System (ADS)

    Wei, Wen; Lei, Jie; Li, Yunsong

    2012-10-01

    A novel hardware implementation of a JPEG-LS encoder based on an FPGA is introduced in this paper. Using a look-ahead technique, the critical delay paths of the LOCO-I algorithm, such as the parameter-update feedback loop, are improved. An optimized architecture of the JPEG-LS encoder is then proposed; in particular, the run-mode encoding process of JPEG-LS is also covered by the architecture. Experimental results show that the circuit complexity and memory consumption of the proposed structure are much lower, while the data processing speed is much higher, than those of other available structures, making it well suited for high-speed onboard lossless compression of satellite remote sensing images.
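
    For reference, the prediction step whose feedback paths the paper pipelines is the standard JPEG-LS median edge detector, shown below; the hardware look-ahead technique itself is not reproduced.

      def loco_i_predictor(a, b, c):
          """LOCO-I / JPEG-LS median edge detector: predict the current pixel from
          its left (a), upper (b), and upper-left (c) neighbours."""
          if c >= max(a, b):
              return min(a, b)       # horizontal edge suspected
          if c <= min(a, b):
              return max(a, b)       # vertical edge suspected
          return a + b - c           # smooth region: planar prediction

      # example: c below both neighbours -> predictor picks the larger neighbour
      print(loco_i_predictor(100, 110, 90))   # -> 110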

  4. Ada compiler validation summary report: Cray Research, Inc. , Cray Ada Compiler, Version 1. 1 Cray X-MP (Host Target), 890523W1. 10080

    SciTech Connect

    Not Available

    1989-05-23

    This Validation Summary Report describes the extent to which a specific Ada compiler conforms to the Ada Standard, ANSI/MIL-STD-1815A. The report explains all technical terms used within it and thoroughly reports the results of testing this compiler using the Ada Compiler Validation Capability. An Ada compiler must be implemented according to the Ada Standard, and any implementation-dependent features must conform to the requirements of the Ada Standard. The Ada Standard must be implemented in its entirety, and nothing can be implemented that is not in the Standard. Even though all validated Ada compilers conform to the Ada Standard, it must be understood that some differences do exist between implementations. The Ada Standard permits some implementation dependencies - for example, the maximum length of identifiers or the maximum values of integer types. Other differences between compilers result from the characteristics of particular operating systems, hardware, or implementation strategies. All the dependencies observed during the process of testing this compiler are given in this report. The information in this report is derived from the test results produced during validation testing. The validation process includes submitting a suite of standardized tests, the ACVC, as inputs to an Ada compiler and evaluating the results.

  5. Ada compiler validation summary report. Cray Research, Inc. , Cray Ada Compiler, Version 1. 1, Cray-2, (Host Target), 890523W1. 10081

    SciTech Connect

    Not Available

    1989-05-23

    This Validation Summary Report describes the extent to which a specific Ada compiler conforms to the Ada Standard, ANSI-MIL-STD-1815A. The report explains all technical terms used within it and thoroughly reports the results of testing this compiler using the Ada Compiler Validation Capability. An Ada compiler must be implemented according to the Ada Standard, and any implementation-dependent features must conform to the requirements of the Ada Standard. The Ada Standard must be implemented in its entirety, and nothing can be implemented that is not in the Standard. Even though all validated Ada compilers conform to the Ada Standard, it must be understood that some differences do exist between implementations. The Ada Standard permits some implementation dependencies - for example, the maximum length of identifiers or the maximum values of integer types. Other differences between compilers result from the characteristics of particular operating systems, hardware, or implementation strategies. All the dependencies observed during the process of testing this compiler are given in this report. The information in this report is derived from the test results produced during validation testing. The validation process includes submitting a suite of standardized tests, the ACVC, as inputs to an Ada compiler and evaluating the results.

  6. On the implementation of an automated acoustic output optimization algorithm for subharmonic aided pressure estimation

    PubMed Central

    Dave, J. K.; Halldorsdottir, V. G.; Eisenbrey, J. R.; Merton, D. A.; Liu, J. B.; Machado, P.; Zhao, H.; Park, S.; Dianis, S.; Chalek, C. L.; Thomenius, K. E.; Brown, D. B.; Forsberg, F.

    2013-01-01

    Incident acoustic output (IAO) dependent subharmonic signal amplitudes from ultrasound contrast agents can be categorized into occurrence, growth, or saturation stages. Subharmonic aided pressure estimation (SHAPE) is a technique that utilizes growth-stage subharmonic signal amplitudes for hydrostatic pressure estimation. In this study, we developed an automated IAO optimization algorithm to identify the IAO level eliciting growth-stage subharmonic signals and also studied the effect of pulse length on SHAPE. This approach may help eliminate the problems of acquiring and analyzing the data offline at all IAO levels, as was done in previous studies, and thus pave the way for real-time clinical pressure monitoring applications. The IAO optimization algorithm was implemented on a Logiq 9 (GE Healthcare, Milwaukee, WI) scanner interfaced with a computer. The optimization algorithm stepped the ultrasound scanner from 0 to 100% IAO. A logistic equation fitting function was applied with the criterion of minimum least-squared error between the fitted and measured subharmonic amplitudes as a function of the IAO levels, and the optimum IAO level was chosen as the one corresponding to the inflection point calculated from the fitted data. The efficacy of the optimum IAO level was investigated for in vivo SHAPE to monitor portal vein (PV) pressures in 5 canines and was compared with the performance of IAO levels below and above the optimum IAO level, for 4, 8, and 16 transmit cycles. The canines received a continuous infusion of Sonazoid microbubbles (1.5 μl/kg/min; GE Healthcare, Oslo, Norway). PV pressures were obtained using a surgically introduced pressure catheter (Millar Instruments, Inc., Houston, TX) and were recorded before and after increasing PV pressures. The experiments showed that optimum IAO levels for SHAPE in the canines ranged from 6 to 40%. The best correlation between changes in PV pressures and in subharmonic amplitudes (r = -0.76; p = 0
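
    A minimal version of the fitting criterion described above can be written with an off-the-shelf least-squares routine: fit a logistic curve to subharmonic amplitude versus IAO and report its inflection point as the optimum. The parameter names, starting guesses, and the exact logistic form below are illustrative assumptions rather than the study's on-scanner implementation.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(iao, a, k, x0, y0):
          # assumed 4-parameter logistic; inflection point is at iao = x0
          return a / (1.0 + np.exp(-k * (iao - x0))) + y0

      def optimum_iao(iao_levels, subharmonic_db):
          """Least-squares logistic fit of subharmonic amplitude vs. IAO; returns
          the fitted inflection point as the optimum IAO level."""
          p0 = [subharmonic_db.ptp(), 0.1, np.median(iao_levels), subharmonic_db.min()]
          popt, _ = curve_fit(logistic, iao_levels, subharmonic_db, p0=p0, maxfev=10000)
          return popt[2]

      # synthetic illustration only
      iao = np.linspace(0, 100, 21)
      amp = logistic(iao, 20.0, 0.15, 35.0, -60.0) + np.random.normal(0, 0.3, iao.size)
      print(optimum_iao(iao, amp))    # close to 35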

  7. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  8. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.

    2000-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  9. A Concept and Implementation of Optimized Operations of Airport Surface Traffic

    NASA Technical Reports Server (NTRS)

    Jung, Yoon C.; Hoang, Ty; Montoya, Justin; Gupta, Gautam; Malik, Waqar; Tobias, Leonard

    2010-01-01

    This paper presents a new concept of optimized surface operations at busy airports to improve the efficiency of taxi operations, as well as reduce environmental impacts. The suggested system architecture consists of the integration of two decoupled optimization algorithms. The Spot Release Planner provides sequence and timing advisories to tower controllers for releasing departure aircraft into the movement area to reduce taxi delay while achieving maximum throughput. The Runway Scheduler provides take-off sequence and arrival runway crossing sequence to the controllers to maximize the runway usage. The description of a prototype implementation of this integrated decision support tool for the airport control tower controllers is also provided. The prototype decision support tool was evaluated through a human-in-the-loop experiment, where both the Spot Release Planner and Runway Scheduler provided advisories to the Ground and Local Controllers. Initial results indicate the average number of stops made by each departure aircraft in the departure runway queue was reduced by more than half when the controllers were using the advisories, which resulted in reduced taxi times in the departure queue.

  10. Design and implementation of an automated compound management system in support of lead optimization.

    PubMed

    Quintero, Catherine; Kariv, Ilona

    2009-06-01

    To meet the needs of the increasingly rapid and parallelized lead optimization process, a fully integrated local compound storage and liquid handling system was designed and implemented to automate the generation of assay-ready plates directly from newly submitted and cherry-picked compounds. A key feature of the system is the ability to create project- or assay-specific compound-handling methods, which provide flexibility for any combination of plate types, layouts, and plate bar-codes. Project-specific workflows can be created by linking methods for processing new and cherry-picked compounds and control additions to produce a complete compound set for both biological testing and local storage in one uninterrupted workflow. A flexible cherry-pick approach allows for multiple, user-defined strategies to select the most appropriate replicate of a compound for retesting. Examples of custom selection parameters include available volume, compound batch, and number of freeze/thaw cycles. This adaptable and integrated combination of software and hardware provides a basis for reducing cycle time, fully automating compound processing, and ultimately increasing the rate at which accurate, biologically relevant results can be produced for compounds of interest in the lead optimization process.

  11. Optimal sensorimotor integration in recurrent cortical networks: a neural implementation of Kalman filters.

    PubMed

    Denève, Sophie; Duhamel, Jean-René; Pouget, Alexandre

    2007-05-23

    Several behavioral experiments suggest that the nervous system uses an internal model of the dynamics of the body to implement a close approximation to a Kalman filter. This filter can be used to perform a variety of tasks nearly optimally, such as predicting the sensory consequence of motor action, integrating sensory and body posture signals, and computing motor commands. We propose that the neural implementation of this Kalman filter involves recurrent basis function networks with attractor dynamics, a kind of architecture that can be readily mapped onto cortical circuits. In such networks, the tuning curves to variables such as arm velocity are remarkably noninvariant in the sense that the amplitude and width of the tuning curves of a given neuron can vary greatly depending on other variables such as the position of the arm or the reliability of the sensory feedback. This property could explain some puzzling properties of tuning curves in the motor and premotor cortex, and it leads to several new predictions.
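
    For readers who want the concrete computation the networks are proposed to approximate, the sketch below is one predict/update cycle of a standard linear Kalman filter; it is generic textbook code, not the neural implementation of the paper.

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          """One predict/update cycle of a linear Kalman filter with state (x, P),
          measurement z, dynamics F, observation H, and noise covariances Q, R."""
          # predict
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          # update with measurement z
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # 1-D constant-velocity toy model
      F = np.array([[1.0, 1.0], [0.0, 1.0]])
      H = np.array([[1.0, 0.0]])
      Q, R = 0.01 * np.eye(2), np.array([[0.5]])
      x, P = np.zeros(2), np.eye(2)
      x, P = kalman_step(x, P, z=np.array([1.2]), F=F, H=H, Q=Q, R=R)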

  12. TUNE: Compiler-Directed Automatic Performance Tuning

    SciTech Connect

    Hall, Mary

    2014-09-18

    This project has developed compiler-directed performance tuning technology targeting the Cray XT4 Jaguar system at Oak Ridge, which has multi-core Opteron nodes with SSE-3 SIMD extensions, and the Cray XE6 Hopper system at NERSC. To achieve this goal, we combined compiler technology for model-guided empirical optimization for memory hierarchies with SIMD code generation, which have been developed by the PIs over the past several years. We examined DOE Office of Science applications to identify performance bottlenecks and apply our system to computational kernels that operate on dense arrays. Our goal for this performance-tuning technology has been to yield hand-tuned levels of performance on DOE Office of Science computational kernels, while allowing application programmers to specify their computations at a high level without requiring manual optimization. Overall, we aim to make our technology for SIMD code generation and memory hierarchy optimization a crucial component of high-productivity Petaflops computing through a close collaboration with the scientists in national laboratories.

  13. Implementation and optimization of ultrasound signal processing algorithms on mobile GPU

    NASA Astrophysics Data System (ADS)

    Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong

    2014-03-01

    A general-purpose graphics processing unit (GPGPU) has been used to improve computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to handle 3D games and videos at high frame rates on Full HD or HD displays. This paper proposes a method to implement ultrasound signal processing on a mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) with programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, the shader design was optimized and the load was shared between the vertex and fragment shaders. The beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance is evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing the PSNR against a MATLAB gold standard with the same signal path; CNR was also analyzed to verify the method. From the evaluations, the proposed mobile GPU-based processing method shows no significant difference from the MATLAB processing (i.e., PSNR<52.51 dB), and comparable CNR results (i.e., 11.31) were obtained from both processing methods. The mobile GPU implementation achieved frame rates of 57.6 Hz, with a total execution time of 17.4 ms, which is faster than the acquisition time (i.e., 34.4 ms). These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on the smartphone.
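
    The two verification metrics mentioned above are easy to compute offline. The sketch below uses one common definition of each (peak signal-to-noise ratio against a reference image, and contrast-to-noise ratio between a target and a background region); CNR definitions vary, so treat this as an assumption rather than the exact formula used in the study.

      import numpy as np

      def psnr(reference, test, peak=255.0):
          """Peak signal-to-noise ratio in dB between a reference and a test image."""
          mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
          return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

      def cnr(roi_target, roi_background):
          """One common contrast-to-noise ratio: mean difference between the target
          and background regions, normalized by the background standard deviation."""
          return abs(roi_target.mean() - roi_background.mean()) / roi_background.std()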

  14. Supporting Binary Compatibility with Static Compilation

    DTIC Science & Technology

    2005-01-01

    ...compiling Java programs are Just-In-Time (JIT) compilation (e.g., Sun Hotspot [29], Cacao [17], OpenJIT [24], shuJIT [28], vanilla Jalapeno [1]) and... ...plished using run-time compilation techniques: just-in-time compilers generate code for classes at run-time. During the run-time compilation, a... (The remainder of this scanned record is bibliography residue.)

  15. Implementation

    EPA Pesticide Factsheets

    Describes the set of activities that ensure control strategies are put into effect and that air quality goals and standards are met, as well as permitting programs and additional resources related to implementation under the Clean Air Act.

  16. Optimization of the Implementation of Renewable Resources in a Municipal Electric Utility in Arizona

    NASA Astrophysics Data System (ADS)

    Cadorin, Anthony

    A municipal electric utility in Mesa, Arizona with a peak load of approximately 85 megawatts (MW) was analyzed to determine how the implementation of renewable resources (both wind and solar) would affect the overall cost of energy purchased by the utility. The utility currently purchases all of its energy through long-term energy supply contracts and does not own any generation assets, so optimization was achieved by minimizing the overall cost of energy while adhering to specific constraints on how much energy the utility could purchase from the short-term energy market. Scenarios were analyzed for five percent and ten percent penetration of renewable energy in the years 2015 and 2025. Demand Side Management measures (through thermal storage in the City's district cooling system, electric vehicles, and customers' air conditioning improvements) were evaluated to determine whether they would mitigate some of the cost increases that resulted from the addition of renewable resources. In the 2015 simulation, wind energy was less expensive than solar to integrate into the supply mix. When five percent of the utility's 2015 energy requirements were met by wind, the overall cost of energy increased by 3.59%; when that five percent was met by solar, the estimated increase was 3.62%. A mix of wind and solar in 2015 caused a smaller increase in the overall cost of energy of 3.57%. At the ten percent implementation level in 2015, solar, wind, and a mix of solar and wind caused increases of 7.28%, 7.51% and 7.27%, respectively, in the overall cost of energy. In 2025, at the five percent implementation level, wind and solar caused increases in the overall cost of energy of 3.07% and 2.22%, respectively. In 2025, at the ten percent implementation level, wind and solar caused increases in the overall cost of energy of 6.23% and 4.67%, respectively. Demand Side Management reduced the overall cost of energy by approximately 0
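
    The cost minimization described above can be illustrated, in greatly simplified form, as a small linear program. The sketch below uses scipy.optimize.linprog with entirely hypothetical prices, demand, and market limits; it is not the model used in the study.

      import numpy as np
      from scipy.optimize import linprog

      # Decision variables (MWh over the study period), all hypothetical:
      # x[0] = long-term contract energy, x[1] = short-term market energy,
      # x[2] = renewable (wind/solar) energy.
      cost = np.array([45.0, 60.0, 80.0])     # assumed $/MWh prices

      demand = 500_000.0                      # total MWh to be served (assumed)
      renewable_share = 0.05                  # five percent penetration target
      market_cap = 0.10 * demand              # assumed cap on short-term market purchases

      # Equality constraint: total purchases meet demand.
      A_eq = [[1.0, 1.0, 1.0]]
      b_eq = [demand]
      # Inequalities: market purchases below the cap; renewables at least the target share.
      A_ub = [[0.0, 1.0, 0.0],
              [0.0, 0.0, -1.0]]
      b_ub = [market_cap, -renewable_share * demand]

      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(0, None)] * 3)
      print(res.x, res.fun)   # optimal purchase mix and total cost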

  17. Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers

    NASA Technical Reports Server (NTRS)

    Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj

    1995-01-01

    The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler, based on the PTD representation, allowed symbolic array sizes, affine loop bounds and array subscripts, and a variable number of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.
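
    To illustrate the kind of index arithmetic a compiler must reason about for block-cyclic data, the sketch below gives the standard CYCLIC(b) ownership and local-index mapping for a one-dimensional array. It is an illustrative formula only, not code from PARADIGM or its Mathematica routines.

      def block_cyclic_map(i, block_size, num_procs):
          """Map global index i of a CYCLIC(block_size)-distributed 1D array to
          (owning processor, local index on that processor)."""
          block = i // block_size              # global block number
          proc = block % num_procs             # blocks are dealt out round-robin
          local_block = block // num_procs     # blocks this processor already holds
          offset = i % block_size              # position inside the block
          return proc, local_block * block_size + offset

      # Example: 16-element array, block size 2, 4 processors.
      for i in range(16):
          print(i, block_cyclic_map(i, block_size=2, num_procs=4))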

  18. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing

    SciTech Connect

    Amarasinghe, Saman

    2015-03-27

    This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
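
    For a flavor of how an OpenTuner autotuner is written, here is a minimal sketch following the MeasurementInterface pattern from the OpenTuner documentation. The compile command, parameter name, and tuned flag are placeholders invented for the example, and API details may differ between OpenTuner versions.

      import opentuner
      from opentuner import ConfigurationManipulator, IntegerParameter
      from opentuner import MeasurementInterface, Result

      class FlagsTuner(MeasurementInterface):
          """Tunes a single integer parameter for a hypothetical kernel."""

          def manipulator(self):
              # Search space: one integer parameter (e.g., a blocking factor).
              m = ConfigurationManipulator()
              m.add_parameter(IntegerParameter('block_size', 1, 256))
              return m

          def run(self, desired_result, input, limit):
              cfg = desired_result.configuration.data
              # Build and time one candidate; both commands are purely illustrative.
              self.call_program('gcc -O2 -DBLOCK=%d kernel.c -o kernel' % cfg['block_size'])
              run_result = self.call_program('./kernel')
              return Result(time=run_result['time'])

      if __name__ == '__main__':
          argparser = opentuner.default_argparser()
          FlagsTuner.main(argparser.parse_args())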

  19. Programming cells: towards an automated 'Genetic Compiler'.

    PubMed

    Clancy, Kevin; Voigt, Christopher A

    2010-08-01

    One of the visions of synthetic biology is to be able to program cells using a language that is similar to that used to program computers or robotics. For large genetic programs, keeping track of the DNA on the level of nucleotides becomes tedious and error prone, requiring a new generation of computer-aided design (CAD) software. To push the size of projects, it is important to abstract the designer from the process of part selection and optimization. The vision is to specify genetic programs in a higher-level language, which a genetic compiler could automatically convert into a DNA sequence. Steps towards this goal include: defining the semantics of the higher-level language, algorithms to select and assemble parts, and biophysical methods to link DNA sequence to function. These will be coupled to graphic design interfaces and simulation packages to aid in the prediction of program dynamics, optimize genes, and scan projects for errors.

  20. Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader

    2004-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high-performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, the timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low-cost, compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2, N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capabilities for future space exploration missions based on on-board image processing

  1. Obtaining correct compile results by absorbing mismatches between data types representations

    DOEpatents

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2017-03-21

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
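
    The following is a small, hypothetical sketch of the idea described above: converting nodes of one abstract syntax tree into another using a type-conversion table, and substituting a special error node that stores the offending token whenever no conversion applies, so that unparsing can emit the original source text. It is illustrative only and is not the patented implementation.

      # Conversion table from source-language types to target-language types
      # (the entries are invented for the example).
      TYPE_TABLE = {'int': 'Integer', 'float': 'Double', 'string': 'String'}

      class Node:
          def __init__(self, kind, children=(), value=None):
              self.kind, self.children, self.value = kind, list(children), value

      class ErrorNode(Node):
          """Special node generated when conversion fails; it stores the error token."""
          def __init__(self, token):
              super().__init__('error', value=token)

      def convert(node):
          """Convert a source AST node into a target AST node, or an ErrorNode."""
          if node.kind == 'type':
              target = TYPE_TABLE.get(node.value)
              if target is None:
                  return ErrorNode(node.value)   # keep the token for later unparsing
              return Node('type', value=target)
          return Node(node.kind, [convert(c) for c in node.children], node.value)

      def unparse(node):
          """When unparsing, an ErrorNode is emitted as the original source token."""
          if isinstance(node, ErrorNode):
              return node.value
          if not node.children:
              return str(node.value)
          return ' '.join(unparse(c) for c in node.children)

      tree = Node('decl', [Node('type', value='complex'), Node('name', value='x')])
      print(unparse(convert(tree)))   # the unknown type comes back out as written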

  2. Obtaining correct compile results by absorbing mismatches between data types representations

    SciTech Connect

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2016-10-04

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.

  3. 1988 Bulletin compilation and index

    SciTech Connect

    1989-02-01

    This document is published to provide current information about the national program for managing spent fuel and high-level radioactive waste. This document is a compilation of issues from the 1988 calendar year. A table of contents and one index have been provided to assist in finding information.

  4. ACS Compiles Chemical Manpower Data

    ERIC Educational Resources Information Center

    Chemical and Engineering News, 1975

    1975-01-01

    Describes a publication designed to serve as a statistical base from which various groups can develop policy recommendations on chemical manpower. This new series will be the first official effort by the society to compile, correlate, and present all data relevant to the economic status of chemists. (Author/GS)

  5. Compiler validates units and dimensions

    NASA Technical Reports Server (NTRS)

    Levine, F. E.

    1980-01-01

    Software added to the compiler for the automated test system for the Space Shuttle decreases computer run errors by providing offline validation of the engineering units used in system command programs. The validation procedures are general, though originally written for GOAL, a free-form language that accepts "English-like" statements, and may be adapted to other programming languages.
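
    As a minimal sketch of what offline validation of engineering units can look like, the example below represents each unit as a vector of base-dimension exponents and rejects operations on incompatible quantities. The unit set and rules are assumptions made for illustration; this is unrelated to GOAL's actual implementation.

      # Dimensions as exponent tuples: (mass, length, time).
      UNITS = {
          'kg':  (1, 0, 0),
          'm':   (0, 1, 0),
          's':   (0, 0, 1),
          'm/s': (0, 1, -1),
          'N':   (1, 1, -2),
      }

      def check_addition(unit_a, unit_b):
          """Offline check: adding or comparing quantities requires identical dimensions."""
          if UNITS[unit_a] != UNITS[unit_b]:
              raise ValueError(f'unit mismatch: {unit_a} vs {unit_b}')

      def multiply_units(unit_a, unit_b):
          """Multiplication adds the dimension exponents component-wise."""
          return tuple(a + b for a, b in zip(UNITS[unit_a], UNITS[unit_b]))

      check_addition('m/s', 'm/s')        # passes
      print(multiply_units('kg', 'm'))    # (1, 1, 0)
      # check_addition('m', 's')          # would raise: unit mismatch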

  6. Recommendations for a Retargetable Compiler.

    DTIC Science & Technology

    1980-03-01

    Compiler Project", Computer Science Department, Carnegie-Mellon University, (Feb. 1979), CMU-CS-79-105 LRS74 Lewis , P. M., Rosencrantz, D. J. and Stearns, R...34Machine-Independent Register Allocation", SIGPLAN Notices 14, 8, (Aug. 1979). Ter78 Terman , Christopher J.: "The Specification of Code Generation

  7. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
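
    For readers unfamiliar with iterative deconvolution, the sketch below shows a plain Richardson-Lucy iteration on an image blurred by a known point-spread function. It is a generic textbook algorithm included for orientation, not the authors' motion-specific, ordered-subset implementation.

      import numpy as np
      from scipy.ndimage import convolve

      def richardson_lucy(observed, psf, iterations=20, eps=1e-12):
          """Generic Richardson-Lucy deconvolution with a known PSF."""
          psf_mirror = psf[::-1, ::-1]
          estimate = np.full_like(observed, observed.mean())
          for _ in range(iterations):
              blurred = convolve(estimate, psf, mode='reflect')
              ratio = observed / np.maximum(blurred, eps)
              estimate *= convolve(ratio, psf_mirror, mode='reflect')
          return estimate

      # Example with a small normalized blur kernel and synthetic "observed" data.
      psf = np.outer([1.0, 2.0, 1.0], [1.0, 2.0, 1.0])
      psf /= psf.sum()
      observed = np.abs(np.random.randn(64, 64)) + 1.0
      restored = richardson_lucy(observed, psf)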

  8. Optimizing Societal Benefit using a Systems Engineering Approach for Implementation of the GEOSS Space Segment

    NASA Technical Reports Server (NTRS)

    Killough, Brian D., Jr.; Sandford, Stephen P.; Cecil, L DeWayne; Stover, Shelley; Keith, Kim

    2008-01-01

    The Group on Earth Observations (GEO) is driving a paradigm shift in the Earth Observation community, refocusing Earth observing systems on GEO Societal Benefit Areas (SBA). Over the short history of space-based Earth observing systems most decisions have been made based on improving our scientific understanding of the Earth with the implicit assumption that this would serve society well in the long run. The space agencies responsible for developing the satellites used for global Earth observations are typically science driven. The innovation of GEO is the call for investments by space agencies to be driven by global societal needs. This paper presents the preliminary findings of an analysis focused on the observational requirements of the GEO Energy SBA. The analysis was performed by the Committee on Earth Observation Satellites (CEOS) Systems Engineering Office (SEO) which is responsible for facilitating the development of implementation plans that have the maximum potential for success while optimizing the benefit to society. The analysis utilizes a new taxonomy for organizing requirements, assesses the current gaps in spacebased measurements and missions, assesses the impact of the current and planned space-based missions, and presents a set of recommendations.

  9. Overcoming obstacles in the implementation of factorial design for assay optimization.

    PubMed

    Shaw, Robert; Fitzek, Martina; Mouchet, Elizabeth; Walker, Graeme; Jarvis, Philip

    2015-03-01

    Factorial experimental design (FED) is a powerful approach for efficient optimization of robust in vitro assays-it enables cost and time savings while also improving the quality of assays. Although it is a well-known technique, there can be considerable barriers to overcome to fully exploit it within an industrial or academic organization. The article describes a tactical roll out of FED to a scientist group through: training which demystifies the technical components and concentrates on principles and examples; a user-friendly Excel-based tool for deconvoluting plate data; output which focuses on graphical display of data over complex statistics. The use of FED historically has generally been in conjunction with automated technology; however we have demonstrated a much broader impact of FED on the assay development process. The standardized approaches we have rolled out have helped to integrate FED as a fundamental part of assay development best practice because it can be used independently of the automation and vendor-supplied software. The techniques are applicable to different types of assay, both enzyme and cell, and can be used flexibly in manual and automated processes. This article describes the application of FED for a cellular assay. The challenges of selling FED concepts and rolling out to a wide bioscience community together with recommendations for good working practices and effective implementation are discussed. The accessible nature of these approaches means FED can be used by industrial as well as academic users.
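
    To make the factorial-design idea concrete, here is a small sketch that enumerates a two-level full factorial design for three assay factors and estimates main effects by least squares. The factor names, levels, and simulated responses are invented for the example; the article's Excel-based tool is not reproduced here.

      import itertools
      import numpy as np

      # Three two-level factors (coded -1 / +1); the names are hypothetical.
      factors = ['enzyme_conc', 'incubation_time', 'substrate_conc']
      design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))  # 2^3 runs

      # Simulated assay responses for the 8 runs (in practice: measured plate data).
      rng = np.random.default_rng(0)
      response = 10 + 3 * design[:, 0] + 1.5 * design[:, 2] + rng.normal(0, 0.5, len(design))

      # Fit intercept plus main effects with least squares.
      X = np.column_stack([np.ones(len(design)), design])
      coeffs, *_ = np.linalg.lstsq(X, response, rcond=None)
      for name, c in zip(['intercept'] + factors, coeffs):
          print(f'{name}: {c:+.2f}')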

  10. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.

    1993-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.

  11. Yes! An object-oriented compiler compiler (YOOCC)

    SciTech Connect

    Avotins, J.; Mingins, C.; Schmidt, H.

    1995-12-31

    Grammar-based processor generation is one of the most widely studied areas in language processor construction. However, there have been very few approaches to date that reconcile object-oriented principles, processor generation, and an object-oriented language. Pertinent here also is that, currently, developing a processor using the Eiffel Parse libraries requires far too much time to be expended on tasks that can be automated. For these reasons, we have developed YOOCC (Yes! an Object-Oriented Compiler Compiler), which produces a processor framework from a grammar using an enhanced version of the Eiffel Parse libraries, incorporating the ideas hypothesized by Meyer, and Grape and Walden, as well as many others. Various essential changes have been made to the Eiffel Parse libraries. Examples are presented to illustrate the development of a processor using YOOCC, and it is concluded that the Eiffel Parse libraries are now not only an intelligent, but also a productive, option for processor construction.

  12. Pragmatic Randomized Optimal Platelet and Plasma Ratios (PROPPR) Trial: Design, rationale and implementation

    PubMed Central

    Baraniuk, Sarah; Tilley, Barbara C.; del Junco, Deborah J.; Fox, Erin E.; van Belle, Gerald; Wade, Charles E.; Podbielski, Jeanette M.; Beeler, Angela M.; Hess, John R.; Bulger, Eileen M.; Schreiber, Martin A.; Inaba, Kenji; Fabian, Timothy C.; Kerby, Jeffrey D.; Cohen, Mitchell J.; Miller, Christopher N.; Rizoli, Sandro; Scalea, Thomas M.; O’Keeffe, Terence; Brasel, Karen J.; Cotton, Bryan A.; Muskat, Peter; Holcomb, John B.

    2014-01-01

    Background Forty percent of in-hospital deaths among injured patients involve massive truncal hemorrhage. These deaths may be prevented with rapid hemorrhage control and improved resuscitation techniques. The Pragmatic Randomized Optimal Platelet and Plasma Ratios (PROPPR) Trial was designed to determine if there is a difference in mortality between subjects who received different ratios of FDA approved blood products. This report describes the design and implementation of PROPPR. Study Design PROPPR was designed as a randomized, two-group, Phase III trial conducted in subjects with the highest level of trauma activation and predicted to have a massive transfusion. Subjects at 12 North American level 1 trauma centers were randomized into one of two standard transfusion ratio interventions: 1:1:1 or 1:1:2, (plasma, platelets, and red blood cells). Clinical data and serial blood samples were collected under Exception from Informed Consent (EFIC) regulations. Co-primary mortality endpoints of 24 hours and 30 days were evaluated. Results Between August 2012 and December 2013, 680 patients were randomized. The overall median time from admission to randomization was 26 minutes. PROPPR enrolled at higher than expected rates with fewer than expected protocol deviations. Conclusion PROPPR is the largest randomized study to enroll severely bleeding patients. This study showed that rapidly enrolling and successfully providing randomized blood products to severely injured patients in an EFIC study is feasible. PROPPR was able to achieve these goals by utilizing a collaborative structure and developing successful procedures and design elements that can be part of future trauma studies. PMID:24996573

  13. Electronic control circuits: A compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A compilation of technical R and D information on circuits and modular subassemblies is presented as a part of a technology utilization program. Fundamental design principles and applications are given. Electronic control circuits discussed include: anti-noise circuit; ground protection device for bioinstrumentation; temperature compensation for operational amplifiers; hybrid gatling capacitor; automatic signal range control; integrated clock-switching control; and precision voltage tolerance detector.

  14. Model compilation for embedded real-time planning and diagnosis

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2004-01-01

    This paper describes MEXEC, an implemented micro executive that compiles a device model into an internal structure. Not only does this structure facilitate computing the most likely current device mode from n sets of sensor measurements, but it also facilitates generating an n-step reconfiguration plan that is most likely to result in reaching a target mode - if such a plan exists.

  15. Developing an Onboard Traffic-Aware Flight Optimization Capability for Near-Term Low-Cost Implementation

    NASA Technical Reports Server (NTRS)

    Wing, David J.; Ballin, Mark G.; Koczo, Stefan, Jr.; Vivona, Robert A.; Henderson, Jeffrey M.

    2013-01-01

    The concept of Traffic Aware Strategic Aircrew Requests (TASAR) combines Automatic Dependent Surveillance Broadcast (ADS-B) IN and airborne automation to enable user-optimal in-flight trajectory replanning and to increase the likelihood of Air Traffic Control (ATC) approval for the resulting trajectory change request. TASAR is designed as a near-term application to improve flight efficiency or other user-desired attributes of the flight while not impacting and potentially benefiting ATC. Previous work has indicated the potential for significant benefits for each TASAR-equipped aircraft. This paper will discuss the approach to minimizing TASAR's cost for implementation and accelerating readiness for near-term implementation.

  16. Utilizing object-oriented design to build advanced optimization strategies with generic implementation

    SciTech Connect

    Eldred, M.S.; Hart, W.E.; Bohnhoff, W.J.; Romero, V.J.; Hutchinson, S.A.; Salinger, A.G.

    1996-08-01

    The benefits of applying optimization to computational models are well known, but their range of widespread application to date has been limited. This effort attempts to extend the disciplinary areas to which optimization algorithms may be readily applied through the development and application of advanced optimization strategies capable of handling the computational difficulties associated with complex simulation codes. Towards this goal, a flexible software framework is under continued development for the application of optimization techniques to broad classes of engineering applications, including those with high computational expense and nonsmooth, nonconvex design space features. Object-oriented software design with C++ has been employed as a tool in providing a flexible, extensible, and robust multidisciplinary toolkit for use with computationally intensive simulations. In this paper, demonstrations of advanced optimization strategies using the software are presented in the hybridization and parallel processing research areas. Performance of the advanced strategies is compared with a benchmark nonlinear programming optimization.

  17. Implementation of reactive and predictive real-time control strategies to optimize dry stormwater detention ponds

    NASA Astrophysics Data System (ADS)

    Gaborit, Étienne; Anctil, François; Vanrolleghem, Peter A.; Pelletier, Geneviève

    2013-04-01

    Dry detention ponds have been widely implemented in the U.S.A. (National Research Council, 1993) and Canada (Shammaa et al. 2002) to mitigate the impacts of urban runoff on receiving water bodies. The aim of such structures is to allow temporary retention of the water during rainfall events, decreasing runoff velocities and volumes (by infiltration in the pond) as well as providing some water quality improvement through sedimentation. The management of dry detention ponds currently relies on static control through a fixed, pre-designed limitation of their maximum outflow (Middleton and Barrett 2008), for example via a proper choice of their outlet pipe diameter. Because these ponds are designed for large storms, typically 1- or 2-hour duration rainfall events with return periods between 5 and 100 years, one of their main drawbacks is that they generally offer almost no retention for smaller rainfall events (Middleton and Barrett 2008), which are by definition much more common. Real-Time Control (RTC) has a high potential for optimizing retention time (Marsalek 2005) because it allows operating strategies that are flexible and hence better suited to the prevailing, fluctuating conditions than static control. For dry ponds, this basically implies adapting the outlet opening percentage to maximize water retention time, while being able to open it completely for severe storms. This study developed several enhanced RTC scenarios of a dry detention pond located at the outlet of a small urban catchment near Québec City, Canada, following the previous work of Muschalla et al. (2009). The catchment's runoff quantity and TSS concentration were simulated by a SWMM5 model with an improved wash-off formulation. The control procedures rely on rainfall detection and measurements of the pond's water height for the reactive schemes, and on rainfall forecasts in addition to these variables for the predictive schemes. The automatic reactive control schemes implemented
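
    To make the reactive scheme concrete, a minimal rule-based sketch of outlet control driven by the pond's water height and a rainfall-detection flag is given below. The thresholds, opening fractions, and function name are invented for illustration and do not correspond to the controllers developed in the study.

      def reactive_outlet_control(water_level_m, rain_detected,
                                  full_open_level_m=1.8, min_opening=0.1):
          """Return an outlet opening fraction (0-1) from pond level and rain detection.

          Thresholds are placeholders; a real controller would be calibrated to the
          pond geometry and the local design storms.
          """
          if water_level_m >= full_open_level_m:
              return 1.0              # severe event: open fully to protect the pond
          if rain_detected:
              return 0.5              # event ongoing: pass part of the inflow
          if water_level_m > 0.05:
              return min_opening      # dry weather: drain slowly to maximize retention
          return 0.0                  # empty pond: keep the outlet closed

      print(reactive_outlet_control(0.6, rain_detected=False))   # -> 0.1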

  18. Lower bound of optimization in radiological protection system taking account of practical implementation of clearance

    SciTech Connect

    Hattori, Takatoshi

    2007-07-01

    The dose criterion used to derive clearance and exemption levels is of the order of 0.01 mSv/y, based on the Basic Safety Standard (BSS) of the International Atomic Energy Agency (IAEA), the use of which has been agreed upon by many countries. It is important for human beings, who are facing the fact that global resources for risk reduction are limited, to carefully consider the practical implementation of radiological protection systems, particularly for low-radiation-dose regions. For example, in direct gamma-ray monitoring to achieve clearance level compliance, difficult issues must be resolved concerning how the uncertainty (error) of the gamma measurement should be handled and how the uncertainty (scattering) of the estimation of non-gamma emitters should be treated in clearance. To resolve these issues, a new probabilistic approach has been proposed to establish an appropriate safety factor for compliance with the clearance level in Japan. This approach is based on the fundamental concept that 0.1 mSv/y should be complied with at the 97.5th percentile of the probability distribution for the uncertainties of both the measurement and the estimation of non-gamma emitters. The International Commission on Radiological Protection (ICRP) published a new concept of the representative person in Publication 101 Part I. The representative person is a hypothetical person exposed to a dose that is representative of those of highly exposed persons in a population. In a probabilistic dose assessment, the ICRP recommends that the representative person should be defined such that the probability of exposure occurrence is lower than about 5% that of a person randomly selected from the population receiving a high dose. From the new concept of the ICRP, it is reasonable to consider that the 95th percentile of the dose distribution for the representative person is theoretically always lower than the dose constraint. Using this established relationship, it can be concluded that the minimum dose
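
    A toy illustration of the percentile-based compliance idea is given below: assumed measurement and non-gamma-emitter estimation uncertainties are propagated by Monte Carlo, and the 97.5th percentile of the resulting dose is compared with the criterion. All distributions and numbers are invented for the example and are not taken from the Japanese safety-factor derivation.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # Hypothetical best-estimate dose (mSv/y) with two uncertainty sources:
      # a gamma measurement error and a scaling factor for non-gamma emitters.
      best_estimate = 0.04
      measurement_factor = rng.normal(1.0, 0.15, n)                  # assumed 15% (1 sigma)
      non_gamma_factor = rng.lognormal(mean=0.0, sigma=0.3, size=n)

      dose = best_estimate * measurement_factor * non_gamma_factor
      p975 = np.percentile(dose, 97.5)

      criterion = 0.1   # mSv/y
      print(f'97.5th percentile = {p975:.3f} mSv/y ->',
            'complies' if p975 < criterion else 'does not comply')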

  19. Branch recovery with compiler-assisted multiple instruction retry

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Li, C.-C.; Fuchs, W. K.; Hwu, W.-M.

    1992-01-01

    In processing systems where rapid recovery from transient faults is important, schemes for multiple instruction rollback recovery may be appropriate. Multiple instruction retry has been implemented in hardware by researchers and also in mainframe computers. This paper extends compiler-assisted instruction retry to a broad class of code execution failures. Five benchmarks were used to measure the performance penalty of hazard resolution. Results indicate that the enhanced pure software approach can produce performance penalties consistent with existing hardware techniques. A combined compiler/hardware resolution strategy is also described and evaluated. Experimental results indicate a lower performance penalty than with either a totally hardware or totally software approach.

  20. 14 CFR § 1203.302 - Compilation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM... unclassified may be classified if the compiled information reveals an additional association or relationship... individual items of information. As used in the Order, compilations mean an aggregate of...

  1. Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Alewine, Neal Jon

    1993-01-01

    Multiple instruction rollback (MIR) is a technique to provide rapid recovery from transient processor failures and has been implemented in hardware by researchers and also in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations were conducted which indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.

  2. An Implementation of a Mathematical Programming Approach to Optimal Enrollments. AIR 2001 Annual Forum Paper.

    ERIC Educational Resources Information Center

    DePaolo, Concetta A.

    This paper explores the application of a mathematical optimization model to the problem of optimal enrollments. The general model, which can be applied to any institution, seeks to enroll the "best" class of students (as defined by the institution) subject to constraints imposed on the institution (e.g., capacity, quality). Topics…

  3. Optimal speech codec implementation on ARM9E (v5E architecture) RISC processor for next-generation mobile multimedia

    NASA Astrophysics Data System (ADS)

    Bangla, Ajay Kumar; Vinay, M. K.; Suresh Babu, P. V.

    2004-01-01

    The mobile phone is undergoing a rapid evolution from a voice and limited text-messaging device to a complete multimedia client. RISC processors are predominantly used in these devices due to low cost, time to market and power consumption. The growing demand for signal processing performance on these platforms has triggered a convergence of RISC, CISC and DSP technologies onto a single core/system. This convergence leads to a multitude of challenges for optimal usage of the available processing power. Voice codecs, which have traditionally been implemented on DSP platforms, have been adapted to RISC-only platforms as well. In this paper, the issues involved in optimizing a standard vocoder for RISC-DSP convergence platforms (DSP-enhanced RISC platforms) are addressed. Our optimization techniques are based on identification of algorithms which could exploit either the DSP features, the RISC features, or both. A few algorithmic modifications have also been suggested. By systematic application of these optimization techniques to a GSM-AMR (NB) codec on an ARM9E core, we achieved more than 77% improvement over the baseline codec and almost 33% over the codec optimized for a RISC platform (ARM9T) alone, in terms of processing cycle requirements. The optimization techniques outlined are generic in nature and are applicable to other vocoders on similar 'application-platform' combinations.

  4. The Katydid system for compiling KEE applications to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    Components of a system known as Katydid are developed in an effort to compile knowledge-based systems developed in a multimechanism integrated environment (KEE) to Ada. The Katydid core is an Ada library supporting KEE object functionality, and the other elements include a rule compiler, a LISP-to-Ada translator, and a knowledge-base dumper. Katydid employs translation mechanisms that convert LISP knowledge structures and rules to Ada and utilizes basic prototypes of a run-time KEE object-structure library module for Ada. Preliminary results include the semiautomatic compilation of portions of a simple expert system to run in an Ada environment with the described algorithms. It is suggested that Ada can be employed for AI programming and implementation, and the Katydid system is being developed to include concurrency and synchronization mechanisms.

  5. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for the synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems. This algorithm represents a process that is impractical for standard workstations for large order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
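
    To show what solving a single LMI with an off-the-shelf convex solver looks like, here is a small CVXPY sketch for a standard Lyapunov-type LMI. It illustrates the class of problem only, not the paper's specific full-information synthesis LMI, and the system matrix is a constructed placeholder.

      import cvxpy as cp
      import numpy as np

      n = 4
      rng = np.random.default_rng(0)
      M = rng.standard_normal((n, n))
      A = -(M @ M.T) - np.eye(n)          # a guaranteed-stable (symmetric) test matrix

      # Feasibility LMI: find P > 0 with A'P + PA < 0.
      P = cp.Variable((n, n), symmetric=True)
      constraints = [P >> np.eye(n),
                     A.T @ P + P @ A << -1e-3 * np.eye(n)]
      problem = cp.Problem(cp.Minimize(0), constraints)
      problem.solve()
      print(problem.status)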

  6. Distributed memory compiler design for sparse problems

    NASA Technical Reports Server (NTRS)

    Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema

    1991-01-01

    A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.

  7. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: a flexible granularity model, which allows a compromise between two extreme granularity models; a communication model, which is capable of precisely describing interprocessor communication timings and patterns; a loop type detection strategy, which identifies different types of loops; a critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with associated communication costs; and a loop allocation strategy, which realizes optimum overlapped operations between computation and communication of the system. Using these models, several sample routines of the AIR3D package are examined and tested. It may be noted that the automatically generated codes are highly parallelized to provide the maximum degree of parallelism, obtaining speedups on up to a 28- to 32-processor system. A comparison of parallel codes for both the existing and proposed communication models is performed and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well toward completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.

  8. Rooted-tree network for optimal non-local gate implementation

    NASA Astrophysics Data System (ADS)

    Vyas, Nilesh; Saha, Debashis; Panigrahi, Prasanta K.

    2016-09-01

    A general quantum network for implementing non-local control-unitary gates between remote parties at minimal entanglement cost is shown to be a rooted-tree structure. Starting from a five-party scenario, we demonstrate the local implementation of a simultaneous class of control-unitary (Hermitian) and multiparty control-unitary gates in an arbitrary n-party network. Previously established networks turn out to be special cases of this general construct.

  9. Stepwise optimization approach for improving LC-MS/MS analysis of zwitterionic antiepileptic drugs with implementation of experimental design.

    PubMed

    Kostić, Nađa; Dotsikas, Yannis; Malenović, Anđelija; Jančić Stojanović, Biljana; Rakić, Tijana; Ivanović, Darko; Medenica, Mirjana

    2013-07-01

    In this article, a step-by-step optimization procedure for improving analyte response with implementation of experimental design is described. Zwitterionic antiepileptics, namely vigabatrin, pregabalin and gabapentin, were chosen as model compounds to undergo chloroformate-mediated derivatization followed by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) analysis. Application of a planned stepwise optimization procedure allowed responses of analytes, expressed as areas and signal-to-noise ratios, to be improved, enabling achievement of lower limit of detection values. Results from the current study demonstrate that optimization of parameters such as scan time, geometry of ion source, sheath and auxiliary gas pressure, capillary temperature, collision pressure and mobile phase composition can have a positive impact on sensitivity of LC-MS/MS methods. Optimization of LC and MS parameters led to a total increment of 53.9%, 83.3% and 95.7% in areas of derivatized vigabatrin, pregabalin and gabapentin, respectively, while for signal-to-noise values, an improvement of 140.0%, 93.6% and 124.0% was achieved, compared to autotune settings. After defining the final optimal conditions, a time-segmented method was validated for the determination of mentioned drugs in plasma. The method proved to be accurate and precise with excellent linearity for the tested concentration range (40.0 ng ml⁻¹ to 10.0 × 10³ ng ml⁻¹).

  10. Optimizing Blocking and Nonblocking Reduction Operations for Multicore Systems: Hierarchical Design and Implementation

    SciTech Connect

    Gorentla Venkata, Manjunath; Shamis, Pavel; Graham, Richard L; Ladd, Joshua S; Sampath, Rahul S

    2013-01-01

    Many scientific simulations using the Message Passing Interface (MPI) programming model are sensitive to the performance and scalability of reduction collective operations such as MPI Allreduce and MPI Reduce. These operations are the most widely used abstractions for performing mathematical operations over all processes that are part of the simulation. In this work, we propose a hierarchical design to implement the reduction operations on multicore systems. This design aims to improve the efficiency of reductions by 1) tailoring the algorithms and customizing the implementations for the various communication mechanisms in the system, 2) providing the ability to configure the depth of the hierarchy to match the system architecture, and 3) providing the ability to independently progress each level of this hierarchy. Using this design, we implement the MPI Allreduce and MPI Reduce operations (and their nonblocking variants MPI Iallreduce and MPI Ireduce) for all message sizes, and evaluate them on multiple architectures including InfiniBand and Cray XT5. We leverage and enhance our existing infrastructure, Cheetah, a framework for implementing hierarchical collective operations, to implement these reductions. The experimental results show that the Cheetah reduction operations outperform production-grade MPI implementations such as the Open MPI default, Cray MPI, and MVAPICH2, demonstrating their efficiency, flexibility and portability. On InfiniBand systems, with a microbenchmark, a 512-process Cheetah nonblocking Allreduce and Reduce achieve speedups of 23x and 10x, respectively, compared to the default Open MPI reductions. The blocking variants of the reduction operations also show similar performance benefits. A 512-process nonblocking Cheetah Allreduce achieves a speedup of 3x compared to the default MVAPICH2 Allreduce implementation. On a Cray XT5 system, a 6144-process Cheetah Allreduce outperforms the Cray MPI by 145%. The evaluation with an application kernel, Conjugate
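
    For context, the nonblocking reductions being optimized above are the MPI-3 operations MPI_Iallreduce and MPI_Ireduce; the sketch below shows the standard interface from Python via mpi4py, with the reduction overlapped with other computation. It illustrates the MPI interface only, not the Cheetah hierarchical implementation.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD

      local = np.full(4, comm.Get_rank(), dtype='d')    # each rank contributes its rank id
      total = np.empty(4, dtype='d')

      # Start the nonblocking allreduce, do useful work, then wait for completion.
      request = comm.Iallreduce(local, total, op=MPI.SUM)
      overlapped = np.sin(local).sum()                  # stand-in for overlapped computation
      request.Wait()

      if comm.Get_rank() == 0:
          print('reduced:', total, 'overlapped result:', overlapped)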

  11. FPGA implementation of a stochastic neural network for monotonic pseudo-Boolean optimization.

    PubMed

    Grossi, Giuliano; Pedersini, Federico

    2008-08-01

    In this paper an FPGA implementation of a novel neural stochastic model for solving constrained NP-hard problems is proposed and developed. The model exploits pseudo-Boolean functions both to express the constraints and to define the cost function, interpreted as the energy of a neural network. A wide variety of NP-hard problems falls in the class of problems that can be solved by this model, particularly those having a quadratic pseudo-Boolean penalty function. The proposed hardware implementation provides high computation speed by exploiting parallelism, as the neuron update and the constraint violation check can be performed in parallel over the whole network. The neural system has been tested on random and benchmark graphs, showing good performance with respect to the same heuristic for the same problems. Furthermore, the computational speed of the FPGA implementation has been measured and compared to a software implementation. The developed architecture delivered dramatically faster computation than the software implementation, even when adopting a low-cost FPGA chip.
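
    A minimal software sketch of the underlying idea follows: a quadratic pseudo-Boolean energy (cost plus constraint penalties) is minimized by stochastic single-bit updates. The problem instance and update rule are generic placeholders chosen for illustration; they do not describe the FPGA architecture itself.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 12

      # Quadratic pseudo-Boolean energy E(x) = x'Qx + c'x over x in {0,1}^n.
      # Q and c are random placeholders; in practice they encode cost plus penalties.
      Q = rng.normal(0, 1, (n, n))
      Q = (Q + Q.T) / 2
      c = rng.normal(0, 1, n)

      def energy(x):
          return x @ Q @ x + c @ x

      x = rng.integers(0, 2, n).astype(float)
      temperature = 1.0
      for step in range(2000):
          i = rng.integers(n)                        # pick one "neuron" (bit) to update
          x_new = x.copy()
          x_new[i] = 1 - x_new[i]
          delta = energy(x_new) - energy(x)
          # Accept downhill moves always, uphill moves with Boltzmann probability.
          if delta < 0 or rng.random() < np.exp(-delta / temperature):
              x = x_new
          temperature *= 0.999                       # slow cooling

      print('final energy:', energy(x), 'state:', x.astype(int))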

  12. Efficient implementation and application of the artificial bee colony algorithm to low-dimensional optimization problems

    NASA Astrophysics Data System (ADS)

    von Rudorff, Guido Falk; Wehmeyer, Christoph; Sebastiani, Daniel

    2014-06-01

    We adapt a swarm-intelligence-based optimization method (the artificial bee colony algorithm, ABC) to enhance its parallel scaling properties and to improve the escaping behavior from deep local minima. Specifically, we apply the approach to the geometry optimization of Lennard-Jones clusters. We illustrate the performance and the scaling properties of the parallelization scheme for several system sizes (5-20 particles). Our main findings are specific recommendations for ranges of the parameters of the ABC algorithm which yield maximal performance for Lennard-Jones clusters and Morse clusters. The suggested parameter ranges for these different interaction potentials turn out to be very similar; thus, we believe that our reported values are fairly general for the ABC algorithm applied to chemical optimization problems.

  13. Implementation of a Low-Thrust Trajectory Optimization Algorithm for Preliminary Design

    NASA Technical Reports Server (NTRS)

    Sims, Jon A.; Finlayson, Paul A.; Rinderle, Edward A.; Vavrina, Matthew A.; Kowalkowski, Theresa D.

    2006-01-01

    A tool developed for the preliminary design of low-thrust trajectories is described. The trajectory is discretized into segments and a nonlinear programming method is used for optimization. The tool is easy to use, has robust convergence, and can handle many intermediate encounters. In addition, the tool has a wide variety of features, including several options for objective function and different low-thrust propulsion models (e.g., solar electric propulsion, nuclear electric propulsion, and solar sail). High-thrust, impulsive trajectories can also be optimized.

  14. Design methodology for optimal hardware implementation of wavelet transform domain algorithms

    NASA Astrophysics Data System (ADS)

    Johnson-Bey, Charles; Mickens, Lisa P.

    2005-05-01

    The work presented in this paper lays the foundation for the development of an end-to-end system design methodology for implementing wavelet domain image/video processing algorithms in hardware using Xilinx field programmable gate arrays (FPGAs). With the integration of the Xilinx System Generator toolbox, this methodology will allow algorithm developers to design and implement their code using the familiar MATLAB/Simulink development environment. By using this methodology, algorithm developers will not be required to become proficient in the intricacies of hardware design, thus reducing the design cycle and time-to-market.

  15. PDoublePop: An implementation of parallel genetic algorithm for function optimization

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Tzallas, Alexandros; Tsalikakis, Dimitris

    2016-12-01

    A software package for the implementation of parallel genetic algorithms is presented in this article. The underlying genetic algorithm aims to locate the global minimum of a multidimensional function inside a rectangular hyperbox. The proposed software, named PDoublePop, implements a client-server model for parallel genetic algorithms with advanced features for the local genetic algorithms, such as an enhanced stopping rule, an advanced mutation scheme, and periodic application of a local search procedure. The user may code the objective function either in C++ or in Fortran77. The method is tested on a series of well-known test functions and the results are reported.

  16. A survey of compiler development aids. [concerning lexical, syntax, and semantic analysis

    NASA Technical Reports Server (NTRS)

    Buckles, B. P.; Hodges, B. C.; Hsia, P.

    1977-01-01

    A theoretical background was established for the compilation process by dividing it into five phases and explaining the concepts and algorithms that underpin each. The five selected phases were lexical analysis, syntax analysis, semantic analysis, optimization, and code generation. Graph-theoretical optimization techniques were presented, and approaches to code generation were described for both one-pass and multipass compilation environments. Following the initial tutorial sections, more than 20 tools that were developed to aid in the process of writing compilers were surveyed. Eight of the more recent compiler development aids were selected for special attention - SIMCMP/STAGE2, LANG-PAK, COGENT, XPL, AED, CWIC, LIS, and JOCIT. The impact of compiler development aids was assessed, some of their shortcomings were noted, and some of the areas of research currently in progress were inspected.

  17. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.

    1995-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.

  18. Exploration of Optimization Options for Increasing Performance of a GPU Implementation of a Three-dimensional Bilateral Filter

    SciTech Connect

    Bethel, E. Wes; Bethel, E. Wes

    2012-01-06

    This report explores using GPUs as a platform for performing high-performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x as compared to a single-threaded CPU implementation.
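
    For reference, the sketch below spells out what a 3D bilateral filter computes: each output voxel is a weighted average of its neighborhood, with weights that combine spatial distance and intensity difference. It is a deliberately slow NumPy reference implementation for clarity, not the report's GPU kernel, and the sigma values and radius are placeholders.

      import numpy as np

      def bilateral_filter_3d(volume, radius=2, sigma_spatial=1.5, sigma_range=0.1):
          """Brute-force 3D bilateral filter (reference implementation, very slow)."""
          pad = np.pad(volume, radius, mode='reflect')
          out = np.zeros_like(volume, dtype=float)
          offsets = range(-radius, radius + 1)
          for z in range(volume.shape[0]):
              for y in range(volume.shape[1]):
                  for x in range(volume.shape[2]):
                      center = volume[z, y, x]
                      num = den = 0.0
                      for dz in offsets:
                          for dy in offsets:
                              for dx in offsets:
                                  v = pad[z + dz + radius, y + dy + radius, x + dx + radius]
                                  w_s = np.exp(-(dz*dz + dy*dy + dx*dx) / (2 * sigma_spatial**2))
                                  w_r = np.exp(-((v - center) ** 2) / (2 * sigma_range**2))
                                  w = w_s * w_r          # spatial weight times range weight
                                  num += w * v
                                  den += w
                      out[z, y, x] = num / den
          return out

      noisy = np.random.rand(8, 8, 8)      # tiny synthetic volume
      smoothed = bilateral_filter_3d(noisy)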

  19. A real time, FEM based optimal control algorithm and its implementation using parallel processing hardware (transputers) in a microprocessor environment

    NASA Technical Reports Server (NTRS)

    Patten, William Neff

    1989-01-01

    There is an evident need to discover a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and, or uncertain, model dynamics. The development of a generic controller design tool for tough-to-control systems is reported. The method utilizes a moving grid, time infinite element based solution of the necessary conditions that describe an optimal controller for a system. The technique produces a discrete feedback controller. Real time laboratory experiments are now being conducted to demonstrate the viability of the method. The algorithm that results is being implemented in a microprocessor environment. Critical computational tasks are accomplished using a low cost, on-board, multiprocessor (INMOS T800 Transputers) and parallel processing. Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.

  20. SVD-based optimal filtering for noise reduction in dual microphone hearing aids: a real time implementation and perceptual evaluation.

    PubMed

    Maj, Jean-Baptiste; Royackers, Liesbeth; Moonen, Marc; Wouters, Jan

    2005-09-01

    In this paper, the first real-time implementation and perceptual evaluation of a singular value decomposition (SVD)-based optimal filtering technique for noise reduction in a dual-microphone behind-the-ear (BTE) hearing aid is presented. This evaluation was carried out for speech-weighted noise and multitalker babble, for single and multiple jammer sound source scenarios. Two basic microphone configurations in the hearing aid were used. The SVD-based optimal filtering technique was compared against an adaptive beamformer, which is known to give significant improvements in speech intelligibility in noisy environments. The optimal filtering technique works without assumptions about the speaker position, unlike the two-stage adaptive beamformer. However, this strategy needs a robust voice activity detector (VAD). A method to improve the performance of the VAD was presented and evaluated physically. By connecting the VAD to the output of the noise reduction algorithms, a good discrimination between the speech-and-noise periods and the noise-only periods of the signals was obtained. The perceptual experiments demonstrated that the SVD-based optimal filtering technique could perform as well as the adaptive beamformer in a single noise source scenario, i.e., the ideal scenario for the latter technique, and could outperform the adaptive beamformer in multiple noise source scenarios.
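
    As a rough illustration of SVD-based filtering (and not the authors' multichannel optimal filter), the sketch below stacks short frames of a noisy signal into a matrix, truncates the SVD to a low-rank signal subspace, and reconstructs the signal. The frame length and rank are arbitrary choices made for the example.

      import numpy as np

      def svd_denoise(signal, frame_len=64, rank=4):
          """Low-rank SVD denoising of a 1D signal via a frame matrix."""
          n_frames = len(signal) // frame_len
          X = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          s[rank:] = 0.0                      # keep only the dominant subspace
          return ((U * s) @ Vt).reshape(-1)

      t = np.linspace(0, 1, 4096)
      clean = np.sin(2 * np.pi * 200 * t)
      noisy = clean + 0.5 * np.random.randn(t.size)
      denoised = svd_denoise(noisy)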

  1. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility that allows rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  2. Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices.

    PubMed

    Marin, Leandro; Pawlowski, Marcin Piotr; Jara, Antonio

    2015-08-28

    The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communications. The building blocks of the Internet of Things are devices manufactured by various producers and are designed to fulfil different needs. There would be no common hardware platform that could be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized elliptic curve cryptography algorithms that address the security issues in the heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol.
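
    The abstract above combines optimized ECC primitives into a key negotiation protocol. As a purely illustrative sketch of elliptic-curve Diffie-Hellman key agreement (not the authors' optimized NXP/Jennic or MSP430 code), the toy example below uses a tiny textbook curve over GF(17); the curve, base point, and private keys are insecure placeholder choices.

```python
# Toy elliptic-curve Diffie-Hellman over y^2 = x^3 + 2x + 2 (mod 17).
# Illustrative only: real IoT deployments use standardized curves and
# constant-time field arithmetic; nothing here mirrors the paper's code.
P_MOD, A = 17, 2
G = (5, 1)                      # base point on the toy curve

def inv(x):
    return pow(x, -1, P_MOD)    # modular inverse

def point_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None             # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    R = None                    # double-and-add
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

a, b = 7, 11                                    # hypothetical private keys
A_pub, B_pub = scalar_mult(a, G), scalar_mult(b, G)
shared = scalar_mult(a, B_pub)
assert shared == scalar_mult(b, A_pub)          # both sides derive the same point
print("shared secret point:", shared)
```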

  3. Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices

    PubMed Central

    Marin, Leandro; Piotr Pawlowski, Marcin; Jara, Antonio

    2015-01-01

    The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communications. The building blocks of the Internet of Things are devices manufactured by various producers and are designed to fulfil different needs. There would be no common hardware platform that could be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized elliptic curve cryptography algorithms that address the security issues in the heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol. PMID:26343677

  4. Optimization of Simulation-Based Training Systems: Model Description, Implementation, and Evaluation

    DTIC Science & Technology

    1990-06-01

    Air Force Human Research Laboratory at Williams AFB, and National Aeronautics and Space Administration Ames Research Center. The models and techniques...from budgetary limitations, space and equipment availability, and safety concerns. Some of the complexity of training-system design is caused by...

  5. Optimal parameters for clinical implementation of breast cancer patient setup using Varian DTS software.

    PubMed

    Ng, Sook Kien; Zygmanski, Piotr; Jeung, Andrew; Mostafavi, Hassan; Hesser, Juergen; Bellon, Jennifer R; Wong, Julia S; Lyatskaya, Yulia

    2012-05-10

    Digital tomosynthesis (DTS) was evaluated as an alternative to cone-beam computed tomography (CBCT) for patient setup. DTS is preferable when there are constraints with setup time, gantry-couch clearance, and imaging dose using CBCT. This study characterizes DTS data acquisition and registration parameters for the setup of breast cancer patients using nonclinical Varian DTS software. DTS images were reconstructed from CBCT projections acquired on phantoms and patients with surgical clips in the target volume. A shift-and-add algorithm was used for DTS volume reconstructions, while automated cross-correlation matches were performed within Varian DTS software. Triangulation on two short DTS arcs separated by various angular spread was done to improve 3D registration accuracy. Software performance was evaluated on two phantoms and ten breast cancer patients using the registration result as an accuracy measure; investigated parameters included arc lengths, arc orientations, angular separation between two arcs, reconstruction slice spacing, and number of arcs. The shifts determined from DTS-to-CT registration were compared to the shifts based on CBCT-to-CT registration. The difference between these shifts was used to evaluate the software accuracy. After findings were quantified, optimal parameters for the clinical use of DTS technique were determined. It was determined that at least two arcs were necessary for accurate 3D registration for patient setup. Registration accuracy of 2 mm was achieved when the reconstruction arc length was > 5° for clips with HU ≥ 1000; larger arc length (≥ 8°) was required for very low HU clips. An optimal arc separation was found to be ≥ 20° and optimal arc length was 10°. Registration accuracy did not depend on DTS slice spacing. DTS image reconstruction took 10-30 seconds and registration took less than 20 seconds. The performance of Varian DTS software was found suitable for the accurate setup of breast cancer patients

  6. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
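
    The claim above populates check-matrix columns by filtering a candidate vector set so that the column-wise linear independence required of a distance-d code is preserved. A much-simplified sketch of that filter-and-select loop, restricted to GF(2) and distance 3 (any two columns independent, i.e., nonzero and distinct), is given below; it reproduces the [7,4] Hamming parity-check matrix and is not the patented SECDED construction over general GF(q).

```python
import itertools
import numpy as np

def build_check_matrix(r, n):
    """Greedily populate n columns of an r-row GF(2) check matrix for a
    distance-3 (Hamming-type) code: any two columns must be linearly
    independent, which over GF(2) means columns are nonzero and distinct."""
    candidates = [np.array(v, dtype=int)
                  for v in itertools.product([0, 1], repeat=r)
                  if any(v)]                       # filter out the all-zero vector
    columns = []
    for v in candidates:
        # filter step: adding v must keep every pair of columns independent
        if all(not np.array_equal(v, c) for c in columns):
            columns.append(v)
        if len(columns) == n:
            break
    if len(columns) < n:
        raise ValueError("not enough admissible columns for these parameters")
    return np.column_stack(columns)

H = build_check_matrix(r=3, n=7)   # parity-check matrix of the [7,4] Hamming code
print(H)
```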

  7. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  8. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  9. Optimization of a hardware implementation for pulse coupled neural networks for image applications

    NASA Astrophysics Data System (ADS)

    Gimeno Sarciada, Jesús; Lamela Rivera, Horacio; Warde, Cardinal

    2010-04-01

    Pulse coupled neural networks (PCNNs) are a very useful tool for image processing and visual applications, since they have the advantage of being invariant to image changes such as rotation, scale, or certain distortions. Among other characteristics, the PCNN changes a given image input into a temporal representation which can easily be analyzed later for pattern recognition. The structure of a PCNN, though, makes it necessary to determine all of its parameters very carefully in order for it to function optimally, so that the responses to the kinds of inputs it will be subjected to are clearly discriminated, allowing for easy and fast post-processing that yields useful results. This tweaking of the system is a taxing process. In this paper we analyze and compare two methods for modeling PCNNs. A purely mathematical model is programmed and a similar circuital model is also designed. Both are then used to determine the optimal values of the several parameters of a PCNN: gain, threshold, and the time constants for feed-in, threshold, and linking, leading to an optimal design for image recognition. The results are compared for usefulness, accuracy, and speed, as well as the performance and time requirements for fast and easy design, thus providing a tool for future ease of management of a PCNN for different tasks.
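
    For readers unfamiliar with the parameters being tuned (gain, threshold, and the feeding, linking, and threshold time constants), a minimal discrete PCNN iteration in a common simplified form is sketched below; the parameter values and the 3x3 linking kernel are placeholders, not the optimized values obtained in the study.

```python
import numpy as np

def pcnn_iterate(S, steps=10, alpha_f=0.1, alpha_t=0.5, beta=0.2, v_f=0.5, v_t=20.0):
    """Run a simplified pulse-coupled neural network on a normalized image S
    and return the binary pulse map produced at each iteration."""
    F = np.zeros_like(S)   # feeding compartment
    T = np.ones_like(S)    # dynamic threshold
    Y = np.zeros_like(S)   # pulse output
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    outputs = []
    for _ in range(steps):
        # local coupling of the previous pulses (3x3 weighted sum, zero-padded)
        padded = np.pad(Y, 1)
        W = np.zeros_like(S)
        for i in range(3):
            for j in range(3):
                W += kernel[i, j] * padded[i:i + S.shape[0], j:j + S.shape[1]]
        F = np.exp(-alpha_f) * F + S + v_f * W        # leaky feeding input
        L = W                                         # linking input
        U = F * (1.0 + beta * L)                      # internal activity
        Y = (U > T).astype(float)                     # fire where activity beats threshold
        T = np.exp(-alpha_t) * T + v_t * Y            # raise threshold where neurons fired
        outputs.append(Y.copy())
    return outputs

img = np.random.default_rng(1).random((16, 16))
pulse_maps = pcnn_iterate(img)
print([int(p.sum()) for p in pulse_maps])             # the temporal pulse signature
```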

  10. Lattice dynamical wavelet neural networks implemented using particle swarm optimization for spatio-temporal system identification.

    PubMed

    Wei, Hua-Liang; Billings, Stephen A; Zhao, Yifan; Guo, Lingzhong

    2009-01-01

    In this brief, by combining an efficient wavelet representation with a coupled map lattice model, a new family of adaptive wavelet neural networks, called lattice dynamical wavelet neural networks (LDWNNs), is introduced for spatio-temporal system identification. A new orthogonal projection pursuit (OPP) method, coupled with a particle swarm optimization (PSO) algorithm, is proposed for augmenting the proposed network. A novel two-stage hybrid training scheme is developed for constructing a parsimonious network model. In the first stage, by applying the OPP algorithm, significant wavelet neurons are adaptively and successively recruited into the network, where adjustable parameters of the associated wavelet neurons are optimized using a particle swarm optimizer. The resultant network model, obtained in the first stage, however, may be redundant. In the second stage, an orthogonal least squares algorithm is then applied to refine and improve the initially trained network by removing redundant wavelet neurons from the network. An example for a real spatio-temporal system identification problem is presented to demonstrate the performance of the proposed new modeling framework.
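
    The particle swarm optimizer used above to tune the wavelet-neuron parameters is a standard component; a generic global-best PSO sketch on a toy objective is shown below. The swarm size, inertia, and acceleration coefficients are placeholder choices, and the objective is not the network training criterion from the brief.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Plain particle swarm optimization of f: R^dim -> R (global-best topology)."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

sphere = lambda p: float(np.sum(p ** 2))                 # toy objective function
best, best_val = pso_minimize(sphere, dim=4)
print(best, best_val)
```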

  11. Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark

    SciTech Connect

    Williams, Samuel; Kalamkar, Dhiraj; Singh, Amik; Deshpande, Anand M.; Straalen, Brian Van; Smelyanskiy, Mikhail; Almgren, Ann; Dubey, Pradeep; Shalf, John; Oliker, Leonid

    2012-12-01

    Multigrid methods are widely used to accelerate the convergence of iterative solvers for linear systems used in a number of different application areas. In this report, we describe miniGMG, our compact geometric multigrid benchmark designed to proxy the multigrid solves found in AMR applications. We explore optimization techniques for geometric multigrid on existing and emerging multicore systems including the Opteron-based Cray XE6, Intel Sandy Bridge and Nehalem-based Infiniband clusters, as well as manycore-based architectures including NVIDIA's Fermi and Kepler GPUs and Intel's Knights Corner (KNC) co-processor. This report examines a variety of novel techniques including communication-aggregation, threaded wavefront-based DRAM communication-avoiding, dynamic threading decisions, SIMDization, and fusion of operators. We quantify performance through each phase of the V-cycle for both single-node and distributed-memory experiments and provide detailed analysis for each class of optimization. Results show our optimizations yield significant speedups across a variety of subdomain sizes while simultaneously demonstrating the potential of multi- and manycore processors to dramatically accelerate single-node performance. However, our analysis also indicates that improvements in networks and communication will be essential to reap the potential of manycore processors in large-scale multigrid calculations.
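
    As a minimal stand-in for the geometric multigrid V-cycle that miniGMG proxies (in 3-D, with distributed subdomains and the optimizations listed above), the sketch below solves a 1-D Poisson problem with a recursive V-cycle using weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation. The grid size, smoother, and cycle counts are illustrative choices, not the benchmark's configuration.

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free 1-D Poisson operator (-u'') with zero Dirichlet boundaries."""
    up = np.pad(u, 1)                                  # ghost zeros at the boundaries
    return (2.0 * up[1:-1] - up[:-2] - up[2:]) / h**2

def smooth(u, f, h, omega=2.0 / 3.0, sweeps=2):
    for _ in range(sweeps):                            # weighted-Jacobi smoother
        u = u + omega * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def restrict(r):
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])     # full weighting

def prolong(e, n_fine):
    ef = np.zeros(n_fine)
    ef[1::2] = e                                       # coarse nodes sit at odd fine nodes
    ep = np.pad(e, 1)                                  # zero boundary values
    ef[0::2] = 0.5 * (ep[:-1] + ep[1:])                # linear interpolation in between
    return ef

def v_cycle(u, f, h):
    n = u.size
    if n <= 3:                                         # coarsest level: solve directly
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        return np.linalg.solve(A, f)
    u = smooth(u, f, h)                                # pre-smoothing
    rc = restrict(f - apply_A(u, h))                   # restrict the residual
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)       # coarse-grid correction
    u = u + prolong(ec, n)
    return smooth(u, f, h)                             # post-smoothing

n = 2**7 - 1
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)                       # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))           # error at the O(h^2) discretization level
```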

  12. Optimization of ion exchange sigmoidal gradients using hybrid models: Implementation of quality by design in analytical method development.

    PubMed

    Joshi, Varsha S; Kumar, Vijesh; Rathore, Anurag S

    2017-03-31

    Thorough product understanding is one of the basic tenets for successful implementation of Quality by Design (QbD). Complexity encountered in analytical characterization of biotech therapeutics such as monoclonal antibodies (mAbs) requires novel, simpler, and generic approaches towards product characterization. This paper presents a methodology for implementation of QbD for analytical method development. Optimization of an analytical cation exchange high performance liquid chromatography (CEX-HPLC) method utilizing a sigmoidal gradient has been performed using a hybrid model that combines mechanistic simulation with design of experiments (DOE) based studies. Since sigmoidal gradients are much more complex than the traditional linear gradients and have a large number of input parameters (five) for optimization, the number of DOE experiments required for a full factorial design to estimate all the main effects as well as the interactions would be too large (243). To address this problem, a mechanistic model was used to simulate the analytical separation for the DOE and then the results were used to build an empirical model. The mechanistic model used in this work is a more versatile general rate model in combination with modified Langmuir binding kinetics. The modified Langmuir model is capable of modelling the impact of nonlinear changes in the concentration of the salt modifier. Further, to get the input and output profiles of mAb and salts/buffers, the HPLC system, consisting of the mixer, detectors, and tubing, was modelled as a sequence of dispersed plug flow reactors and continuous stirred tank reactors (CSTR). The experimental work was limited to calibration of the HPLC system and finding the model parameters through three linear gradients. To simplify the optimization process, only three peaks in the centre of the profile (main product and the adjacent acidic and basic variants) were chosen to determine the final operating condition. The regression model made from the DoE data

  13. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter's coefficients is also proposed, where we focused on the implementation and the enhancement of the filter's parameters by taking the neighborhood of the current voxel into account more accurately. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature.
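
    A brute-force, single-threaded non-local means filter for a small 2-D image is sketched below to make the baseline algorithm concrete; it does not reflect the paper's hybrid parallel implementation or shared-memory optimizations, and the patch size, search window, and filtering parameter h are placeholder choices.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Brute-force non-local means for a small 2-D image with values in [0, 1].
    Each pixel is replaced by a weighted average of pixels whose surrounding
    patches look similar; weights decay exponentially with patch distance."""
    pr, sr = patch // 2, search // 2
    padded = np.pad(img, pr + sr, mode="reflect")
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ic, jc = i + pr + sr, j + pr + sr                  # center in padded coords
            ref = padded[ic - pr:ic + pr + 1, jc - pr:jc + pr + 1]
            weights, values = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    cand = padded[ic + di - pr:ic + di + pr + 1,
                                  jc + dj - pr:jc + dj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)            # patch similarity
                    weights.append(np.exp(-d2 / h**2))
                    values.append(padded[ic + di, jc + dj])
            out[i, j] = np.dot(weights, values) / np.sum(weights)
    return out

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0            # simple square image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print(np.abs(noisy - clean).mean(), np.abs(nlm_denoise(noisy) - clean).mean())
```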

  14. Optimized OpenCL implementation of the Elastodynamic Finite Integration Technique for viscoelastic media

    NASA Astrophysics Data System (ADS)

    Molero-Armenta, M.; Iturrarán-Viveros, Ursula; Aparicio, S.; Hernández, M. G.

    2014-10-01

    Development of parallel codes that are both scalable and portable across different processor architectures is a challenging task. To overcome this limitation, we investigate the acceleration of the Elastodynamic Finite Integration Technique (EFIT) to model 2-D wave propagation in viscoelastic media by using modern parallel computing devices (PCDs), such as multi-core CPUs (central processing units) and GPUs (graphics processing units). For that purpose we choose the industry open standard Open Computing Language (OpenCL) and an open-source toolkit called PyOpenCL. The implementation is platform independent and can be used on AMD or NVIDIA GPUs as well as classical multi-core CPUs. The code is based on the Kelvin-Voigt mechanical model, which has the advantage of not requiring additional field variables. OpenCL performance can, in principle, be improved by eliminating global memory access latency through the use of local memory. Our main contribution is the implementation of local memory and an analysis of the performance of local versus global memory using eight different computing devices (including Kepler, one of the fastest and most efficient high performance computing technologies) with various operating systems. The full implementation of the code is included.

  15. Optimal FPGA implementation of CL multiwavelets architecture for signal denoising application

    NASA Astrophysics Data System (ADS)

    Mohan Kumar, B.; Vidhya Lavanya, R.; Sumesh, E. P.

    2013-03-01

    The wavelet transform is considered one of the most efficient transforms of this decade for real-time signal processing. Due to implementation constraints, scalar wavelets cannot simultaneously possess properties such as compact support, regularity, orthogonality, and symmetry, which are desirable qualities for providing a good signal-to-noise ratio (SNR) in signal denoising. This has led to a new class of wavelets called 'multiwavelets', which possess more than one scaling and wavelet filter. The architecture implementation of multiwavelets is an emerging area of research. In real time, the signals are in scalar form, which demands that the processing architecture be scalar. But the conventional Donovan-Geronimo-Hardin-Massopust (DGHM) and Chui-Lian (CL) multiwavelets are vectored and are also unbalanced. In this article, the vectored multiwavelet transforms are converted into scalar form and the architecture is implemented in an FPGA (Field Programmable Gate Array) for a signal denoising application. The architecture is compared with the DGHM multiwavelet architecture in terms of several objective and performance measures. The CL multiwavelet architecture is further optimised for best performance by using DSP48Es. The results show that the CL multiwavelet architecture is better suited for the signal denoising application.

  16. Expected treatment dose construction and adaptive inverse planning optimization: Implementation for offline head and neck cancer adaptive radiotherapy

    SciTech Connect

    Yan Di; Liang Jian

    2013-02-15

    Adaptive treatment modification can be implemented by including the expected treatment dose in the adaptive inverse planning optimization. The retrospective evaluation results demonstrate that, using weekly adaptive inverse planning optimization, the dose distribution of head and neck (H&N) cancer treatment can be substantially improved.

  17. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratios in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters of length N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and activated when the first
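
    To make the baseline that TD-MRFIR is contrasted with concrete, the sketch below implements conventional polyphase decimation (only the retained outputs are ever computed) and checks it against the naive filter-then-downsample result. The filter taps and decimation factor are arbitrary illustrative choices, and the thread-decomposition scheme itself is not reproduced here.

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Decimate x by M with FIR h using polyphase branches, so that only the
    retained output samples are computed."""
    n_out = (len(x) + len(h) - 1 + M - 1) // M          # ceil(full-conv length / M)
    y = np.zeros(n_out)
    for m in range(M):
        h_m = h[m::M]                                   # m-th polyphase component of h
        if m == 0:
            x_m = x[0::M]
        else:
            x_m = np.concatenate(([0.0], x[M - m::M]))  # branch input x[l*M - m]
        b = np.convolve(h_m, x_m)
        y[:len(b)] += b[:n_out]                         # accumulate branch outputs
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
h = np.hanning(24); h /= h.sum()                        # placeholder low-pass taps
M = 4
y_naive = np.convolve(x, h)[::M]                        # filter everything, then discard
y_poly = polyphase_decimate(x, h, M)[:len(y_naive)]
print(np.allclose(y_naive, y_poly))                     # identical outputs, less work
```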

  18. The RHNumtS compilation: Features and bioinformatics approaches to locate and quantify Human NumtS

    PubMed Central

    Lascaro, Daniela; Castellana, Stefano; Gasparre, Giuseppe; Romeo, Giovanni; Saccone, Cecilia; Attimonelli, Marcella

    2008-01-01

    Background To a greater or lesser extent, eukaryotic nuclear genomes contain fragments of their mitochondrial genome counterpart, deriving from the random insertion of damaged mtDNA fragments. NumtS (Nuclear mt Sequences) are not equally abundant in all species, and are redundant and polymorphic in terms of copy number. In population and clinical genetics, it is important to have a complete overview of NumtS quantity and location. Searching PubMed for NumtS or Mitochondrial pseudo-genes yields hundreds of papers reporting Human NumtS compilations produced by in silico or wet-lab approaches. A comparison of published compilations clearly shows significant discrepancies among data, due both to unwise application of Bioinformatics methods and to a not yet correctly assembled nuclear genome. To optimize quantification and location of NumtS, we produced a consensus compilation of Human NumtS by applying various bioinformatics approaches. Results Location and quantification of NumtS may be achieved by applying database similarity searching methods: we have applied various methods such as Blastn, MegaBlast and BLAT, changing both parameters and database; the results were compared, further analysed and checked against the already published compilations, thus producing the Reference Human Numt Sequences (RHNumtS) compilation. The resulting NumtS total 190. Conclusion The RHNumtS compilation represents a highly reliable reference basis, which may allow designing a lab protocol to test the actual existence of each NumtS. Here we report preliminary results based on PCR amplification and sequencing on 41 NumtS selected from RHNumtS among those with lower score. In parallel, we are currently designing the RHNumtS database structure for implementation in the HmtDB resource. In the future, the same database will host NumtS compilations from other organisms, but these will be generated only when the nuclear genome of a specific organism has reached a high-quality level of assembly

  19. Design and implementation of a delay-optimized universal programmable routing circuit for FPGAs

    NASA Astrophysics Data System (ADS)

    Fang, Wu; Huowen, Zhang; Jinmei, Lai; Yuan, Wang; Liguang, Chen; Lei, Duan; Jiarong, Tong

    2009-06-01

    This paper presents a universal field programmable gate array (FPGA) programmable routing circuit, focusing primarily on a delay optimization. Under the precondition of the routing resource's flexibility and routability, the number of programmable interconnect points (PIP) is reduced, and a multiplexer (MUX) plus a BUFFER structure is adopted as the programmable switch. Also, the method of offset lines and the method of complementary hanged end-lines are applied to the TILE routing circuit and the I/O routing circuit, respectively. All of the above features ensure that the whole FPGA chip is highly repeatable, and the signal delay is uniform and predictable over the total chip. Meanwhile, the BUFFER driver is optimized to decrease the signal delay by up to 5%. The proposed routing circuit is applied to the Fudan programmable device (FDP) FPGA, which has been taped out with an SMIC 0.18-μm logic 1P6M process. The test result shows that the programmable routing resource works correctly, and the signal delay over the chip is highly uniform and predictable.

  20. Toward a Fundamental Theory of Optimal Feature Selection: Part II-Implementation and Computational Complexity.

    PubMed

    Morgera, S D

    1987-01-01

    Certain algorithms and their computational complexity are examined for use in a VLSI implementation of the real-time pattern classifier described in Part I of this work. The most computationally intensive processing is found in the classifier training mode wherein subsets of the largest and smallest eigenvalues and associated eigenvectors of the input data covariance pair must be computed. It is shown that if the matrix of interest is centrosymmetric and the method for eigensystem decomposition is operator-based, the problem architecture assumes a parallel form. Such a matrix structure is found in a wide variety of pattern recognition and speech and signal processing applications. Each of the parallel channels requires only two specialized matrix-arithmetic modules. These modules may be implemented as linear arrays of processing elements having at most O(N) elements where N is the input data vector dimension. The computations may be done in O(N) time steps. This compares favorably to O(N³) operations for a conventional, or general, rotation-based eigensystem solver and even the O(2N²) operations using an approach incorporating the fast Levinson algorithm for a matrix of Toeplitz structure since the underlying matrix in this work does not possess a Toeplitz structure. Some examples are provided on the convergence of a conventional iterative approach and a novel two-stage iterative method for eigensystem decomposition.
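
    The centrosymmetric structure mentioned above is what lets the eigenproblem split into two parallel half-size channels. A numerical sketch of that splitting for a symmetric centrosymmetric matrix is given below; it illustrates only the matrix-level decomposition, not the operator-based linear-array architecture of the paper.

```python
import numpy as np

def centrosymmetric_eigvals(M):
    """Eigenvalues of a symmetric centrosymmetric matrix M (even order) via two
    half-size symmetric problems, mirroring the 'two parallel channels' idea:
    with M = [[A, B], [J B J, J A J]] and J the flip (exchange) matrix, an
    orthogonal similarity block-diagonalizes M into A + BJ and A - BJ."""
    m = M.shape[0] // 2
    A, B = M[:m, :m], M[:m, m:]
    J = np.fliplr(np.eye(m))
    ev_plus = np.linalg.eigvalsh(A + B @ J)     # channel 1
    ev_minus = np.linalg.eigvalsh(A - B @ J)    # channel 2
    return np.sort(np.concatenate([ev_plus, ev_minus]))

# Build a random symmetric centrosymmetric test matrix: M = (S + J S J) / 2.
rng = np.random.default_rng(0)
n = 8
S = rng.standard_normal((n, n)); S = (S + S.T) / 2
Jn = np.fliplr(np.eye(n))
M = (S + Jn @ S @ Jn) / 2
print(np.allclose(centrosymmetric_eigvals(M), np.linalg.eigvalsh(M)))
```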

  1. Direct Methods for Predicting Movement Biomechanics Based Upon Optimal Control Theory with Implementation in OpenSim.

    PubMed

    Porsa, Sina; Lin, Yi-Chung; Pandy, Marcus G

    2016-08-01

    The aim of this study was to compare the computational performances of two direct methods for solving large-scale, nonlinear, optimal control problems in human movement. Direct shooting and direct collocation were implemented on an 8-segment, 48-muscle model of the body (24 muscles on each side) to compute the optimal control solution for maximum-height jumping. Both algorithms were executed on a freely-available musculoskeletal modeling platform called OpenSim. Direct collocation converged to essentially the same optimal solution up to 249 times faster than direct shooting when the same initial guess was assumed (3.4 h of CPU time for direct collocation vs. 35.3 days for direct shooting). The model predictions were in good agreement with the time histories of joint angles, ground reaction forces and muscle activation patterns measured for subjects jumping to their maximum achievable heights. Both methods converged to essentially the same solution when started from the same initial guess, but computation time was sensitive to the initial guess assumed. Direct collocation demonstrates exceptional computational performance and is well suited to performing predictive simulations of movement using large-scale musculoskeletal models.

  2. Optimization and Implementation of Scaling-Free CORDIC-Based Direct Digital Frequency Synthesizer for Body Care Area Network Systems

    PubMed Central

    Juang, Ying-Shen; Ko, Lu-Ting; Chen, Jwu-E.; Sung, Tze-Yun; Hsin, Hsi-Chin

    2012-01-01

    Coordinate rotation digital computer (CORDIC) is an efficient algorithm for the computation of trigonometric functions. Scaling-free CORDIC is one of the best-known CORDIC implementations, with advantages in speed and area. In this paper, a novel direct digital frequency synthesizer (DDFS) based on scaling-free CORDIC is presented. The proposed multiplier-less architecture with small ROM and a pipelined data path has the advantages of high data rate, high precision, high performance, and low hardware cost. The design procedure, with performance and hardware analysis for optimization, is also given. It is verified by Matlab simulations and then implemented on a field programmable gate array (FPGA) in Verilog. The spurious-free dynamic range (SFDR) is over 86.85 dBc, and the signal-to-noise ratio (SNR) is more than 81.12 dB. The scaling-free CORDIC-based architecture is suitable for VLSI implementation of DDFS applications in terms of hardware cost, power consumption, SNR, and SFDR. The proposed DDFS is very suitable for medical instruments and body care area network systems. PMID:23251230

  3. Optimization and implementation of scaling-free CORDIC-based direct digital frequency synthesizer for body care area network systems.

    PubMed

    Juang, Ying-Shen; Ko, Lu-Ting; Chen, Jwu-E; Sung, Tze-Yun; Hsin, Hsi-Chin

    2012-01-01

    Coordinate rotation digital computer (CORDIC) is an efficient algorithm for the computation of trigonometric functions. Scaling-free CORDIC is one of the best-known CORDIC implementations, with advantages in speed and area. In this paper, a novel direct digital frequency synthesizer (DDFS) based on scaling-free CORDIC is presented. The proposed multiplier-less architecture with small ROM and a pipelined data path has the advantages of high data rate, high precision, high performance, and low hardware cost. The design procedure, with performance and hardware analysis for optimization, is also given. It is verified by Matlab simulations and then implemented on a field programmable gate array (FPGA) in Verilog. The spurious-free dynamic range (SFDR) is over 86.85 dBc, and the signal-to-noise ratio (SNR) is more than 81.12 dB. The scaling-free CORDIC-based architecture is suitable for VLSI implementation of DDFS applications in terms of hardware cost, power consumption, SNR, and SFDR. The proposed DDFS is very suitable for medical instruments and body care area network systems.
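
    For reference, the sketch below is the classic rotation-mode CORDIC with explicit gain compensation, i.e., the textbook variant that scaling-free CORDIC improves upon, not the scaling-free architecture of the paper; the iteration count is an illustrative choice.

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Rotation-mode CORDIC: rotate (1/K, 0) toward angle theta using only
    shift-and-add style updates; returns (cos(theta), sin(theta)) for |theta| <= pi/2."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))    # pre-compensate the CORDIC gain
    x, y, z = K, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                    # steer the remaining angle toward 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y

c, s = cordic_sin_cos(0.6)
print(c - math.cos(0.6), s - math.sin(0.6))            # residuals around 1e-9 or smaller
```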

  4. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  5. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  6. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  7. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  8. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  9. Optimized hierarchical equations of motion theory for Drude dissipation and efficient implementation to nonlinear spectroscopies.

    PubMed

    Ding, Jin-Jin; Xu, Jian; Hu, Jie; Xu, Rui-Xue; Yan, YiJing

    2011-10-28

    Hierarchical equations of motion theory for Drude dissipation is optimized, with a convenient convergence criterion proposed in advance of numerical propagations. The theoretical construction is on the basis of a Padé spectrum decomposition that has been qualified to be the best sum-over-poles scheme for quantum distribution function. The resulting hierarchical dynamics under the a priori convergence criterion are exemplified with a benchmark spin-boson system, and also the transient absorption and related coherent two-dimensional spectroscopy of a model exciton dimer system. We combine the present theory with several advanced techniques such as the block hierarchical dynamics in mixed Heisenberg-Schrödinger picture and the on-the-fly filtering algorithm for the efficient evaluation of third-order optical response functions.

  10. Optimization of the Coupled Cluster Implementation in NWChem on Petascale Parallel Architectures

    SciTech Connect

    Anisimov, Victor; Bauer, Gregory H.; Chadalavada, Kalyana; Olson, Ryan M.; Glenski, Joseph W.; Kramer, William T.; Apra, Edoardo; Kowalski, Karol

    2014-09-04

    The coupled cluster singles and doubles (CCSD) algorithm has been optimized in the NWChem software package. This modification alleviated the communication bottleneck and provided a 2- to 5-fold speedup in the CCSD iteration time, depending on the problem size and available memory. Sustained 0.60 petaflop/sec performance on a CCSD(T) calculation has been obtained on NCSA Blue Waters. This number included all stages of the calculation from initialization to termination: the iterative computation of single and double excitations and the perturbative accounting for triple excitations. In the perturbative-triples section alone, the computation maintained a 1.18 petaflop/sec performance level. CCSD computations have been performed on guanine-cytosine deoxydinucleotide monophosphate (GC-dDMP) to probe the conformational energy difference of a DNA single strand in the A- and B-conformations. The computation revealed a significant discrepancy between CCSD and classical force fields in the prediction of the relative energy of the A- and B-conformations of GC-dDMP.

  11. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large supercomputer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and reconfigurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGAs) and Digital Signal Processors (DSPs). It has been shown that this approach can provide superior, inexpensive performance for a chosen application on the ground station or on board a spacecraft.

  12. Optimized FPGA Implementation of the Thyroid Hormone Secretion Mechanism Using CAD Tools.

    PubMed

    Alghazo, Jaafar M

    2017-02-01

    The goal of this paper is to implement the secretion mechanism of the thyroid hormone (TH), based on bio-mathematical differential equations (DEs), on an FPGA chip. A hardware description language (HDL) is used to develop a behavioral model of the mechanism derived from the DEs. The thyroid hormone secretion mechanism is simulated with the interaction of the related stimulating and inhibiting hormones. Synthesis of the simulation is done with the aid of CAD tools and downloaded onto a Field Programmable Gate Array (FPGA) chip. In simulation, the chip output shows behavior identical to that of the designed algorithm. It is concluded that the chip mimics the thyroid hormone secretion mechanism. The chip, operating in real time, is a computer-independent, stand-alone system.

  13. The Columbia-Presbyterian Medical Center decision-support system as a model for implementing the Arden Syntax.

    PubMed

    Hripcsak, G; Cimino, J J; Johnson, S B; Clayton, P D

    1991-01-01

    Columbia-Presbyterian Medical Center is implementing a decision-support system based on the Arden Syntax for Medical Logic Modules (MLM's). The system uses a compiler-interpreter pair. MLM's are first compiled into pseudo-codes, which are instructions for a virtual machine. The MLM's are then executed using an interpreter that emulates the virtual machine. This design has resulted in increased portability, easier debugging and verification, and more compact compiled MLM's. The time spent interpreting the MLM pseudo-codes has been found to be insignificant compared to database accesses. The compiler, which is written using the tools "lex" and "yacc," optimizes MLM's by minimizing the number of database accesses. The interpreter emulates a stack-oriented machine. A phased implementation of the syntax was used to speed the development of the system.
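
    The compile-to-pseudo-code plus stack-machine-interpreter split described above can be illustrated with a toy arithmetic-expression example; the instruction set and expression syntax below are invented for illustration and are not the Arden Syntax or the CPMC pseudo-code format.

```python
# A toy compiler/interpreter pair in the spirit of pseudo-code plus virtual machine:
# infix expressions are compiled to postfix instructions, then run on a stack VM.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def compile_expr(tokens):
    """Shunting-yard compilation of a token list into stack-machine instructions."""
    output, opstack = [], []
    for tok in tokens:
        if tok in OPS:
            while opstack and PREC[opstack[-1]] >= PREC[tok]:
                output.append(("OP", opstack.pop()))
            opstack.append(tok)
        else:
            output.append(("PUSH", float(tok)))
    while opstack:
        output.append(("OP", opstack.pop()))
    return output

def run(program):
    """Interpret the compiled pseudo-code on a stack-oriented virtual machine."""
    stack = []
    for kind, arg in program:
        if kind == "PUSH":
            stack.append(arg)
        else:                       # kind == "OP"
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[arg](a, b))
    return stack.pop()

program = compile_expr("3 + 4 * 2 - 6 / 3".split())
print(program, run(program))        # expect 3 + 8 - 2 = 9.0
```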

  14. The Columbia-Presbyterian Medical Center decision-support system as a model for implementing the Arden Syntax.

    PubMed Central

    Hripcsak, G.; Cimino, J. J.; Johnson, S. B.; Clayton, P. D.

    1991-01-01

    Columbia-Presbyterian Medical Center is implementing a decision-support system based on the Arden Syntax for Medical Logic Modules (MLM's). The system uses a compiler-interpreter pair. MLM's are first compiled into pseudo-codes, which are instructions for a virtual machine. The MLM's are then executed using an interpreter that emulates the virtual machine. This design has resulted in increased portability, easier debugging and verification, and more compact compiled MLM's. The time spent interpreting the MLM pseudo-codes has been found to be insignificant compared to database accesses. The compiler, which is written using the tools "lex" and "yacc," optimizes MLM's by minimizing the number of database accesses. The interpreter emulates a stack-oriented machine. A phased implementation of the syntax was used to speed the development of the system. PMID:1807598

  15. Evaluation of HDPE and LDPE degradation by fungus, implemented by statistical optimization

    PubMed Central

    Ojha, Nupur; Pradhan, Neha; Singh, Surjit; Barla, Anil; Shrivastava, Anamika; Khatua, Pradip; Rai, Vivek; Bose, Sutapa

    2017-01-01

    Plastic in any form is a nuisance to the well-being of the environment. The ‘pestilence’ caused by it is mainly due to its non-degradable nature. With the industrial boom and the population explosion, the usage of plastic products has increased. A steady increase has been observed in the use of plastic products, and this has accelerated the pollution. Several attempts have been made to curb the problem at large by resorting to both chemical and biological methods. Chemical methods have only resulted in furthering the pollution by releasing toxic gases into the atmosphere, whereas biological methods have been found to be eco-friendly but not cost-effective. This paves the way for the current study, where fungal isolates have been used to degrade polyethylene sheets (HDPE, LDPE). Two potential fungal strains, namely Penicillium oxalicum NS4 (KU559906) and Penicillium chrysogenum NS10 (KU559907), were isolated and identified to have plastic-degrading abilities. Further, the growth medium for the strains was optimized with the help of RSM. The plastic sheets were subjected to treatment with microbial culture for 90 days. The extent of degradation was analyzed by FE-SEM, AFM, and FTIR. Morphological changes in the plastic sheets were determined. PMID:28051105

  16. Evaluation of HDPE and LDPE degradation by fungus, implemented by statistical optimization

    NASA Astrophysics Data System (ADS)

    Ojha, Nupur; Pradhan, Neha; Singh, Surjit; Barla, Anil; Shrivastava, Anamika; Khatua, Pradip; Rai, Vivek; Bose, Sutapa

    2017-01-01

    Plastic in any form is a nuisance to the well-being of the environment. The ‘pestilence’ caused by it is mainly due to its non-degradable nature. With the industrial boom and the population explosion, the usage of plastic products has increased. A steady increase has been observed in the use of plastic products, and this has accelerated the pollution. Several attempts have been made to curb the problem at large by resorting to both chemical and biological methods. Chemical methods have only resulted in furthering the pollution by releasing toxic gases into the atmosphere, whereas biological methods have been found to be eco-friendly but not cost-effective. This paves the way for the current study, where fungal isolates have been used to degrade polyethylene sheets (HDPE, LDPE). Two potential fungal strains, namely Penicillium oxalicum NS4 (KU559906) and Penicillium chrysogenum NS10 (KU559907), were isolated and identified to have plastic-degrading abilities. Further, the growth medium for the strains was optimized with the help of RSM. The plastic sheets were subjected to treatment with microbial culture for 90 days. The extent of degradation was analyzed by FE-SEM, AFM, and FTIR. Morphological changes in the plastic sheets were determined.

  17. Development and implementation of a coupled computational muscle force optimization bone shape adaptation modeling method.

    PubMed

    Florio, C S

    2015-04-01

    Improved methods to analyze and compare the muscle-based influences that drive bone strength adaptation can aid in the understanding of the wide array of experimental observations about the effectiveness of various mechanical countermeasures to losses in bone strength that result from age, disuse, and reduced gravity environments. The coupling of gradient-based and gradientless numerical optimization routines with finite element methods in this work results in a modeling technique that determines the individual magnitudes of the muscle forces acting in a multisegment musculoskeletal system and predicts the improvement in the stress state uniformity and, therefore, strength, of a targeted bone through simulated local cortical material accretion and resorption. With a performance-based stopping criterion, no experimentally based or system-based parameters, and designed to include the direct and indirect effects of muscles attached to the targeted bone as well as to its neighbors, shape and strength alterations resulting from a wide range of boundary conditions can be consistently quantified. As demonstrated in a representative parametric study, the developed technique effectively provides a clearer foundation for the study of the relationships between muscle forces and the induced changes in bone strength. Its use can lead to the better control of such adaptive phenomena.

  18. The paradigm compiler: Mapping a functional language for the connection machine

    NASA Technical Reports Server (NTRS)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.

  19. Optimizing the business and IT relationship--a structured approach to implementing a business relationship management framework.

    PubMed

    Mohrmann, Gregg; Kraatz, Drew; Sessa, Bonnie

    2009-01-01

    The relationship between the business and the IT organization is an area where many healthcare providers experience challenges. IT is often perceived as a service provider rather than a partner in delivering quality patient care. Organizations are finding that building a stronger partnership between business and IT leads to increased understanding and appreciation of the technology, process changes and services that can enhance the delivery of care and maximize organizational success. This article will provide a detailed description of valuable techniques for optimizing the healthcare organization's business and IT relationship; considerations on how to implement those techniques; and a description of the key benefits an organization should realize. Using a case study of a healthcare provider that leveraged these techniques, the article will show how an organization can promote this paradigm shift and create a tighter integration between the business and IT.

  20. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging. This is because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions that are used in kriging have global support. Our implementations combine, utilize, and enhance a number of different
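
    To make concrete why exact ordinary kriging is expensive (a dense (n+1) x (n+1) solve per query point), a direct small-n implementation with an exponential covariance model is sketched below. The covariance function, its parameters, and the test data are placeholder choices, and no covariance tapering or local approximation is applied.

```python
import numpy as np

def ordinary_krige(points, values, query, sill=1.0, corr_len=0.3):
    """Ordinary kriging estimate at `query` from scattered (points, values),
    using an exponential covariance C(d) = sill * exp(-d / corr_len).
    Solves the full (n+1)x(n+1) system [[C, 1], [1^T, 0]] [w; mu] = [c0; 1]."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    C = sill * np.exp(-d / corr_len)
    c0 = sill * np.exp(-np.linalg.norm(points - query, axis=-1) / corr_len)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = C
    K[n, :n] = K[:n, n] = 1.0                 # unbiasedness: weights sum to 1
    rhs = np.append(c0, 1.0)
    w = np.linalg.solve(K, rhs)[:n]           # the dense solve done per query point
    return float(w @ values)

rng = np.random.default_rng(0)
pts = rng.random((50, 2))                     # scattered sample locations in the unit square
vals = np.sin(2 * np.pi * pts[:, 0]) + np.cos(2 * np.pi * pts[:, 1])
q = np.array([0.4, 0.6])
print(ordinary_krige(pts, vals, q), np.sin(2 * np.pi * 0.4) + np.cos(2 * np.pi * 0.6))
```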

  1. Optimization of the Implementation of Managed Aquifer Recharge - Effects of Aquifer Heterogeneity

    NASA Astrophysics Data System (ADS)

    Maliva, Robert; Missimer, Thomas; Kneppers, Angeline

    2010-05-01

    more successful MAR implementation as a tool for improved water resources management.

  2. Automating Visualization Service Generation with the WATT Compiler

    NASA Astrophysics Data System (ADS)

    Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.

    2007-12-01

    As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (ruby, perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the Soap 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials. Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web

  3. Regulatory and technical reports compilation for 1980

    SciTech Connect

    Oliu, W.E.; McKenzi, L.

    1981-04-01

    This compilation lists formal regulatory and technical reports and conference proceedings issued in 1980 by the US Nuclear Regulatory Commission. The compilation is divided into four major sections. The first major section consists of a sequential listing of all NRC reports in report-number order. The second major section of this compilation consists of a key-word index to report titles. The third major section contains an alphabetically arranged listing of contractor report numbers cross-referenced to their corresponding NRC report numbers. Finally, the fourth section is an errata supplement.

  4. Compiling Planning into Scheduling: A Sketch

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Crawford, James M.; Smith, David E.

    2004-01-01

    Although there are many approaches for compiling a planning problem into a static CSP or a scheduling problem, current approaches essentially preserve the structure of the planning problem in the encoding. In this paper, we present a fundamentally different encoding that more accurately resembles a scheduling problem. We sketch the approach and argue, based on an example, that it is possible to automate the generation of such an encoding for problems with certain properties and thus produce a compiler of planning into scheduling problems. Furthermore, we argue that many NASA problems exhibit these properties and that such a compiler would provide benefits to both theory and practice.

  5. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    PubMed

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.
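
    Among the optimization protocols mentioned above is the real-time realization of probabilistic models in simulated annealing; a generic sketch of that probabilistic search on a one-dimensional multimodal objective is shown below. The cooling schedule, step size, and objective are placeholders, not the platform's calibration protocol.

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=5000, step=0.5):
    """Minimize f by accepting worse moves with probability exp(-delta / T)."""
    random.seed(0)
    x, fx, T = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling                      # geometric cooling schedule
    return best_x, best_f

# Multimodal test objective with its global minimum near x = -0.3.
bumpy = lambda x: x * x + 3.0 * math.sin(5.0 * x)
print(simulated_annealing(bumpy, x0=4.0))
```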

  6. Analysis, optimization, and implementation of a hybrid DS/FFH spread-spectrum technique for smart grid communications

    SciTech Connect

    Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; Kuruganti, Teja; Smith, Stephen F.; Djouadi, Seddik M.

    2015-03-12

    In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error probability of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex, and the problem is solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.
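
    As a small self-check of the kind of bit-error analysis described above (though for plain direct-sequence BPSK over AWGN only, without frequency hopping, jamming, or fading), the sketch below simulates spreading, despreading, and detection and compares the measured BER with the theoretical BPSK curve. The spreading factor and bit count are illustrative choices, not the DS/FFH system parameters.

```python
import numpy as np
from math import erfc, sqrt

def dsss_ber(ebn0_db, n_bits=20000, spread=8, seed=0):
    """Simulated BER of BPSK with direct-sequence spreading over AWGN.
    The bit energy Eb = 1 is split evenly over `spread` chips; after
    despreading, the detection SNR (and hence BER) matches unspread BPSK."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    pn = 2 * rng.integers(0, 2, spread) - 1                 # per-bit PN chip sequence
    chips = (2 * bits - 1)[:, None] * pn / np.sqrt(spread)  # chip energy Eb / spread
    ebn0 = 10 ** (ebn0_db / 10)
    noise = rng.standard_normal(chips.shape) * np.sqrt(1.0 / (2 * ebn0))
    rx = chips + noise
    decisions = (rx * pn).sum(axis=1) > 0                   # despread and integrate
    return np.mean(decisions != bits.astype(bool))

for snr_db in (0, 2, 4, 6):
    theory = 0.5 * erfc(sqrt(10 ** (snr_db / 10)))          # BPSK: Q(sqrt(2 Eb/N0))
    print(snr_db, dsss_ber(snr_db), round(theory, 5))
```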

  7. Implementation of spot scanning dose optimization and dose calculation for helium ions in Hyperion

    SciTech Connect

    Fuchs, Hermann; Schreiner, Thomas; Georg, Dietmar

    2015-09-15

    Purpose: Helium ions ({sup 4}He) may supplement current particle beam therapy strategies as they possess advantages in physical dose distribution over protons. To assess potential clinical advantages, a dose calculation module accounting for relative biological effectiveness (RBE) was developed and integrated into the treatment planning system Hyperion. Methods: Current knowledge on RBE of {sup 4}He together with linear energy transfer considerations motivated an empirical depth-dependent “zonal” RBE model. In the plateau region, a RBE of 1.0 was assumed, followed by an increasing RBE up to 2.8 at the Bragg-peak region, which was then kept constant over the fragmentation tail. To account for a variable proton RBE, the same model concept was also applied to protons with a maximum RBE of 1.6. Both RBE models were added to a previously developed pencil beam algorithm for physical dose calculation and included into the treatment planning system Hyperion. The implementation was validated against Monte Carlo simulations within a water phantom using γ-index evaluation. The potential benefits of {sup 4}He based treatment plans were explored in a preliminary treatment planning comparison (against protons) for four treatment sites, i.e., a prostate, a base-of-skull, a pediatric, and a head-and-neck tumor case. Separate treatment plans taking into account physical dose calculation only or using biological modeling were created for protons and {sup 4}He. Results: Comparison of Monte Carlo and Hyperion calculated doses resulted in a γ{sub mean} of 0.3, with 3.4% of the values above 1 and γ{sub 1%} of 1.5 and better. Treatment plan evaluation showed comparable planning target volume coverage for both particles, with slightly increased coverage for {sup 4}He. Organ at risk (OAR) doses were generally reduced using {sup 4}He, some by more than 30%. Improvements of {sup 4}He over protons were more pronounced for treatment plans taking biological effects into account. All

  8. Microscopically-Based Energy Density Functionals for Nuclei Using the Density Matrix Expansion. I: Implementation and Pre-Optimization

    SciTech Connect

    Stoitsov, M. V.; Kortelainen, Erno M; Bogner, S. K.; Duguet, T.; Furnstahl, R. J.; Gebremariam, B.; Schunck, N.

    2010-01-01

    In a recent series of papers, Gebremariam, Bogner, and Duguet derived a microscopically-based nuclear energy density functional by applying the Density Matrix Expansion (DME) to the Hartree-Fock energy obtained from chiral effective field theory (EFT) two- and three-nucleon interactions. Due to the structure of the chiral interactions, each coupling in the DME functional is given as the sum of a coupling constant arising from zero-range contact interactions and a coupling function of the density arising from the finite-range pion exchanges. Since the contact contributions have essentially the same structure as those entering empirical Skyrme functionals, a microscopically guided Skyrme phenomenology has been suggested in which the contact terms in the DME functional are released for optimization to finite-density observables to capture short-range correlation energy contributions from beyond Hartree-Fock. The present paper is the first attempt to assess the ability of the newly suggested DME functional, which has a much richer set of density dependencies than traditional Skyrme functionals, to generate sensible and stable results for nuclear applications. The results of the first proof-of-principle calculations are given, and numerous practical issues related to the implementation of the new functional in existing Skyrme codes are discussed. Using a restricted singular value decomposition (SVD) optimization procedure, it is found that the new DME functional gives numerically stable results and exhibits a small but systematic reduction in {chi}^{2} compared to standard Skyrme functionals, thus justifying its suitability for future global optimizations and large-scale calculations.

  9. Microscopically based energy density functionals for nuclei using the density matrix expansion: Implementation and pre-optimization

    SciTech Connect

    Stoitsov, M.; Kortelainen, M.; Schunck, N.; Bogner, S. K.; Gebremariam, B.; Duguet, T.

    2010-11-15

    In a recent series of articles, Gebremariam, Bogner, and Duguet derived a microscopically based nuclear energy density functional by applying the density matrix expansion (DME) to the Hartree-Fock energy obtained from chiral effective field theory two- and three-nucleon interactions. Owing to the structure of the chiral interactions, each coupling in the DME functional is given as the sum of a coupling constant arising from zero-range contact interactions and a coupling function of the density arising from the finite-range pion exchanges. Because the contact contributions have essentially the same structure as those entering empirical Skyrme functionals, a microscopically guided Skyrme phenomenology has been suggested in which the contact terms in the DME functional are released for optimization to finite-density observables to capture short-range correlation energy contributions from beyond Hartree-Fock. The present article is the first attempt to assess the ability of the newly suggested DME functional, which has a much richer set of density dependencies than traditional Skyrme functionals, to generate sensible and stable results for nuclear applications. The results of the first proof-of-principle calculations are given, and numerous practical issues related to the implementation of the new functional in existing Skyrme codes are discussed. Using a restricted singular value decomposition optimization procedure, it is found that the new DME functional gives numerically stable results and exhibits a small but systematic reduction of our test {chi}{sup 2} function compared to standard Skyrme functionals, thus justifying its suitability for future global optimizations and large-scale calculations.

  10. Analytical and test equipment: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation is presented of innovations in testing and measuring technology for both the laboratory and industry. Topics discussed include spectrometers, radiometers, and descriptions of analytical and test equipment in several areas including thermodynamics, fluid flow, electronics, and materials testing.

  11. A Compilation of Internship Reports - 2012

    SciTech Connect

    Stegman M.; Morris, M.; Blackburn, N.

    2012-08-08

    This compilation documents all research projects undertaken by the 2012 summer Department of Energy - Workforce Development for Teachers and Scientists interns during their internship program at Brookhaven National Laboratory.

  12. Testing methods and techniques: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Mechanical testing techniques, electrical and electronics testing techniques, thermal testing techniques, and optical testing techniques are the subject of the compilation which provides technical information and illustrations of advanced testing devices. Patent information is included where applicable.

  13. Extension of Alvis compiler front-end

    NASA Astrophysics Data System (ADS)

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr

    2015-12-01

    Alvis is a formal modelling language that enables verification of distributed concurrent systems. An Alvis model semantics finds expression in an LTS graph (labelled transition system). Execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameters' types and operations on them. Thanks to the compiler's modular construction, many aspects of compilation of an Alvis model may be modified. Providing new plugins for the Alvis compiler that support languages like Java or C makes it possible to use these languages as part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.

  14. Systems test facilities existing capabilities compilation

    NASA Technical Reports Server (NTRS)

    Weaver, R.

    1981-01-01

    Systems test facilities (STFs) to test total photovoltaic systems and their interfaces are described. The systems development (SD) plan is a compilation of existing and planned STFs, as well as subsystem and key component testing facilities. It is recommended that the existing capabilities compilation be updated annually to provide an assessment of STF activity and to disseminate STF capabilities, status, and availability to the photovoltaics program.

  15. Electronic circuits for communications systems: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The compilation of electronic circuits for communications systems is divided into thirteen basic categories, each representing an area of circuit design and application. The compilation items are moderately complex and, as such, would appeal to the applications engineer. However, the rationale for the selection criteria was tailored so that the circuits would reflect fundamental design principles and applications, with an additional requirement for simplicity whenever possible.

  16. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  17. Compiling high-level languages for configurable computers: applying lessons from heterogeneous processing

    NASA Astrophysics Data System (ADS)

    Weaver, Glen E.; Weems, Charles C.; McKinley, Kathryn S.

    1996-10-01

    Configurable systems offer increased performance by providing hardware that matches the computational structure of a problem. This hardware is currently programmed with CAD tools and explicit library calls. To attain widespread acceptance, configurable computing must become transparently accessible from high-level programming languages, but the changeable nature of the target hardware presents a major challenge to traditional compiler technology. A compiler for a configurable computer should optimize the use of functions embedded in hardware and schedule hardware reconfigurations. The hurdles to be overcome in achieving this capability are similar in some ways to those facing compilation for heterogeneous systems. For example, current traditional compilers have neither an interface to accept new primitive operators, nor a mechanism for applying optimizations to new operators. We are building a compiler for heterogeneous computing, called Scale, which replaces the traditional monolithic compiler architecture with a flexible framework. Scale has three main parts: translation director, compilation library, and a persistent store which holds our intermediate representation as well as other data structures. The translation director exploits the framework's flexibility by using architectural information to build a plan to direct each compilation. The translation library serves as a toolkit for use by the translation director. Our compiler intermediate representation, Score, facilitates the addition of new IR nodes by distinguishing features used in defining nodes from properties on which transformations depend. In this paper, we present an overview of the Scale architecture and its capabilities for dealing with heterogeneity, followed by a discussion of how those capabilities apply to problems in configurable computing. We then address aspects of configurable computing that are likely to require extensions to our approach and propose some extensions.

  18. Compiling software for a hierarchical distributed processing system

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendents; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendent.
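
    The selection step described above (a compiling node keeps its own compiled software and forwards to each next-tier node only the artifacts destined for that node or its descendants) can be sketched as follows; the tree encoding, node names, and artifact strings are hypothetical and only illustrate the routing logic.

    ```python
    def descendants(tree, node):
        """All nodes reachable below `node` in a {parent: [children]} tree."""
        out = set()
        stack = list(tree.get(node, []))
        while stack:
            n = stack.pop()
            out.add(n)
            stack.extend(tree.get(n, []))
        return out

    def distribute(tree, compiling_node, compiled):
        """Keep artifacts for the compiling node; send each next-tier child only the
        artifacts targeted at that child or at one of its descendants."""
        kept = {t: a for t, a in compiled.items() if t == compiling_node}
        plan = {}
        for child in tree.get(compiling_node, []):
            targets = {child} | descendants(tree, child)
            plan[child] = {t: a for t, a in compiled.items() if t in targets}
        return kept, plan

    # Hypothetical usage: the root node compiles for the whole hierarchy.
    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
    compiled = {n: f"binary-for-{n}" for n in ["root", "a", "a1", "a2", "b", "b1"]}
    kept, plan = distribute(tree, "root", compiled)
    print(kept)          # root keeps only its own binary
    print(plan["a"])     # node 'a' receives binaries for a, a1, a2 only
    ```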

  19. Compiled MPI: Cost-Effective Exascale Applications Development

    SciTech Connect

    Bronevetsky, G; Quinlan, D; Lumsdaine, A; Hoefler, T

    2012-04-10

    's lifetime. It includes: (1) New set of source code annotations, inserted either manually or automatically, that will clarify the application's use of MPI to the compiler infrastructure, enabling greater accuracy where needed; (2) A compiler transformation framework that leverages these annotations to transform the original MPI source code to improve its performance and scalability; (3) Novel MPI runtime implementation techniques that will provide a rich set of functionality extensions to be used by applications that have been transformed by our compiler; and (4) A novel compiler analysis that leverages simple user annotations to automatically extract the application's communication structure and synthesize most complex code annotations.

  20. Analysis, optimization, and implementation of a hybrid DS/FFH spread-spectrum technique for smart grid communications

    DOE PAGES

    Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; ...

    2015-03-12

    In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error performance of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.

  1. AC Optimal Power Flow

    SciTech Connect

    2016-10-04

    In this work, we have implemented and developed the simulation software to implement the mathematical model of an AC Optimal Power Flow (OPF) problem. The objective function is to minimize the total cost of generation subject to constraints of node power balance (both real and reactive) and line power flow limits (MW, MVAr, and MVA). We have currently implemented the polar coordinate version of the problem. In the present work, we have used the optimization solver Knitro (proprietary and not included in this software) to solve the problem, and we have kept options for both the native numerical derivative evaluation (working satisfactorily now) and analytical formulas for the derivatives supplied to Knitro (currently in the debugging stage). Since the AC OPF is a highly non-convex optimization problem, we have also kept the option for a multistart solution. All of these can be decided by the user during run-time in an interactive manner. The software has been developed in the C++ programming language, running with the GCC compiler on a Linux machine. We have tested for satisfactory results against Matpower for the IEEE 14-bus system.
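
    For reference, the polar-coordinate AC OPF described above is typically written as the following nonlinear program; the cost functions C_g and the limit symbols are generic placeholders rather than values taken from this software.

    ```latex
    \min_{P_g,\,Q_g,\,V,\,\theta} \; \sum_{g} C_g(P_g)
    \quad\text{subject to}\quad
    \begin{aligned}
    P_{g,i} - P_{d,i} &= V_i \sum_{k} V_k \left( G_{ik}\cos\theta_{ik} + B_{ik}\sin\theta_{ik} \right),\\
    Q_{g,i} - Q_{d,i} &= V_i \sum_{k} V_k \left( G_{ik}\sin\theta_{ik} - B_{ik}\cos\theta_{ik} \right),\\
    |S_{ik}| \le S_{ik}^{\max}, \qquad V_i^{\min} &\le V_i \le V_i^{\max}, \qquad \theta_{ik} = \theta_i - \theta_k .
    \end{aligned}
    ```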

  2. Compilation of data on elementary particles

    SciTech Connect

    Trippe, T.G.

    1984-09-01

    The most widely used data compilation in the field of elementary particle physics is the Review of Particle Properties. The origin, development and current state of this compilation are described with emphasis on the features which have contributed to its success: active involvement of particle physicists; critical evaluation and review of the data; completeness of coverage; regular distribution of reliable summaries including a pocket edition; heavy involvement of expert consultants; and international collaboration. The current state of the Review and new developments such as providing interactive access to the Review's database are described. Problems and solutions related to maintaining a strong and supportive relationship between compilation groups and the researchers who produce and use the data are discussed.

  3. Extension of Alvis compiler front-end

    SciTech Connect

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr E-mail: mszpyrka@agh.edu.pl

    2015-12-31

    Alvis is a formal modelling language that enables verification of distributed concurrent systems. An Alvis model semantics finds expression in an LTS graph (labelled transition system). Execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameters' types and operations on them. Thanks to the compiler's modular construction, many aspects of compilation of an Alvis model may be modified. Providing new plugins for the Alvis compiler that support languages like Java or C makes it possible to use these languages as part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.

  4. A small evaluation suite for Ada compilers

    NASA Technical Reports Server (NTRS)

    Wilke, Randy; Roy, Daniel M.

    1986-01-01

    After completing a small Ada pilot project (OCC simulator) for the Multi Satellite Operations Control Center (MSOCC) at Goddard last year, the use of Ada to develop OCCs was recommended. To help MSOCC transition toward Ada, a suite of about 100 evaluation programs was developed which can be used to assess Ada compilers. These programs compare the overall quality of the compilation system, compare the relative efficiencies of the compilers and the environments in which they work, and compare the size and execution speed of generated machine code. Another goal of the benchmark software was to provide MSOCC system developers with rough timing estimates for the purpose of predicting performance of future systems written in Ada.

  5. Compilation and Environment Optimizations for LogLisp.

    DTIC Science & Technology

    1984-07-01

  6. Machine tools and fixtures: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    As part of NASA's Technology Utilization Program, a compilation was made of technological developments regarding machine tools, jigs, and fixtures that have been produced, modified, or adapted to meet requirements of the aerospace program. The compilation is divided into three sections that include: (1) a variety of machine tool applications that offer easier and more efficient production techniques; (2) methods, techniques, and hardware that aid in the setup, alignment, and control of machines and machine tools to further quality assurance in finished products; and (3) jigs, fixtures, and adapters that are ancillary to basic machine tools and aid in realizing their greatest potential.

  7. COMPILATION OF CURRENT HIGH ENERGY PHYSICS EXPERIMENTS

    SciTech Connect

    Wohl, C.G.; Kelly, R.L.; Armstrong, F.E.; Horne, C.P.; Hutchinson, M.S.; Rittenberg, A.; Trippe, T.G.; Yost, G.P.; Addis, L.; Ward, C.E.W.; Baggett, N.; Goldschmidt-Clermong, Y.; Joos, P.; Gelfand, N.; Oyanagi, Y.; Grudtsin, S.N.; Ryabov, Yu.G.

    1981-05-01

    This is the fourth edition of our compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about April 1981, and (2) had not completed taking of data by 1 January 1977. We emphasize that only approved experiments are included.

  8. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
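
    A minimal sketch of the runtime switching idea described above is shown below; it only illustrates falling back from an aggressively optimized code version to the conservatively compiled one when a new exception appears, and it ignores the state-rollback and prediction machinery of the actual mechanism. The function names are hypothetical.

    ```python
    def run_with_rollback(aggressive_fn, conservative_fn, *args):
        """Execute the aggressively optimized version; on failure, roll back and
        re-execute the conservatively compiled version of the same region."""
        try:
            return aggressive_fn(*args)
        except Exception:
            # The aggressive version introduced a new source of exceptions at runtime;
            # discard its speculative result and redo the work with the safe version.
            return conservative_fn(*args)

    # Hypothetical example: a speculative division that skips a guard.
    aggressive = lambda a, b: a // b                       # assumes b != 0
    conservative = lambda a, b: 0 if b == 0 else a // b    # keeps the guard
    print(run_with_rollback(aggressive, conservative, 10, 0))   # -> 0
    ```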

  9. Runtime support and compilation methods for user-specified data distributions

    NASA Technical Reports Server (NTRS)

    Ponnusamy, Ravi; Saltz, Joel; Choudhury, Alok; Hwang, Yuan-Shin; Fox, Geoffrey

    1993-01-01

    This paper describes two new ideas by which an HPF compiler can deal with irregular computations effectively. The first mechanism invokes a user specified mapping procedure via a set of compiler directives. The directives allow use of program arrays to describe graph connectivity, spatial location of array elements, and computational load. The second mechanism is a simple conservative method that in many cases enables a compiler to recognize that it is possible to reuse previously computed information from inspectors (e.g. communication schedules, loop iteration partitions, information that associates off-processor data copies with on-processor buffer locations). We present performance results for these mechanisms from a Fortran 90D compiler implementation.
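
    The second mechanism (reusing previously computed inspector results such as communication schedules) can be illustrated with a small caching sketch; the change-detection key, the toy "schedule", and the index array below are placeholders, not the compiler's actual data structures.

    ```python
    import array

    _schedule_cache = {}

    def get_schedule(index_array, build_schedule):
        """Reuse a previously computed inspector result (e.g., a communication
        schedule) when the indirection array it depends on has not changed."""
        key = hash(bytes(index_array))          # cheap change detection for the sketch
        if key not in _schedule_cache:
            _schedule_cache[key] = build_schedule(index_array)
        return _schedule_cache[key]

    # Hypothetical usage: the "schedule" is just the sorted set of off-processor indices.
    local_range = range(0, 100)
    build = lambda idx: sorted({i for i in idx if i not in local_range})
    idx = array.array("i", [5, 250, 7, 301, 250])
    print(get_schedule(idx, build))   # computed once
    print(get_schedule(idx, build))   # reused from the cache
    ```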

  10. Proving Correctness for Pointer Programs in a Verifying Compiler

    NASA Technical Reports Server (NTRS)

    Kulczycki, Gregory; Singh, Amrinder

    2008-01-01

    This research describes a component-based approach to proving the correctness of programs involving pointer behavior. The approach supports modular reasoning and is designed to be used within the larger context of a verifying compiler. The approach consists of two parts. When a system component requires the direct manipulation of pointer operations in its implementation, we implement it using a built-in component specifically designed to capture the functional and performance behavior of pointers. When a system component requires pointer behavior via a linked data structure, we ensure that the complexities of the pointer operations are encapsulated within the data structure and are hidden to the client component. In this way, programs that rely on pointers can be verified modularly, without requiring special rules for pointers. The ultimate objective of a verifying compiler is to prove-with as little human intervention as possible-that proposed program code is correct with respect to a full behavioral specification. Full verification for software is especially important for an agency like NASA that is routinely involved in the development of mission critical systems.

  11. System, apparatus and methods to implement high-speed network analyzers

    DOEpatents

    Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E

    2015-11-10

    Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such dispatcher facility can also be used as a cache of policies, wherein if a policy is found, then packet manipulations associated with the policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.
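
    The fast/slow-path dispatcher acting as a cache of policies, as described above, can be sketched roughly as follows; the flow key, the analyzer callable, and the actions are hypothetical stand-ins for the compiler-generated network analyzer.

    ```python
    class Dispatcher:
        """Toy fast/slow-path dispatcher: cached policies handle packets directly,
        unknown flows are sent through the full analyzer and the result is cached."""
        def __init__(self, analyzer):
            self.analyzer = analyzer      # slow path: full regex/DFA-based analysis
            self.policies = {}            # fast path: flow key -> action

        def handle(self, flow_key, packet):
            action = self.policies.get(flow_key)
            if action is None:
                action = self.analyzer(packet)       # expensive classification
                self.policies[flow_key] = action     # cache the decision
            return action

    # Hypothetical usage with a trivial "analyzer".
    analyzer = lambda pkt: "drop" if b"attack" in pkt else "forward"
    d = Dispatcher(analyzer)
    print(d.handle(("10.0.0.1", 80), b"GET / HTTP/1.1"))   # slow path, then cached
    print(d.handle(("10.0.0.1", 80), b"next packet"))      # fast path hit
    ```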

  12. Optimizing Suicide Prevention Programs and Their Implementation in Europe (OSPI Europe): an evidence-based multi-level approach

    PubMed Central

    2009-01-01

    Background Suicide and non-fatal suicidal behaviour are significant public health issues in Europe requiring effective preventive interventions. However, the evidence for effective preventive strategies is scarce. The protocol of a European research project to develop an optimized evidence based program for suicide prevention is presented. Method The groundwork for this research has been established by a regional community based intervention for suicide prevention that focuses on improving awareness and care for depression performed within the European Alliance Against Depression (EAAD). The EAAD intervention consists of (1) training sessions and practice support for primary care physicians, (2) public relations activities and mass media campaigns, (3) training sessions for community facilitators who serve as gatekeepers for depressed and suicidal persons in the community and treatment and (4) outreach and support for high risk and self-help groups (e.g. helplines). The intervention has been shown to be effective in reducing suicidal behaviour in an earlier study, the Nuremberg Alliance Against Depression. In the context of the current research project described in this paper (OSPI-Europe) the EAAD model is enhanced by other evidence based interventions and implemented simultaneously and in a standardised way in four regions in Ireland, Portugal, Hungary and Germany. The enhanced intervention will be evaluated using a prospective controlled design, with the primary outcomes being composite suicidal acts (fatal and non-fatal) and with intermediate outcomes being the effect of training programs, changes in public attitudes, and guideline-consistent media reporting. In addition an analysis of the economic costs and consequences will be undertaken, while a process evaluation will monitor implementation of the interventions within the different regions with varying organisational and healthcare contexts. Discussion This multi-centre research seeks to overcome major challenges

  13. The Fault Tree Compiler (FTC): Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1989-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m-of-n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precisely (within the limits of double precision floating point arithmetic) within a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
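
    For the five gate types listed above, the top-event probability of a tree of independent basic events can be obtained by combining gate-level probabilities bottom-up; the sketch below shows only those gate formulas (the FTC's actual solution technique and accuracy control are not reproduced here), and the multi-input XOR is read as "exactly one input occurs".

    ```python
    from itertools import combinations
    from math import prod

    def gate_prob(kind, probs, m=None):
        """Probability of a gate's output event, given independent input probabilities."""
        if kind == "AND":
            return prod(probs)
        if kind == "OR":
            return 1.0 - prod(1.0 - p for p in probs)
        if kind == "XOR":                      # exactly one input event occurs
            return sum(p * prod(1.0 - q for j, q in enumerate(probs) if j != i)
                       for i, p in enumerate(probs))
        if kind == "INVERT":
            return 1.0 - probs[0]
        if kind == "M_OF_N":                   # at least m of the n inputs occur
            n = len(probs)
            total = 0.0
            for k in range(m, n + 1):
                for idx in combinations(range(n), k):
                    total += prod(probs[i] if i in idx else 1.0 - probs[i]
                                  for i in range(n))
            return total
        raise ValueError(kind)

    # Hypothetical example: top event = 2-of-3 of identical 1e-3 failure probabilities.
    print(gate_prob("M_OF_N", [1e-3, 1e-3, 1e-3], m=2))   # ~3e-6
    ```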

  14. Note on Conditional Compilation in Standard ML

    DTIC Science & Technology

    1993-06-01

    Note on Conditional Compilation in Standard ML. Nicholas Haines, Edoardo Biagioni, Robert Harper, Brian G. Milnes. Carnegie Mellon University School of Computer Science, CMU-CS-93-11, June 1993.

  15. Electronic test and calibration circuits, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A wide variety of simple test and calibration circuits are compiled for the engineer and laboratory technician. The majority of the circuits were found to be inexpensive to assemble. Testing of electronic devices and components, instrument and system test, calibration and reference circuits, and simple test procedures are presented.

  16. Heat Transfer and Thermodynamics: a Compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A compilation is presented for the dissemination of information on technological developments which have potential utility outside the aerospace and nuclear communities. Studies include theories and mechanical considerations in the transfer of heat and the thermodynamic properties of matter and the causes and effects of certain interactions.

  17. Electronic switches and control circuits: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The innovations in this updated series of compilations dealing with electronic technology represent a carefully selected collection of items on electronic switches and control circuits. Most of the items are based on well-known circuit design concepts that have been simplified or refined to meet NASA's demanding requirements for reliability, simplicity, fail-safe characteristics, and the capability of withstanding environmental extremes.

  18. Optimization of multisource information fusion for resource management with remote sensing imagery: an aggregate regularization method with neural network implementation

    NASA Astrophysics Data System (ADS)

    Shkvarko, Yuriy, IV; Butenko, Sergiy

    2006-05-01

    We address a new approach to the problem of improvement of the quality of multi-grade spatial-spectral images provided by several remote sensing (RS) systems as required for environmental resource management with the use of multisource RS data. The problem of multi-spectral reconstructive imaging with multisource information fusion is stated and treated as an aggregated ill-conditioned inverse problem of reconstruction of a high-resolution image from the data provided by several sensor systems that employ the same or different image formation methods. The proposed fusion-optimization technique aggregates the experiment design regularization paradigm with a neural-network-based implementation of the multisource information fusion method. The maximum entropy (ME) requirement and projection regularization constraints are posed as prior knowledge for fused reconstruction, and the experiment-design regularization methodology is applied to perform the optimization of multisource information fusion. Computationally, the reconstruction and fusion are accomplished via minimization of the energy function of the proposed modified multistate Hopfield-type neural network (NN) that integrates the model parameters of all systems, incorporating a priori information, aggregate multisource measurements and calibration data. The developed theory proves that the designed maximum entropy neural network (MENN) is able to solve the multisource fusion tasks without substantial complication of its computational structure, independent of the number of systems to be fused. For each particular case, only the proper adjustment of the MENN's parameters (i.e. interconnection strengths and bias inputs) should be accomplished. Simulation examples are presented to illustrate the good overall performance of the fused reconstruction achieved with the developed MENN algorithm applied to real-world multi-spectral environmental imagery.
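
    The energy-minimization mechanics underlying a Hopfield-type network, which the MENN approach builds on, can be sketched generically as follows; the coupling matrix, bias vector, and bipolar state here are random placeholders and do not encode the fusion model described in the paper.

    ```python
    import numpy as np

    def hopfield_minimize(W, b, s0, sweeps=50):
        """Asynchronously update a bipolar state vector so that the Hopfield energy
        E(s) = -1/2 s^T W s - b^T s never increases (W symmetric, zero diagonal)."""
        s = np.array(s0, dtype=float)
        for _ in range(sweeps):
            for i in np.random.permutation(len(s)):
                h = W[i] @ s + b[i]               # local field at unit i
                s[i] = 1.0 if h >= 0 else -1.0    # threshold update
        energy = -0.5 * s @ W @ s - b @ s
        return s, energy

    # Hypothetical 4-unit example with a random symmetric coupling matrix.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 4)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
    b = rng.normal(size=4)
    print(hopfield_minimize(W, b, s0=[1, -1, 1, -1]))
    ```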

  19. Using MaxCompiler for the high level synthesis of trigger algorithms

    NASA Astrophysics Data System (ADS)

    Summers, S.; Rose, A.; Sanders, P.

    2017-02-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  20. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories, The software can be used to analyze inter-planetary, planetocentric, and combination trajectories, Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate the NASA's Dawn Discovery mission to orbit the two largest asteroids, The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principal.

  1. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    SciTech Connect

    Nataf, J.M.; Winkelmann, F.

    1992-09-01

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK`s symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  2. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    SciTech Connect

    Nataf, J.M.; Winkelmann, F.

    1992-09-01

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
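
    As a rough analogue of generating solution code from equations entered in symbolic form, the sketch below uses SymPy (not SPARK's own symbolic interface) to solve a simple symbolic equation and emit C code for the result; the heat-balance equation and variable names are made up for illustration.

    ```python
    import sympy as sp

    # An equation entered in symbolic form: a toy heat-balance residual in T.
    T, q, h, T_inf = sp.symbols("T q h T_inf")
    equation = sp.Eq(q, h * (T - T_inf))

    # Solve symbolically for the unknown and emit C code for the solution,
    # roughly analogous to what a symbolic interface does before compilation.
    solution = sp.solve(equation, T)[0]          # T_inf + q/h
    print(sp.ccode(solution, assign_to="T"))     # e.g.  T = T_inf + q/h;
    ```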

  3. Implementation of an optimal stomatal conductance scheme in the Australian Community Climate Earth Systems Simulator (ACCESS1.3b)

    NASA Astrophysics Data System (ADS)

    Kala, J.; De Kauwe, M. G.; Pitman, A. J.; Lorenz, R.; Medlyn, B. E.; Wang, Y.-P.; Lin, Y.-S.; Abramowitz, G.

    2015-12-01

    We implement a new stomatal conductance scheme, based on the optimality approach, within the Community Atmosphere Biosphere Land Exchange (CABLEv2.0.1) land surface model. Coupled land-atmosphere simulations are then performed using CABLEv2.0.1 within the Australian Community Climate and Earth Systems Simulator (ACCESSv1.3b) with prescribed sea surface temperatures. As in most land surface models, the default stomatal conductance scheme only accounts for differences in model parameters in relation to the photosynthetic pathway but not in relation to plant functional types. The new scheme allows model parameters to vary by plant functional type, based on a global synthesis of observations of stomatal conductance under different climate regimes over a wide range of species. We show that the new scheme reduces the latent heat flux from the land surface over the boreal forests during the Northern Hemisphere summer by 0.5-1.0 mm day-1. This leads to warmer daily maximum and minimum temperatures by up to 1.0 °C and warmer extreme maximum temperatures by up to 1.5 °C. These changes generally improve the climate model's climatology of warm extremes and improve existing biases by 10-20 %. The bias in minimum temperatures is however degraded but, overall, this is outweighed by the improvement in maximum temperatures as there is a net improvement in the diurnal temperature range in this region. In other regions such as parts of South and North America where ACCESSv1.3b has known large positive biases in both maximum and minimum temperatures (~ 5 to 10 °C), the new scheme degrades this bias by up to 1 °C. We conclude that, although several large biases remain in ACCESSv1.3b for temperature extremes, the improvements in the global climate model over large parts of the boreal forests during the Northern Hemisphere summer which result from the new stomatal scheme, constrained by a global synthesis of experimental data, provide a valuable advance in the long-term development
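
    The abstract does not state the scheme's functional form, but optimality-based stomatal conductance schemes with plant-functional-type-dependent parameters of this kind are commonly written in the Medlyn et al. (2011) form, shown here only as an assumed reference:

    ```latex
    g_s \;\approx\; g_0 + 1.6\left(1 + \frac{g_1}{\sqrt{D}}\right)\frac{A}{C_a}
    ```

    where g_s is the stomatal conductance, A the net assimilation rate, C_a the ambient CO2 concentration, D the vapour pressure deficit, and g_1 a fitted slope parameter that varies by plant functional type.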

  4. Compilation of DNA sequences of Escherichia coli

    PubMed Central

    Kröger, Manfred

    1989-01-01

    We have compiled the DNA sequence data for E.coli K12 available from the GENBANK and EMBO databases and, over a period of several years, independently from the literature. We have introduced all available genetic map data and have arranged the sequences accordingly. As far as possible the overlaps are deleted, and a total of 940,449 individual bp had been determined by the beginning of 1989. This corresponds to a total of 19.92% of the entire E.coli chromosome, which consists of about 4,720 kbp. This number may actually be higher by some extra 2% derived from the sequence of lysogenic bacteriophage lambda and the various insertion sequences. This compilation may become available in machine readable form from one of the international databanks in the future. PMID:2654890

  5. Compilation of Physicochemical and Toxicological Information ...

    EPA Pesticide Factsheets

    The purpose of this product is to make accessible the information about the 1,173 hydraulic fracturing-related chemicals that were listed in the external review draft of the Hydraulic Fracturing Drinking Water Assessment that was released recently. The product consists of a series of spreadsheets with physicochemical and toxicological information pulled from several sources of information, including: EPI Suite, LeadScope, QikiProp, Reaxys, IRIS, PPRTV, ATSDR, among other sources. The spreadsheets also contain background information about how the list of chemicals were compiled, what the different sources of chemical information are, and definitions and descriptions of the values presented. The purpose of this product is to compile and make accessible information about the 1,173 hydraulic fracturing-related chemicals listed in the external review draft of the Hydraulic Fracturing Drinking Water Assessment.

  6. A compiler and validator for flight operations on NASA space missions

    NASA Astrophysics Data System (ADS)

    Fonte, Sergio; Politi, Romolo; Capria, Maria Teresa; Giardino, Marco; De Sanctis, Maria Cristina

    2016-07-01

    In NASA missions the management and programming of the flight systems are performed using a specific scripting language, the SASF (Spacecraft Activity Sequence File). In order to check the syntax and grammar, a compiler is needed that reports any errors found in the sequence file produced for an instrument on board the flight system. In our experience on the Dawn mission, we developed VIRV (VIR Validator), a tool that performs checks on the syntax and grammar of SASF, runs a simulation of VIR acquisitions and detects any violations of the flight rules in the sequences produced. The project of a SASF compiler (SSC - Spacecraft Sequence Compiler) is ready for a new implementation: the generalization to different NASA missions. In fact, VIRV is a compiler for a dialect of SASF; it includes VIR commands as part of the SASF language. Our goal is to produce a general compiler for SASF, in which every instrument has a library to be introduced into the compiler. The SSC can analyze a SASF, produce a log of events, perform a simulation of the instrument acquisition and check the flight rules for the selected instrument. The output of the program can be produced in GRASS GIS format and may help the operator to analyze the geometry of the acquisition.

  7. NPDES CAFO Regulations Implementation Status Reports

    EPA Pesticide Factsheets

    EPA compiles annual summaries on the implementation status of the NPDES CAFO regulations. Reports include, for each state: total number of CAFOs, number and percentage of CAFOs with NPDES permits, and other information associated with implementation of the

  8. 1991 OCRWM bulletin compilation and index

    SciTech Connect

    1992-05-01

    The OCRWM Bulletin is published by the Department of Energy, Office of Civilian Radioactive Waste Management, to provide current information about the national program for managing spent fuel and high-level radioactive waste. The document is a compilation of issues from the 1991 calendar year. A table of contents and an index have been provided to reference information contained in this year's Bulletins.

  9. Criteria for Evaluating the Performance of Compilers

    DTIC Science & Technology

    1974-10-01

    The study established criteria for defining a "compiler Gibson mix" and methods for using this "mix" to evaluate compiler performance.

  10. Current status of the HAL/S compiler on the Modcomp classic 7870 computer

    NASA Technical Reports Server (NTRS)

    Lytle, P. J.

    1981-01-01

    A brief history of the HAL/S language, including the experience of other users of the language at the Jet Propulsion Laboratory, is presented. The current status of the compiler, as implemented on the Modcomp Classic 7870 computer, and future applications in the Deep Space Network (DSN) are discussed. The primary applications in the DSN will be in the Mark IVA network.

  11. Efficient RTL-based code generation for specified DSP C-compiler

    NASA Astrophysics Data System (ADS)

    Pan, Qiaohai; Liu, Peng; Shi, Ce; Yao, Qingdong; Zhu, Shaobo; Yan, Li; Zhou, Ying; Huang, Weibing

    2001-12-01

    A C-compiler is a basic tool for most embedded systems programmers. It is the tool by which the ideas and algorithms in your application (expressed as C source code) are transformed into machine code executable by the target processor. Our research was to develop an optimizing C-compiler for a specified 16-bit DSP. As one of the most important parts of the C-compiler, the code generation's efficiency and performance directly affect the resultant target assembly code. Thus, in order to improve the performance of the C-compiler, we constructed an efficient code generation based on RTL, an intermediate language used in GNU CC. The code generation accepts RTL as its main input, takes advantage of features specific to RTL and to the specified DSP's architecture, and generates compact assembly code for the specified DSP. In this paper, firstly, the features of RTL are briefly introduced. Then, the basic principle of constructing the code generation is presented in detail. According to this basic principle, the paper discusses the architecture of the code generation, including: syntax tree construction/reconstruction, basic RTL instruction extraction, behavior description at RTL level, and instruction description at assembly level. The optimization strategies used in the code generation for generating compact assembly code are also given. Finally, we conclude that the C-compiler using this code generation achieves the high efficiency we expected.
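
    The mapping from RTL-level behavior descriptions to assembly-level instruction descriptions can be pictured as pattern-based instruction selection over an expression tree; the toy sketch below fuses a multiply followed by an add into a single MAC instruction, with mnemonics and register names that are placeholders rather than the real DSP's instruction set.

    ```python
    # Toy instruction selector: RTL-like tuples -> assembly strings.
    # Mnemonics (MPY, ADD, MAC) are placeholders, not the actual DSP's instruction set.
    def select(node):
        op = node[0]
        if op == "const":
            return [], f"#{node[1]}"
        if op == "reg":
            return [], node[1]
        if op == "plus" and node[1][0] == "mult":
            # Fuse multiply followed by add into a single MAC instruction.
            code_a, a = select(node[1][1]); code_b, b = select(node[1][2])
            code_c, c = select(node[2])
            return code_a + code_b + code_c + [f"MAC {a}, {b}, {c} -> acc"], "acc"
        if op in ("plus", "mult"):
            code_l, l = select(node[1]); code_r, r = select(node[2])
            mnem = "ADD" if op == "plus" else "MPY"
            return code_l + code_r + [f"{mnem} {l}, {r} -> r0"], "r0"
        raise ValueError(op)

    # (a * b) + c  -->  one MAC instruction
    code, _ = select(("plus", ("mult", ("reg", "a1"), ("reg", "a2")), ("reg", "a3")))
    print("\n".join(code))
    ```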

  12. Affordable Design: A Methodology to Implement Process-Based Manufacturing Cost into the Traditional Performance-Focused Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Bao, Han P.; Samareh, J. A.

    2000-01-01

    The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.

  13. Dual compile strategy for parallel heterogeneous execution.

    SciTech Connect

    Smith, Tyler Barratt; Perry, James Thomas

    2012-06-01

    The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second and custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine is working on the application while the other engine is monitoring and checking in parallel instead of both applications (and engines) performing the same work at the same time.

  14. Piping and tubing technology: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A compilation on the devices, techniques, and methods used in piping and tubing technology is presented. Data cover the following: (1) a number of fittings, couplings, and connectors that are useful in joining tubing and piping and various systems, (2) a family of devices used where flexibility and/or vibration damping are necessary, (3) a number of devices found useful in the regulation and control of fluid flow, and (4) shop hints to aid in maintenance and repair procedures such as cleaning, flaring, and swaging of tubes.

  15. Digital circuits for computer applications: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The innovations in this updated series of compilations dealing with electronic technology represent a carefully selected collection of digital circuits which have direct application in computer oriented systems. In general, the circuits have been selected as representative items of each section and have been included on their merits of having universal applications in digital computers and digital data processing systems. As such, they should have wide appeal to the professional engineer and scientist who encounter the fundamentals of digital techniques in their daily activities. The circuits are grouped as digital logic circuits, analog to digital converters, and counters and shift registers.

  16. HAL/S-360 compiler system specification

    NASA Technical Reports Server (NTRS)

    Johnson, A. E.; Newbold, P. N.; Schulenberg, C. W.; Avakian, A. E.; Varga, S.; Helmers, P. H.; Helmers, C. T., Jr.; Hotz, R. L.

    1974-01-01

    A three phase language compiler is described which produces IBM 360/370 compatible object modules and a set of simulation tables to aid in run time verification. A link edit step augments the standard OS linkage editor. A comprehensive run time system and library provide the HAL/S operating environment, error handling, a pseudo real time executive, and an extensive set of mathematical, conversion, I/O, and diagnostic routines. The specifications of the information flow and content for this system are also considered.

  17. A Framework for an Automated Compilation System for Reconfigurable Architectures

    DTIC Science & Technology

    1997-03-01

    List-of-figures fragments from the scanned report: "C Source for a Simple Bit Reversal Program"; "Optimized Assembly Code for Bit Reversal Loop"; "Source Code for a Software Function Identified for Hardware Implementation"; "Source Code for the Dilation Filter in the IRMW Application".

  18. A simple way to build an ANSI-C like compiler from scratch and embed it on the instrument's software

    NASA Astrophysics Data System (ADS)

    Rodríguez Trinidad, Alicia; Morales Muñoz, Rafael; Abril Martí, Miguel; Costillo Iciarra, Luis Pedro; Cárdenas Vázquez, M. C.; Rabaza Castillo, Ovidio; Ramón Ballesta, Alejandro; Sánchez Carrasco, Miguel A.; Becerril Jarque, Santiago; Amado González, Pedro J.

    2010-07-01

    This paper examines the reasons for building a compiled language embedded in an instrument's software. Starting from scratch and step by step, all the compiler stages of an ANSI-C-like language are analyzed, simplified and implemented. The result is a compiler and a runner with a small footprint that can easily be transferred and embedded into an instrument's software. Both take about 75 KBytes, where similar solutions take hundreds. Finally, the possibilities that arise from embedding the runner inside an instrument's software are explored.
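
    As an illustration of the first of the compiler stages mentioned above (lexical analysis), here is a minimal tokenizer for a C-like expression language; the token set is deliberately tiny and is not the grammar of the compiler described in the paper.

    ```python
    import re

    # Minimal lexer for a C-like language: the first of the classic compiler stages.
    TOKEN_SPEC = [
        ("NUMBER", r"\d+"),
        ("ID",     r"[A-Za-z_]\w*"),
        ("OP",     r"[+\-*/=;(){}<>]"),
        ("SKIP",   r"[ \t\n]+"),
    ]
    TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

    def tokenize(source):
        for m in TOKEN_RE.finditer(source):
            if m.lastgroup != "SKIP":
                yield (m.lastgroup, m.group())

    print(list(tokenize("x = 3 * (y + 42);")))
    # [('ID', 'x'), ('OP', '='), ('NUMBER', '3'), ('OP', '*'), ('OP', '('), ...]
    ```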

  19. 22 CFR 227.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Semi-annual compilation. 227.600 Section 227.600 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT NEW RESTRICTIONS ON LOBBYING Agency Reports § 227.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the...

  20. 22 CFR 227.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Semi-annual compilation. 227.600 Section 227.600 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT NEW RESTRICTIONS ON LOBBYING Agency Reports § 227.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the...

  1. 22 CFR 227.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Semi-annual compilation. 227.600 Section 227.600 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT NEW RESTRICTIONS ON LOBBYING Agency Reports § 227.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the...

  2. 22 CFR 227.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Semi-annual compilation. 227.600 Section 227.600 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT NEW RESTRICTIONS ON LOBBYING Agency Reports § 227.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the...

  3. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  4. 22 CFR 311.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 2 2011-04-01 2009-04-01 true Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B)...

  5. 22 CFR 311.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 2 2014-04-01 2014-04-01 false Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B)...

  6. 22 CFR 311.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 2 2012-04-01 2009-04-01 true Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B)...

  7. 22 CFR 311.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B)...

  8. 22 CFR 311.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 2 2013-04-01 2009-04-01 true Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B)...

  9. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB.

    PubMed

    Biyikli, Emre; To, Albert C

    2015-01-01

    A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented in two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future work are discussed. The computer programs are self-contained and publicly shared on the website www.ptomethod.org.
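
    The proportional idea itself, redistributing a fixed amount of material among elements in proportion to a per-element quantity such as stress or compliance instead of using sensitivities, can be sketched in a few lines. The Python fragment below is only an illustration of that update rule under made-up element values and bounds; it is not the authors' MATLAB programs, which are available at www.ptomethod.org.

      import numpy as np

      def proportional_update(density, element_value, target_volume,
                              rho_min=1e-3, rho_max=1.0, n_inner=10):
          """One PTO-style outer step: distribute the target material amount
          among elements in proportion to element_value (e.g. stress or
          compliance), then clip to box bounds. Illustrative sketch only."""
          new = np.full_like(density, rho_min)
          remaining = target_volume - new.sum()
          weights = element_value / element_value.sum()
          for _ in range(n_inner):                  # re-spread any clipped excess
              new = np.clip(new + remaining * weights, rho_min, rho_max)
              remaining = target_volume - new.sum()
              if abs(remaining) < 1e-9:
                  break
          return new

      # toy example: 8 elements with hypothetical per-element "stress" values
      stress = np.array([2.0, 5.0, 1.0, 8.0, 3.0, 0.5, 4.0, 6.0])
      rho = np.full(8, 0.5)                         # start from uniform density
      rho = proportional_update(rho, stress, target_volume=0.4 * 8)
      print(rho, rho.sum())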

  10. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented in two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future work are discussed. The computer programs are self-contained and publicly shared on the website www.ptomethod.org. PMID:26678849

  11. Clover: Compiler directed lightweight soft error resilience

    SciTech Connect

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
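
    Clover's recovery step, redirecting control to the start of the idempotent region in which the error was detected, can be pictured with a small software analogy. The Python sketch below uses a simulated detector and a side-effect-free region; it is an analogy only, assuming an exception-based stand-in for the hardware detectors, and is not the paper's compiler transformation.

      import random

      class SoftErrorDetected(Exception):
          """Stand-in for the acoustic detector / tail-DMR flagging an error."""

      def run_idempotent_region(region, *args, max_retries=3):
          """Re-execute an idempotent region from its start whenever the
          (simulated) detector reports a soft error, mimicking how Clover
          redirects control to the beginning of the affected region."""
          for _ in range(max_retries + 1):
              try:
                  return region(*args)
              except SoftErrorDetected:
                  continue              # idempotent: safe to start over
          raise RuntimeError("unrecoverable: retries exhausted")

      def region(xs):
          # pure computation: no externally visible writes until it completes
          if random.random() < 0.3:     # simulated particle strike
              raise SoftErrorDetected
          return sum(x * x for x in xs)

      print(run_idempotent_region(region, [1, 2, 3, 4]))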

  12. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  13. Non-vitamin K antagonist oral anticoagulants and atrial fibrillation guidelines in practice: barriers to and strategies for optimal implementation.

    PubMed

    Camm, A John; Pinto, Fausto J; Hankey, Graeme J; Andreotti, Felicita; Hobbs, F D Richard

    2015-07-01

    Stroke is a leading cause of morbidity and mortality worldwide. Atrial fibrillation (AF) is an independent risk factor for stroke, increasing the risk five-fold. Strokes in patients with AF are more likely than other embolic strokes to be fatal or cause severe disability and are associated with higher healthcare costs, but they are also preventable. Current guidelines recommend that all patients with AF who are at risk of stroke should receive anticoagulation. However, despite this guidance, registry data indicate that anticoagulation is still widely underused. With a focus on the 2012 update of the European Society of Cardiology (ESC) guidelines for the management of AF, the Action for Stroke Prevention alliance writing group have identified key reasons for the suboptimal implementation of the guidelines at a global, regional, and local level, with an emphasis on access restrictions to guideline-recommended therapies. Following identification of these barriers, the group has developed an expert consensus on strategies to augment the implementation of current guidelines, including practical, educational, and access-related measures. The potential impact of healthcare quality measures for stroke prevention on guideline implementation is also explored. By providing practical guidance on how to improve implementation of the ESC guidelines, or region-specific modifications of these guidelines, the aim is to reduce the potentially devastating impact that stroke can have on patients, their families and their carers.

  14. Non-vitamin K antagonist oral anticoagulants and atrial fibrillation guidelines in practice: barriers to and strategies for optimal implementation

    PubMed Central

    Camm, A. John; Pinto, Fausto J.; Hankey, Graeme J.; Andreotti, Felicita; Hobbs, F.D. Richard

    2015-01-01

    Stroke is a leading cause of morbidity and mortality worldwide. Atrial fibrillation (AF) is an independent risk factor for stroke, increasing the risk five-fold. Strokes in patients with AF are more likely than other embolic strokes to be fatal or cause severe disability and are associated with higher healthcare costs, but they are also preventable. Current guidelines recommend that all patients with AF who are at risk of stroke should receive anticoagulation. However, despite this guidance, registry data indicate that anticoagulation is still widely underused. With a focus on the 2012 update of the European Society of Cardiology (ESC) guidelines for the management of AF, the Action for Stroke Prevention alliance writing group have identified key reasons for the suboptimal implementation of the guidelines at a global, regional, and local level, with an emphasis on access restrictions to guideline-recommended therapies. Following identification of these barriers, the group has developed an expert consensus on strategies to augment the implementation of current guidelines, including practical, educational, and access-related measures. The potential impact of healthcare quality measures for stroke prevention on guideline implementation is also explored. By providing practical guidance on how to improve implementation of the ESC guidelines, or region-specific modifications of these guidelines, the aim is to reduce the potentially devastating impact that stroke can have on patients, their families and their carers. PMID:26116685

  15. ROSE: Compiler Support for Object-Oriented Frameworks

    SciTech Connect

    Quinlan, D.

    1999-11-17

    ROSE is a preprocessor generation tool for the support of compile-time performance optimizations in Overture. The Overture framework is an object-oriented environment for solving partial differential equations in two and three space dimensions. It is a collection of C++ libraries that enables the use of finite difference and finite volume methods at a level that hides the details of the associated data structures. Overture can be used to solve problems in complicated, moving geometries using the method of overlapping grids. It has support for grid generation, difference operators, boundary conditions, database access and graphics. In this paper we briefly present Overture and discuss our approach toward performance within Overture and the A++P++ array class abstractions upon which Overture depends; this work represents some of the newest work in Overture. The results we present show that the abstractions represented within Overture and the A++P++ array class library can be used to obtain application codes with performance equivalent to that of optimized C and Fortran 77. ROSE, the preprocessor generation tool, is general in its application to any object-oriented framework or application and is not specific to Overture.

  16. Compilation of requests for nuclear data

    SciTech Connect

    Weston, L.W.; Larson, D.C.

    1993-02-01

    This compilation represents the current needs for nuclear data measurements and evaluations as expressed by interested fission and fusion reactor designers, medical users of nuclear data, nuclear data evaluators, CSEWG members and other interested parties. The requests and justifications are reviewed by the Data Request and Status Subcommittee of CSEWG as well as most of the general CSEWG membership. The basic format and computer programs for the Request List were produced by the National Nuclear Data Center (NNDC) at Brookhaven National Laboratory. The NNDC produced the Request List for many years. The Request List is compiled from a computerized data file. Each request has a unique isotope, reaction type, requestor and identifying number. The first two digits of the identifying number are the year in which the request was initiated. Every effort has been made to restrict the notations to those used in common nuclear physics textbooks. Most requests are for individual isotopes as are most ENDF evaluations, however, there are some requests for elemental measurements. Each request gives a priority rating which will be discussed in Section 2, the neutron energy range for which the request is made, the accuracy requested in terms of one standard deviation, and the requested energy resolution in terms of one standard deviation. Also given is the requestor with the comments which were furnished with the request. The addresses and telephone numbers of the requestors are given in Appendix 1. ENDF evaluators who may be contacted concerning evaluations are given in Appendix 2. Experimentalists contemplating making one of the requested measurements are encouraged to contact both the requestor and evaluator who may provide valuable information. This is a working document in that it will change with time. New requests or comments may be submitted to the editors or a regular CSEWG member at any time.
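
    The fields described above (isotope, reaction type, requestor, an identifying number whose first two digits give the year, priority, energy range, and the requested accuracy and resolution, each as one standard deviation) can be pictured as a simple record. The Python dataclass below is a hypothetical rendering with invented values; it is not the NNDC file format.

      from dataclasses import dataclass

      @dataclass
      class NuclearDataRequest:
          """Hypothetical record mirroring the fields described in the compilation."""
          request_id: str          # first two digits = year the request was initiated
          isotope: str             # individual isotope (or element) requested
          reaction: str            # reaction type
          requestor: str
          priority: int
          energy_min_eV: float
          energy_max_eV: float
          accuracy_1sigma: float   # requested accuracy, one standard deviation
          resolution_1sigma: float # requested energy resolution, one standard deviation
          comment: str = ""

          @property
          def year(self) -> int:
              # assumes two-digit years refer to the 1900s, as in this 1993 list
              return 1900 + int(self.request_id[:2])

      # invented example entry, for illustration only
      req = NuclearDataRequest("93001", "Fe-56", "(n,gamma)", "A. Requestor",
                               priority=1, energy_min_eV=1e3, energy_max_eV=1e6,
                               accuracy_1sigma=0.05, resolution_1sigma=0.01,
                               comment="needed for shielding calculations")
      print(req.year, req.isotope, req.reaction)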

  17. Fringe pattern demodulation using the one-dimensional continuous wavelet transform: field-programmable gate array implementation.

    PubMed

    Abid, Abdulbasit

    2013-03-01

    This paper presents a thorough discussion of the proposed field-programmable gate array (FPGA) implementation for fringe pattern demodulation using the one-dimensional continuous wavelet transform (1D-CWT) algorithm. This algorithm is also known as wavelet transform profilometry. Initially, the 1D-CWT is programmed using the C programming language and compiled into VHDL using the ImpulseC tool. This VHDL code is implemented on the Altera Cyclone IV GX EP4CGX150DF31C7 FPGA. A fringe pattern image with a size of 512×512 pixels is presented to the FPGA, which processes the image using the 1D-CWT algorithm. The FPGA requires approximately 100 ms to process the image and produce a wrapped phase map. For performance comparison purposes, the 1D-CWT algorithm is programmed using the C language. The C code is then compiled using the Intel compiler version 13.0. The compiled code is run on a Dell Precision state-of-the-art workstation. The time required to process the fringe pattern image is approximately 1 s. In order to further reduce the execution time, the 1D-CWT is reprogrammed using the Intel Integrated Performance Primitives (IPP) Library Version 7.1. The execution time was reduced to approximately 650 ms. This confirms that at least a sixfold speedup was gained using the FPGA implementation over a state-of-the-art workstation that executes a heavily optimized implementation of the 1D-CWT algorithm.
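
    The algorithm itself, a 1D continuous wavelet transform along each fringe row followed by ridge extraction, with the phase on the ridge taken as the wrapped phase, can be prototyped in NumPy. The sketch below uses a Morlet wavelet and a synthetic fringe image; it is unrelated to the FPGA/VHDL and IPP implementations benchmarked in the record and is meant only to show the computation (the sign convention of the recovered phase depends on the wavelet and convolution conventions chosen).

      import numpy as np

      def morlet(n, scale, omega0=6.0):
          """Complex Morlet wavelet sampled on n points at the given scale."""
          t = (np.arange(n) - n // 2) / scale
          return np.exp(1j * omega0 * t) * np.exp(-0.5 * t**2) / np.sqrt(scale)

      def wrapped_phase_row(row, scales, support=128):
          """1D-CWT of one fringe row; return the phase on the CWT ridge."""
          coeffs = np.array([np.convolve(row, morlet(support, s), mode="same")
                             for s in scales])
          ridge = np.abs(coeffs).argmax(axis=0)      # best scale per pixel
          return np.angle(coeffs[ridge, np.arange(row.size)])

      # synthetic 512x512 fringe pattern: carrier plus a smooth phase bump
      x = np.arange(512)
      X, Y = np.meshgrid(x, x)
      phase = 2 * np.pi * X / 16 + 3 * np.exp(-((X - 256)**2 + (Y - 256)**2) / 8000.0)
      fringes = 0.5 + 0.5 * np.cos(phase)

      scales = np.linspace(2, 8, 13)
      wrapped = np.array([wrapped_phase_row(r, scales) for r in fringes])
      print(wrapped.shape)                           # (512, 512) wrapped phase map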

  18. On-line re-optimization of prostate IMRT plan for adaptive radiation therapy: A feasibility study and implementation

    NASA Astrophysics Data System (ADS)

    Thongphiew, Danthai

    Prostate cancer is a disease that affected approximately 200,000 men in the United States in 2006. Radiation therapy is a non-invasive treatment option for this disease and is highly effective. The goal of radiation therapy is to deliver the prescription dose to the tumor (prostate) while sparing the surrounding healthy organs (e.g. bladder, rectum, and femoral heads). One limitation of radiation therapy is organ position and shape variation from day to day. These variations could be as large as half an inch. The conventional solution to this problem is to include margins surrounding the target when planning the treatment. The development of image-guided radiation therapy techniques allows in-room correction, which potentially eliminates patient setup error; however, the uncertainty due to organ deformation still remains. Performing a full plan re-optimization takes about half an hour, which makes online correction infeasible. A technique for performing online re-optimization of intensity modulated radiation therapy is developed for adaptive radiation therapy of prostate cancer. The technique is capable of correcting both organ position and shape changes within a few minutes. The proposed technique involves (1) 3D on-board imaging of the daily anatomy, (2) registering the daily images with the original planning CT images and mapping the original dose distribution to the daily anatomy, and (3) real-time re-optimization of the plan. Finally, the leaf sequences are calculated for the treatment delivery. The feasibility of this online adaptive radiation therapy scheme was evaluated on clinical cases. The results demonstrate that it is feasible to perform online re-optimization of the original plan when large position or shape variations occur.

  19. Implementation of a genetically tuned neural platform in optimizing fluorescence from receptor-ligand binding interactions on microchips.

    PubMed

    Alvarado, Judith; Hanrahan, Grady; Nguyen, Huong T H; Gomez, Frank A

    2012-09-01

    This paper describes the use of a genetically tuned neural network platform to optimize the fluorescence realized upon binding 5-carboxyfluorescein-D-Ala-D-Ala-D-Ala (5-FAM-(D-Ala)3) (1) to the antibiotic teicoplanin from Actinoplanes teichomyceticus electrostatically attached to a microfluidic channel originally modified with 3-aminopropyltriethoxysilane. Here, three parameters are examined at a constant concentration of 1: (i) the length of time teicoplanin was in the microchannel; (ii) the length of time 1 was in the microchannel, and thus in equilibrium with teicoplanin; and (iii) the amount of time buffer was flushed through the microchannel to wash out any unbound 1 remaining in the channel. Neural network methodology is applied to optimize fluorescence. The optimal neural structure provided a best-fit model for both the training set (r^2 = 0.985) and testing set (r^2 = 0.967) data. Simulated results were experimentally validated, demonstrating the efficiency of the neural network approach, which proved superior to multiple linear regression and to neural networks using standard back propagation.

  20. OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study

    SciTech Connect

    Lee, Seyong; Vetter, Jeffrey S

    2014-01-01

    Directive-based, accelerator programming models such as OpenACC have arisen as an alternative solution to program emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity in the SHC systems incurs several challenges in terms of portability and productivity. This paper presents an open-sourced OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in the directive-based accelerator programming. This paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, this paper demonstrates the efficacy of OpenARC as a research framework for directive-based programming study, by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of the unified memory and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while it serves as a high-level research framework.

  1. An integrated runtime and compile-time approach for parallelizing structured and block structured applications

    NASA Technical Reports Server (NTRS)

    Agrawal, Gagan; Sussman, Alan; Saltz, Joel

    1993-01-01

    Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion is described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented. The library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results are presented to demonstrate the efficacy of our approach. Experiments were carried out with a multiblock Navier-Stokes solver template and a multigrid code. Our experimental results show that our primitives have low runtime communication overheads. Further, the compiler-parallelized codes perform within 20 percent of the code parallelized by manually inserting calls to the runtime library.

  2. Implementation and performance of the pseudoknot problem in sisal

    SciTech Connect

    Feo, J.; Ivory, M.

    1994-12-01

    The Pseudoknot Problem is an application from molecular biology that computes all possible three-dimensional structures of one section of a nucleic acid molecule. The problem spans two important application domains: it includes a deterministic, backtracking search algorithm and floating-point intensive computations. Recently, the application has been used to compare and to contrast functional languages. In this paper, we describe a sequential and parallel implementation of the problem in Sisal. We present a method for writing recursive, floating-point intensive applications in Sisal that preserves performance and parallelism. We discuss compiler optimizations, runtime execution, and performance on several multiprocessor systems.

  3. Route Optimization for Mobile IPV6 Using the Return Routability Procedure: Test Bed Implementation and Security Analysis

    DTIC Science & Technology

    2007-03-01

    ...Linux [http://www.mipl.mediapoli.com/, last visited on January 10, 2007]), the "KAME" project (Mobile IPv6 for BSD-based OSs [http://www.kame.net, last... conformance testing events such as the ETSI IPv6 Plugtests and TAHI Interoperability events. The "KAME" and "USAGI" projects are working on research... and development of implementations of the IPv6 and IPsec protocols, which operate on BSD-based OSs for the "KAME" project and on a Linux based

  4. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.

  5. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    SciTech Connect

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
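
    The shape of the optimization-based formulation (minimize the discrete energy subject to a non-negativity bound on the unknowns) can be illustrated on a tiny serial problem. The sketch below uses a made-up symmetric positive definite system and SciPy's L-BFGS-B bound-constrained solver as a stand-in for the PETSc/TAO machinery described in the record; it shows the problem structure only, not the parallel implementation.

      import numpy as np
      from scipy.optimize import minimize

      # Small SPD "stiffness" matrix and forcing vector standing in for a
      # discretized diffusion operator (values are invented for illustration).
      rng = np.random.default_rng(0)
      A = rng.standard_normal((20, 20))
      K = A @ A.T + 20 * np.eye(20)          # symmetric positive definite
      f = rng.standard_normal(20)

      def energy(c):
          return 0.5 * c @ K @ c - f @ c

      def grad(c):
          return K @ c - f

      # The plain Galerkin-like solve may produce negative entries...
      c_unconstrained = np.linalg.solve(K, f)

      # ...while the optimization-based formulation imposes c >= 0 via bounds.
      res = minimize(energy, np.zeros(20), jac=grad, method="L-BFGS-B",
                     bounds=[(0.0, None)] * 20)
      print(c_unconstrained.min(), res.x.min())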

  6. SU-E-T-500: Initial Implementation of GPU-Based Particle Swarm Optimization for 4D IMRT Planning in Lung SBRT

    SciTech Connect

    Modiri, A; Hagan, A; Gu, X; Sawant, A

    2015-06-15

    Purpose: 4D-IMRT planning, combined with dynamic MLC tracking delivery, utilizes the temporal dimension as an additional degree of freedom to achieve improved OAR-sparing. The computational complexity for such optimization increases exponentially with increase in dimensionality. In order to accomplish this task in a clinically-feasible time frame, we present an initial implementation of GPU-based 4D-IMRT planning based on particle swarm optimization (PSO). Methods: The target and normal structures were manually contoured on ten phases of a 4DCT scan of a NSCLC patient with a 54cm3 right-lower-lobe tumor (1.5cm motion). Corresponding ten 3D-IMRT plans were created in the Eclipse treatment planning system (Ver-13.6). A vendor-provided scripting interface was used to export 3D-dose matrices corresponding to each control point (10 phases × 9 beams × 166 control points = 14,940), which served as input to PSO. The optimization task was to iteratively adjust the weights of each control point and scale the corresponding dose matrices. In order to handle the large amount of data in GPU memory, dose matrices were sparsified and placed in contiguous memory blocks with the 14,940 weight-variables. PSO was implemented on CPU (dual-Xeon, 3.1GHz) and GPU (dual-K20 Tesla, 2496 cores, 3.52Tflops, each) platforms. NiftyReg, an open-source deformable image registration package, was used to calculate the summed dose. Results: The 4D-PSO plan yielded PTV coverage comparable to the clinical ITV-based plan and significantly higher OAR-sparing, as follows: lung Dmean=33%; lung V20=27%; spinal cord Dmax=26%; esophagus Dmax=42%; heart Dmax=0%; heart Dmean=47%. The GPU-PSO processing time for 14940 variables and 7 PSO-particles was 41% that of CPU-PSO (199 vs. 488 minutes). Conclusion: Truly 4D-IMRT planning can yield significant OAR dose-sparing while preserving PTV coverage. The corresponding optimization problem is large-scale, non-convex and computationally rigorous. Our initial results
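
    As a minimal, CPU-only illustration of the optimizer itself, not of the 14,940-variable planning problem or its GPU implementation, a bare-bones particle swarm loop over a generic weight vector is sketched below; the objective function and parameter values are stand-ins.

      import numpy as np

      def pso(objective, dim, n_particles=7, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          """Bare-bones particle swarm optimization (minimization)."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(0, 1, (n_particles, dim))   # positions, e.g. control-point weights
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
          g = pbest[pbest_val.argmin()].copy()
          for _ in range(n_iters):
              r1, r2 = rng.uniform(size=x.shape), rng.uniform(size=x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, 0, 1)                # keep weights in [0, 1]
              vals = np.array([objective(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              g = pbest[pbest_val.argmin()].copy()
          return g, pbest_val.min()

      # stand-in objective: penalize deviation from a hypothetical ideal weighting
      target = np.linspace(0.2, 0.8, 30)
      best, best_val = pso(lambda wgt: np.sum((wgt - target) ** 2), dim=30)
      print(best_val)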

  7. The Platform-Aware Compilation Environment (PACE)

    DTIC Science & Technology

    2012-09-01

    considered several strategies and settled on an implementation of the Cytron, Lowry, and Zadeck algorithm [10]. We began an implementation of this... various code regions. To address this issue, we collaborated with Jim Browne (University of Texas) and Martin Burtscher (Texas State University) to develop... The PACE Project provided full or partial support for the following graduate students: 1. Raj Barik (Rice) 2. Thomas Barr (Rice) 3

  8. An optimal scheme for numerical evaluation of Eshelby tensors and its implementation in a MATLAB package for simulating the motion of viscous ellipsoids in slow flows

    NASA Astrophysics Data System (ADS)

    Qu, Mengmeng; Jiang, Dazhi; Lu, Lucy X.

    2016-11-01

    To address the multiscale deformation and fabric development in Earth's ductile lithosphere, micromechanics-based self-consistent homogenization is commonly used to obtain macroscale rheological properties from properties of constituent elements. The homogenization is heavily based on the solution of an Eshelby viscous inclusion in a linear viscous medium and the extension of the solution to nonlinear viscous materials. The homogenization requires repeated numerical evaluation of Eshelby tensors for constituent elements and becomes ever more computationally challenging as the elements are deformed to more elongate or flattened shapes. In this paper, we develop an optimal scheme for evaluating Eshelby tensors, using a combination of a product Gaussian quadrature and the Lebedev quadrature. We first establish, through numerical experiments, an empirical relationship between the inclusion shape and the computational time it takes to evaluate its Eshelby tensors. We then use the relationship to develop an optimal scheme for selecting the most efficient quadrature to obtain the Eshelby tensors. The optimal scheme is applicable to general homogenizations. In this paper, it is implemented in a MATLAB package for investigating the evolution of solitary rigid or deformable inclusions and the development of shape preferred orientations in multi-inclusion systems during deformation. The MATLAB package, upgrading an earlier effort written in MathCad, can be downloaded online.
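
    One ingredient mentioned above, the product Gaussian quadrature, can be sketched for integrals over the unit sphere: Gauss-Legendre nodes in cos(theta) combined with a uniform rule in phi. The NumPy fragment below does this and checks it on two simple integrands; Lebedev weights are not available in NumPy/SciPy, so only the product rule is shown, and this is not the MATLAB package described in the record.

      import numpy as np

      def sphere_product_quadrature(f, n_theta=16, n_phi=32):
          """Integrate f(theta, phi) over the unit sphere with a product rule:
          Gauss-Legendre in cos(theta), uniform (periodic trapezoidal) in phi."""
          mu, w_mu = np.polynomial.legendre.leggauss(n_theta)   # nodes in cos(theta)
          theta = np.arccos(mu)
          phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
          w_phi = 2 * np.pi / n_phi                             # exact for periodic integrands
          T, P = np.meshgrid(theta, phi, indexing="ij")
          W = np.outer(w_mu, np.full(n_phi, w_phi))
          return np.sum(W * f(T, P))

      # checks: surface area 4*pi, and a smooth axisymmetric integrand (4*pi/3)
      print(sphere_product_quadrature(lambda t, p: np.ones_like(t)))
      print(sphere_product_quadrature(lambda t, p: np.cos(t) ** 2))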

  9. Qcompiler: Quantum compilation with the CSD method

    NASA Astrophysics Data System (ADS)

    Chen, Y. G.; Wang, J. B.

    2013-03-01

    In this paper, we present a general quantum computation compiler, which maps any given quantum algorithm to a quantum circuit consisting of a sequential set of elementary quantum logic gates based on recursive cosine-sine decomposition. The resulting quantum circuit diagram is provided by directly linking the package output, written in LaTeX, to Qcircuit.tex. We illustrate the use of the Qcompiler package through various examples with full details of the derived quantum circuits. Besides its accuracy, generality and simplicity, Qcompiler produces quantum circuits with a significantly reduced number of gates when the systems under study have a high degree of symmetry. Catalogue identifier: AENX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4321 No. of bytes in distributed program, including test data, etc.: 50943 Distribution format: tar.gz Programming language: Fortran. Computer: Any computer with a Fortran compiler. Operating system: Linux, Mac OS X 10.5 (and later). RAM: Depends on the size of the unitary matrix to be decomposed. Classification: 4.15. External routines: Lapack (http://www.netlib.org/lapack/) Nature of problem: Decompose any given unitary operation into a quantum circuit with only elementary quantum logic gates. Solution method: This package decomposes an arbitrary unitary matrix, by applying the CSD algorithm recursively, into a series of block-diagonal matrices, which can then be readily associated with elementary quantum gates to form a quantum circuit. Restrictions: The only limitation is imposed by the available memory on the user's computer. Additional comments: This package is applicable for any arbitrary unitary matrices, both real and complex. If the
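
    Qcompiler itself is a Fortran package, but the operation it applies recursively, the cosine-sine decomposition of a unitary matrix, is also exposed by SciPy. The sketch below performs one CSD step on a random 4x4 unitary (a two-qubit gate) and verifies the reconstruction; it only illustrates the decomposition, not the package's gate mapping or LaTeX output.

      import numpy as np
      from scipy.linalg import cossin
      from scipy.stats import unitary_group

      U = unitary_group.rvs(4, random_state=1)   # random 4x4 unitary (a 2-qubit gate)

      # One level of the recursive CSD: U = u @ cs @ vdh with a 2+2 partition,
      # where u and vdh are block-diagonal and cs carries the cosine-sine angles.
      u, cs, vdh = cossin(U, p=2, q=2)

      print(np.allclose(u @ cs @ vdh, U))        # True: exact reconstruction
      print(np.round(cs, 3))                     # the cosine-sine "middle" factor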

  10. Soil erosion evaluation in a rapidly urbanizing city (Shenzhen, China) and implementation of spatial land-use optimization.

    PubMed

    Zhang, Wenting; Huang, Bo

    2015-03-01

    Soil erosion has become a pressing environmental concern worldwide. In addition to such natural factors as slope, rainfall, vegetation cover, and soil characteristics, land-use changes-a direct reflection of human activities-also exert a huge influence on soil erosion. In recent years, such dramatic changes, in conjunction with the increasing trend toward urbanization worldwide, have led to severe soil erosion. Against this backdrop, geographic information system-assisted research on the effects of land-use changes on soil erosion has become increasingly common, producing a number of meaningful results. In most of these studies, however, even when the spatial and temporal effects of land-use changes are evaluated, knowledge of how the resulting data can be used to formulate sound land-use plans is generally lacking. At the same time, land-use decisions are driven by social, environmental, and economic factors and thus cannot be made solely with the goal of controlling soil erosion. To address these issues, a genetic algorithm (GA)-based multi-objective optimization (MOO) approach has been proposed to find a balance among various land-use objectives, including soil erosion control, to achieve sound land-use plans. GA-based MOO offers decision-makers and land-use planners a set of Pareto-optimal solutions from which to choose. Shenzhen, a fast-developing Chinese city that has long suffered from severe soil erosion, is selected as a case study area to validate the efficacy of the GA-based MOO approach for controlling soil erosion. Based on the MOO results, three multiple land-use objectives are proposed for Shenzhen: (1) to minimize soil erosion, (2) to minimize the incompatibility of neighboring land-use types, and (3) to minimize the cost of changes to the status quo. In addition to these land-use objectives, several constraints are also defined: (1) the provision of sufficient built-up land to accommodate a growing population, (2) restrictions on the development of

  11. Ground Operations Aerospace Language (GOAL). Volume 2: Compiler

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The principal elements and functions of the Ground Operations Aerospace Language (GOAL) compiler are presented. The technique used to transcribe the syntax diagrams into machine processable format for use by the parsing routines is described. An explanation of the parsing technique used to process GOAL source statements is included. The compiler diagnostics and the output reports generated during a GOAL compilation are explained. A description of the GOAL program package is provided.

  12. An optimized DSP implementation of adaptive filtering and ICA for motion artifact reduction in ambulatory ECG monitoring.

    PubMed

    Berset, Torfinn; Geng, Di; Romero, Iñaki

    2012-01-01

    Noise from motion artifacts is currently one of the main challenges in the field of ambulatory ECG recording. To address this problem, we propose the use of two different approaches. First, an adaptive filter with the electrode-skin impedance as a reference signal is described. Secondly, a multi-channel ECG algorithm based on Independent Component Analysis is introduced. Both algorithms have been designed and further optimized for real-time operation embedded in a dedicated Digital Signal Processor. We show that both algorithms improve the performance of a beat detection algorithm when applied in high noise conditions. In addition, an efficient way of choosing between these methods is suggested, with the aim of reducing the overall system power consumption.
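
    A minimal sketch of the first approach is given below in NumPy, assuming an LMS update and a synthetic reference channel standing in for the electrode-skin impedance signal; the signals, filter length and step size are made up, and this is not the DSP-optimized implementation from the record.

      import numpy as np

      def lms_cancel(primary, reference, n_taps=16, mu=0.01):
          """LMS adaptive noise cancellation: estimate the artifact from the
          reference channel and subtract it from the primary channel."""
          w = np.zeros(n_taps)
          cleaned = np.zeros_like(primary)
          for n in range(n_taps, len(primary)):
              x = reference[n - n_taps:n][::-1]      # most recent reference samples
              artifact_est = w @ x
              e = primary[n] - artifact_est          # error = cleaned ECG sample
              w += mu * e * x                        # LMS weight update
              cleaned[n] = e
          return cleaned

      # synthetic example: "ECG" plus an artifact correlated with the reference
      rng = np.random.default_rng(0)
      t = np.arange(4000) / 250.0                    # 250 Hz sampling, made up
      ecg = np.sin(2 * np.pi * 1.2 * t) ** 15        # crude periodic peaks
      reference = rng.standard_normal(t.size)        # e.g. impedance fluctuations
      artifact = np.convolve(reference, np.ones(8) / 8, mode="same")
      cleaned = lms_cancel(ecg + 0.8 * artifact, reference)
      print(np.std(ecg[500:] - cleaned[500:]))       # residual after convergence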

  13. Basic circuit compilation techniques for an ion-trap quantum machine

    NASA Astrophysics Data System (ADS)

    Maslov, Dmitri

    2017-02-01

    We study the problem of compilation of quantum algorithms into optimized physical-level circuits executable in a quantum information processing (QIP) experiment based on trapped atomic ions. We report a complete strategy: starting with an algorithm in the form of a quantum computer program, we compile it into a high-level logical circuit that goes through multiple stages of decomposition into progressively lower-level circuits until we reach the physical execution-level specification. We skip the fault-tolerance layer, as it is not within the scope of this work. The different stages are structured so as to best assist with the overall optimization while taking into account numerous optimization criteria, including minimizing the number of expensive two-qubit gates, minimizing the number of less expensive single-qubit gates, optimizing the runtime, minimizing the overall circuit error, and optimizing classical control sequences. Our approach allows a trade-off between circuit runtime and quantum error, as well as to accommodate future changes in the optimization criteria that may likely arise as a result of the anticipated improvements in the physical-level control of the experiment.

  14. Optimization of Surface-Enhanced Raman Spectroscopy Conditions for Implementation into a Microfluidic Device for Drug Detection.

    PubMed

    Kline, Neal D; Tripathi, Ashish; Mirsafavi, Rustin; Pardoe, Ian; Moskovits, Martin; Meinhart, Carl; Guicheteau, Jason A; Christesen, Steven D; Fountain, Augustus W

    2016-11-01

    A microfluidic device is being developed by the University of California-Santa Barbara as part of a joint effort with the United States Army to develop a portable, rapid drug detection device. Surface-enhanced Raman spectroscopy (SERS) is used to provide a sensitive, selective detection technique within the microfluidic platform, employing metallic nanoparticles as the SERS medium. Using several illicit drugs as analytes, the work presented here describes the efforts of the Edgewood Chemical Biological Center to optimize the microfluidic platform by investigating the role of nanoparticle material, nanoparticle size, excitation wavelength, and capping agents on the performance, and the drug concentration detection limits achievable with Ag and Au nanoparticles that will ultimately be incorporated into the final design. This study is particularly important as it lays out a systematic comparison of limits of detection and potential interferences from working with several nanoparticle capping agents, such as tannate, citrate, and borate, which does not seem to have been done previously, as the majority of studies concentrate only on citrate as the capping agent. Morphine, cocaine, and methamphetamine were chosen as test analytes for this study and were observed to have limits of detection (LOD) in the range of (1.5-4.7) × 10^-8 M (4.5-13 ng/mL), with the borate capping agent having the best performance.

  15. Implementations of the optimal multigrid algorithm for the cell-centered finite difference on equilateral triangular grids

    SciTech Connect

    Ewing, R.E.; Saevareid, O.; Shen, J.

    1994-12-31

    A multigrid algorithm for the cell-centered finite difference on equilateral triangular grids for solving second-order elliptic problems is proposed. This finite difference is a four-point star stencil in a two-dimensional domain and a five-point star stencil in a three-dimensional domain. According to the authors' analysis, the advantages of this finite difference are that it is an O(h^2)-accurate numerical scheme for both the solution and derivatives on equilateral triangular grids, the structure of the scheme is perhaps the simplest, and its corresponding multigrid algorithm is easily constructed with an optimal convergence rate. They are interested in relaxation of the equilateral triangular grid condition to certain general triangular grids and the application of this multigrid algorithm as a numerically reasonable preconditioner for the lowest-order Raviart-Thomas mixed triangular finite element method. Numerical test results are presented to demonstrate their analytical results and to investigate the applications of this multigrid algorithm on general triangular grids.

  16. Compiling Utility Requirements For New Nuclear Power Plant Project

    SciTech Connect

    Patrakka, Eero

    2002-07-01

    Teollisuuden Voima Oy (TVO) submitted in November 2000 to the Finnish Government an application for a Decision-in-Principle concerning the construction of a new nuclear power plant in Finland. The actual investment decision can be made only after a positive decision has been made by the Government and the Parliament. In parallel with the licensing process, technical preparedness has been maintained so that the procurement process can be commenced without delay, when needed. This includes the definition of requirements for the plant and preliminary preparation of bid inquiry specifications. The core of the technical requirements corresponds to the specifications presented in the European Utility Requirement (EUR) document, compiled by major European electricity producers. Quite naturally, a number of modifications to the EUR document are needed to take into account the country- and site-specific conditions as well as the experience gained in the operation of the existing NPP units. Along with the EUR-related requirements concerning the nuclear island and power generation plant, requirements are specified for the scope of supply as well as for a variety of issues related to project implementation. (author)

  17. Cross-Compiler for Modeling Space-Flight Systems

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    Ripples is a computer program that makes it possible to specify arbitrarily complex space-flight systems in an easy-to-learn, high-level programming language and to have the specification automatically translated into LibSim, which is a text-based computing language in which such simulations are implemented. LibSim is a very powerful simulation language, but learning it takes considerable time, and it requires that models of systems and their components be described at a very low level of abstraction. To construct a model in LibSim, it is necessary to go through a time-consuming process that includes modeling each subsystem, including defining its fault-injection states, input and output conditions, and the topology of its connections to other subsystems. Ripples makes it possible to describe the same models at a much higher level of abstraction, thereby enabling the user to build models faster and with fewer errors. Ripples can be executed in a variety of computers and operating systems, and can be supplied in either source code or binary form. It must be run in conjunction with a Lisp compiler.

  18. Design and implementation of a calibrated hyperspectral small-animal imager: Practical and theoretical aspects of system optimization

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas Josiah

    Pre-clinical imaging has been an important development within the bioscience and pharmacology fields. A rapidly growing area within these fields is small animal fluorescence imaging, in which molecularly targeted fluorescent probes are used to non-invasively image internal events on a gross anatomical scale. Small-animal fluorescence imaging has transitioned from a research technique to pre-clinical technology very quickly, due to its molecular specificity, low cost, and relative ease of use. In addition, its potential uses in gene therapy and as a translational technology are becoming evident. This thesis outlines the development of an alternative modality for small animal/tissue imaging, using hyperspectral techniques to enable the collection of fluorescence images at different excitation and emission wavelengths. Specifically, acousto-optic tunable filters (AOTFs) were used to construct emission-wavelength-scanning and excitation-wavelength-scanning small animal fluorescence imagers. Statistical, classification, and unmixing algorithms have been employed to extract specific fluorescent-dye information from hyperspectral image sets. In this work, we have designed and implemented hyperspectral imaging and analysis techniques to remove background autofluorescence from the desired fluorescence signal, resulting in highly specific and localized fluorescence. Therefore, in practice, it is possible to more accurately pinpoint the location and size of diagnostic anatomical markers (e.g. tumors) labeled with fluorescent probes. Furthermore, multiple probes can be individually distinguished. In addition to imaging hardware and acquisition and analysis software, we have designed an optical tissue phantom for quality control and inter-system comparison. The phantom has been modeled using Monte Carlo techniques. The culmination of this work results in an understanding of the advantages and complexities in applying hyperspectral techniques to small animal fluorescence
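
    The unmixing step mentioned above is, in its simplest linear form, a least-squares fit of each measured pixel spectrum onto known endmember spectra (for example, a targeted fluorophore and tissue autofluorescence). The NumPy sketch below uses fabricated Gaussian endmembers purely to show the computation; it is not the classification or unmixing pipeline developed in the thesis.

      import numpy as np

      wavelengths = np.linspace(500, 700, 41)        # nm, hypothetical emission band

      def gaussian(center, width):
          return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

      # fabricated endmember spectra: fluorescent probe and tissue autofluorescence
      endmembers = np.column_stack([gaussian(600, 12), gaussian(540, 40)])

      # simulate a measured pixel spectrum: 30% probe, 70% autofluorescence + noise
      rng = np.random.default_rng(3)
      measured = endmembers @ np.array([0.3, 0.7]) + 0.01 * rng.standard_normal(41)

      # linear unmixing by ordinary least squares (kept unconstrained for brevity)
      abundances, *_ = np.linalg.lstsq(endmembers, measured, rcond=None)
      print(abundances)                              # roughly [0.3, 0.7]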

  19. Strategies for Optimal Implementation of Simulated Clients for Measuring Quality of Care in Low- and Middle-Income Countries.

    PubMed

    Fitzpatrick, Anne; Tumlinson, Katherine

    2017-01-26

    The use of simulated clients or "mystery clients" is a data collection approach in which a study team member presents at a health care facility or outlet pretending to be a real customer, patient, or client. Following the visit, the shopper records her observations. The use of mystery clients can overcome challenges of obtaining accurate measures of health care quality and improve the validity of quality assessments, particularly in low- and middle-income countries. However, mystery client studies should be carefully designed and monitored to avoid problems inherent to this data collection approach. In this article, we discuss our experiences with the mystery client methodology in studies conducted in public- and private-sector health facilities in Kenya and in private-sector facilities in Uganda. We identify both the benefits and the challenges in using this methodology to guide other researchers interested in using this technique. Recruitment of appropriate mystery clients who accurately represent the facility's clientele, have strong recall of recent events, and are comfortable in their role as undercover data collectors are key to successful implementation of this methodology. Additionally, developing detailed training protocols can help ensure mystery clients behave identically and mimic real patrons accurately while short checklists can help ensure mystery client responses are standardized. Strict confidentiality and protocols to avoid unnecessary exams or procedures should also be stressed during training and monitored carefully throughout the study. Despite these challenges, researchers should consider mystery client designs to measure actual provider behavior and to supplement self-reported provider behavior. Data from mystery client studies can provide critical insight into the quality of service provision unavailable from other data collection methods. The unique information available from the mystery client approach far outweighs the cost.

  20. Optimization of exopolysaccharide production by Tremella mesenterica NRRL Y-6158 through implementation of fed-batch fermentation.

    PubMed

    De Baets, S; Du Laing, S; François, C; Vandamme, E J

    2002-10-01

    In liquid culture conditions, the yeast-like fungus Tremella mesenterica occurs in the yeast state and synthesizes an exopolysaccharide (EPS) capsule, which is eventually released into the culture fluid. It is composed of an alpha-1,3-D-mannan backbone, to which beta-1,2 side chains are attached, consisting of D-xylose and D-glucuronic acid. Potato dextrose broth (PDB) seemed to be an excellent medium for both growth of the yeast cells and synthesis of the EPS. This medium is composed solely of an extract of potatoes to which glucose was added. Yet an important disadvantage of this production medium is the presence of starch in the potato extract, since Tremella cells are not capable of metabolizing this component; furthermore, it coprecipitates upon isolation of the polymer [3]. In this respect, it was essential to remove the starch in order to achieve high polysaccharide production and recovery. A good method was the removal of starch through ultrafiltration of the PDB medium before inoculation of the strain. This resulted in an excellent starch-free medium in which other components essential for polysaccharide production were still present [3]. Through implementation of single and cyclic fed-batch fermentations with glucose feed, 1.6- and 2.2-fold increases in EPS yield were obtained, respectively. Lowering the carbon source level by using a cyclic fed-batch technique might decrease the osmotic effect of glucose or any catabolite regulation possibly exerted by this sugar on enzymes involved in EPS synthesis.

  1. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of...

  2. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of...

  3. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of...

  4. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of...

  5. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 1 2013-04-01 2013-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of...

  6. 7 CFR 1.21 - Compilation of new records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false Compilation of new records. 1.21 Section 1.21 Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Official Records § 1.21 Compilation of new records. Nothing in 5 U.S.C. 552 or this subpart requires that any agency create a...

  7. 7 CFR 1.21 - Compilation of new records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 1 2013-01-01 2013-01-01 false Compilation of new records. 1.21 Section 1.21 Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Official Records § 1.21 Compilation of new records. Nothing in 5 U.S.C. 552 or this subpart requires that any agency create a...

  8. 22 CFR 227.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Senate and the Clerk of the House of Representatives a report containing a compilation of the information... 30 days after receipt of the report by the Secretary and the Clerk. (c) Information that involves... information shall not be available for public inspection. (e) The first semi-annual compilation shall...

  9. 38 CFR 45.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) NEW RESTRICTIONS ON LOBBYING Agency Reports § 45.600 Semi-annual compilation. (a) The head of... Representatives a report containing a compilation of the information contained in the disclosure reports received... report by the Secretary and the Clerk. (c) Information that involves intelligence matters shall...

  10. 40 CFR 34.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... NEW RESTRICTIONS ON LOBBYING Agency Reports § 34.600 Semi-annual compilation. (a) The head of each... report containing a compilation of the information contained in the disclosure reports received during... the Secretary and the Clerk. (c) Information that involves intelligence matters shall be reported...

  11. 12 CFR 411.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Reports § 411.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the... information contained in the disclosure reports received during the six-month period ending on March 31 or... public inspection 30 days after receipt of the report by the Secretary and the Clerk. (c)...

  12. 22 CFR 138.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Senate and the Clerk of the House of Representatives a report containing a compilation of the information... 30 days after receipt of the report by the Secretary and the Clerk. (c) Information that involves... information shall not be available for public inspection. (e) The first semi-annual compilation shall...

  13. 32 CFR 28.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... REGULATIONS NEW RESTRICTIONS ON LOBBYING Agency Reports § 28.600 Semi-annual compilation. (a) The head of each... report containing a compilation of the information contained in the disclosure reports received during... the Secretary and the Clerk. (c) Information that involves intelligence matters shall be reported...

  14. 45 CFR 93.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... LOBBYING Agency Reports § 93.600 Semi-annual compilation. (a) The head of each agency shall collect and... compilation of the information contained in the disclosure reports received during the six-month period ending... Clerk. (c) Information that involves intelligence matters shall be reported only to the Select...

  15. 15 CFR 28.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agency Reports § 28.600 Semi-annual compilation. (a) The head of each agency shall collect and compile... the information contained in the disclosure reports received during the six-month period ending on.... (c) Information that involves intelligence matters shall be reported only to the Select Committee...

  16. Texture compilation for example-based synthesis

    NASA Astrophysics Data System (ADS)

    J, Amjath Ali; J, Janet

    2011-10-01

    In this paper, a new exemplar-based framework is presented, which treats image completion, texture synthesis and texture analysis in a unified manner. To avoid visually inconsistent results, we pose all of the image-editing tasks in the form of a discrete global optimization problem. The objective function of this problem is always well-defined, and corresponds to the energy of a discrete Markov Random Field (MRF). For efficiently optimizing this MRF, a novel optimization scheme, called Priority-BP, is then proposed, which carries two very important extensions over the standard Belief Propagation (BP) algorithm: "priority-based message scheduling" and "dynamic label pruning". These two extensions work in cooperation to deal with the otherwise intolerable computational cost of BP, which is caused by the huge number of labels associated with our MRF. Experimental results on a wide variety of input images are presented, which demonstrate the effectiveness of our image-completion framework for tasks such as object removal, texture synthesis, text removal and texture analysis.
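
    To make the two extensions named above concrete, the sketch below runs a priority-driven variant of min-sum belief propagation with dynamic label pruning on a toy one-dimensional MRF. It is only a minimal illustration: the chain length, potentials, pruning threshold and confidence heuristic are assumptions chosen for brevity and do not reproduce the paper's image-completion model.

      import numpy as np

      # Toy MRF: a short chain of nodes, each taking one of n_labels labels.
      n_nodes, n_labels = 6, 8
      rng = np.random.default_rng(0)
      unary = rng.random((n_nodes, n_labels))                      # data costs
      lab = np.arange(n_labels)
      pairwise = 0.1 * np.abs(lab[:, None] - lab)                  # smoothness costs

      neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n_nodes]
                   for i in range(n_nodes)}
      messages = {(i, j): np.zeros(n_labels)
                  for i in neighbors for j in neighbors[i]}
      active = {i: lab.copy() for i in range(n_nodes)}             # surviving labels

      def beliefs(i):
          return unary[i] + sum(messages[(j, i)] for j in neighbors[i])

      def priority(i):
          b = beliefs(i)[active[i]]
          return -(b.max() - b.min())        # larger cost spread = more confident

      for sweep in range(4):
          for i in sorted(range(n_nodes), key=priority):           # confident nodes first
              b = beliefs(i)
              active[i] = np.flatnonzero(b <= b.min() + 1.0)       # dynamic label pruning
              for j in neighbors[i]:                               # messages over kept labels only
                  incoming = sum(messages[(k, i)] for k in neighbors[i] if k != j)
                  cost = (unary[i] + incoming)[active[i], None] + pairwise[active[i], :]
                  messages[(i, j)] = cost.min(axis=0) - cost.min()

      print("approximate MAP labels:", [int(np.argmin(beliefs(i))) for i in range(n_nodes)])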

  17. Automatic controls and regulators: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Devices, methods, and techniques for control and regulation of the mechanical/physical functions involved in implementing the space program are discussed. Section one deals with automatic controls considered to be, essentially, start-stop operations or those holding the activity in a desired constraint. Devices that may be used to regulate activities within desired ranges or subject them to predetermined changes are dealt with in section two.

  18. Applying Loop Optimizations to Object-oriented Abstractions Through General Classification of Array Semantics

    SciTech Connect

    Yi, Q; Quinlan, D

    2004-03-05

    Optimizing compilers have a long history of applying loop transformations to C and Fortran scientific applications. However, such optimizations are rare in compilers for object-oriented languages such as C++ or Java, where loops operating on user-defined types are left unoptimized due to their unknown semantics. Our goal is to reduce the performance penalty of using high-level object-oriented abstractions. We propose an approach that allows explicit communication between programmers and compilers. We have extended the traditional Fortran loop optimizations with an open interface. Through this interface, we have developed techniques to automatically recognize and optimize user-defined array abstractions. In addition, we have developed an adapted constant-propagation algorithm to automatically propagate properties of abstractions. We have implemented these techniques in a C++ source-to-source translator and have applied them to optimize several kernels written using an array-class library. Our experimental results show that using our approach, applications using high-level abstractions can achieve comparable, and in some cases superior, performance to that achieved by efficient low-level hand-written codes.
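
    The kind of rewrite such a translator performs can be illustrated with a deliberately simple sketch (written here in Python rather than C++ for brevity). The Vec class and the fuse_axpy helper are hypothetical stand-ins: the point is only that knowing the semantics of an elementwise array abstraction lets a tool replace a chain of temporary-allocating operations with one fused loop.

      # Naive user-level array abstraction: every operator allocates a temporary.
      class Vec:
          def __init__(self, data):
              self.data = list(data)

          def __add__(self, other):                  # one loop + one temporary
              return Vec(a + b for a, b in zip(self.data, other.data))

          def __mul__(self, scalar):                 # another loop + temporary
              return Vec(a * scalar for a in self.data)

      def fuse_axpy(alpha, x, y):
          """What an abstraction-aware translator could emit for `x * alpha + y`:
          a single loop with no intermediate Vec objects."""
          return Vec(alpha * a + b for a, b in zip(x.data, y.data))

      x, y = Vec([1.0, 2.0, 3.0]), Vec([4.0, 5.0, 6.0])
      naive = x * 2.0 + y                 # two loops, one temporary
      fused = fuse_axpy(2.0, x, y)        # one fused loop
      assert naive.data == fused.data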

  19. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  20. ROSE: The Design of a General Tool for the Independent Optimization of Object-Oriented Frameworks

    SciTech Connect

    Davis, K.; Philip, B.; Quinlan, D.

    1999-05-18

    ROSE represents a programmable preprocessor for the highly aggressive optimization of C++ object-oriented frameworks. A fundamental feature of ROSE is that it preserves the semantics, the implicit meaning, of the object-oriented framework's abstractions throughout the optimization process, permitting the framework's abstractions to be recognized and optimizations to capitalize upon the added value of the framework's true meaning. In contrast, a C++ compiler only sees the semantics of the C++ language and thus is severely limited in what optimizations it can introduce. The use of the semantics of the framework's abstractions avoids program analysis that would be incapable of recapturing the framework's full semantics from those of the C++ language implementation of the application or framework. Likewise, no level of program analysis within the C++ compiler could be expected to recognize the use of adaptive mesh refinement and introduce optimizations based upon such information. Since ROSE is programmable, additional specialized program analysis is possible which then complements the semantics of the framework's abstractions. Enabling an optimization mechanism to use the high level semantics of the framework's abstractions together with a programmable level of program analysis (e.g. dependence analysis), at the level of the framework's abstractions, allows for the design of high performance object-oriented frameworks with uniquely tailored sophisticated optimizations far beyond the limits of contemporary serial FORTRAN 77, C or C++ language compiler technology. In short, faster, more highly aggressive optimizations are possible. The resulting optimizations are literally driven by the framework's definition of its abstractions. Since the abstractions within a framework are of third party design the optimizations are similarly of third party design, specifically independent of the compiler and the applications that use the framework. The interface to ROSE is

  1. NACRE: A European Compilation of Reaction Rates for Astrophysics

    SciTech Connect

    Carmen Angulo

    1999-12-31

    We report on the program and results of the NACRE network (Nuclear Astrophysics Compilation of Reaction rates). We have compiled low-energy cross section data for 86 charged-particle induced reactions involving light (1 {<=} Z {<=} 14) nuclei. The corresponding Maxwellian-averaged thermonuclear reactions rates are calculated in the temperature range from 10{sup 6} K to 10{sup 10} K. The web site, http://pntpm.ulb.ac.be/nacre.htm, including the cross section data base and the reaction rates, allows users to browse electronically all the information on the reactions studied in this compilation.
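
    For reference, the Maxwellian-averaged rate per particle pair that such compilations tabulate is conventionally written (this is the standard textbook expression, not a NACRE-specific definition) as

      \[
      N_A \langle \sigma v \rangle \;=\; N_A \left(\frac{8}{\pi \mu}\right)^{1/2} \frac{1}{(k_B T)^{3/2}} \int_0^{\infty} \sigma(E)\, E \, \exp\!\left(-\frac{E}{k_B T}\right) \mathrm{d}E ,
      \]

    where μ is the reduced mass of the interacting pair, T the temperature, σ(E) the cross section at centre-of-mass energy E, and N_A Avogadro's number.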

  2. NACRE: A European Compilation of Reaction rates for Astrophysics

    SciTech Connect

    Angulo, Carmen

    1999-11-16

    We report on the program and results of the NACRE network (Nuclear Astrophysics Compilation of REaction rates). We have compiled low-energy cross section data for 86 charged-particle induced reactions involving light (1{<=}Z{<=}14) nuclei. The corresponding Maxwellian-averaged thermonuclear reactions rates are calculated in the temperature range from 10{sup 6} K to 10{sup 10} K. The web site http://pntpm.ulb.ac.be/nacre.htm, including the cross section data base and the reaction rates, allows users to browse electronically all the information on the reactions studied in this compilation.

  3. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The functional requirements to be met by the HAL/S-FC compiler, and the hardware and software compatibilities between the compiler system and the environment in which it operates are defined. Associated runtime facilities and the interface with the Software Development Laboratory are specified. The construction of the HAL/S-FC system as functionally separate units and the interfaces between those units is described. An overview of the system's capabilities is presented and the hardware/operating system requirements are specified. The computer-dependent aspects of the HAL/S-FC are also specified. Compiler directives are included.

  4. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ either computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory, using clusters that supply the high-performance computing needed to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  5. An implementation of SISAL for distributed-memory architectures

    SciTech Connect

    Beard, Patrick C.

    1995-06-01

    This thesis describes a new implementation of the implicitly parallel functional programming language SISAL, for massively parallel processor supercomputers. The Optimizing SISAL Compiler (OSC), developed at Lawrence Livermore National Laboratory, was originally designed for shared-memory multiprocessor machines and has been adapted to distributed-memory architectures. OSC has been relatively portable between shared-memory architectures, because they are architecturally similar, and OSC generates portable C code. However, distributed-memory architectures are not standardized -- each has a different programming model. Distributed-memory SISAL depends on a layer of software that provides a portable, distributed, shared-memory abstraction. This layer is provided by Split-C, a dialect of the C programming language developed at U.C. Berkeley, which has demonstrated good performance on distributed-memory architectures. Split-C provides important capabilities for good performance: support for program-specific distributed data structures, and split-phase memory operations. Distributed data structures help achieve good memory locality, while split-phase memory operations help tolerate the longer communication latencies inherent in distributed-memory architectures. The distributed-memory SISAL compiler and run-time system takes advantage of these capabilities. The result of these efforts is a compiler that runs identically on the Thinking Machines Connection Machine (CM-5) and the Meiko Computing Surface (CS-2).

  6. Analytic Energy Gradients and Spin Multiplicities for Orbital-Optimized Second-Order Perturbation Theory with Density-Fitting Approximation: An Efficient Implementation.

    PubMed

    Bozkaya, Uğur

    2014-10-14

    An efficient implementation of analytic energy gradients and spin multiplicities for the density-fitted orbital-optimized second-order perturbation theory (DF-OMP2) [Bozkaya, U. J. Chem. Theory Comput. 2014, 10, 2371-2378] is presented. The DF-OMP2 method is applied to a set of alkanes, conjugated dienes, and noncovalent interaction complexes to compare the cost of single point analytic gradient computations with the orbital-optimized MP2 with the resolution of the identity approach (OO-RI-MP2) [Neese, F.; Schwabe, T.; Kossmann, S.; Schirmer, B.; Grimme, S. J. Chem. Theory Comput. 2009, 5, 3060-3073]. Our results demonstrate that the DF-OMP2 method provides substantially lower computational costs for analytic gradients than OO-RI-MP2. On average, the cost of DF-OMP2 analytic gradients is 9-11 times lower than that of OO-RI-MP2 for systems considered. We also consider aromatic bond dissociation energies, for which MP2 provides poor reaction energies. The DF-OMP2 method exhibits a substantially better performance than MP2, providing a mean absolute error of 2.5 kcal mol(-1), which is more than 9 times lower than that of MP2 (22.6 kcal mol(-1)). Overall, the DF-OMP2 method appears very helpful for electronically challenging chemical systems such as free radicals or other cases where standard MP2 proves unreliable. For such problematic systems, we recommend using DF-OMP2 instead of the canonical MP2 as a more robust method with the same computational scaling.

  7. Materials: A compilation. [considering metallurgy, polymers, insulation, and coatings

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technical information is provided for the properties and fabrication of metals and alloys, as well as for polymeric materials, such as lubricants, coatings, and insulation. Available patent information is included in the compilation.

  8. Rehosting and retargeting an Ada compiler: A design study

    NASA Technical Reports Server (NTRS)

    Robinson, Ray

    1986-01-01

    The goal of this study was to develop a plan for rehosting and retargeting the Air Force Armaments Lab. Ada cross compiler. This compiler was validated in Sept. 1985 using ACVC 1.6, is written in Pascal, is hosted on a CDC Cyber 170, and is targeted to an embedded Zilog Z8002. The study was performed to determine the feasibility, cost, time, and tasks required to retarget the compiler to a DEC VAX 11/78x and rehost it to an embedded U.S. Navy AN/UYK-44 computer. Major tasks identified were rehosting the compiler front end, rewriting the back end (code generator), translating the run time environment from Z8002 assembly language to AN/UYK-44 assembly language, and developing a library manager.

  9. Implementing the Global Plan to Stop TB, 2011–2015 – Optimizing Allocations and the Global Fund’s Contribution: A Scenario Projections Study

    PubMed Central

    Korenromp, Eline L.; Glaziou, Philippe; Fitzpatrick, Christopher; Floyd, Katherine; Hosseini, Mehran; Raviglione, Mario; Atun, Rifat; Williams, Brian

    2012-01-01

    control programs to implement a more optimal investment approach focusing on highest-impact populations and interventions. PMID:22719954

  10. galkin: A new compilation of Milky Way rotation curve data

    NASA Astrophysics Data System (ADS)

    Pato, Miguel; Iocco, Fabio

    We present galkin, a novel compilation of kinematic measurements tracing the rotation curve of our Galaxy together with a tool to treat the data. The compilation is optimised to Galactocentric radii between 3 and 20 kpc and includes the kinematics of gas, stars and masers in a total of 2780 measurements carefully collected from almost four decades of literature. A simple, user-friendly tool is provided to select, treat and retrieve the full database.

  11. Nimble Compiler Environment for Agile Hardware. Volume 1

    DTIC Science & Technology

    2001-10-01

    programs to be compiled directly into custom silicon or reconfigurable architectures. Some other novel hardware synthesis systems compile Java 24, Matlab ...appreciable acceleration in some compute intensive applications. Such systems have been very difficult to program though and thus have not been exploited...for their benefits. The problem is the lack of an appropriate design environment for system engineers like those typically found in digital signal

  12. An experimental APL compiler for a distributed memory parallel machine

    SciTech Connect

    Ching, W.M.; Katz, A.

    1994-12-31

    The authors developed an experimental APL compiler for the IBM SP1 distributed memory parallel machine. It accepts classical APL programs, without additional directives, and generates parallelized C code for execution on the SP1 machine. The compiler exploits data parallelism in APL programs based on parallel high level primitives. Program variables are either replicated or partitioned. They also present performance data for five moderate size programs running on the SP1.

  13. Compiler writing system detail design specification. Volume 2: Component specification

    NASA Technical Reports Server (NTRS)

    Arthur, W. J.

    1974-01-01

    The logic modules and data structures composing the Meta-translator module are described. This module is responsible for the actual generation of the executable language compiler as a function of the input Meta-language. Machine definitions are also processed and are placed as encoded data on the compiler library data file. The transformation of intermediate language into target language object text is described.

  14. On search guide phrase compilation for recommending home medical products.

    PubMed

    Luo, Gang

    2010-01-01

    To help people find desired home medical products (HMPs), we developed an intelligent personal health record (iPHR) system that can automatically recommend HMPs based on users' health issues. Using nursing knowledge, we pre-compile a set of "search guide" phrases that provides semantic translation from words describing health issues to their underlying medical meanings. Then iPHR automatically generates queries from those phrases and uses them and a search engine to retrieve HMPs. To avoid missing relevant HMPs during retrieval, the compiled search guide phrases need to be comprehensive. Such compilation is a challenging task because nursing knowledge updates frequently and contains numerous details scattered in many sources. This paper presents a semi-automatic tool facilitating such compilation. Our idea is to formulate the phrase compilation task as a multi-label classification problem. For each newly obtained search guide phrase, we first use nursing knowledge and information retrieval techniques to identify a small set of potentially relevant classes with corresponding hints. Then a nurse makes the final decision on assigning this phrase to proper classes based on those hints. We demonstrate the effectiveness of our techniques by compiling search guide phrases from an occupational therapy textbook.
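
    A toy sketch of the semi-automatic step described above follows: for a newly obtained phrase, candidate classes are scored by simple term overlap and the top suggestions are shown as hints for a nurse to confirm. The class names, keyword lists and scoring rule are invented for illustration and are not the iPHR system's actual data or method.

      # Hypothetical class -> keyword lists standing in for nursing knowledge.
      CLASS_KEYWORDS = {
          "mobility aids": {"walking", "walker", "cane", "wheelchair", "gait"},
          "bathroom safety": {"bath", "shower", "toilet", "grab", "bar"},
          "wound care": {"wound", "dressing", "bandage", "ulcer", "skin"},
      }

      def suggest_classes(phrase, top_k=2):
          """Rank candidate classes for a search-guide phrase by term overlap."""
          terms = set(phrase.lower().split())
          scores = {label: len(terms & kw) for label, kw in CLASS_KEYWORDS.items()}
          ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
          return [label for label, score in ranked[:top_k] if score > 0]

      # A human reviewer accepts or rejects these hints before the phrase is
      # added to the compiled set of search guide phrases.
      print(suggest_classes("difficulty walking after hip surgery"))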

  15. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  16. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.

    PubMed

    Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.
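
    The two ideas combined in JiTTree, choosing a per-block representation from measured sparsity and then generating traversal code specialized to exactly that choice, can be mimicked in a few lines of Python. The block layouts, density threshold and generated accessor below are assumptions for illustration; the actual system targets GPU volume rendering, not CPU Python.

      import numpy as np

      def choose_layout(block):
          """Pick a representation for one block based on its measured density."""
          density = np.count_nonzero(block) / block.size
          return "dense" if density > 0.25 else "coo"     # coordinate list when sparse

      def build_accessor(blocks):
          """Generate and compile an accessor specialized to each block's layout,
          so lookups do not branch over every possible representation."""
          lines = ["def sample(bid, i):"]
          for bid, (layout, _) in enumerate(blocks):
              lines.append(f"    if bid == {bid}:")
              if layout == "dense":
                  lines.append(f"        return float(DATA[{bid}][1][i])")
              else:   # coo: index -> value mapping, zero elsewhere
                  lines.append(f"        return DATA[{bid}][1].get(i, 0.0)")
          lines.append("    raise IndexError(bid)")
          namespace = {"DATA": blocks}
          exec(compile("\n".join(lines), "<jit-accessor>", "exec"), namespace)
          return namespace["sample"]

      rng = np.random.default_rng(1)
      raw = [rng.random(16) * (rng.random(16) < p) for p in (0.9, 0.05)]   # dense-ish, sparse
      blocks = []
      for block in raw:
          layout = choose_layout(block)
          store = block if layout == "dense" else {int(i): float(block[i])
                                                   for i in np.flatnonzero(block)}
          blocks.append((layout, store))

      sample = build_accessor(blocks)
      print(sample(0, 3), sample(1, 3))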

  17. Real-time robot deliberation by compilation and monitoring of anytime algorithms

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo

    1994-01-01

    Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade off. A model of compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real time robotic systems that automatically adjust resource allocation to yield optimum performance.
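
    As a minimal illustration of the trade-off described above, the sketch below runs an anytime estimator whose result improves with each iteration, together with a monitor that stops deliberation once the quality gain per unit time falls below a threshold. The estimator (Monte Carlo estimation of pi), the quality proxy and the stopping rule are all assumptions made for this sketch, not the paper's compilation and monitoring model.

      import random
      import time

      def anytime_pi(budget_s=0.5, min_gain_per_s=1e-4):
          """Anytime Monte Carlo estimate of pi, stopped by a simple monitor."""
          random.seed(0)
          inside = total = 0
          estimate, quality = 0.0, 0.0
          start = last = time.perf_counter()
          while time.perf_counter() - start < budget_s:
              for _ in range(10_000):                       # one improvement step
                  x, y = random.random(), random.random()
                  inside += (x * x + y * y) <= 1.0
                  total += 1
              estimate = 4.0 * inside / total
              new_quality = 1.0 - 1.0 / total ** 0.5        # crude statistical-error proxy
              now = time.perf_counter()
              gain_rate = (new_quality - quality) / max(now - last, 1e-9)
              quality, last = new_quality, now
              if gain_rate < min_gain_per_s:                # monitor: stop deliberating
                  break
          return estimate, quality

      print(anytime_pi())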

  18. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    PubMed

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries including Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as Intel Fortran compiler (ifc/efc) 7.1 and PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about a 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when the original, unmodified optimization options enclosed in the software are used. Nevertheless, if extensive compiler tuning options are used, the speed can be further improved by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler performs better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is further accelerated by 2.6% compared to that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to fulfill significantly higher throughput than IA32, which is consistent with the SpecFPrate2000 benchmarks.

  19. A compilation of global bio-optical in situ data for ocean-colour satellite applications

    NASA Astrophysics Data System (ADS)

    Valente, André; Sathyendranath, Shubha; Brotas, Vanda; Groom, Steve; Grant, Michael; Taberner, Malcolm; Antoine, David; Arnone, Robert; Balch, William M.; Barker, Kathryn; Barlow, Ray; Bélanger, Simon; Berthon, Jean-François; Beşiktepe, Şükrü; Brando, Vittorio; Canuti, Elisabetta; Chavez, Francisco; Claustre, Hervé; Crout, Richard; Frouin, Robert; García-Soto, Carlos; Gibb, Stuart W.; Gould, Richard; Hooker, Stanford; Kahru, Mati; Klein, Holger; Kratzer, Susanne; Loisel, Hubert; McKee, David; Mitchell, Brian G.; Moisan, Tiffany; Muller-Karger, Frank; O'Dowd, Leonie; Ondrusek, Michael; Poulton, Alex J.; Repecaud, Michel; Smyth, Timothy; Sosik, Heidi M.; Twardowski, Michael; Voss, Kenneth; Werdell, Jeremy; Wernand, Marcel; Zibordi, Giuseppe

    2016-06-01

    A compiled set of in situ data is important to evaluate the quality of ocean-colour satellite-data records. Here we describe the data compiled for the validation of the ocean-colour products from the ESA Ocean Colour Climate Change Initiative (OC-CCI). The data were acquired from several sources (MOBY, BOUSSOLE, AERONET-OC, SeaBASS, NOMAD, MERMAID, AMT, ICES, HOT, GeP&CO), span between 1997 and 2012, and have a global distribution. Observations of the following variables were compiled: spectral remote-sensing reflectances, concentrations of chlorophyll a, spectral inherent optical properties and spectral diffuse attenuation coefficients. The data were from multi-project archives acquired via the open internet services or from individual projects, acquired directly from data providers. Methodologies were implemented for homogenisation, quality control and merging of all data. No changes were made to the original data, other than averaging of observations that were close in time and space, elimination of some points after quality control and conversion to a standard format. The final result is a merged table designed for validation of satellite-derived ocean-colour products and available in text format. Metadata of each in situ measurement (original source, cruise or experiment, principal investigator) were preserved throughout the work and made available in the final table. Using all the data in a validation exercise increases the number of matchups and enhances the representativeness of different marine regimes. By making available the metadata, it is also possible to analyse each set of data separately. The compiled data are available at doi:10.1594/PANGAEA.854832 (Valente et al., 2015).

  20. Compilation of a standardised international folate database for EPIC.

    PubMed

    Nicolas, Geneviève; Witthöft, Cornelia M; Vignat, Jérôme; Knaze, Viktoria; Huybrechts, Inge; Roe, Mark; Finglas, Paul; Slimani, Nadia

    2016-02-15

    This paper describes the methodology applied for compiling an "international end-user" folate database. This work benefits from the unique dataset offered by the European Prospective Investigation into Cancer and Nutrition (EPIC) (N=520,000 subjects in 23 centres). Compilation was done in four steps: (1) identify folate-free foods then find folate values for (2) folate-rich foods common across EPIC countries, (3) the remaining "common" foods, and (4) "country-specific" foods. Compiled folate values were concurrently standardised in terms of unit, mode of expression and chemical analysis, using information in national food composition tables (FCT). 43-70% of total folate values were documented as measured by microbiological assay. Foods reported in EPIC were either matched directly to FCT foods, treated as recipes or weighted averages. This work has produced the first standardised folate dataset in Europe, which was used to calculate folate intakes in EPIC; a prerequisite to study the relation between folate intake and diseases.

  1. Compilation of “Subject Term List in Chinese Words”

    NASA Astrophysics Data System (ADS)

    Gao, Chong Qian; Li, Guo Hua

    “Subject Term List in Chinese Words” was compiled by about 1,300 specialists who belonged to more than 500 units of the Research Institute of Scientific and Technological Information, Peking Library and other institutes, and was published in Oct., 1979. It is the largest and most comprehensive thesaurus in China, including about 100,000 subject terms (and associated descriptors) and covering almost all areas in the social sciences and science & technology fields. The structure, compilation rules and the application of the List are described in detail.

  2. Compilation and analysis of Escherichia coli promoter DNA sequences.

    PubMed Central

    Hawley, D K; McClure, W R

    1983-01-01

    The DNA sequences of 168 promoter regions (-50 to +10) for Escherichia coli RNA polymerase were compiled. The complete listing was divided into two groups depending upon whether or not the promoter had been defined by genetic (promoter mutations) or biochemical (5' end determination) criteria. A consensus promoter sequence based on homologies among 112 well-defined promoters was determined that was in substantial agreement with previous compilations. In addition, we have tabulated 98 promoter mutations. Nearly all of the altered base pairs in the mutants conform to the following general rule: down-mutations decrease homology and up-mutations increase homology to the consensus sequence. PMID:6344016
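
    The way a consensus sequence is read off such a compilation can be shown with a few aligned strings and a per-position majority vote; the -10 region sequences below are made-up examples, not entries from the actual 168-promoter data set.

      from collections import Counter

      # Hypothetical aligned -10 hexamers; the real compilation aligns -50 to +10.
      aligned_minus10 = ["TATAAT", "TACGAT", "TATACT", "GATAAT", "TATGTT"]

      consensus = "".join(
          Counter(column).most_common(1)[0][0]        # majority base per position
          for column in zip(*aligned_minus10)
      )
      print(consensus)   # -> "TATAAT" for these toy sequences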

  3. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of the embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, the embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASIC's to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.

  4. Automated Vulnerability Detection for Compiled Smart Grid Software

    SciTech Connect

    Prowell, Stacy J; Pleszkoch, Mark G; Sayre, Kirk D; Linger, Richard C

    2012-01-01

    While testing performed with proper experimental controls can provide scientifically quantifiable evidence that software does not contain unintentional vulnerabilities (bugs), it is insufficient to show that intentional vulnerabilities exist, and impractical to certify devices for the expected long lifetimes of use. For both of these needs, rigorous analysis of the software itself is essential. Automated software behavior computation applies rigorous static software analysis methods based on function extraction (FX) to compiled software to detect vulnerabilities, intentional or unintentional, and to verify critical functionality. This analysis is based on the compiled firmware, takes into account machine precision, and does not rely on heuristics or approximations early in the analysis.

  5. 14 CFR 1203.302 - Combination, interrelation or compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Combination, interrelation or compilation. 1203.302 Section 1203.302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302...

  6. 14 CFR 1203.302 - Combination, interrelation or compilation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Combination, interrelation or compilation. 1203.302 Section 1203.302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302...

  7. 14 CFR 1203.302 - Combination, interrelation or compilation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Combination, interrelation or compilation. 1203.302 Section 1203.302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302...

  8. 14 CFR 1203.302 - Combination, interrelation or compilation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Combination, interrelation or compilation. 1203.302 Section 1203.302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302...

  9. Compilation of historical information of 300 Area facilities and activities

    SciTech Connect

    Gerber, M.S.

    1992-12-01

    This document is a compilation of historical information about 300 Area activities and facilities from their beginnings. The 300 Area is shown as it looked in 1945, and a more recent (1985) view of the 300 Area is also provided.

  10. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Semi-annual compilation. 146.600 Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING.... (c) Information that involves intelligence matters shall be reported only to the Select Committee...

  11. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Semi-annual compilation. 146.600 Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING.... (c) Information that involves intelligence matters shall be reported only to the Select Committee...

  12. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Semi-annual compilation. 146.600 Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING.... (c) Information that involves intelligence matters shall be reported only to the Select Committee...

  13. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Semi-annual compilation. 146.600 Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING.... (c) Information that involves intelligence matters shall be reported only to the Select Committee...

  14. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Semi-annual compilation. 146.600 Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING.... (c) Information that involves intelligence matters shall be reported only to the Select Committee...

  15. 22 CFR 712.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 2 2011-04-01 2009-04-01 true Semi-annual compilation. 712.600 Section 712.600 Foreign Relations OVERSEAS PRIVATE INVESTMENT CORPORATION ADMINISTRATIVE PROVISIONS NEW RESTRICTIONS ON... the Committee on Foreign Relations of the Senate and the Committee on Foreign Affairs of the House...

  16. 22 CFR 712.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Semi-annual compilation. 712.600 Section 712.600 Foreign Relations OVERSEAS PRIVATE INVESTMENT CORPORATION ADMINISTRATIVE PROVISIONS NEW RESTRICTIONS ON... the Committee on Foreign Relations of the Senate and the Committee on Foreign Affairs of the House...

  17. 5 CFR 9701.524 - Compilation and publication of data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....524 Section 9701.524 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES... SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Labor-Management Relations § 9701.524 Compilation and... to inspection and reproduction in accordance with 5 U.S.C. 552 and 552a. The HSLRB will...

  18. Valves and other mechanical components and equipment: A compilation

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The articles in this Compilation will be of interest to mechanical engineers, users and designers of machinery, and to those engineers and manufacturers specializing in fluid handling systems. Section 1 describes a number of valves and valve systems. Section 2 contains articles on machinery and mechanical devices that may have applications in a number of different areas.

  19. 10 CFR 1045.46 - Classification by association or compilation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Classification by association or compilation. 1045.46 Section 1045.46 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND DECLASSIFICATION Generation and Review of Documents Containing Restricted Data and Formerly Restricted Data §...

  20. 10 CFR 1045.46 - Classification by association or compilation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Classification by association or compilation. 1045.46 Section 1045.46 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND DECLASSIFICATION Generation and Review of Documents Containing Restricted Data and Formerly Restricted Data §...

  1. 10 CFR 1045.46 - Classification by association or compilation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Classification by association or compilation. 1045.46 Section 1045.46 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND DECLASSIFICATION Generation and Review of Documents Containing Restricted Data and Formerly Restricted Data §...

  2. 10 CFR 1045.46 - Classification by association or compilation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Classification by association or compilation. 1045.46 Section 1045.46 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND DECLASSIFICATION Generation and Review of Documents Containing Restricted Data and Formerly Restricted Data §...

  3. Electronic circuits: A compilation. [for electronic equipment in telecommunication

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A compilation containing articles on newly developed electronic circuits and systems is presented. It is divided into two sections: (1) section 1 on circuits and techniques of particular interest in communications technology, and (2) section 2 on circuits designed for a variety of specific applications. The latest patent information available is also given. Circuit diagrams are shown.

  4. 10 CFR 1045.46 - Classification by association or compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Classification by association or compilation. 1045.46 Section 1045.46 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND DECLASSIFICATION Generation and Review of Documents Containing Restricted Data and Formerly Restricted Data §...

  5. Effective Compiler Error Message Enhancement for Novice Programming Students

    ERIC Educational Resources Information Center

    Becker, Brett A.; Glanville, Graham; Iwashima, Ricardo; McDonnell, Claire; Goslin, Kyle; Mooney, Catherine

    2016-01-01

    Programming is an essential skill that many computing students are expected to master. However, programming can be difficult to learn. Successfully interpreting compiler error messages (CEMs) is crucial for correcting errors and progressing toward success in programming. Yet these messages are often difficult to understand and pose a barrier to…

  6. Solubility data are compiled for metals in liquid zinc

    NASA Technical Reports Server (NTRS)

    Dillon, I. G.; Johnson, I.

    1967-01-01

    Available data is compiled on the solubilities of various metals in liquid zinc. The temperature dependence of the solubility data is expressed using the empirical straight line relationship existing between the logarithm of the solubility and the reciprocal of the absolute temperature.

  7. Investigating the Scope of an Advance Organizer for Compiler Concepts.

    ERIC Educational Resources Information Center

    Levine, Lawrence H.; Loerinc, Beatrice M.

    1985-01-01

    Investigates effectiveness of advance organizers for teaching functioning and use of compilers to undergraduate students in computer science courses. Two experimental groups used the advance organizer while two control groups did not. Findings indicate that an explicitly concept-directed organizer is effective in providing a framework for…

  8. Compilation of detection sensitivities in thermal-neutron activation

    NASA Technical Reports Server (NTRS)

    Wahlgren, M. A.; Wing, J.

    1967-01-01

    Detection sensitivities of the chemical elements following thermal-neutron activation have been compiled from the available experimental cross sections and nuclear properties and presented in a concise and usable form. The report also includes the equations and nuclear parameters used in the calculations.

  9. 45 CFR 604.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 3 2011-10-01 2011-10-01 false Semi-annual compilation. 604.600 Section 604.600 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION NEW..., shall be available for public inspection 30 days after receipt of the report by the Secretary and...

  10. 45 CFR 604.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 3 2014-10-01 2014-10-01 false Semi-annual compilation. 604.600 Section 604.600 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION NEW..., shall be available for public inspection 30 days after receipt of the report by the Secretary and...

  11. A compilation of chase work characterizes this image, looking south, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    A compilation of chase work characterizes this image, looking south, in the niche which slightly separates E Building from R Building, on the north side - Department of Energy, Mound Facility, Electronics Laboratory Building (E Building), One Mound Road, Miamisburg, Montgomery County, OH

  12. Bitwise identical compiling setup: prospective for reproducibility and reliability of Earth system modeling

    NASA Astrophysics Data System (ADS)

    Li, R.; Liu, L.; Yang, G.; Zhang, C.; Wang, B.

    2016-02-01

    Reproducibility and reliability are fundamental principles of scientific research. A compiling setup that includes a specific compiler version and compiler flags is an essential technical support for Earth system modeling. With the fast development of computer software and hardware, a compiling setup has to be updated frequently, which challenges the reproducibility and reliability of Earth system modeling. The existing results of a simulation using an original compiling setup may be irreproducible by a newer compiling setup because trivial round-off errors introduced by the change in compiling setup can potentially trigger significant changes in simulation results. Regarding the reliability, a compiler with millions of lines of code may have bugs that are easily overlooked due to the uncertainties or unknowns in Earth system modeling. To address these challenges, this study shows that different compiling setups can achieve exactly the same (bitwise identical) results in Earth system modeling, and a set of bitwise identical compiling setups of a model can be used across different compiler versions and different compiler flags. As a result, the original results can be more easily reproduced; for example, the original results with an older compiler version can be reproduced exactly with a newer compiler version. Moreover, this study shows that new test cases can be generated based on the differences of bitwise identical compiling setups between different models, which can help detect software bugs in the codes of models and compilers and finally improve the reliability of Earth system modeling.

  13. Bitwise identical compiling setup: prospective for reproducibility and reliability of earth system modeling

    NASA Astrophysics Data System (ADS)

    Li, R.; Liu, L.; Yang, G.; Zhang, C.; Wang, B.

    2015-11-01

    Reproducibility and reliability are fundamental principles of scientific research. A compiling setup that includes a specific compiler version and compiler flags provides essential technical support for Earth system modeling. With the fast development of computer software and hardware, a compiling setup has to be updated frequently, which challenges the reproducibility and reliability of Earth system modeling. The existing results of a simulation using an original compiling setup may be irreproducible by a newer compiling setup because trivial round-off errors introduced by the change of compiling setup can potentially trigger significant changes in simulation results. Regarding the reliability, a compiler with millions of lines of code may have bugs that are easily overlooked due to the uncertainties or unknowns in Earth system modeling. To address these challenges, this study shows that different compiling setups can achieve exactly the same (bitwise identical) results in Earth system modeling, and a set of bitwise identical compiling setups of a model can be used across different compiler versions and different compiler flags. As a result, the original results can be more easily reproduced; for example, the original results with an older compiler version can be reproduced exactly with a newer compiler version. Moreover, this study shows that new test cases can be generated based on the differences of bitwise identical compiling setups between different models, which can help detect software bugs or risks in the codes of models and compilers and finally improve the reliability of Earth system modeling.

  14. Compilation of VS30 Data for the United States

    USGS Publications Warehouse

    Yong, Alan; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Odum, Jack K.; Stephenson, William J.; Haefner, Scott

    2016-01-01

    VS30, the time-averaged shear-wave velocity (VS) to a depth of 30 meters, is a key index adopted by the earthquake engineering community to account for seismic site conditions. VS30 is typically based on geophysical measurements of VS derived from invasive and noninvasive techniques at sites of interest. Owing to cost considerations, as well as logistical and environmental concerns, VS30 data are sparse or not readily available for most areas. Where data are available, VS30 values are often assembled in assorted formats that are accessible from disparate and (or) impermanent Web sites. To help remedy this situation, we compiled VS30 measurements obtained by studies funded by the U.S. Geological Survey (USGS) and other governmental agencies. Thus far, we have compiled VS30 values for 2,997 sites in the United States, along with metadata for each measurement from government-sponsored reports, Web sites, and scientific and engineering journals. Most of the data in our VS30 compilation originated from publications directly reporting the work of field investigators. A small subset (less than 20 percent) of VS30 values was previously compiled by the USGS and other research institutions. Whenever possible, VS30 originating from these earlier compilations were crosschecked against published reports. Both downhole and surface-based VS30 estimates are represented in our VS30 compilation. Most of the VS30 data are for sites in the western contiguous United States (2,141 sites), whereas 786 VS30 values are for sites in the Central and Eastern United States; 70 values are for sites in other parts of the United States, including Alaska (15 sites), Hawaii (30 sites), and Puerto Rico (25 sites). An interactive map is hosted on the primary USGS Web site for accessing VS30 data (http://earthquake.usgs.gov/research/vs30/).
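
    For reference, the quantity compiled here follows the conventional definition of the time-averaged shear-wave velocity over the uppermost 30 meters,

      \[
      V_{S30} \;=\; \frac{30\ \mathrm{m}}{\sum_{i=1}^{n} d_i / V_{S,i}} ,
      \]

    where d_i and V_{S,i} are the thickness (in meters) and shear-wave velocity of the i-th layer, and the n layers together span the top 30 meters.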

  15. Compilation and development of K-6 aerospace materials for implementation in NASA spacelink electronic information system

    NASA Technical Reports Server (NTRS)

    Blake, Jean A.

    1987-01-01

    Spacelink is an electronic information service to be operated by the Marshall Space Flight Center. It will provide NASA news and educational resources including software programs that can be accessed by anyone with a computer and modem. Spacelink is currently being installed and will soon begin service. It will provide daily updates of NASA programs, information about NASA educational services, manned space flight, unmanned space flight, aeronautics, NASA itself, lesson plans and activities, and space program spinoffs. Lesson plans and activities were extracted from existing NASA publications on aerospace activities for the elementary school. These materials were arranged into 206 documents which have been entered into the Spacelink program for use in grades K-6.

  16. Skills Conversion Project: Chapter 20, Compilation of Recommendations and Summaries of Implementation Programs. Final Report.

    ERIC Educational Resources Information Center

    National Society of Professional Engineers, Washington, DC.

    A study was conducted for the U.S. Department of Labor by the National Society of Professional Engineers to investigate the potential for and means of conversion of the skills of displaced aerospace and defense technical professionals to other industries or to public service. The study concentrated on areas where new employment opportunities might…

  17. GPU accelerated implementation of NCI calculations using promolecular density.

    PubMed

    Rubez, Gaëtan; Etancelin, Jean-Matthieu; Vigouroux, Xavier; Krajecki, Michael; Boisson, Jean-Charles; Hénon, Eric

    2017-03-25

    The NCI approach is a modern tool to reveal chemical noncovalent interactions. It is particularly attractive to describe ligand-protein binding. A custom implementation for NCI using promolecular density is presented. It is designed to leverage the computational power of NVIDIA graphics processing unit (GPU) accelerators through the CUDA programming model. The performance of three code versions is examined on a test set of 144 systems. NCI calculations are particularly well suited to the GPU architecture, which drastically reduces the computational time. On a single compute node, the dual-GPU version leads to a 39-fold improvement for the biggest instance compared to the optimal OpenMP parallel run (C code, icc compiler) with 16 CPU cores. Energy consumption measurements carried out on both CPU and GPU NCI tests show that the GPU approach provides substantial energy savings. © 2017 Wiley Periodicals, Inc.

  18. Compilation of gallium resource data for bauxite deposits

    USGS Publications Warehouse

    Schulte, Ruth F.; Foley, Nora K.

    2014-01-01

    Gallium (Ga) concentrations for bauxite deposits worldwide have been compiled from the literature to provide a basis for research regarding the occurrence and distribution of Ga worldwide, as well as between types of bauxite deposits. In addition, this report is an attempt to bring together reported Ga concentration data into one database to supplement ongoing U.S. Geological Survey studies of critical mineral resources. The compilation of Ga data consists of location, deposit size, bauxite type and host rock, development status, major oxide data, trace element (Ga) data and analytical method(s) used to derive the data, and tonnage values for deposits within bauxite provinces and districts worldwide. The range in Ga concentrations for bauxite deposits worldwide is

  19. Compiler analysis for irregular problems in FORTRAN D

    NASA Technical Reports Server (NTRS)

    Vonhanxleden, Reinhard; Kennedy, Ken; Koelbel, Charles; Das, Raja; Saltz, Joel

    1992-01-01

    We developed a dataflow framework which provides a basis for rigorously defining strategies to make use of runtime preprocessing methods for distributed memory multiprocessors. In many programs, several loops access the same off-processor memory locations. Our runtime support gives us a mechanism for tracking and reusing copies of off-processor data. A key aspect of our compiler analysis strategy is to determine when it is safe to reuse copies of off-processor data. Another crucial function of the compiler analysis is to identify situations which allow runtime preprocessing overheads to be amortized. This dataflow analysis will make it possible to effectively use the results of interprocedural analysis in our efforts to reduce interprocessor communication and the need for runtime preprocessing.
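
    The runtime preprocessing referred to above follows the familiar inspector/executor pattern; the sketch below shows, under invented ownership and index data, how an inspector builds a gather schedule of off-processor elements once so that both the fetched copies and the schedule can be reused by later loops touching the same locations.

      def inspector(indices, my_lo, my_hi):
          """Inspect an indirection array once and return the off-processor indices."""
          return sorted({i for i in indices if not (my_lo <= i < my_hi)})

      def gather(schedule, fetch_remote):
          """Executor: fetch each off-processor element once, keyed by global index."""
          return {i: fetch_remote(i) for i in schedule}

      # Pretend this processor owns global indices [0, 8); the rest are remote.
      remote_store = {i: 100 + i for i in range(8, 32)}     # stand-in for other processors
      loop_a = [1, 9, 12, 3, 9]                             # indirection arrays of two loops
      loop_b = [12, 2, 9, 5]

      schedule = inspector(loop_a + loop_b, 0, 8)           # built once ...
      copies = gather(schedule, remote_store.__getitem__)   # ... fetched once, reused below

      local = list(range(8))
      def read(i):
          return local[i] if 0 <= i < 8 else copies[i]

      print([read(i) for i in loop_a], [read(i) for i in loop_b])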

  20. The NASA earth resources spectral information system: A data compilation

    NASA Technical Reports Server (NTRS)

    Leeman, V.; Earing, D.; Vincent, R. K.; Ladd, S.

    1971-01-01

    The NASA Earth Resources Spectral Information System and the information contained therein are described. It contains an ordered, indexed compilation of natural targets in the optical region from 0.3 to 45.0 microns. The data compilation includes approximately 100 rock and mineral, 2600 vegetation, 1000 soil, and 60 water spectral reflectance, transmittance, and emittance curves. Most of the data have been categorized by subject, and the curves in those subject areas have been plotted on a single graph. Those categories with too few curves and miscellaneous categories have been plotted as single-curve graphs. Each graph, composite or single, is fully titled to indicate curve source and is indexed by subject to facilitate user retrieval.

  1. A Compilation Strategy for Numerical Programs Based on Partial Evaluation

    DTIC Science & Technology

    1989-07-01

    advance. For example, since Pluto is very small relative to the other planets, its mass was approximated as zero in the compile-time data structures. [Sussman] G.J. Sussman and J. Wisdom, "Numerical evidence that the motion of Pluto is chaotic", Science, Volume 241, 22 July 1988.

  2. The Definition of Production Quality Ada(trade name) Compiler.

    DTIC Science & Technology

    1987-03-20

    The Definition of a Production Quality Ada Compiler. Prepared for Space Division, Air Force Systems Command, Los Angeles Air Force Station, P.O. Box 92960, Worldway Postal Center, Los Angeles, CA 90009-2960. Approved for public release. Mr. Giovanni Bargero, SD/ALR, approved the report for the Air Force; the report has been reviewed by the Public Affairs Office.

  3. Automatic Determination of Recommended Test Combinations for Ada Compilers

    DTIC Science & Technology

    1990-12-01

    York University; PAT - Program Analyzer Tool; PPG - Pascal Program Generator; RADC - Rome Air Development Center; RHS - Right Hand Sides; SEMANOL - a formal... "random" behavior (25). 2.3.2.8 Automatic Generation of Executable Programs to Test a Pascal Compiler. This article by Dr. C. J. Burgess (11... The programs still had to be individually compiled and tested. This article specifically concentrates on the generation of test programs for Pascal

  4. Availability of Ada and C++ Compilers, Tools, Education and Training

    DTIC Science & Technology

    1991-07-01

    assembler languages of various sorts, C, and Fortran languages. Several provide an interface to Pascal and one to Cobol. The ability to import and export... program that runs on a different target platform. As an example, a Fortran or Pascal compiler running on a DEC VAX computer may produce output which... Ada, Pascal, Fortran, C++, PLI, and Jovial. The entire source code is not necessarily generated and some tools provide user-customizable templates that

  5. Compiling knowledge-based systems from KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBS developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights into Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in the early work, and inspiration for future system development.

  6. Selective-pulse-network compilation on a liquid-state nuclear-magnetic-resonance system

    NASA Astrophysics Data System (ADS)

    Li, Jun; Cui, Jiangyu; Laflamme, Raymond; Peng, Xinhua

    2016-09-01

    In creating a large-scale quantum information processor, the ability to construct control pulses for implementing an arbitrary quantum circuit in a scalable manner is an important requirement. For liquid-state nuclear-magnetic-resonance quantum computing, a circuit is generally realized through a sequence of selective soft pulses, in which various control imperfections exist and are to be corrected. In this work, we present a comprehensive analysis of the errors arising in a selective-pulse network by using the zeroth- and first-order average Hamiltonian theory. Effective correction rules are derived for adjusting important pulse parameters such as irradiation frequencies, rotation angles, and transmission phases of the selective pulses to increase the control fidelity. Simulations show that applying our compilation procedure to a given circuit is efficient and can greatly reduce the error accumulation.
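
    The abstract leans on zeroth- and first-order average Hamiltonian theory. As a reminder (these are the standard Magnus-expansion forms, not the paper's specific derivation; sign and factor conventions vary between texts, and hbar = 1 is assumed), the leading terms over one cycle of duration T are

        \bar{H}^{(0)} = \frac{1}{T}\int_{0}^{T} H(t)\,\mathrm{d}t ,
        \qquad
        \bar{H}^{(1)} = \frac{-i}{2T}\int_{0}^{T}\!\mathrm{d}t_{2}\int_{0}^{t_{2}}\!\mathrm{d}t_{1}\,\bigl[H(t_{2}),\,H(t_{1})\bigr] ,

    so that the cycle propagator is approximated by U(T) ~ exp[-iT(\bar{H}^{(0)} + \bar{H}^{(1)} + ...)], and pulse-parameter corrections are chosen to cancel the unwanted contributions to these terms.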

  7. The Concert system - Compiler and runtime technology for efficient concurrent object-oriented programming

    NASA Technical Reports Server (NTRS)

    Chien, Andrew A.; Karamcheti, Vijay; Plevyak, John; Sahrawat, Deepak

    1993-01-01

    Concurrent object-oriented languages, particularly fine-grained approaches, reduce the difficulty of large scale concurrent programming by providing modularity through encapsulation while exposing large degrees of concurrency. Despite these programmability advantages, such languages have historically suffered from poor efficiency. This paper describes the Concert project whose goal is to develop portable, efficient implementations of fine-grained concurrent object-oriented languages. Our approach incorporates aggressive program analysis and program transformation with careful information management at every stage from the compiler to the runtime system. The paper discusses the basic elements of the Concert approach along with a description of the potential payoffs. Initial performance results and specific plans for system development are also detailed.

  8. Lean and Efficient Software: Whole-Program Optimization of Executables

    DTIC Science & Technology

    2013-01-03

    programming idioms and compiler optimizations. CodeSurfer/x86 includes an API for manipulating and rewriting the IR. This capability has been the basis of... Identify common coding idioms and compiler transformations that result in incorrect disassembly (task 2.1).

  9. The Optimizing Patient Transfers, Impacting Medical Quality, andImproving Symptoms:Transforming Institutional Care approach: preliminary data from the implementation of a Centers for Medicare and Medicaid Services nursing facility demonstration project.

    PubMed

    Unroe, Kathleen T; Nazir, Arif; Holtz, Laura R; Maurer, Helen; Miller, Ellen; Hickman, Susan E; La Mantia, Michael A; Bennett, Merih; Arling, Greg; Sachs, Greg A

    2015-01-01

    The Optimizing Patient Transfers, Impacting Medical Quality, and Improving Symptoms: Transforming Institutional Care (OPTIMISTIC) project aims to reduce avoidable hospitalizations of long-stay residents enrolled in 19 central Indiana nursing facilities. This clinical demonstration project, funded by the Centers for Medicare and Medicaid Services Innovations Center, places a registered nurse in each nursing facility to implement an evidence-based quality improvement program with clinical support from nurse practitioners. A description of the model is presented, and early implementation experiences during the first year of the project are reported. Important elements include better medical care through implementation of Interventions to Reduce Acute Care Transfers tools and chronic care management, enhanced transitional care, and better palliative care with a focus on systematic advance care planning. There were 4,035 long-stay residents in 19 facilities enrolled in OPTIMISTIC between February 2013 and January 2014. Root-cause analyses were performed for all 910 acute transfers of these long-stay residents. Of these transfers, the project RN evaluated 29% as avoidable (57% were not avoidable and 15% were missing), and opportunities for quality improvement were identified in 54% of transfers. Lessons learned in early implementation included defining new clinical roles, integrating into nursing facility culture, managing competing facility priorities, communicating with multiple stakeholders, and developing a system for collecting and managing data. The success of the overall initiative will be measured primarily according to reduction in avoidable hospitalizations of long-stay nursing facility residents.

  10. Compilation of a Global GIS Crater Database for the Moon

    NASA Astrophysics Data System (ADS)

    Barlow, Nadine G.; Mest, S. C.; Gibbs, V. B.; Kinser, R. M.

    2012-10-01

    We are using primarily Lunar Reconnaissance Orbiter (LRO) information to compile a new global database of lunar impact craters 5 km in diameter and larger. Each crater’s information includes coordinates of the crater center (ULCN 2005), crater diameter (major and minor diameters if crater is elliptical), azimuthal angle of orientation if crater is elliptical, ejecta and interior morphologies if present, crater preservation state, geologic unit, floor depth, average rim height, central peak height and basal diameter if present, and elevation and elemental/mineralogy data of surroundings. LROC WAC images are used in ArcGIS to obtain crater diameters and central coordinates and LROC WAC and NAC images are used to classify interior and ejecta morphologies. Gridded and individual spot data from LOLA are used to obtain crater depths, rim heights, and central peak height and basal diameter. Crater preservational state is based on crater freshness as determined by the presence/absence of specific interior and ejecta morphologies and elevated crater rim together with the ratio of current crater depth to depth expected for fresh crater of identical size. The crater database currently contains data on over 15,000 craters covering 80% of the nearside and 15% of the farside. We also include information allowing cross-correlation of craters in our database with those in existing crater catalogs, including the ground-based “System of Lunar Craters” by Arthur et al. (1963-1966), the Lunar Orbiter/Apollo-based crater catalog compiled by Andersson and Whitaker (1982), and the Apollo-based morphometric crater database by Pike (1980). We find significant differences in crater diameter and classification between these earlier crater catalogs and our new compilation. Utilizing the capability of GIS to overlay different datasets, we will report on how specific crater features such as central peaks, wall terraces, and impact melt deposits correlate with parameters such as elevation

  11. Recent Efforts in Data Compilations for Nuclear Astrophysics

    SciTech Connect

    Dillmann, Iris

    2008-05-21

    Some recent efforts in compiling data for astrophysical purposes are introduced, which were discussed during a JINA-CARINA Collaboration meeting on "Nuclear Physics Data Compilation for Nucleosynthesis Modeling" held at the ECT* in Trento/Italy from May 29th-June 3rd, 2007. The main goal of this collaboration is to develop an updated and unified nuclear reaction database for modeling a wide variety of stellar nucleosynthesis scenarios. Presently a large number of different reaction libraries (REACLIB) are used by the astrophysics community. The "JINA Reaclib Database" on http://www.nscl.msu.edu/~nero/db/ aims to merge and fit the latest experimental stellar cross sections and reaction rate data of various compilations, e.g. NACRE and its extension for Big Bang nucleosynthesis, Caughlan and Fowler, Iliadis et al., and KADoNiS. The KADoNiS (Karlsruhe Astrophysical Database of Nucleosynthesis in Stars, http://nuclear-astrophysics.fzk.de/kadonis) project is an online database for neutron capture cross sections relevant to the s process. The present version v0.2 is already included in a REACLIB file from Basel university (http://download.nucastro.org/astro/reaclib). The present status of experimental stellar (n,γ) cross sections in KADoNiS is shown. It contains recommended cross sections for 355 isotopes between 1H and 210Bi, over 80% of them deduced from experimental data. A "high priority list" for measurements and evaluations for light charged-particle reactions set up by the JINA-CARINA collaboration is presented. The central web access point to submit and evaluate new data is provided by the Oak Ridge group via the http://www.nucastrodata.org homepage. "Workflow tools" aim to make the evaluation process transparent and allow users to follow the progress.

  12. The Nippon Foundation / GEBCO Indian Ocean Bathymetric Compilation Project

    NASA Astrophysics Data System (ADS)

    Wigley, R. A.; Hassan, N.; Chowdhury, M. Z.; Ranaweera, R.; Sy, X. L.; Runghen, H.; Arndt, J. E.

    2014-12-01

    The Indian Ocean Bathymetric Compilation (IOBC) project, undertaken by Nippon Foundation / GEBCO Scholars, is focused on building a regional bathymetric data compilation of all publically-available bathymetric data within the Indian Ocean region from 30°N to 60°S and 10° to 147°E. One of the objectives of this project is the creation of a network of Nippon Foundation / GEBCO Scholars working together, derived from the thirty Scholars from fourteen nations bordering on the Indian Ocean who have graduated from the Postgraduate Certificate in Ocean Bathymetry (PCOB) training program at the University of New Hampshire. The IOBC project has provided students a working example during their course work and has been used as a basis for student projects during their visits to another laboratory at the end of their academic year. This multi-national, multi-disciplinary project team will continue to build on the skills gained during the PCOB program through additional training. The IOBC is being built using the methodology developed for the International Bathymetric Chart of the Southern Ocean (IBCSO) compilation (Arndt et al., 2013). This skill was transferred, through training workshops, to further support the ongoing development within the scholars' network. This capacity-building project is envisioned to connect other personnel from within all of the participating nations and organizations, resulting in additional capacity-building in this field of multi-resolution bathymetric grid generation in their home communities. An updated regional bathymetric map and grids of the Indian Ocean will be an invaluable tool for all fields of marine scientific research and resource management. In addition, it has implications for increased public safety by offering the best and most up-to-date depth data for modeling regional-scale oceanographic processes such as tsunami-wave propagation behavior, amongst others.

  13. Recent Efforts in Data Compilations for Nuclear Astrophysics

    NASA Astrophysics Data System (ADS)

    Dillmann, Iris

    2008-05-01

    Some recent efforts in compiling data for astrophysical purposes are introduced, which were discussed during a JINA-CARINA Collaboration meeting on ``Nuclear Physics Data Compilation for Nucleosynthesis Modeling'' held at the ECT* in Trento/Italy from May 29th-June 3rd, 2007. The main goal of this collaboration is to develop an updated and unified nuclear reaction database for modeling a wide variety of stellar nucleosynthesis scenarios. Presently a large number of different reaction libraries (REACLIB) are used by the astrophysics community. The ``JINA Reaclib Database'' on http://www.nscl.msu.edu/~nero/db/ aims to merge and fit the latest experimental stellar cross sections and reaction rate data of various compilations, e.g. NACRE and its extension for Big Bang nucleosynthesis, Caughlan and Fowler, Iliadis et al., and KADoNiS. The KADoNiS (Karlsruhe Astrophysical Database of Nucleosynthesis in Stars, http://nuclear-astrophysics.fzk.de/kadonis) project is an online database for neutron capture cross sections relevant to the s process. The present version v0.2 is already included in a REACLIB file from Basel university (http://download.nucastro.org/astro/reaclib). The present status of experimental stellar (n,γ) cross sections in KADoNiS is shown. It contains recommended cross sections for 355 isotopes between 1H and 210Bi, over 80% of them deduced from experimental data. A ``high priority list'' for measurements and evaluations for light charged-particle reactions set up by the JINA-CARINA collaboration is presented. The central web access point to submit and evaluate new data is provided by the Oak Ridge group via the http://www.nucastrodata.org homepage. ``Workflow tools'' aim to make the evaluation process transparent and allow users to follow the progress.

  14. AFGL atmospheric absorption line parameters compilation - 1982 edition

    NASA Astrophysics Data System (ADS)

    Rothman, L. S.; Gamache, R. R.; Barbe, A.; Goldman, A.; Gillis, J. R.; Brown, L. R.; Toth, R. A.; Flaud, J.-M.; Camy-Peyret, C.

    1983-08-01

    The latest edition of the AFGL atmospheric absorption line parameters compilation for the seven most active infrared terrestrial absorbers is described. Major modifications to the atlas for this edition include updating of water-vapor parameters from 0 to 4300 per cm, improvements to line positions for carbon dioxide, substantial modifications to the ozone bands in the middle to far infrared, and improvements to the 7- and 2.3-micron bands of methane. The atlas now contains about 181,000 rotation and vibration-rotation transitions between 0 and 17,900 per cm. The sources of the absorption parameters are summarized.

  15. AFGL atmospheric absorption line parameters compilation - 1980 version

    NASA Astrophysics Data System (ADS)

    Rothman, L. S.

    1981-03-01

    A new version of the AFGL atmospheric absorption line parameters compilation is now available. Major modifications since the last edition of 1978 include the strongest bands of water vapor, updated line positions for carbon dioxide, improved ozone parameters in the 5- and 10-micron regions, and updated and additional data for methane in the 3.5- and 7.7-micron regions. The atlas now contains over 159,000 rotational and vibration-rotation transitions from 0.3 to 17,880 per cm.

  16. Current trends in seasonal ice storage. [Compilation of projects

    SciTech Connect

    Gorski, A.J.

    1986-05-01

    This document is a compilation of modern research projects focused upon the use of naturally grown winter ice for summer cooling applications. Unlike older methods of ice-based cooling, in which ice was cut from rivers and lakes and transported to insulated icehouses, modern techniques grow ice directly in storage containers - by means of heat pipes, snow machines, and water sprays - at the site of application. This modern adaptation of an old idea was reinvented independently at several laboratories in the United States and Canada. Applications range from air conditioning and food storage to desalinization.

  17. Computer programs: Information retrieval and data analysis, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The items presented in this compilation are divided into two sections. Section one covers computer programs for information retrieval that give the user rapid, selective entry into voluminous collections of data. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.

  18. Memory management and compiler support for rapid recovery from failures in computer systems

    NASA Technical Reports Server (NTRS)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.

  19. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
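
    To make two of the named transformations concrete, here is a generic illustration in C++ (a sketch, not code from the paper; the problem size, block size, and the assumption that each vector holds N*N elements are arbitrary) of loop collapsing and loop blocking as they are typically applied for vector and cache performance:

      #include <vector>

      constexpr int N = 1024;   // illustrative problem size
      constexpr int B = 64;     // illustrative block (tile) size

      // Loop collapsing: a doubly nested loop over an N x N array becomes one loop
      // of N*N iterations, presenting the vector hardware with a single long loop.
      void collapsed(std::vector<double>& a, const std::vector<double>& b) {
          for (int k = 0; k < N * N; ++k)     // was: for (i) for (j) a[i][j] += b[i][j]
              a[k] += b[k];
      }

      // Loop blocking (tiling): iterate over B x B tiles so each tile's working set
      // stays in fast memory before moving on to the next tile.
      void blocked(std::vector<double>& c, const std::vector<double>& a,
                   const std::vector<double>& b) {
          for (int ii = 0; ii < N; ii += B)
              for (int jj = 0; jj < N; jj += B)
                  for (int i = ii; i < ii + B; ++i)
                      for (int j = jj; j < jj + B; ++j)
                          c[i * N + j] = a[i * N + j] + b[i * N + j];
      }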

  20. 23 CFR 924.11 - Implementation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... compilation and analysis of safety data for the annual report to the FHWA Division Administrator required under § 924.15(a)(2) on the progress being made to implement the railway-highway grade crossing program... section, shall be accounted for in the statewide transportation improvement program and reported...

  1. 23 CFR 924.11 - Implementation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... compilation and analysis of safety data for the annual report to the FHWA Division Administrator required under § 924.15(a)(2) on the progress being made to implement the railway-highway grade crossing program... section, shall be accounted for in the statewide transportation improvement program and reported...

  2. 23 CFR 924.11 - Implementation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... compilation and analysis of safety data for the annual report to the FHWA Division Administrator required under § 924.15(a)(2) on the progress being made to implement the railway-highway grade crossing program... section, shall be accounted for in the statewide transportation improvement program and reported...

  3. 23 CFR 924.11 - Implementation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... compilation and analysis of safety data for the annual report to the FHWA Division Administrator required under § 924.15(a)(2) on the progress being made to implement the railway-highway grade crossing program... section, shall be accounted for in the statewide transportation improvement program and reported...

  4. 23 CFR 924.11 - Implementation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... compilation and analysis of safety data for the annual report to the FHWA Division Administrator required under § 924.15(a)(2) on the progress being made to implement the railway-highway grade crossing program... section, shall be accounted for in the statewide transportation improvement program and reported...

  5. Investigation of high-order and optimized interpolation methods with implementation in a high-order overset grid fluid dynamics solver

    NASA Astrophysics Data System (ADS)

    Sherer, Scott Eric

    Various high-order and optimized interpolation procedures have been developed for use in a high-order overset grid computational fluid dynamics solver. Because of the high spatial order of accuracy of the solver, second-order accurate trilinear interpolation typically used in low-order overset grid flow solvers is insufficient to maintain overall order of accuracy, and thus high-order interpolation methods must be employed. Candidate interpolation methods, including a generalized Lagrangian method and a method based on the use of B-splines, were formulated. The coefficients for the generalized Lagrangian method may be found strictly from constraints on the formal order of accuracy of the method, in which case the method is non-optimized, or through constraints arising from the minimization of a one-dimensional integrated error, in which case the method is considered optimized. The interpolation methods were investigated using a one-dimensional Fourier error analysis, and their spectral behavior studied. They also were examined in multiple dimensions for the problem of grid-to-grid interpolation of various two- and three-dimensional analytical test functions. The high-order non-optimized explicit Lagrangian method was found to be the most robust and accurate of the interpolation methods considered. The fourth-order B-spline method was very competitive when the interpolation points were located in the middle of the stencil, but was shown to be weak when the interpolation points were located near the boundary of the stencil. The complete high-order overset grid method was validated for several fluid flow problems including flat-plate boundary-layer flow, an inviscid convecting vortex, and the unsteady flow past a circular cylinder at a low Reynolds number. Results indicate that second-order interpolation was insufficient to maintain a high-order rate of grid convergence, and that explicit high-order interpolation methods are superior to optimized, implicit, or B-spline methods.
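
    For reference, the classical one-dimensional Lagrange form underlying the generalized method (the generalized and optimized variants above determine their coefficients differently) is

        p(x) \;=\; \sum_{k=0}^{n} f(x_{k})\,\ell_{k}(x),
        \qquad
        \ell_{k}(x) \;=\; \prod_{\substack{m=0 \\ m \neq k}}^{n} \frac{x - x_{m}}{x_{k} - x_{m}},

    which reproduces any polynomial of degree n exactly on the stencil points x_0, ..., x_n; non-optimized explicit coefficients follow from order-of-accuracy constraints of this kind, while the optimized ones minimize a one-dimensional integrated error.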

  6. Interpretation, compilation and field verification procedures in the CARETS project

    USGS Publications Warehouse

    Alexander, Robert H.; De Forth, Peter W.; Fitzpatrick, Katherine A.; Lins, Harry F.; McGinty, Herbert K.

    1975-01-01

    The production of the CARETS map data base involved the development of a series of procedures for interpreting, compiling, and verifying data obtained from remote sensor sources. Level II land use mapping from high-altitude aircraft photography at a scale of 1:100,000 required production of a photomosaic mapping base for each of the 48, 50 x 50 km sheets, and the interpretation and coding of land use polygons on drafting film overlays. CARETS researchers also produced a series of 1970 to 1972 land use change overlays, using the 1970 land use maps and 1972 high-altitude aircraft photography. To enhance the value of the land use sheets, researchers compiled a series of overlays showing cultural features, county boundaries and census tracts, surface geology, and drainage basins. In producing Level I land use maps from Landsat imagery, at a scale of 1:250,000, interpreters overlaid drafting film directly on Landsat color composite transparencies and interpreted on the film. They found that such interpretation involves pattern and spectral signature recognition. In studies using Landsat imagery, interpreters identified numerous areas of change but also identified extensive areas of "false change," where Landsat spectral signatures but not land use had changed.

  7. Compilation of DNA sequences of Escherichia coli (update 1992)

    PubMed Central

    Kröger, Manfred; Wahl, Ralf; Schachtel, Gabriel; Rice, Peter

    1992-01-01

    We have compiled the DNA sequence data for E. coli available from the GENBANK and EMBL data libraries and, over a period of several years, independently from the literature. This is the fourth listing, replacing and substantially enlarging the former listings. However, in order to save space this printed version contains DNA sequence information only if it is publicly available in electronic form. The complete compilation, including a full set of genetic map data and the E. coli protein index, can be obtained in machine-readable form from the EMBL data library (ECD release 10) or directly from the CD-ROM version of this supplement issue. After deletion of all detected overlaps, a total of 1,820,237 individual bp had been determined by the beginning of 1992. This corresponds to 38.56% of the entire E. coli chromosome, which consists of about 4,720 kbp. This number may actually be higher by some extra 2.5% derived from lysogenic bacteriophage lambda and various DNA sequences already received for other strains of E. coli. PMID:1598239

  8. Compilation of DNA sequences of Escherichia coli (update 1993).

    PubMed Central

    Kröger, M; Wahl, R; Rice, P

    1993-01-01

    We have compiled the DNA sequence data for E. coli available from the GENBANK and EMBL data libraries and, over a period of several years, independently from the literature. This is the fifth listing, replacing and substantially enlarging the former listings. However, in order to save space this printed version contains DNA sequence information only if it is publicly available in electronic form. The complete compilation, including a full set of genetic map data and the E. coli protein index, can be obtained in machine-readable form from the EMBL data library (ECD release 15) as a part of the CD-ROM issue of the EMBL sequence database, released and updated every three months. After deletion of all detected overlaps, a total of 2,353,635 individual bp had been determined by the end of April 1993. This corresponds to 49.87% of the entire E. coli chromosome, which consists of about 4,720 kbp. This number may actually be higher by 9161 bp derived from other strains of E. coli. PMID:8332520

  9. Global Flood and Landslide Catalog: Compilation and Applications

    NASA Astrophysics Data System (ADS)

    Adhikari, P.; Hong, Y.; Kirschbaum, D. B.; Adler, R. F.

    2009-12-01

    A global digitized inventory of floods is needed for assessing the spatial distribution and temporal trends of flood hazards and for evaluating flood prediction models. This study describes the development of a global flood catalog compiled from news reports, scholarly articles, remote sensing images, and other natural hazard databases. The events cataloged in the inventory include information on the geographic location, date, duration, affected population, information source, and a qualitative measure of the event’s magnitude and location accuracy. To minimize biases we cross-checked the catalog with different sources and eliminated any redundancies. This research presents the compilation methodology used to develop a digitized Global Flood Inventory (GFI) for the period 1998-2008. This global flood inventory differs from other flood catalogs by providing a publicly available spreadsheet that details all events with both descriptive and digitized information. The inventory has proven useful for mapping natural hazards over the globe and for evaluating flood prediction models. This global flood inventory research complements our companion catalog of global landslide hazards.

  10. Creep of water ices at planetary conditions: A compilation

    USGS Publications Warehouse

    Durham, W.B.; Kirby, S.H.; Stern, L.A.

    1997-01-01

    Many constitutive laws for the flow of ice have been published since the advent of the Voyager explorations of the outer solar system. Conflicting data have occasionally come from different laboratories, and refinement of experimental techniques has led to the publication of laws that supersede earlier ones. In addition, there are unpublished data from ongoing research that also amend the constitutive laws. Here we compile the most current laboratory-derived flow laws for water ice phases I, II, III, V, and VI, and ice I mixtures with hard particulates. The rheology of interest is mainly that of steady state, and the conditions reviewed are the pressures and temperatures applicable to the surfaces and interiors of icy moons of the outer solar system. Advances in grain-size-dependent creep in ices I and II as well as in phase transformations and metastability under differential stress are also included in this compilation. At laboratory strain rates the several ice polymorphs are rheologically distinct in terms of their stress, temperature, and pressure dependencies but, with the exception of ice III, have fairly similar strengths. Hard particulates strengthen ice I significantly only at high particulate volume fractions. Ice III has the potential for significantly affecting mantle dynamics because it is much weaker than the other polymorphs and its region of stability, which may extend metastably well into what is nominally the ice II field, is located near likely geotherms of large icy moons. Copyright 1997 by the American Geophysical Union.
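
    The laboratory flow laws compiled in work of this kind are typically reported in a power-law (Arrhenius) form; as a generic reminder (symbols only, not the specific fitted parameters of this compilation),

        \dot{\varepsilon} \;=\; A\,\sigma^{n}\,\exp\!\left(-\,\frac{Q + PV}{RT}\right),

    where \dot{\varepsilon} is the steady-state strain rate, A a material constant, \sigma the differential stress, n the stress exponent, Q the activation energy, P the pressure, V the activation volume, R the gas constant, and T the temperature; grain-size-dependent creep adds a grain-size term of the form d^{-p}.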

  11. An empirical study of FORTRAN programs for parallelizing compilers

    NASA Technical Reports Server (NTRS)

    Shen, Zhiyu; Li, Zhiyuan; Yew, Pen-Chung

    1990-01-01

    Some results are reported from an empirical study of program characteristics that are important to parallelizing compiler writers, especially in the area of data dependence analysis and program transformations. The state of the art in data dependence analysis and some parallel execution techniques are examined. The major findings are included. Many subscripts contain symbolic terms with unknown values. A few methods of determining their values at compile time are evaluated. Array references with coupled subscripts appear quite frequently; these subscripts must be handled simultaneously in a dependence test, rather than being handled separately as in current test algorithms. Nonzero coefficients of loop indexes in most subscripts are found to be simple: they are either 1 or -1. This allows an exact real-valued test to be as accurate as an exact integer-valued test for one-dimensional or two-dimensional arrays. Dependencies with uncertain distance are found to be rather common, and one of the main reasons is the frequent appearance of symbolic terms with unknown values.
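
    As a minimal illustration (a made-up C++ fragment, not code from the surveyed programs) of the "coupled subscripts" the study highlights, the loop indices below appear together in both subscript positions of A, so a dependence test must treat the two subscripts simultaneously rather than dimension by dimension:

      // Coupled subscripts: i and j occur in both dimensions of A.
      constexpr int N = 100;
      double A[2 * N + 1][2 * N + 1];

      void coupled() {
          for (int i = 1; i <= N; ++i)
              for (int j = 1; j <= N; ++j)
                  A[i + j][i - j + N] = A[i + j - 1][i - j + N + 1] + 1.0;
      }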

  12. National Energy Strategy: A compilation of public comments; Interim Report

    SciTech Connect

    Not Available

    1990-04-01

    This Report presents a compilation of what the American people themselves had to say about problems, prospects, and preferences in energy. The Report draws on the National Energy Strategy public hearing record and accompanying documents. In all, 379 witnesses appeared at the hearings to exchange views with the Secretary, Deputy Secretary, and Deputy Under Secretary of Energy, and Cabinet officers of other Federal agencies. Written submissions came from more than 1,000 individuals and organizations. Transcripts of the oral testimony and question-and-answer (Q-and-A) sessions, as well as prepared statements submitted for the record and all other written submissions, form the basis for this compilation. Citations of these sources in this document use a system of identifying symbols explained below and in the accompanying box. The Report is organized into four general subject areas concerning: (1) efficiency in energy use, (2) the various forms of energy supply, (3) energy and the environment, and (4) the underlying foundations of science, education, and technology transfer. Each of these, in turn, is subdivided into sections addressing specific topics --- such as (in the case of energy efficiency) energy use in the transportation, residential, commercial, and industrial sectors, respectively. 416 refs., 44 figs., 5 tabs.

  13. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

    In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.
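
    The reduction used above is the standard one: a formula \varphi over a set of atomic propositions is satisfiable if and only if the universal model over those propositions (the model that allows every behavior) does not satisfy \neg\varphi,

        \varphi \ \text{satisfiable} \iff M_{\mathrm{univ}} \not\models \neg\varphi ,

    so a counterexample trace returned by the model checker is precisely a witness that satisfies \varphi. This is a textbook fact restated here for context, not a new claim of the work summarized above.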

  14. Parallel Object-Oriented Framework Optimization

    SciTech Connect

    Quinlan, D

    2001-05-01

    Object-oriented libraries arise naturally from the increasing complexity of developing related scientific applications. The optimization of the use of libraries within scientific applications is one of many kinds of high-performance optimization, and is the subject of this paper. This type of optimization can have significant potential because it can either reduce the overhead of calls to a library, specialize the library calls given the context of their use within the application, or use the semantics of the library calls to locally rewrite sections of the application. This type of optimization is only now becoming an active area of research. The optimization of the use of libraries within scientific applications is particularly attractive because it maps to the extensive use of libraries within numerous large existing scientific applications sharing common problem domains. This paper presents an approach toward the optimization of parallel object-oriented libraries. ROSE [1] is a tool for building source-to-source preprocessors; ROSETTA is a tool for defining the grammars used within ROSE. The definition of the grammars directly determines what can be recognized at compile time. ROSETTA permits grammars to be automatically generated which are specific to the identification of abstractions introduced within object-oriented libraries. Thus the semantics of complex abstractions defined outside of the C++ language can be leveraged at compile time to introduce library-specific optimizations. The details of the optimizations performed are not a part of this paper and are up to the library developer to define using ROSETTA and ROSE to build such an optimizing preprocessor. Within performance optimizations, if they are to be automated, the problem of automatically locating where such optimizations can be done is significant and most often overlooked. Note that a novel part of this work is the degree of automation. Thus library developers can be expected to be able to build their
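
    A sketch of the kind of library-aware rewrite such a preprocessor can perform (illustrative only, with a made-up Array class; this is not ROSE or ROSETTA output): because the semantics of the array abstraction are known at compile time, a high-level expression can be replaced by a single fused loop with no temporaries.

      #include <vector>

      struct Array {                          // simplified stand-in for a library array class
          std::vector<double> data;
          explicit Array(std::size_t n) : data(n, 0.0) {}
      };

      // "Before": A = B + C + D; written against generic operator overloads, which a
      // semantics-unaware compilation would evaluate with temporaries in several passes.

      // "After": a preprocessor that recognizes the abstraction emits one fused loop.
      void fusedAssign(Array& A, const Array& B, const Array& C, const Array& D) {
          for (std::size_t i = 0; i < A.data.size(); ++i)
              A.data[i] = B.data[i] + C.data[i] + D.data[i];   // no temporaries
      }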

  15. Modular implementation of a digital hardware design automation system

    NASA Astrophysics Data System (ADS)

    Masud, M.

    An automation system based on AHPL (A Hardware Programming Language) was developed. The project may be divided into three distinct phases: (1) upgrading of AHPL to make it more universally applicable; (2) implementation of a compiler for the language; and (3) illustration of how the compiler may be used to support several phases of design activities. Several new features were added to AHPL. These include: application-dependent parameters, multiple clocks, asynchronous results, functional registers and primitive functions. The new language, called Universal AHPL, has been defined rigorously. The compiler design is modular. The parsing is done by an automatic parser generated from the SLR(1) BNF grammar of the language. The compiler produces two data bases from the AHPL description of a circuit. The first one is a tabular representation of the circuit, and the second one is a detailed interconnection linked list. The two data bases provide a means to interface the compiler to application-dependent CAD systems.

  16. Solid phase microextraction coupled with comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry for high-resolution metabolite profiling in apples: implementation of structured separations for optimization of sample preparation procedure in complex samples.

    PubMed

    Risticevic, Sanja; DeEll, Jennifer R; Pawliszyn, Janusz

    2012-08-17

    Metabolomics currently represents one of the fastest growing high-throughput molecular analysis platforms that refer to the simultaneous and unbiased analysis of metabolite pools constituting a particular biological system under investigation. In response to the ever increasing interest in development of reliable methods competent with obtaining a complete and accurate metabolomic snapshot for subsequent identification, quantification and profiling studies, the purpose of the current investigation is to test the feasibility of solid phase microextraction for advanced fingerprinting of volatile and semivolatile metabolites in complex samples. In particular, the current study is focussed on the development and optimization of solid phase microextraction (SPME) - comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC × GC-ToFMS) methodology for metabolite profiling of apples (Malus × domestica Borkh.). For the first time, GC × GC attributes in terms of molecular structure-retention relationships and utilization of two-dimensional separation space on orthogonal GC × GC setup were exploited in the field of SPME method optimization for complex sample analysis. Analytical performance data were assessed in terms of method precision when commercial coatings are employed in spiked metabolite aqueous sample analysis. The optimized method consisted of the implementation of direct immersion SPME (DI-SPME) extraction mode and its application to metabolite profiling of apples, and resulted in a tentative identification of 399 metabolites and the composition of a metabolite database far more comprehensive than those obtainable with classical one-dimensional GC approaches. Considering that specific metabolome constituents were for the first time reported in the current study, a valuable approach for future advanced fingerprinting studies in the field of fruit biology is proposed. The current study also intensifies the understanding of SPME

  17. Optimal Jet Finder

    NASA Astrophysics Data System (ADS)

    Grigoriev, D. Yu.; Jankowski, E.; Tkachov, F. V.

    2003-09-01

    We describe a FORTRAN 77 implementation of the optimal jet definition for identification of jets in hadronic final states of particle collisions. We discuss details of the implementation, explain interface subroutines and provide a usage example. The source code is available from http://www.inr.ac.ru/~ftkachov/projects/jets/.
    Program summary. Title of program: Optimal Jet Finder (OJF_014). Catalogue identifier: ADSB. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSB. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: any computer with a FORTRAN 77 compiler. Tested with: g77/Linux on Intel, Alpha and Sparc; Sun f77/Solaris (thwgs.cern.ch); xlf/AIX (rsplus.cern.ch); MS Fortran PowerStation 4.0/Win98. Programming language used: FORTRAN 77. Memory required: ~1 MB (or more, depending on the settings). Number of bytes in distributed program, including examples and test data: 251 463. Distribution format: tar gzip file. Keywords: hadronic jets, jet finding algorithms.
    Nature of physical problem: Analysis of hadronic final states in high energy particle collision experiments often involves identification of hadronic jets. A large number of hadrons detected in the calorimeter is reduced to a few jets by means of a jet finding algorithm. The jets are used in further analysis which would be difficult or impossible when applied directly to the hadrons. Grigoriev et al. [hep-ph/0301185] provide a brief introduction to the subject of jet finding algorithms, and a general review of the physics of jets can be found in [Rep. Prog. Phys. 36 (1993) 1067].
    Method of solution: The software we provide is an implementation of the so-called optimal jet definition (OJD). The theory of OJD was developed by Tkachov [Phys. Rev. Lett. 73 (1994) 2405; 74 (1995) 2618; Int. J. Mod. Phys. A 12 (1997) 5411; 17 (2002) 2783]. The desired jet configuration is obtained as the one that minimizes Ω_R, a certain function of the input particles and jet

  18. SU-E-T-09: A Clinical Implementation and Optimized Dosimetry Study of Freiberg Flap Skin Surface Treatment in High Dose Rate Brachytherapy

    SciTech Connect

    Syh, J; Syh, J; Patel, B; Wu, H; Durci, M

    2015-06-15

    Purpose: This case study was designed to confirm that an optimized plan was used to treat the skin surface of the left leg, in three stages: 1. To evaluate dose distribution and plan quality by alternating the source-loading catheter pattern in the flexible Freiburg Flap skin surface (FFSS) applicator. 2. To investigate any impact on the Dose Volume Histogram (DVH) of large superficial surface target volume coverage. 3. To compare the dose distribution if the site were treated with an electron beam. Methods: The Freiburg Flap is a flexible mesh-style surface mold for skin radiation or intraoperative surface treatments. The Freiburg Flap consists of multiple spheres that are attached to each other, holding and guiding up to 18 treatment catheters. The Freiburg Flap also ensures a constant distance of 5 mm from the treatment catheter to the surface. Three treatment trials with individual planning optimization were employed: 18 channels of the FF, 9 channels of the FF, and a 6 MeV electron beam. The comparisons were highlighted in target coverage, dose conformity and dose sparing of surrounding tissues. Results: The first, 18-channel brachytherapy plan was generated with 18 catheters inside the skin-wrapped flap (Figure 1A). A second, 9-catheter plan was generated with the same calculation points, which were assigned to match the prescription for target coverage as in the 18-catheter plan (Figure 1B). The optimized inverse plan was employed to reduce the dose to adjacent structures such as the tibia or fibula. The comparison of DVHs is depicted in Figure 2. The external-beam electron RT plan is depicted in Figure 3. Overall comparisons among the three approaches were also illustrated. Conclusion: The 9-channel Freiburg Flap flexible skin applicator offers a reasonably acceptable plan without compromising the coverage. Electron beam treatment is discouraged for curved skin surfaces because of low target coverage and high dose in adjacent tissues.

  19. Discovery of benzimidazole-diamide finger loop (Thumb Pocket I) allosteric inhibitors of HCV NS5B polymerase: Implementing parallel synthesis for rapid linker optimization.

    PubMed

    Goulet, Sylvie; Poupart, Marc-André; Gillard, James; Poirier, Martin; Kukolj, George; Beaulieu, Pierre L

    2010-01-01

    Previously described SAR of benzimidazole-based non-nucleoside finger loop (Thumb Pocket I) inhibitors of HCV NS5B polymerase was expanded. Prospecting studies using parallel synthesis techniques allowed the rapid identification of novel cinnamic acid right-hand sides that provide renewed opportunities for further optimization of these inhibitors. Novel diamide derivatives such as 44 exhibited comparable potency (enzymatic and cell-based HCV replicon) as previously described tryptophan-based inhibitors but physicochemical properties (e.g., aqueous solubility and lipophilicity) have been improved, resulting in molecules with reduced off-target liabilities (CYP inhibition) and increased metabolic stability.

  20. HPF Implementation of NPB2.3

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Jin, Hao-Qiang; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    We present the HPF implementation of BT, SP, LU, FT, and MG from the NPB2.3-serial benchmark set. The implementation is based on an HPF performance model of the benchmark-specific operations with distributed arrays. We present profiling and performance data on the SGI Origin 2000 and compare the results with NPB2.3. We discuss advantages and limitations of HPF and the pghpf compiler.

  1. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvements in Java compiler technology and in the Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  2. The design and implementation of a parallel unstructured Euler solver using software primitives

    NASA Technical Reports Server (NTRS)

    Das, R.; Mavriplis, D. J.; Saltz, J.; Gupta, S.; Ponnusamy, R.

    1992-01-01

    This paper is concerned with the implementation of a three-dimensional unstructured grid Euler-solver on massively parallel distributed-memory computer architectures. The goal is to minimize solution time by achieving high computational rates with a numerically efficient algorithm. An unstructured multigrid algorithm with an edge-based data structure has been adopted, and a number of optimizations have been devised and implemented in order to accelerate the parallel communication rates. The implementation is carried out by creating a set of software tools, which provide an interface between the parallelization issues and the sequential code, while providing a basis for future automatic run-time compilation support. Large practical unstructured grid problems are solved on the Intel iPSC/860 hypercube and Intel Touchstone Delta machine. The quantitative effect of the various optimizations are demonstrated, and we show that the combined effect of these optimizations leads to roughly a factor of three performance improvement. The overall solution efficiency is compared with that obtained on the CRAY-YMP vector supercomputer.
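
    For context, the edge-based data structure mentioned above leads to loops of roughly the following shape (a minimal C++ sketch under assumed types, not the solver's actual code); on a distributed-memory machine, the gather of nodal state and the scatter-add of residuals around this loop are where the communication optimizations apply:

      #include <vector>

      struct Edge { int n1, n2; };   // indices of the two endpoint nodes

      void accumulateResiduals(const std::vector<Edge>& edges,
                               const std::vector<double>& state,
                               std::vector<double>& residual) {
          for (const Edge& e : edges) {
              // hypothetical edge flux; a real solver evaluates the Euler flux here
              double flux = state[e.n2] - state[e.n1];
              residual[e.n1] += flux;    // scatter-add to both endpoints
              residual[e.n2] -= flux;
          }
      }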

  3. Design and implementation of a parallel unstructured Euler solver using software primitives

    NASA Technical Reports Server (NTRS)

    Das, R.; Mavriplis, D. J.; Saltz, J.; Gupta, S.; Ponnusamy, R.

    1994-01-01

    This paper is concerned with the implementation of a three-dimensional unstructured-grid Euler solver on massively parallel distributed-memory computer architectures. The goal is to minimize solution time by achieving high computational rates with a numerically efficient algorithm. An unstructured multigrid algorithm with an edge-based data structure has been adopted, and a number of optimizations have been devised and implemented to accelerate the parallel computational rates. The implementation is carried out by creating a set of software tools, which provide an interface between the parallelization issues and the sequential code, while providing a basis for future automatic run-time compilation support. Large practical unstructured grid problems are solved on the Intel iPSC/860 hypercube and Intel Touchstone Delta machine. The quantitative effects of the various optimizations are demonstrated, and we show that the combined effect of these optimizations leads to roughly a factor of 3 performance improvement. The overall solution efficiency is compared with that obtained on the Cray Y-MP vector supercomputer.

  4. Cameron - Optimized Compilation of Visual Programs for Image Processing on Adaptive Computing Systems (ACS)

    DTIC Science & Technology

    2002-01-01

    the Cameron project. The goal of the Cameron project is to make FPGAs and other adaptive computer systems available to more applications programmers... It happens that for SA-C programs, the host executable off-loads the processing of loops onto an FPGA, but this is invisible. SA-C therefore makes reconfigurable processors accessible to applications programmers with no hardware...

  5. Designing of High-Volume PET/CT Facility with Optimal Reduction of Radiation Exposure to the Staff: Implementation and Optimization in a Tertiary Health Care Facility in India

    PubMed Central

    Jha, Ashish Kumar; Singh, Abhijith Mohan; Mithun, Sneha; Shah, Sneha; Agrawal, Archi; Purandare, Nilendu C.; Shetye, Bhakti; Rangarajan, Venkatesh

    2015-01-01

    Positron emission tomography (PET) has been in use for a few decades, but with its fusion with computed tomography (CT) in 2001, the integrated PET/CT system has become very popular and is now a key modality for patient management in oncology. However, along with its growing popularity, a growing concern about radiation safety among radiation professionals has become evident. We have judiciously developed a PET/CT facility with optimal shielding, along with an efficient workflow to perform high-volume procedures and minimize radiation exposure to the staff and the general public by reducing unnecessary patient proximity to them. PMID:26420990

  6. COMPILATION OF LABORATORY SCALE ALUMINUM WASH AND LEACH REPORT RESULTS

    SciTech Connect

    HARRINGTON SJ

    2011-01-06

    This report compiles and analyzes all known wash and caustic leach laboratory studies. As further data is produced, this report will be updated. Included are aluminum mineralogical analysis results as well as a summation of the wash and leach procedures and results. Of the 177 underground storage tanks at Hanford, information was only available for five individual double-shell tanks, forty-one individual single-shell tanks (e.g. thirty-nine 100 series and two 200 series tanks), and twelve grouped tank wastes. Seven of the individual single-shell tank studies provided data for the percent of aluminum removal as a function of time for various caustic concentrations and leaching temperatures. It was determined that in most cases increased leaching temperature, caustic concentration, and leaching time leads to increased dissolution of leachable aluminum solids.

  7. Compilation of Sandia coal char combustion data and kinetic analyses

    SciTech Connect

    Mitchell, R.E.; Hurt, R.H.; Baxter, L.L.; Hardesty, D.R.

    1992-06-01

    An experimental project was undertaken to characterize the physical and chemical processes that govern the combustion of pulverized coal chars. The experimental endeavor establishes a database on the reactivities of coal chars as a function of coal type, particle size, particle temperature, gas temperature, and gas composition. The project also provides a better understanding of the mechanism of char oxidation, and yields quantitative information on the release rates of nitrogen- and sulfur-containing species during char combustion. An accurate predictive engineering model of the overall char combustion process under technologically relevant conditions is a primary product of this experimental effort. This document summarizes the experimental effort, the approach used to analyze the data, and individual compilations of data and kinetic analyses for each of the parent coals investigated.

  8. World petroleum assessment 2000; compiled PowerPoint slides

    USGS Publications Warehouse

    Ahlbrandt, Thomas S.

    2001-01-01

    The slides in this compilation have been produced for a number of presentations on the World Petroleum Assessment 2000. Many of the figures are taken directly from the publication "U.S. Geological Survey World Petroleum Assessment 2000 - Description and Results: USGS Digital Data Series DDS-60, 2000." Some of the slides are modifications of figures from DDS-60, some are new descriptive slides, and a few are entirely new. Several of the slides appear to be duplicates, but in fact are slight modifications for format or content from the same image. Forty-one people participated in this effort as part of the World Energy Assessment Team. The full list of contributors is given in DDS-60.

  9. Compiler-Enhanced Incremental Checkpointing for OpenMP Applications

    SciTech Connect

    Bronevetsky, G; Marques, D; Pingali, K; McKee, S; Rugina, R

    2009-02-18

    As modern supercomputing systems reach the peta-flop performance range, they grow in both size and complexity. This makes them increasingly vulnerable to failures from a variety of causes. Checkpointing is a popular technique for tolerating such failures, enabling applications to periodically save their state and restart computation after a failure. Although a variety of automated system-level checkpointing solutions are currently available to HPC users, manual application-level checkpointing remains more popular due to its superior performance. This paper improves performance of automated checkpointing via a compiler analysis for incremental checkpointing. This analysis, which works with both sequential and OpenMP applications, significantly reduces checkpoint sizes and enables asynchronous checkpointing.
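
    The idea behind the compiler analysis can be pictured with a small sketch (hypothetical structure and file layout, not the system described above): state is divided into regions, stores since the last checkpoint mark their region dirty, and a checkpoint writes only the dirty regions.

      #include <cstdio>
      #include <vector>

      struct Region {
          std::vector<double> data;
          bool dirty = true;              // conservatively dirty before the first checkpoint
      };

      // Write only the regions modified since the previous checkpoint.
      void checkpoint(std::vector<Region>& regions, std::FILE* out) {
          for (Region& r : regions) {
              if (!r.dirty) continue;                       // unchanged: skip it
              std::fwrite(r.data.data(), sizeof(double), r.data.size(), out);
              r.dirty = false;
          }
      }

      int main() {
          std::vector<Region> regions(4);
          for (Region& r : regions) r.data.assign(1024, 0.0);
          std::FILE* out = std::fopen("ckpt.bin", "wb");
          if (!out) return 1;
          checkpoint(regions, out);       // first checkpoint writes everything
          regions[2].data[0] = 42.0;      // a store the compiler analysis would flag
          regions[2].dirty = true;
          checkpoint(regions, out);       // second checkpoint writes only region 2
          std::fclose(out);
          return 0;
      }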

  10. Global Seismicity: Three New Maps Compiled with Geographic Information Systems

    NASA Technical Reports Server (NTRS)

    Lowman, Paul D., Jr.; Montgomery, Brian C.

    1996-01-01

    This paper presents three new maps of global seismicity compiled from NOAA digital data, covering the interval 1963-1998, with three different magnitude ranges (mb): greater than 3.5, less than 3.5, and all detectable magnitudes. A commercially available geographic information system (GIS) was used as the database manager. Epicenter locations were acquired from a CD-ROM supplied by the National Geophysical Data Center. A methodology is presented that can be followed by general users. The implications of the maps are discussed, including the limitations of conventional plate models, and the different tectonic behavior of continental vs. oceanic lithosphere. Several little-known areas of intraplate or passive margin seismicity are also discussed, possibly expressing horizontal compression generated by ridge push.

  11. Southwest Indian Ocean Bathymetric Compilation (swIOBC)

    NASA Astrophysics Data System (ADS)

    Jensen, L.; Dorschel, B.; Arndt, J. E.; Jokat, W.

    2014-12-01

    As a result of long-term scientific activities in the southwest Indian Ocean, an extensive amount of swath bathymetric data has accumulated in the AWI database. Using this data as a backbone, supplemented by additional bathymetric data sets and predicted bathymetry, we generate a comprehensive regional bathymetric data compilation for the southwest Indian Ocean. A high resolution bathymetric chart of this region will support geological and climate research: identification of current-induced seabed structures will help in modelling oceanic currents and, thus, provide proxy information about the paleo-climate. Analysis of the sediment distribution will contribute to reconstructing the erosional history of Eastern Africa. The aim of swIOBC is to produce a homogeneous and seamless bathymetric grid with an associated meta-database and a corresponding map for the area from 5° to 39° S and 20° to 44° E. Currently, multibeam data with a track length of approximately 86,000 km are held in-house. In combination with external echosounding data this allows for the generation of a regional grid, significantly improving the existing, mostly satellite altimetry derived, bathymetric models. The collected data sets are heterogeneous in terms of age, acquisition system, background data, resolution, accuracy, and documentation. As a consequence, the production of a bathymetric grid requires special techniques and algorithms, which were already developed for the IBCAO (Jakobsson et al., 2012) and further refined for the IBCSO (Arndt et al., 2013). The new regional southwest Indian Ocean chart will be created based on these methods. Arndt, J.E., et al., 2013. The International Bathymetric Chart of the Southern Ocean (IBCSO) Version 1.0—A new bathymetric compilation covering circum-Antarctic waters. GRL 40, 1-7, doi: 10.1002/grl.50413, 2013. Jakobsson, M., et al., 2012. The International Bathymetric Chart of the Arctic Ocean (IBCAO) Version 3.0. GRL 39, L12609, doi: 10.1029/2012GL052219.

  12. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids Implementation in the US Eastern Interconnection

    DOE PAGES

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; ...

    2016-01-12

    This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. In this paper, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. This paper also analyzes a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly through modeling more detailed information of wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will help system planners and policy makers maximize the social welfare in large-scale power grids.

  13. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids Implementation in the US Eastern Interconnection

    SciTech Connect

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; Liu, Yilu

    2016-01-12

    This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. In this paper, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. This paper also analyzes a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can be easily introduced into the expansion planning problem and then solved through existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly through modeling more detailed information of wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will help system planners and policy makers maximize the social welfare in large-scale power grids.
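
    To make the expansion co-optimization concrete, here is a deliberately tiny sketch of the same structure (binary build decisions plus per-period dispatch against a load/wind series) written with the PuLP package. The candidate list, costs, and the three-period series are invented for illustration; the study above models the full Eastern Interconnection, not this toy.

      # Toy generation/transmission expansion MIP, loosely in the spirit of the
      # co-optimization described above. All numbers are hypothetical.
      # Requires the PuLP package (ships with the CBC solver).
      import pulp

      periods = [0, 1, 2]
      load = {0: 90.0, 1: 120.0, 2: 150.0}       # MW demand per period (hypothetical)
      wind = {0: 40.0, 1: 10.0, 2: 25.0}         # available wind per period (hypothetical)

      candidates = {                              # name: (capacity MW, build cost, fuel cost/MWh)
          "gas_plant": (80.0, 400.0, 30.0),
          "new_line":  (60.0, 250.0,  5.0),       # import over a candidate transmission line
      }

      prob = pulp.LpProblem("expansion", pulp.LpMinimize)
      build = {c: pulp.LpVariable(f"build_{c}", cat="Binary") for c in candidates}
      dispatch = {(c, t): pulp.LpVariable(f"disp_{c}_{t}", lowBound=0)
                  for c in candidates for t in periods}
      wind_used = {t: pulp.LpVariable(f"wind_{t}", lowBound=0, upBound=wind[t]) for t in periods}

      # Objective: build cost plus operating cost over the series.
      prob += (pulp.lpSum(candidates[c][1] * build[c] for c in candidates)
               + pulp.lpSum(candidates[c][2] * dispatch[c, t] for c in candidates for t in periods))

      for t in periods:
          # Meet load in every period; dispatch is limited by built capacity.
          prob += wind_used[t] + pulp.lpSum(dispatch[c, t] for c in candidates) >= load[t]
          for c in candidates:
              prob += dispatch[c, t] <= candidates[c][0] * build[c]

      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print({c: int(build[c].value()) for c in candidates})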

  14. Harmonic analysis and FPGA implementation of SHE controlled three phase CHB 11-level inverter in MV drives using deterministic and stochastic optimization techniques.

    PubMed

    Vesapogu, Joshi Manohar; Peddakotla, Sujatha; Kuppa, Seetha Rama Anjaneyulu

    2013-01-01

    With the advancements in semiconductor technology, high power medium voltage (MV) drives are extensively used in numerous industrial applications. A challenging technical requirement of MV drives is to control the multilevel inverter (MLI) with low total harmonic distortion (%THD), satisfying the IEEE Standard 519-1992 harmonic guidelines, and with low switching losses. Among all modulation control strategies for MLIs, the selective harmonic elimination (SHE) technique is one of the traditionally preferred modulation control techniques at fundamental switching frequency, with a better harmonic profile. On the other hand, the equations formed by the SHE technique are highly non-linear in nature and may have multiple, single, or even no solutions at a particular modulation index (MI). However, some MV drive applications require operation over a range of MI. Providing analytical solutions for the SHE equations over the whole range of MI from 0 to 1 has been a challenging task for researchers. In this paper, an attempt is made to solve the SHE equations by using deterministic and stochastic optimization methods, and a comparative harmonic analysis has been carried out. An effective algorithm which minimizes %THD with less computational effort than the other optimization algorithms is presented. To validate the effectiveness of the proposed MPSO technique, an experiment is carried out on a low power prototype of a three phase CHB 11-level inverter using an FPGA-based Xilinx Spartan-3A DSP controller. The experimental results proved that the MPSO technique successfully solved the SHE equations over the whole range of MI from 0 to 1, and the %THD obtained over the major range of MI also satisfies the IEEE 519-1992 harmonic guidelines.
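
    For orientation, the SHE system for an 11-level CHB inverter has five switching angles constrained by one fundamental-amplitude equation and four harmonic-cancellation equations. The sketch below writes that system in the usual textbook form (fundamental target sum(cos θ) = 5·M, harmonics 5, 7, 11, 13 driven to zero) and solves it with a generic bounded least-squares routine; this is an assumption-laden stand-in, not the paper's MPSO algorithm, and the modulation index value is arbitrary.

      # Sketch of the SHE equations for a 5-angle (11-level) CHB inverter, solved
      # with SciPy's bounded least-squares instead of the paper's MPSO.
      import numpy as np
      from scipy.optimize import least_squares

      HARMONICS = (5, 7, 11, 13)                  # non-triplen harmonics to eliminate

      def she_residuals(theta, m_index):
          res = [np.sum(np.cos(theta)) - 5.0 * m_index]           # fundamental amplitude
          res += [np.sum(np.cos(h * theta)) for h in HARMONICS]   # harmonics to cancel
          return res

      def solve_she(m_index, guess=None):
          guess = np.linspace(0.1, 1.3, 5) if guess is None else guess
          sol = least_squares(she_residuals, guess, args=(m_index,),
                              bounds=(0.0, np.pi / 2))
          return np.sort(sol.x), sol.cost

      angles, cost = solve_she(0.8)
      print(np.degrees(angles), cost)   # five firing angles in degrees; cost ~ 0 if solved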

  15. Design and implementation of three-dimensional ring-scanning equipment for optimized measurements of near-infrared diffuse optical breast imaging

    NASA Astrophysics Data System (ADS)

    Yu, Jhao-Ming; Pan, Min-Cheng; Hsu, Ya-Fen; Chen, Liang-Yu; Pan, Min-Chun

    2015-07-01

    We propose and implement three-dimensional (3-D) ring-scanning equipment for near-infrared (NIR) diffuse optical imaging to screen breast tumors with the patient in a prone position. The equipment provides radial, circular, and vertical motion without compression of the breast tissue, thereby achieving 3-D scanning; furthermore, a flexible combination of illumination and detection can be configured for the required resolution. In particular, a rotation-sliding-and-moving mechanism was designed to guide the motion of the source and detection channels. Prior to machining and construction of the system, synthesized image reconstructions were simulated to show the feasibility of this 3-D NIR ring-scanning equipment; finally, the equipment was verified through phantom experiments. Rather than a fixed configuration, this screening/diagnosis equipment offers the flexibility of optical-channel expansion for spatial resolution and dimensional freedom for scanning in reconstructing optical-property images.

  16. Compiling probabilistic, bio-inspired circuits on a field programmable analog array

    PubMed Central

    Marr, Bo; Hasler, Jennifer

    2014-01-01

    A field programmable analog array (FPAA) is presented as an energy and computational efficiency engine: a mixed mode processor for which functions can be compiled at significantly lower energy cost using probabilistic computing circuits. More specifically, it is shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than a digital implementation. A stochastic system that is dynamically controllable via voltage controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. From Bernoulli variables it is shown that exponentially distributed random variables, and random variables of an arbitrary distribution, can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach is extended to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
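
    The pipeline described above (Bernoulli variables, then exponential variables, then a Gillespie trajectory) can be mirrored in software to clarify the math. The sketch below is only that analogue: on the FPAA these stages are analog circuits, whereas here they are ordinary pseudorandom arithmetic, and the decay-reaction parameters are made up.

      # Software analogue of the pipeline above: Bernoulli bits -> uniform variate
      # -> exponentially distributed waiting time -> Gillespie (SSA) trajectory.
      import random
      import math

      def bernoulli_bit(p=0.5):
          return 1 if random.random() < p else 0

      def uniform_from_bits(n_bits=32):
          # Assemble a uniform(0,1) variate from fair Bernoulli bits.
          x = 0
          for _ in range(n_bits):
              x = (x << 1) | bernoulli_bit()
          return (x + 0.5) / (1 << n_bits)

      def exponential_from_bits(rate):
          # Inverse-transform sampling: -ln(U)/rate is Exp(rate) distributed.
          return -math.log(uniform_from_bits()) / rate

      def gillespie_decay(n0, k, t_end):
          """SSA trajectory for a single first-order decay reaction A -> 0."""
          t, n, traj = 0.0, n0, [(0.0, n0)]
          while n > 0:
              propensity = k * n
              t += exponential_from_bits(propensity)   # time to next event
              if t > t_end:
                  break
              n -= 1
              traj.append((t, n))
          return traj

      print(gillespie_decay(n0=50, k=0.1, t_end=20.0)[:5])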

  17. Admission and capacity planning for the implementation of one-stop-shop in skin cancer treatment using simulation-based optimization.

    PubMed

    Romero, H L; Dellaert, N P; van der Geer, S; Frunt, M; Jansen-Vullers, M H; Krekels, G A M

    2013-03-01

    Hospitals and health care institutions are facing the challenge of improving the quality of their services while reducing their costs. The current study presents the application of operations management practices in a dermatology oncology outpatient clinic specializing in skin cancer treatment. An interesting alternative considered by the clinic is the implementation of a one-stop-shop concept for the treatment of new patients diagnosed with basal cell carcinoma. This alternative proposes a significant improvement in the average waiting time that a patient spends between diagnosis and treatment. This study is focused on the identification of factors that influence the average throughput time of patients treated in the clinic from the logistic perspective. A two-phase approach was followed to achieve the goals stated in this study. The first phase included an integrated approach for the deterministic analysis of capacity using a demand-supply model of the hospital processes, while the second phase involved the development of a simulation model to include variability in the activities involved in the process and to evaluate different scenarios. Results showed that by managing three factors (the admission rule, resource allocation, and capacity planning), throughput times for treatments of new patients in the dermato-oncology unit can be decreased by more than 90%, even with the same resource level. Finally, a pilot study with 16 patients was also conducted to evaluate the impact of implementing the one-stop-shop concept from a clinical perspective. Patients turned out to be satisfied with the fast diagnosis and treatment.

  18. Ada Compiler Validation Summary Report: Certificate Number: 940325S1. 11353 DDC-I, DACS Sun SPARC/Solaris to Pentium PM Bare Ada Cross Compiler System with Rate Monotonic Scheduling, Version 4.6.4 Sun SPARCclassic => Intel Pentium (operated as Bare Machine) based in Xpress Desktop (Intel product number: XBASE6E4F-B)

    DTIC Science & Technology

    1994-04-11

    Authors: National Institute of... Distribution unlimited. Host: Sun SPARCclassic (under Solaris, Release 2.1). Target: Intel Xpress Desktop (product number XBASE6E4F-B), operated as a bare machine. The Ada implementation was tested and determined to pass ACVC 1.11. Testing was completed on March 25, 1994. Compiler Name and Version: DACS Sun SPARC/Solaris to Pentium PM Bare Ada Cross Compiler System with Rate Monotonic Scheduling, Version 4.6.4.

  19. Optimizing Interactive Development of Data-Intensive Applications

    PubMed Central

    Interlandi, Matteo; Tetali, Sai Deep; Gulzar, Muhammad Ali; Noor, Joseph; Condie, Tyson; Kim, Miryung; Millstein, Todd

    2017-01-01

    Modern Data-Intensive Scalable Computing (DISC) systems are designed to process data through batch jobs that execute programs (e.g., queries) compiled from a high-level language. These programs are often developed interactively by posing ad-hoc queries over the base data until a desired result is generated. We observe that there can be significant overlap in the structure of these queries used to derive the final program. Yet, each successive execution of a slightly modified query is performed anew, which can significantly increase the development cycle. Vega is an Apache Spark framework that we have implemented for optimizing a series of similar Spark programs, likely originating from a development or exploratory data analysis session. Spark developers (e.g., data scientists) can leverage Vega to significantly reduce the amount of time it takes to re-execute a modified Spark program, reducing the overall time to market for their Big Data applications.
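
    The reuse opportunity Vega exploits can be illustrated with a small PySpark session in which two ad-hoc queries share an expensive prefix; caching that prefix by hand is the manual analogue of what Vega automates. This is only an illustration under assumed column names and an assumed input path, not Vega's API.

      # Illustration (not Vega itself) of the reuse opportunity described above:
      # two ad-hoc queries from an exploratory session share an expensive prefix.
      # Column names and the input path are hypothetical; requires pyspark.
      from pyspark.sql import SparkSession
      from pyspark.sql import functions as F

      spark = SparkSession.builder.appName("vega-reuse-sketch").getOrCreate()

      events = spark.read.json("events.json")                    # hypothetical input

      # Shared prefix of both queries: parse, filter, and aggregate per user.
      per_user = (events
                  .filter(F.col("status") == "ok")
                  .groupBy("user_id")
                  .agg(F.count("*").alias("n_events")))
      per_user.cache()                                           # reuse across iterations

      # First ad-hoc query of the session.
      per_user.filter(F.col("n_events") > 100).show()

      # Slightly modified follow-up query: only the final predicate changed,
      # so the cached prefix is not recomputed.
      per_user.filter(F.col("n_events") > 1000).show()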

  20. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    SciTech Connect

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh; Manzano Franco, Joseph B.; Tumeo, Antonino

    2015-05-20

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
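
    One of the three tuning dimensions above, compiler-based code transformations, can be approximated with a very small empirical search: build the kernel under each flag combination, time it, and keep the fastest. The sketch below does exactly that in plain Python; it is a greatly simplified stand-in for the extended OpenTuner setup, and the source file name and flag list are hypothetical.

      # Greatly simplified stand-in for the auto-tuning study above: exhaustively
      # time a kernel under a few compiler-flag combinations and keep the fastest.
      import itertools
      import subprocess
      import time

      SOURCE = "hpccg_kernel.c"                       # hypothetical kernel source
      FLAG_SPACE = [
          ["-O2", "-O3"],
          ["", "-funroll-loops"],
          ["", "-march=native"],
      ]

      def build_and_time(flags):
          cmd = ["gcc", *[f for f in flags if f], SOURCE, "-o", "kernel.bin"]
          subprocess.run(cmd, check=True)             # compile with this configuration
          start = time.perf_counter()
          subprocess.run(["./kernel.bin"], check=True)
          return time.perf_counter() - start

      best = None
      for combo in itertools.product(*FLAG_SPACE):
          elapsed = build_and_time(combo)
          if best is None or elapsed < best[1]:
              best = (combo, elapsed)

      print("fastest configuration:", best)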

  1. Compilation of PRF Canyon Floor Pan Sample Analysis Results

    SciTech Connect

    Pool, Karl N.; Minette, Michael J.; Wahl, Jon H.; Greenwood, Lawrence R.; Coffey, Deborah S.; McNamara, Bruce K.; Bryan, Samuel A.; Scheele, Randall D.; Delegard, Calvin H.; Sinkov, Sergey I.; Soderquist, Chuck Z.; Fiskum, Sandra K.; Brown, Garrett N.; Clark, Richard A.

    2016-06-30

    On September 28, 2015, debris collected from the PRF (236-Z) canyon floor, Pan J, was observed to exhibit a chemical reaction. The material had been transferred from the floor pan to a collection tray inside the canyon the previous Friday. Work in the canyon was stopped to allow Industrial Hygiene to perform monitoring of the material reaction. Canyon floor debris that had been sealed out was sequestered at the facility, a recovery plan was developed, and drum inspections were initiated to verify no additional reactions had occurred. On October 13, in-process drums containing other Pan J material were inspected and showed some indication of a chemical reaction, limited to discoloration and degradation of inner plastic bags. All Pan J material was sealed back into the canyon and returned to collection trays. Based on the high airborne levels in the canyon during physical debris removal, ETGS (Encapsulation Technology Glycerin Solution) was used as a fogging/lock-down agent. On October 15, subject matter experts confirmed a reaction had occurred between nitrates (both plutonium nitrate and aluminum nitrate nonahydrate (ANN) are present) in the Pan J material and the ETGS fixative used to lower airborne radioactivity levels during debris removal. Management stopped the use of fogging/lock-down agents containing glycerin on bulk materials, declared a Management Concern, and initiated the Potential Inadequacy in the Safety Analysis determination process. Additional drum inspections and laboratory analysis of both reacted and unreacted material are planned. This report compiles the results of many different sample analyses conducted by the Pacific Northwest National Laboratory on samples collected from the Plutonium Reclamation Facility (PRF) floor pans by CH2M Hill's Plateau Remediation Company (CHPRC). Revision 1 added Appendix G, which reports the Gas Generation Rate results and methodology. The scope of analyses requested by CHPRC includes the determination of

  2. Ada compiler validation summary report. Certificate number: 891116W1. 10191. Intel Corporation, IPSC/2 Ada, Release 1. 1, IPSC/2 parallel supercomputer, system resource manager host and IPSC/2 parallel supercomputer, CX-1 nodes target

    SciTech Connect

    Not Available

    1989-11-16

    This VSR documents the results of the validation testing performed on an Ada compiler. Testing was carried out for the following purposes: to attempt to identify any language constructs supported by the compiler that do not conform to the Ada Standard; to attempt to identify any language constructs not supported by the compiler but required by the Ada Standard; and to determine that the implementation-dependent behavior is allowed by the Ada Standard. Testing of this compiler was conducted by SofTech, Inc. under the direction of the AVF according to procedures established by the Ada Joint Program Office and administered by the Ada Validation Organization (AVO). On-site testing was completed 16 November 1989 at Aloha, OR.

  3. The development of a multi-target compiler-writing system for flight software development

    NASA Technical Reports Server (NTRS)

    Feyock, S.; Donegan, M. K.

    1977-01-01

    A wide variety of systems designed to assist the user in the task of writing compilers has been developed. A survey of these systems reveals that none is entirely appropriate to the purposes of the MUST project, which involves the compilation of one or at most a small set of higher-order languages to a wide variety of target machines offering little or no software support. This requirement dictates that any compiler writing system employed must provide maximal support in the areas of semantics specification and code generation, the areas in which existing compiler writing systems as well as theoretical underpinnings are weakest. This paper describes an ongoing research and development effort to create a compiler writing system which will overcome these difficulties, thus providing a software system which makes possible the fast, trouble-free creation of reliable compilers for a wide variety of target computers.

  4. OMPC: an Open-Source MATLAB-to-Python Compiler.

    PubMed

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB®, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose the Open-source MATLAB®-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB® functions into Python programs. The imported MATLAB® modules will run independently of MATLAB®, relying on Python's numerical and scientific libraries. Python offers a stable and mature open-source platform that, in many respects, surpasses commonly used, expensive commercial closed-source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB®. OMPC is available at http://ompc.juricap.com.

  5. L Chondrite meteorites: A compilation and preliminary analyses

    NASA Technical Reports Server (NTRS)

    Silliman, A.

    1984-01-01

    A compilation of those meteorites currently recognized as being L chondrites, exclusive of the numerous Antarctic finds, was made and is known as the L Chondrite Register. Data for these 576 meteorites were collected from a variety of sources, primarily the British Museum's Catalogue of Meteorites and the Appendix to the Catalogue of Meteorites. Also used was the Revised Cambridge Chondrite Compendium, which provided a convenient listing of L chondrites; other sources include Chinese Meteorites; Meteorites, by Wasson (1974); and the Meteoritical Bulletin of Meteoritics. This last source provided data for most recent falls and was referenced through March of 1982. All such data were recorded on a computer data file with an HP 2647A terminal, so that information could easily be retrieved and manipulated. For each meteorite, the petrographic class, location of find, fall date and hour, mass, mole percent fayalite, weight percent Fe, SiO2/MgO ratio, shock class, metal class, 4He abundance, U,Th-He gas retention age, K-Ar gas retention age, and 21Ne cosmic ray exposure age were recorded when known.

  6. Mars Pathfinder and Mars Global Surveyor Outreach Compilation

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This videotape is a compilation of the best NASA JPL (Jet Propulsion Laboratory) videos of the Mars Pathfinder and Mars Global Surveyor missions. The mission is described using animation and narration as well as some actual footage of the entire sequence of mission events. Included within these animations are the spacecraft orbit insertion; descent to the Mars surface; deployment of the airbags and instruments; and exploration by Sojourner, the Mars rover. JPL activities at spacecraft control during significant mission events are also included at the end. The spacecraft cameras pan the surrounding Mars terrain and film Sojourner traversing the surface and inspecting rocks. A single, brief, processed image of the Cydonia region (Mars face) at an oblique angle from the Mars Global Surveyor is presented. The Mars Pathfinder mission, instruments, landing and deployment process, Mars approach, spacecraft orbit insertion, and rover operation are all described using computer animation. Actual color footage of Sojourner as well as a 360 deg pan of the Mars terrain surrounding the spacecraft is provided. Lower quality black-and-white photography depicting Sojourner traversing the Mars surface and inspecting Martian rocks is also included.

  7. Compilation and analyses of aquifer performance tests in eastern Kansas

    USGS Publications Warehouse

    Reed, T.B.; Burnett, R.D.

    1985-01-01

    Selected aquifer-test data from 36 counties in eastern Kansas were collected from numerous sources and publications in order to produce a documented compilation of aquifer tests in one report. Data were obtained chiefly from private consulting firms and from government agencies. Hydraulic properties determined included transmissivity, storage coefficient (where an observation well was available), and in some cases hydraulic properties of a confining layer. The aquifers tested comprised three main types of rocks--consolidated rock deposits, glacial deposits, and alluvial deposits that include the 'Equus beds,' an extensive alluvial deposit in south-central Kansas. The Theis recovery equation and the Cooper-Jacob modified nonequilibrium equation were the two principal solution methods used. Other methods used included the Theis nonequilibrium equation, the Hantush-Jacob equation for a leaky confined aquifer, Hantush's modified leaky equation in which storage from a confining layer was considered, and Boulton's delayed-yield equation. Additionally, a specific-capacity method of estimating transmissivity was used when only a single drawdown value was available. (USGS)
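
    For readers unfamiliar with the Cooper-Jacob straight-line method mentioned above, the working formulas are T = 2.303 Q / (4 pi Δs), with Δs the drawdown per log cycle of time, and S = 2.25 T t0 / r², with t0 the zero-drawdown intercept time and r the distance to the observation well. The sketch below evaluates them; the pumping rate, slope, and well geometry are hypothetical illustration values, not data from the compilation.

      # Cooper-Jacob straight-line estimates of transmissivity and storage
      # coefficient, the kind of analysis applied to many of the compiled tests.
      import math

      def cooper_jacob_transmissivity(q, ds_per_log_cycle):
          """T = 2.303 Q / (4 pi * delta_s), in consistent units (here SI)."""
          return 2.303 * q / (4.0 * math.pi * ds_per_log_cycle)

      def cooper_jacob_storage(t, t0_seconds, r):
          """S = 2.25 T t0 / r^2, using the zero-drawdown intercept time t0."""
          return 2.25 * t * t0_seconds / r**2

      Q = 0.02            # pumping rate, m^3/s (hypothetical)
      ds = 0.6            # drawdown per log cycle of time, m (hypothetical)
      r = 30.0            # distance to observation well, m (hypothetical)
      t0 = 120.0          # zero-drawdown intercept, s (hypothetical)

      T = cooper_jacob_transmissivity(Q, ds)
      S = cooper_jacob_storage(T, t0, r)
      print(f"T = {T:.4f} m^2/s, S = {S:.2e}")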

  8. Compiler-Assisted Detection of Transient Memory Errors

    SciTech Connect

    Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-06-09

    The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
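
    The produce/consume checksum idea can be shown with a hand-written analogue: record a checksum when a value is produced, and re-verify it on every consumption so a silent bit flip is caught at the point of use. The sketch below does this manually in Python; the paper's contribution is inserting equivalent checks automatically at compile time, which this sketch does not attempt.

      # Hand-written analogue of the compiler-inserted checks described above.
      import zlib

      class TransientErrorDetected(RuntimeError):
          pass

      class Tracked:
          def __init__(self, data: bytes):
              self._data = bytearray(data)
              self._crc = zlib.crc32(self._data)     # recorded at "produce" time

          def write(self, offset, payload: bytes):
              self._data[offset:offset + len(payload)] = payload
              self._crc = zlib.crc32(self._data)     # producer updates the checksum

          def read(self):
              if zlib.crc32(self._data) != self._crc:
                  raise TransientErrorDetected("checksum mismatch on consume")
              return bytes(self._data)

      buf = Tracked(b"simulation state")
      buf.write(0, b"S")
      buf.read()                                      # verified consume: passes
      buf._data[3] ^= 0x01                            # simulate a bit flip in memory
      try:
          buf.read()
      except TransientErrorDetected as err:
          print(err)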

  9. Compilation and Production of a Karst map of Mexico

    NASA Astrophysics Data System (ADS)

    Gutierrez, R.

    2008-12-01

    Twenty percent of Mexico's territory consists of karst terrain, which has been classified into two main provinces: one comprises the Yucatan peninsula and the mountain systems of Chiapas, and the other comprises karst bodies distributed in the Sierra Madre del Sur and Sierra Madre Oriental. These karst bodies developed in rocks of Triassic and Cretaceous age. Due to the importance of karst features in Mexico, the International Association of Hydrogeologists, through its Karst Commission, requested the collaboration of the Institute of Geology, UNAM, to compile a map of soluble surface rocks in Mexico. This work seeks to classify and georeference karst locations and generate a database to link these sites with karst geological formations and lithologies, as well as the geomorphological processes that have contributed to the formation of the karst bodies. We also want to provide a basis for cave and karst research and for management of karst resources. In order to produce a national karst map in digital form, the tools being used are the geographical information system ArcView and Google Earth, with the support of scientific journals, hiking and speleological documents, and 1:50,000-scale geological maps of Mexico produced by the Institute of Geology, UNAM.

  10. An Atmospheric General Circulation Model with Chemistry for the CRAY T3E: Design, Performance Optimization and Coupling to an Ocean Model

    NASA Technical Reports Server (NTRS)

    Farrara, John D.; Drummond, Leroy A.; Mechoso, Carlos R.; Spahr, Joseph A.

    1998-01-01

    The design, implementation and performance optimization on the CRAY T3E of an atmospheric general circulation model (AGCM) which includes the transport of, and chemical reactions among, an arbitrary number of constituents is reviewed. The parallel implementation is based on a two-dimensional (longitude and latitude) data domain decomposition. Initial optimization efforts centered on minimizing the impact of substantial static and weakly-dynamic load imbalances among processors through load redistribution schemes. Recent optimization efforts have centered on single-node optimization. Strategies employed include loop unrolling, both manually and through the compiler, the use of an optimized assembler-code library for special function calls, and restructuring of parts of the code to improve data locality. Data exchanges and synchronizations involved in coupling different data-distributed models can account for a significant fraction of the running time. Therefore, the required scattering and gathering of data must be optimized. In systems such as the T3E, there is much more aggregate bandwidth in the total system than in any particular processor. This suggests a distributed design. The design and implementation of such a distributed 'Data Broker' as a means to efficiently couple the components of our climate system model are described.
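
    As a small illustration of the two-dimensional (longitude x latitude) decomposition mentioned above, the sketch below assigns each rank in a P x Q process grid a contiguous block of grid points, spreading remainder points over the leading ranks. The function names and grid sizes are hypothetical; the actual AGCM additionally redistributes work to correct load imbalance.

      # Minimal sketch of a 2-D block decomposition over a P x Q process grid.
      def block_range(n_points, n_parts, part):
          base, extra = divmod(n_points, n_parts)
          start = part * base + min(part, extra)
          stop = start + base + (1 if part < extra else 0)
          return start, stop

      def local_domain(rank, p_lon, p_lat, n_lon, n_lat):
          i, j = rank % p_lon, rank // p_lon         # position in the process grid
          return block_range(n_lon, p_lon, i), block_range(n_lat, p_lat, j)

      # Example: a 144 x 90 grid split over a 4 x 2 process grid.
      for rank in range(8):
          print(rank, local_domain(rank, p_lon=4, p_lat=2, n_lon=144, n_lat=90))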

  11. POET: Parameterized Optimization for Empirical Tuning

    SciTech Connect

    Yi, Q; Seymour, K; You, H; Vuduc, R; Quinlan, D

    2007-01-29

    The excessive complexity of both machine architectures and applications has made it difficult for compilers to statically model and predict application behavior. This observation motivates the recent interest in performance tuning using empirical techniques. We present a new embedded scripting language, POET (Parameterized Optimization for Empirical Tuning), for parameterizing complex code transformations so that they can be empirically tuned. The POET language aims to significantly improve the generality, flexibility, and efficiency of existing empirical tuning systems. We have used the language to parameterize and to empirically tune three loop optimizations (interchange, blocking, and unrolling) for two linear algebra kernels. We show experimentally that the time required to tune these optimizations using POET, which does not require any program analysis, is significantly shorter than when using a full compiler-based source-code optimizer that performs sophisticated program analysis and optimizations.
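
    The core idea, exposing a code transformation as a parameter and choosing its value by timing rather than by static analysis, can be shown with a stand-in example: a blocked matrix multiply whose block size is swept empirically. POET itself generates parameterized low-level code rather than Python, so the sketch below is only an illustration of the tuning loop, not of the POET language.

      # Stand-in for parameterized empirical tuning: sweep the block size of a
      # blocked matrix multiply and keep the fastest value.
      import time
      import numpy as np

      def blocked_matmul(a, b, block):
          n = a.shape[0]
          c = np.zeros((n, n))
          for ii in range(0, n, block):
              for kk in range(0, n, block):
                  for jj in range(0, n, block):
                      c[ii:ii+block, jj:jj+block] += (
                          a[ii:ii+block, kk:kk+block] @ b[kk:kk+block, jj:jj+block])
          return c

      n = 512
      a, b = np.random.rand(n, n), np.random.rand(n, n)
      timings = {}
      for block in (32, 64, 128, 256):                 # candidate parameter values
          start = time.perf_counter()
          blocked_matmul(a, b, block)
          timings[block] = time.perf_counter() - start
      print("best block size:", min(timings, key=timings.get), timings)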

  12. Global compilation of coastline change at river mouths

    NASA Astrophysics Data System (ADS)

    Aadland, Tore; Helland-Hansen, William

    2016-04-01

    We are using Google Earth Engine to analyze Landsat images to create a global compilation of coastline change at river mouths in order to develop scaling relationships between catchment properties and shoreline behaviour. Our main motivation for doing this is to better understand the rates at which shallowing-upward deltaic successions are formed. We are also interested in getting an insight into the impact of climate change and human activity on modern shorelines. Google Earth Engine is a platform that offers simple selection of relevant data from an extensive catalog of geospatial data and the tools to analyse it efficiently. We have used Google Earth Engine to select and analyze temporally and geographically bounded sets of Landsat images covering modern deltas included in the Milliman and Farnsworth 2010 database. The part of the shoreline sampled for each delta has been manually defined. The areas depicted in these image sets have been classified as land or water by thresholding a calibrated Modified Normalized Water Index. By representing land and water as 1.0 and 0.0, respectively, and averaging image sets of sufficient size, we have generated rasters quantifying the probability of an area being classified as land. The calculated probabilities reflect variation in the shoreline position; in particular, this minimizes the impact of short-term variations produced by tides. The net change in the land area of deltas can be estimated by comparing how the probability changes between image sets spanning different time periods. We have estimated the land area change that occurred from 2000 to 2014 at more than 130 deltas with catchment areas ranging from 470 to 6,300,000 km2. Log-log plots of the land area change of these deltas against their respective catchment properties in the Milliman and Farnsworth 2010 database indicate that the rate of land area change correlates with catchment size and discharge. Useful interpretation of the data requires that we
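
    The classification-and-averaging step described above can be sketched with plain NumPy: threshold an MNDWI-like index to a binary land mask per scene, average the masks within an epoch into a land-probability raster, and difference two epochs to estimate net land-area change. The band arrays, threshold, and pixel area below are synthetic placeholders; the actual workflow runs on Landsat bands inside Google Earth Engine.

      # Sketch of the land/water classification and probability averaging step.
      import numpy as np

      def land_mask(green, swir, threshold=0.0):
          mndwi = (green - swir) / (green + swir + 1e-9)   # Modified NDWI
          return (mndwi < threshold).astype(float)          # 1.0 = land, 0.0 = water

      def land_probability(scenes):
          """Average per-scene masks; damps tidal and other short-term variation."""
          return np.mean([land_mask(g, s) for g, s in scenes], axis=0)

      rng = np.random.default_rng(0)
      epoch_2000 = [(rng.random((50, 50)), rng.random((50, 50))) for _ in range(8)]
      epoch_2014 = [(rng.random((50, 50)), rng.random((50, 50))) for _ in range(8)]

      pixel_area_km2 = 0.0009                               # 30 m Landsat pixel
      delta = land_probability(epoch_2014) - land_probability(epoch_2000)
      print("net land-area change (km^2):", delta.sum() * pixel_area_km2)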

  13. Compiling Mercury relief map using several data sources

    NASA Astrophysics Data System (ADS)

    Zakharova, M.

    2015-12-01

    Several datasets of Mercury topography have been obtained by processing data collected by two spacecraft, Mariner 10 and MESSENGER, during their Mercury flybys. The history of the visual mapping of Mercury is short, as the first significant observations were made during the latter half of the 20th century, and today no dataset covers 100% of the surface of Mercury except the global mosaic composed of images acquired by MESSENGER. The main objective of this work is to provide the first Mercury relief map using all the existing elevation data. The workflow included collecting, combining, and processing the existing data and then merging them into a single map. Preference was given to topography data, while the global mosaic was used to fill the gaps where topography was insufficient. The Mercury relief map has been created from four different types of data: the global mosaic with 100% coverage of Mercury's surface created from MESSENGER orbital images (36% of the final map); digital terrain models obtained by processing stereo images acquired during Mariner 10's flybys (15% of the map) (Cook and Robinson, 2000); digital terrain models obtained from images acquired during the MESSENGER flybys (24% of the map) (F. Preusker et al., 2011); and the data sets produced by the MESSENGER Mercury Laser Altimeter (MLA) (25% of the map). The final map is created in the Lambert azimuthal equal-area projection at a scale of 1:18,000,000. It represents two hemispheres, western and eastern, separated by the zero meridian. It mainly shows the hypsometric features of the planet and craters with diameters greater than 200 kilometers.

  14. Assessment of the current status of basic nuclear data compilations

    SciTech Connect

    Riemer, R.L.

    1992-12-31

    The Panel on Basic Nuclear Data Compilations believes that it is important to provide the user with an evaluated nuclear database of the highest quality, dependability, and currency. It is also important that the evaluated nuclear data are easily accessible to the user. In the past the panel concentrated its concern on the cycle time for the publication of A-chain evaluations. However, the panel now recognizes that publication cycle time is no longer the appropriate goal. Sometime in the future, publication of the evaluated A-chains will evolve from the present hard-copy Nuclear Data Sheets on library shelves to purely electronic publication, with the advent of universal access to terminals and the nuclear databases. Therefore, the literature cut-off date in the Evaluated Nuclear Structure Data File (ENSDF) is rapidly becoming the only important measure of the currency of an evaluated A-chain. Also, it has become exceedingly important to ensure that access to the databases is as user-friendly as possible and to enable electronic publication of the evaluated data files. Considerable progress has been made in these areas: use of the on-line systems has almost doubled in the past year, and there has been initial development of tools for electronic evaluation, publication, and dissemination. Currently, the nuclear data effort is in transition between the traditional and future methods of dissemination of the evaluated data. Also, many of the factors that adversely affect the publication cycle time simultaneously affect the currency of the evaluated nuclear database. Therefore, the panel continues to examine factors that can influence cycle time: the number of evaluators, the frequency with which an evaluation can be updated, the review of the evaluation, and the production of the evaluation, which currently exists as a hard-copy issue of Nuclear Data Sheets.

  15. Evaluation and compilation of fission product yields 1993

    SciTech Connect

    England, T.R.; Rider, B.F.

    1995-12-31

    This document is the latest in a series of compilations of fission yield data. Fission yield measurements reported in the open literature and calculated charge distributions have been used to produce a recommended set of yields for the fission products. The original data with reference sources, and the recommended yields, are presented in tabular form. These include many nuclides which fission by neutrons at several energies. These energies include thermal energies (T), fission spectrum energies (F), 14 MeV high energy (H or HE), and spontaneous fission (S), in six sets of ten each. Set A includes U235T, U235F, U235HE, U238F, U238HE, Pu239T, Pu239F, Pu241T, U233T, Th232F. Set B includes U233F, U233HE, U236F, Pu239H, Pu240F, Pu241F, Pu242F, Th232H, Np237F, Cf252S. Set C includes U234F, U237F, Pu240H, U234HE, U236HE, Pu238F, Am241F, Am243F, Np238F, Cm242F. Set D includes Th227T, Th229T, Pa231F, Am241T, Am241H, Am242MT, Cm245T, Cf249T, Cf251T, Es254T. Set E includes Cf250S, Cm244S, Cm248S, Es253S, Fm254S, Fm255T, Fm256S, Np237H, U232T, U238S. Set F includes Cm243T, Cm246S, Cm243F, Cm244F, Cm246F, Cm248F, Pu242H, Np237T, Pu240T, and Pu242T to complete fission product yield evaluations for 60 fissioning systems in all. This report also serves as the primary documentation for the second evaluation of yields in ENDF/B-VI released in 1993.

  16. DoD Supply Chain Management Implementation Guide

    DTIC Science & Technology

    2000-12-01

    The DoD Supply Chain Management Implementation Guide is a tool to assist logistics personnel who are responsible for implementing supply chain management... This Guide presents the key supply chain principles and implementation strategies compiled into a structured and workable approach for achieving... progress toward fully incorporating supply chain management into the DoD logistics process. This document is intended to serve as a roadmap for

  17. Compiling a Comprehensive EVA Training Dataset for NASA Astronauts

    NASA Technical Reports Server (NTRS)

    Laughlin, M. S.; Murray, J. D.; Lee, L. R.; Wear, M. L.; Van Baalen, M.

    2016-01-01

    Training for a spacewalk or extravehicular activity (EVA) is considered a hazardous duty for NASA astronauts. This places astronauts at risk for decompression sickness as well as various musculoskeletal disorders from working in the spacesuit. As a result, the operational and research communities over the years have requested access to EVA training data to supplement their studies. The purpose of this paper is to document the comprehensive EVA training data set that was compiled from multiple sources by the Lifetime Surveillance of Astronaut Health (LSAH) epidemiologists to investigate musculoskeletal injuries. The EVA training dataset does not contain any medical data, rather it only documents when EVA training was performed, by whom and other details about the session. The first activities practicing EVA maneuvers in water were performed at the Neutral Buoyancy Simulator (NBS) at the Marshall Spaceflight Center in Huntsville, Alabama. This facility opened in 1967 and was used for EVA training until the early Space Shuttle program days. Although several photographs show astronauts performing EVA training in the NBS, records detailing who performed the training and the frequency of training are unavailable. Paper training records were stored within the NBS after it was designated as a National Historic Landmark in 1985 and closed in 1997, but significant resources would be needed to identify and secure these records, and at this time LSAH has not pursued acquisition of these early training records. Training in the NBS decreased when the Johnson Space Center in Houston, Texas, opened the Weightless Environment Training Facility (WETF) in 1980. Early training records from the WETF consist of 11 hand-written dive logbooks compiled by individual workers that were digitized at the request of LSAH. The WETF was integral in the training for Space Shuttle EVAs until its closure in 1998. The Neutral Buoyancy Laboratory (NBL) at the Sonny Carter Training Facility near JSC

  18. MIRNA-DISTILLER: A Stand-Alone Application to Compile microRNA Data from Databases

    PubMed Central

    Rieger, Jessica K.; Bodan, Denis A.; Zanger, Ulrich M.

    2011-01-01

    MicroRNAs (miRNA) are small non-coding RNA molecules of ∼22 nucleotides which regulate large numbers of genes by binding to seed sequences at the 3′-untranslated region of target gene transcripts. The target mRNA is then usually degraded or its translation is inhibited, thus resulting in posttranscriptional downregulation of gene expression at the mRNA and/or protein level. Due to the bioinformatic difficulties in predicting functional miRNA binding sites, several publicly available databases have been developed that predict miRNA binding sites based on different algorithms. The parallel use of different databases is currently indispensable, but cumbersome and time consuming, especially when working with numerous genes of interest. We have therefore developed a new stand-alone program, termed MIRNA-DISTILLER, which allows miRNA data for given target genes to be compiled from public databases. Currently implemented are TargetScan, microCosm, and miRDB, which may be queried independently, pairwise, or together to calculate the respective intersections. Data are stored locally for application of further analysis tools including freely definable biological parameter filters, customized output lists for both miRNAs and target genes, and various graphical facilities. The software, a data example file and a tutorial are freely available at http://www.ikp-stuttgart.de/content/language1/html/10415.asp PMID:22303335
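
    The core operation described above, intersecting the miRNAs predicted for a target gene across databases, reduces to simple set arithmetic. The sketch below shows that operation in Python; the miRNA identifiers are placeholders, not real query results from the three databases.

      # Core operation of the tool described above, reduced to a sketch.
      from functools import reduce

      predictions = {                                 # hypothetical prediction lists
          "TargetScan": {"CYP2C9": {"miR-128", "miR-103", "miR-130b"}},
          "microCosm":  {"CYP2C9": {"miR-128", "miR-130b", "miR-24"}},
          "miRDB":      {"CYP2C9": {"miR-128", "miR-130b", "miR-652"}},
      }

      def intersect(gene, databases):
          sets = [predictions[db].get(gene, set()) for db in databases]
          return reduce(set.intersection, sets) if sets else set()

      print(intersect("CYP2C9", ["TargetScan", "microCosm"]))            # pairwise
      print(intersect("CYP2C9", ["TargetScan", "microCosm", "miRDB"]))   # all three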

  19. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  20. Regulatory and technical reports: (Abstract index journal). Compilation for first quarter 1997, January--March

    SciTech Connect

    Sheehan, M.A.

    1997-06-01

    This compilation consists of bibliographic data and abstracts for the formal regulatory and technical reports issued by the U.S. Nuclear Regulatory Commission (NRC) Staff and its contractors. This compilation is published quarterly and cumulated annually. Reports consist of staff-originated reports, NRC-sponsored conference reports, NRC contractor-prepared reports, and international agreement reports.