Science.gov

Sample records for implementation compilation optimization

  1. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of over 100 ADS optimization choices such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function and variable metric methods. Default choices of the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by a LALR(1) grammar and the SOL compiler's parser was generated automatically from that grammar with a parser-generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler's lexical and syntax analysis ensures that the compiler recognizes all legal SOL programs, can recover from and correct many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.

  2. Asymptotically optimal topological quantum compiling.

    PubMed

    Kliuchnikov, Vadym; Bocharov, Alex; Svore, Krysta M

    2014-04-11

    We address the problem of compiling quantum operations into braid representations for non-Abelian quasiparticles described by the Fibonacci anyon model. We classify the single-qubit unitaries that can be represented exactly by Fibonacci anyon braids and use the classification to develop a probabilistically polynomial algorithm that approximates any given single-qubit unitary to a desired precision by an asymptotically depth-optimal braid pattern. We extend our algorithm in two directions: to produce braids that allow only single-strand movement, called weaves, and to produce depth-optimal approximations of two-qubit gates. Our compiled braid patterns have depths that are 20 to 1000 times shorter than those output by prior state-of-the-art methods, for precisions ranging between 10^-10 and 10^-30. PMID:24765934
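
    As a rough software illustration (not the depth-optimal braid-search algorithm of the paper), the sketch below uses the standard two-dimensional matrix representation of the Fibonacci braid generators and a naive brute-force search over short braid words to approximate a target single-qubit unitary; the generator matrices and the phase-invariant distance are standard, while the search itself is only a baseline.

        import itertools
        import numpy as np

        # Standard 2D (qubit) representation of the Fibonacci braid generators.
        phi = (1 + 5 ** 0.5) / 2                        # golden ratio
        F = np.array([[1 / phi, 1 / phi ** 0.5],
                      [1 / phi ** 0.5, -1 / phi]])      # fusion matrix, F = F^-1
        s1 = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])
        s2 = F @ s1 @ F                                 # second generator
        gens = {"s1": s1, "s2": s2, "s1'": s1.conj().T, "s2'": s2.conj().T}

        def distance(u, v):
            """Global-phase-invariant distance between two 2x2 unitaries."""
            return np.sqrt(max(0.0, 1 - abs(np.trace(u.conj().T @ v)) / 2))

        def brute_force_braid(target, max_len=6):
            """Exhaustively search braid words up to max_len for the best approximation."""
            best = (float("inf"), ())
            for n in range(1, max_len + 1):
                for word in itertools.product(gens, repeat=n):
                    u = np.eye(2)
                    for g in word:
                        u = gens[g] @ u
                    best = min(best, (distance(u, target), word))
            return best

        hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        err, word = brute_force_braid(hadamard)
        print("best braid word:", word, "error:", round(float(err), 4))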

  3. Optimizing compiler for lexically scoped LISP

    SciTech Connect

    Brooks, R.A.; Gabriel, R.P.; Steele, G.L. Jr.

    1982-06-01

    The authors are developing an optimizing compiler for a dialect of the LISP language. The current target architecture is the S-1, a multiprocessing supercomputer designed at Lawrence Livermore National Laboratory. While LISP is usually thought of as a language primarily for symbolic processing and list manipulation, this compiler is also intended to compete with the S-1 Pascal and FORTRAN compilers for quality of compiled numerical code. The S-1 is designed for extremely high-speed signal processing as well as for symbolic computation; it provides primitive operations on vectors of floating-point and complex numbers. The LISP compiler is designed to exploit the architecture heavily. The compiler is structurally and conceptually similar to the BLISS-11 compiler and the compilers produced by PQCC. In particular, the TNBIND technique has been borrowed and extended. The compiler is table-driven to a great extent, more so than BLISS-11 but less so than a PQCC compiler. 58 references.

  4. A Language for Specifying Compiler Optimizations for Generic Software

    SciTech Connect

    Willcock, Jeremiah J.

    2007-01-01

    Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization because they are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.

  5. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
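
    As a concrete illustration of the architecture-independent, source-level category (this is not code from the survey), the short sketch below folds constant subexpressions in a Python abstract syntax tree; a production optimizer would perform the same kind of rewriting on the program flow graph.

        # Constant folding: an architecture-independent, source-level optimization.
        import ast, operator

        OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

        class ConstantFolder(ast.NodeTransformer):
            """Fold binary arithmetic whose operands are numeric literals."""
            def visit_BinOp(self, node):
                self.generic_visit(node)            # fold children first (bottom-up)
                op = OPS.get(type(node.op))
                if (op and isinstance(node.left, ast.Constant)
                        and isinstance(node.right, ast.Constant)):
                    return ast.copy_location(
                        ast.Constant(op(node.left.value, node.right.value)), node)
                return node

        tree = ConstantFolder().visit(ast.parse("y = (2 * 3 + 4) * x"))
        ast.fix_missing_locations(tree)
        print(ast.unparse(tree))                    # -> y = 10 * x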

  6. Optimizing Sisal Compiler; Sisal Compiler and Runtime System

    1992-11-12

    OSC is a compiler and runtime system for the functional language Sisal. Functional languages are based on mathematical principles, and may reduce the cost of parallel program development without sacrificing performance. OSC compiles Sisal source code to binary form, automatically inserting calls to the Sisal runtime system to manage parallel execution of independent tasks. Features include support for dynamic arrays, automatic vectorization, and automatic parallelization. At runtime, the user may specify the number of workers, the granularity of tasks, and other execution parameters.

  7. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; Son, Seung Woo

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, and has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  8. ALFred, a protocol compiler for the automated implementation of distributed applications

    SciTech Connect

    Braun, T.; Chrisment, I.; Diot, C.; Gagnon, F.; Gautier, L.

    1996-12-31

    This paper describes the design and the prototyping of a compiling tool for the automated implementation of distributed applications: ALFred. This compiler starts from the formal specification of an application written in ESTEREL, and then integrates end-to-end communication functions tailored to the application characteristics (described in the specification); it finally produces a high performance implementation. The paper describes the communication architecture associated with our automated approach. The compiler is made of two main parts: a control compiler, also called the ALF compiler, and a data manipulation compiler (the ILP compiler) that combines data manipulation functions in an efficient way (the ILP loop). The ALFred compiler has been designed to allow the development and the analysis of non-layered high performance communication architectures based on ALF and ILP.

  9. Final Project Report: A Polyhedral Transformation Framework for Compiler Optimization

    SciTech Connect

    Sadayappan, Ponnuswamy; Rountev, Atanas

    2015-06-15

    The project developed the polyhedral compiler transformation module PolyOpt/Fortran in the ROSE compiler framework. PolyOpt/Fortran performs automated transformation of affine loop nests within FORTRAN programs for enhanced data locality and parallel execution. A FORTRAN version of the Polybench library was also developed by the project. A third development was a dynamic analysis approach to gauge vectorization potential within loops of programs; software (DDVec) for automated instrumentation and dynamic analysis of programs was developed.

  10. Compiler Optimization Pass Visualization: The Procedural Abstraction Case

    ERIC Educational Resources Information Center

    Schaeckeler, Stefan; Shang, Weijia; Davis, Ruth

    2009-01-01

    There is an active research community concentrating on visualizations of algorithms taught in CS1 and CS2 courses. These visualizations can help students to create concrete visual images of the algorithms and their underlying concepts. Not only "fundamental algorithms" can be visualized, but also algorithms used in compilers. Visualizations that…

  11. An Optimizing Compiler for Petascale I/O on Leadership-Class Architectures

    SciTech Connect

    Kandemir, Mahmut Taylan; Choudhary, Alok; Thakur, Rajeev

    2014-03-01

    In high-performance computing (HPC), parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our DOE project explored automated instrumentation and compiler support for I/O intensive applications. Our project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology that targets I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions. Two new sections in this report, compared to the previous report, are IOGenie and SSD/NVM-specific optimizations.

  12. Optimization guide for programs compiled under IBM FORTRAN H (OPT=2)

    NASA Technical Reports Server (NTRS)

    Smith, D. M.; Dobyns, A. H.; Marsh, H. M.

    1977-01-01

    Guidelines are given to provide the programmer with various techniques for optimizing programs when the FORTRAN IV H compiler is used with OPT=2. Subroutines and programs are described in the appendices along with a timing summary of all the examples given in the manual.

  13. Optimizing Python-based ROOT I/O with PyPy's tracing just-in-time compiler

    NASA Astrophysics Data System (ADS)

    Lavrijsen, Wim T L P

    2012-12-01

    The Python programming language allows objects and classes to respond dynamically to the execution environment. Most of this, however, is made possible through language hooks which by definition cannot be optimized and thus tend to be slow. The PyPy implementation of Python includes a tracing just-in-time compiler (JIT), which allows similar dynamic responses but at the interpreter level rather than the application level. Therefore, it is possible to fully remove the hooks, leaving only the dynamic response, in the optimization stage for hot loops, if the types of interest are opened up to the JIT. A general opening up of types to the JIT, based on reflection information, has already been developed (cppyy). The work described in this paper takes it one step further by customizing access to ROOT I/O to the JIT, allowing for fully automatic optimizations.
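
    A minimal sketch of the reflection-based binding idea follows, using today's standalone cppyy package rather than the PyPy-integrated version or the ROOT I/O customization described in the paper; the Hit class is hypothetical.

        import cppyy

        # Declare a C++ type; cppyy generates Python bindings from reflection info.
        cppyy.cppdef("""
        struct Hit {
            double x, y, z;
            double r2() const { return x*x + y*y + z*z; }
        };
        """)

        from cppyy.gbl import Hit

        hits = [Hit() for _ in range(1000)]
        for i, h in enumerate(hits):
            h.x, h.y, h.z = i * 0.1, 0.0, 1.0

        # Under a tracing JIT, the attribute accesses and the r2() call in this hot
        # loop can be specialized once the underlying C++ types are known to the JIT.
        print(sum(h.r2() for h in hits))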

  14. An Optimizing Compiler for Petascale I/O on Leadership Class Architectures

    SciTech Connect

    Choudhary, Alok; Kandemir, Mahmut

    2015-03-18

    In high-performance computing systems, parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our project explored automated instrumentation and compiler support for I/O intensive applications. Our project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology that targets I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions.

  15. Compiler optimizations as a countermeasure against side-channel analysis in MSP430-based devices.

    PubMed

    Malagón, Pedro; de Goyeneche, Juan-Mariano; Zapater, Marina; Moya, José M; Banković, Zorana

    2012-01-01

    Ambient Intelligence (AmI) requires devices everywhere: dynamic and massively distributed networks of low-cost nodes that, among other data, manage private information or control restricted operations. MSP430, a 16-bit microcontroller, is used in WSN platforms such as the TelosB. Physical access to devices cannot be restricted, so attackers consider them a target of their malicious attacks in order to obtain access to the network. Side-channel analysis (SCA) easily exploits leakages from the execution of encryption algorithms that are dependent on critical data to guess the key value. In this paper we present an evaluation framework that facilitates the analysis of the effects of compiler and backend optimizations on the resistance against statistical SCA. We propose an optimization-based software countermeasure that can be used in current low-cost devices to radically increase resistance against statistical SCA, analyzed with the new framework. PMID:22969383

  16. Compiler Optimizations as a Countermeasure against Side-Channel Analysis in MSP430-Based Devices

    PubMed Central

    Malagón, Pedro; de Goyeneche, Juan-Mariano; Zapater, Marina; Moya, José M.; Banković, Zorana

    2012-01-01

    Ambient Intelligence (AmI) requires devices everywhere: dynamic and massively distributed networks of low-cost nodes that, among other data, manage private information or control restricted operations. MSP430, a 16-bit microcontroller, is used in WSN platforms such as the TelosB. Physical access to devices cannot be restricted, so attackers consider them a target of their malicious attacks in order to obtain access to the network. Side-channel analysis (SCA) easily exploits leakages from the execution of encryption algorithms that are dependent on critical data to guess the key value. In this paper we present an evaluation framework that facilitates the analysis of the effects of compiler and backend optimizations on the resistance against statistical SCA. We propose an optimization-based software countermeasure that can be used in current low-cost devices to radically increase resistance against statistical SCA, analyzed with the new framework. PMID:22969383

  17. Compiler optimization technique for data cache prefetching using a small CAM array

    SciTech Connect

    Chi, C.H.

    1994-12-31

    With advances in compiler optimization and program flow analysis, software-assisted cache prefetching schemes using PREFETCH instructions are now possible. Although data can be prefetched accurately into the cache, the runtime overhead associated with these schemes often limits their practical use. In this paper, we propose a new scheme, called Strike-CAM Data Prefetching (SCP), to prefetch array references with constant strides accurately. Compared to current software-assisted data prefetching schemes, the SCP scheme has much lower runtime overhead without sacrificing prefetching accuracy. Our results showed that the SCP scheme is particularly suitable for compute-intensive scientific applications where cache misses are mainly due to array references with constant strides, which can be prefetched very accurately by the SCP scheme.
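
    The compile-time side of the idea can be sketched as follows, with a hypothetical textual IR and illustrative constants (the Strike-CAM hardware itself is not modeled): for each constant-stride reference of the form A[stride*i + offset] inside a loop, a prefetch is emitted for the element that will be needed a fixed number of iterations ahead.

        PREFETCH_DISTANCE = 8     # iterations ahead (illustrative, tunable)
        ELEMENT_SIZE = 8          # bytes per array element (assumed)

        def insert_prefetches(loop_refs):
            """loop_refs: (array_name, stride, offset) for references A[stride*i + offset]."""
            prefetches = []
            for name, stride, offset in loop_refs:
                if stride == 0:
                    continue      # loop-invariant reference: nothing to prefetch
                ahead = stride * PREFETCH_DISTANCE
                prefetches.append(
                    f"PREFETCH {name} + ({stride}*i + {offset + ahead}) * {ELEMENT_SIZE}")
            return prefetches

        # e.g. for the loop body  s += A[i] * B[2*i + 1]
        for insn in insert_prefetches([("A", 1, 0), ("B", 2, 1)]):
            print(insn)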

  18. OptQC v1.3: An (updated) optimized parallel quantum compiler

    NASA Astrophysics Data System (ADS)

    Loke, T.; Wang, J. B.

    2016-10-01

    We present a revised version of the OptQC program of Loke et al. (2014) [1]. We have removed the simulated annealing process in favour of a descending random walk. We have also introduced a new method for iteratively generating permutation matrices during the random walk process, providing a reduced total cost for implementing the quantum circuit. Lastly, we have also added a synchronization mechanism between threads, giving quicker convergence to better solutions.
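
    A generic sketch of the descending random walk follows: propose a random transposition and keep it only if the cost strictly decreases. The toy cost function is a placeholder; OptQC's actual objective is the cost of the quantum circuit implementing the permuted unitary.

        import random

        def descending_walk(cost, n, steps=20000, seed=1):
            rng = random.Random(seed)
            perm = list(range(n))
            rng.shuffle(perm)                      # random starting permutation
            best = cost(perm)
            for _ in range(steps):
                i, j = rng.sample(range(n), 2)     # propose a random transposition
                perm[i], perm[j] = perm[j], perm[i]
                c = cost(perm)
                if c < best:
                    best = c                       # keep only strictly improving moves
                else:
                    perm[i], perm[j] = perm[j], perm[i]
            return perm, best

        # Placeholder cost: number of out-of-place elements.
        toy_cost = lambda p: sum(1 for k, v in enumerate(p) if k != v)
        print(descending_walk(toy_cost, 16))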

  19. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density functional theory, and MP2 calculations are performed for benchmarking purposes. It is found that the combination of ifc with the ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on a single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 show similar trends to the results of the GAUSSIAN 98 package.

  20. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density functional theory, and MP2 calculations are performed for benchmarking purposes. It is found that the combination of ifc with the ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on a single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 show similar trends to the results of the GAUSSIAN 98 package. PMID:12086529

  1. Berkeley Unified Parallel C (UPC) Compiler

    2003-04-06

    This program is a portable, open-source compiler for the UPC language, which is based on the Open64 framework and has extensive support for optimizations. This compiler operates by translating UPC into ANSI/ISO C for compilation by a native compiler and linking with a UPC Runtime Library. This design eases portability to both shared and distributed memory parallel architectures. For proper operation the "Berkeley Unified Parallel C (UPC) Runtime Library" and its dependencies are required. Compatible replacements which implement "The Berkeley UPC Runtime Specification" are possible.

  2. Read buffer optimizations to support compiler-assisted multiple instruction retry

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Fuchs, W. K.; Hwu, W. M.

    1993-01-01

    Multiple instruction retry is a recovery mechanism for transient processor faults. We previously developed a compiler-assisted approach to multiple instruction retry in which a read buffer of size 2N (where N represents the maximum instruction rollback distance) was used to resolve some data hazards while the compiler resolved the remaining hazards. The compiler-assisted scheme was shown to reduce the performance overhead and/or hardware complexity normally associated with hardware-only retry schemes. This paper examines the size and design of the read buffer. We establish a practical lower bound and average size requirement for the read buffer by modifying the scheme to save only the data required for rollback. The study measures the effect on the performance of a DECstation 3100 running ten application programs using six read buffer configurations with varying read buffer sizes. Two alternative configurations are shown to be the most efficient and differed depending on whether split-cycle-saves are assumed. Up to a 55 percent read buffer size reduction is achievable with an average reduction of 39 percent given the most efficient read buffer configuration and a variety of applications.
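
    A software analogue of the rollback idea (not the hardware read buffer evaluated in the paper) is sketched below: before each register write, the old value is saved in a bounded history so the most recent writes, up to the rollback distance, can be undone.

        from collections import deque

        class RollbackRegisterFile:
            def __init__(self, nregs, rollback_distance):
                self.regs = [0] * nregs
                self.history = deque(maxlen=rollback_distance)   # (reg, old value) per write

            def write(self, reg, value):
                self.history.append((reg, self.regs[reg]))       # save data needed for rollback
                self.regs[reg] = value

            def rollback(self, n):
                """Undo the last n register writes (n must not exceed the rollback distance)."""
                for _ in range(n):
                    reg, old = self.history.pop()
                    self.regs[reg] = old

        rf = RollbackRegisterFile(nregs=4, rollback_distance=3)
        rf.write(0, 10); rf.write(1, 20); rf.write(0, 30)
        rf.rollback(2)          # transient fault detected: retry the last 2 instructions
        print(rf.regs)          # -> [10, 0, 0, 0]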

  3. Implementing the optimal provision of ecosystem services.

    PubMed

    Polasky, Stephen; Lewis, David J; Plantinga, Andrew J; Nelson, Erik

    2014-04-29

    Many ecosystem services are public goods whose provision depends on the spatial pattern of land use. The pattern of land use is often determined by the decisions of multiple private landowners. Increasing the provision of ecosystem services, though beneficial for society as a whole, may be costly to private landowners. A regulator interested in providing incentives to landowners for increased provision of ecosystem services often lacks complete information on landowners' costs. The combination of spatially dependent benefits and asymmetric cost information means that the optimal provision of ecosystem services cannot be achieved using standard regulatory or payment for ecosystem services approaches. Here we show that an auction that sets payments between landowners and the regulator for the increased value of ecosystem services with conservation provides incentives for landowners to truthfully reveal cost information, and allows the regulator to implement the optimal provision of ecosystem services, even in the case with spatially dependent benefits and asymmetric information.

  4. Implementing the optimal provision of ecosystem services.

    PubMed

    Polasky, Stephen; Lewis, David J; Plantinga, Andrew J; Nelson, Erik

    2014-04-29

    Many ecosystem services are public goods whose provision depends on the spatial pattern of land use. The pattern of land use is often determined by the decisions of multiple private landowners. Increasing the provision of ecosystem services, though beneficial for society as a whole, may be costly to private landowners. A regulator interested in providing incentives to landowners for increased provision of ecosystem services often lacks complete information on landowners' costs. The combination of spatially dependent benefits and asymmetric cost information means that the optimal provision of ecosystem services cannot be achieved using standard regulatory or payment for ecosystem services approaches. Here we show that an auction that sets payments between landowners and the regulator for the increased value of ecosystem services with conservation provides incentives for landowners to truthfully reveal cost information, and allows the regulator to implement the optimal provision of ecosystem services, even in the case with spatially dependent benefits and asymmetric information. PMID:24722635

  5. Implementing the optimal provision of ecosystem services

    PubMed Central

    Polasky, Stephen; Lewis, David J.; Plantinga, Andrew J.; Nelson, Erik

    2014-01-01

    Many ecosystem services are public goods whose provision depends on the spatial pattern of land use. The pattern of land use is often determined by the decisions of multiple private landowners. Increasing the provision of ecosystem services, though beneficial for society as a whole, may be costly to private landowners. A regulator interested in providing incentives to landowners for increased provision of ecosystem services often lacks complete information on landowners’ costs. The combination of spatially dependent benefits and asymmetric cost information means that the optimal provision of ecosystem services cannot be achieved using standard regulatory or payment for ecosystem services approaches. Here we show that an auction that sets payments between landowners and the regulator for the increased value of ecosystem services with conservation provides incentives for landowners to truthfully reveal cost information, and allows the regulator to implement the optimal provision of ecosystem services, even in the case with spatially dependent benefits and asymmetric information. PMID:24722635

  6. HOPE: Just-in-time Python compiler for astrophysical computations

    NASA Astrophysics Data System (ADS)

    Akeret, Joel; Gamper, Lukas; Amara, Adam; Refregier, Alexandre

    2014-11-01

    HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of compiled implementation.
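
    A minimal usage sketch follows, assuming the released hope package is installed, a C++ compiler is available at runtime, and the decorator is exposed as hope.jit; the kernel itself is illustrative only.

        import numpy as np
        import hope

        @hope.jit                      # translated to C++ and compiled on first call
        def sum_of_squares(x, y, n):
            total = 0.0
            for i in range(n):
                total += x[i] * x[i] + y[i] * y[i]
            return total

        x = np.arange(1000, dtype=np.float64)
        y = np.arange(1000, dtype=np.float64)
        print(sum_of_squares(x, y, 1000))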

  7. Implementation of efficient sensitivity analysis for optimization of large structures

    NASA Technical Reports Server (NTRS)

    Umaretiya, J. R.; Kamil, H.

    1990-01-01

    The paper presents the theoretical bases and implementation techniques of sensitivity analyses for efficient structural optimization of large structures, based on finite element static and dynamic analysis methods. The sensitivity analyses have been implemented in conjunction with two methods for optimization, namely, the Mathematical Programming and Optimality Criteria methods. The paper discusses the implementation of the sensitivity analysis method into our in-house software package, AutoDesign.

  8. Feedback Implementation of Zermelo's Optimal Control by Sugeno Approximation

    NASA Technical Reports Server (NTRS)

    Clifton, C.; Homaifax, A.; Bikdash, M.

    1997-01-01

    This paper proposes an approach to implement optimal control laws of nonlinear systems in real time. Our methodology does not require solving two-point boundary value problems online and may not require it off-line either. The optimal control law is learned using the original Sugeno controller (OSC) from a family of optimal trajectories. We compare the trajectories generated by the OSC and the trajectories yielded by the optimal feedback control law when applied to Zermelo's ship steering problem.

  9. An Advanced Compiler Designed for a VLIW DSP for Sensors-Based Systems

    PubMed Central

    Yang, Xu; He, Hu

    2012-01-01

    The VLIW architecture can be exploited to greatly enhance instruction level parallelism, thus it can provide computation power and energy efficiency advantages, which satisfies the requirements of future sensor-based systems. However, as VLIW codes are mainly compiled statically, the performance of a VLIW processor is dominated by the behavior of its compiler. In this paper, we present an advanced compiler designed for a VLIW DSP named Magnolia, which will be used in sensor-based systems. This compiler is based on the Open64 compiler. We have implemented several advanced optimization techniques in the compiler, and fulfilled the O3 level optimization. Benchmarks from the DSPstone test suite are used to verify the compiler. Results show that the code generated by our compiler can make the performance of Magnolia match that of the current state-of-the-art DSP processors. PMID:22666040

  10. An advanced compiler designed for a VLIW DSP for sensors-based systems.

    PubMed

    Yang, Xu; He, Hu

    2012-01-01

    The VLIW architecture can be exploited to greatly enhance instruction level parallelism, thus it can provide computation power and energy efficiency advantages, which satisfies the requirements of future sensor-based systems. However, as VLIW codes are mainly compiled statically, the performance of a VLIW processor is dominated by the behavior of its compiler. In this paper, we present an advanced compiler designed for a VLIW DSP named Magnolia, which will be used in sensor-based systems. This compiler is based on the Open64 compiler. We have implemented several advanced optimization techniques in the compiler, and fulfilled the O3 level optimization. Benchmarks from the DSPstone test suite are used to verify the compiler. Results show that the code generated by our compiler can make the performance of Magnolia match that of the current state-of-the-art DSP processors. PMID:22666040

  11. Optimal Implementations for Reliable Circadian Clocks

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yoshihiko; Arita, Masanori

    2014-09-01

    Circadian rhythms are acquired through evolution to increase the chances for survival by synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.

  12. GENERAL: Linear Optical Scheme for Implementing Optimal Real State Cloning

    NASA Astrophysics Data System (ADS)

    Wan, Hong-Bo; Ye, Liu

    2010-06-01

    We propose an experimental scheme for implementing the optimal 1 → 3 real state cloning via linear optical elements. This method relies on one polarized qubit and two location qubits and is feasible with current experimental technology.

  13. Financing and funding health care: Optimal policy and political implementability.

    PubMed

    Nuscheler, Robert; Roeder, Kerstin

    2015-07-01

    Health care financing and funding are usually analyzed in isolation. This paper combines the corresponding strands of the literature and thereby advances our understanding of the important interaction between them. We investigate the impact of three modes of health care financing, namely, optimal income taxation, proportional income taxation, and insurance premiums, on optimal provider payment and on the political implementability of optimal policies under majority voting. Considering a standard multi-task agency framework we show that optimal health care policies will generally differ across financing regimes when the health authority has redistributive concerns. We show that health care financing also has a bearing on the political implementability of optimal health care policies. Our results demonstrate that an isolated analysis of (optimal) provider payment rests on very strong assumptions regarding both the financing of health care and the redistributive preferences of the health authority.

  14. Parallel optimization algorithms and their implementation in VLSI design

    NASA Technical Reports Server (NTRS)

    Lee, G.; Feeley, J. J.

    1991-01-01

    Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.

  15. Optimization education after project implementation: sharing "lessons learned" with staff.

    PubMed

    Vaughn, Susan

    2011-01-01

    Implementations involving healthcare technology solutions focus on providing end-user education prior to the application going "live" in the organization. Benefits to postimplementation education for staff should be included when planning these projects. This author describes the traditional training provided during the implementation of a bar-coding medication project and then the optimization training 8 weeks later.

  16. A Training Package for Implementing the IEP Process in Wyoming. Volume IV. Compilation of Successful Training Strategies.

    ERIC Educational Resources Information Center

    Foxworth-Mott, Anita; Moore, Caroline

    Volume IV of a four volume series offers strategies for implementing effective inservice workshops to train administrators, assessment personnel, and others involved in the development and implementation of individualized education programs (IEPs) for handicapped children in Wyoming. Part 1 addresses points often overlooked in delivering training,…

  17. Quantum Compiling for Topological Quantum Computing

    NASA Astrophysics Data System (ADS)

    Svore, Krysta

    2014-03-01

    In a topological quantum computer, universality is achieved by braiding and quantum information is natively protected from small local errors. We address the problem of compiling single-qubit quantum operations into braid representations for non-abelian quasiparticles described by the Fibonacci anyon model. We develop a probabilistically polynomial algorithm that outputs a braid pattern to approximate a given single-qubit unitary to a desired precision. We also classify the single-qubit unitaries that can be implemented exactly by a Fibonacci anyon braid pattern and present an efficient algorithm to produce their braid patterns. Our techniques produce braid patterns that meet the uniform asymptotic lower bound on the compiled circuit depth and thus are depth-optimal asymptotically. Our compiled circuits are significantly shorter than those output by prior state-of-the-art methods, resulting in improvements in depth by factors ranging from 20 to 1000 for precisions ranging between 10^-10 and 10^-30.

  18. Optimization of an optically implemented on-board FDMA demultiplexer

    NASA Technical Reports Server (NTRS)

    Fargnoli, J.; Riddle, L.

    1991-01-01

    Performance of a 30 GHz frequency division multiple access (FDMA) uplink to a processing satellite is modelled for the case where the onboard demultiplexer is implemented optically. Included in the performance model are the effects of adjacent channel interference, intersymbol interference, and spurious signals associated with the optical implementation. Demultiplexer parameters are optimized to provide the minimum bit error probability at a given bandwidth efficiency when filtered QPSK modulation is employed.

  19. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S.

    1990-01-01

    This book presents a completely new approach to the problem of systolic array parallelizing compiler. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler which can generate efficient parallel code for complete LINPACK routines. This book begins by analyzing the architectural strength of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and compiler-generated parallel code are given to clarify the overall picture of the compiler. The book concludes that systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  20. The Specification of Source-to-source Transformations for the Compile-time Optimization of Parallel Object-oriented Scientific Applications

    SciTech Connect

    Quinlan, D; Kowarschik, M

    2001-06-05

    The performance of object-oriented applications in scientific computing often suffers from the inefficient use of high-level abstractions provided by underlying libraries. Since these library abstractions are not part of the programming language itself, there is no compiler mechanism to respect their semantics and thus to perform appropriate optimizations, e.g., array semantics within object-oriented array class libraries which permit parallel optimizations inconceivable to the serial compiler. We have presented the ROSE infrastructure as a tool for automatically generating library-specific preprocessors. These preprocessors can perform semantics-based source-to-source transformations of the application in order to introduce high-level code optimizations. In this paper we outline the design of ROSE and focus on the discussion of various approaches for specifying and processing complex source code transformations. These techniques are supposed to be as easy and intuitive as possible for the ROSE users, i.e., for the designers of the library-specific preprocessors.
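
    As a small Python-AST analogue of such a semantics-based source-to-source transformation (ROSE itself operates on C++ source), the sketch below rewrites assignments of the form a = a + expr into the in-place a += expr, a rewrite that avoids a temporary and is only legal because the hypothetical array class is assumed to define += with matching semantics.

        import ast

        class InPlaceAdd(ast.NodeTransformer):
            """Rewrite 'a = a + expr' into 'a += expr' (library-specific, semantics-based)."""
            def visit_Assign(self, node):
                self.generic_visit(node)
                if (len(node.targets) == 1 and isinstance(node.targets[0], ast.Name)
                        and isinstance(node.value, ast.BinOp)
                        and isinstance(node.value.op, ast.Add)
                        and isinstance(node.value.left, ast.Name)
                        and node.value.left.id == node.targets[0].id):
                    return ast.copy_location(
                        ast.AugAssign(target=node.targets[0], op=ast.Add(),
                                      value=node.value.right), node)
                return node

        tree = InPlaceAdd().visit(ast.parse("a = a + b * 2"))
        ast.fix_missing_locations(tree)
        print(ast.unparse(tree))        # -> a += b * 2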

  1. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGAs combine many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible, and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented using the Python language. This article describes the design, implementation, and results of the created tools.
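
    A toy sketch of the translation idea is shown below, using Python's ast module to turn a straight-line arithmetic function into VHDL-like concurrent signal assignments; the emitted text is schematic only, and the compiler described in the paper handles far more (control flow, typing, scheduling).

        import ast

        VHDL_OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

        def expr_to_vhdl(node):
            """Translate an arithmetic expression AST into VHDL-like text."""
            if isinstance(node, ast.Name):
                return node.id
            if isinstance(node, ast.Constant):
                return str(node.value)
            if isinstance(node, ast.BinOp) and type(node.op) in VHDL_OPS:
                op = VHDL_OPS[type(node.op)]
                return f"({expr_to_vhdl(node.left)} {op} {expr_to_vhdl(node.right)})"
            raise NotImplementedError(ast.dump(node))

        def function_to_vhdl(source):
            """Emit one concurrent assignment per straight-line Python assignment."""
            fn = ast.parse(source).body[0]
            lines = [f"-- generated from Python function '{fn.name}'"]
            for stmt in fn.body:
                if isinstance(stmt, ast.Assign) and isinstance(stmt.targets[0], ast.Name):
                    lines.append(f"{stmt.targets[0].id} <= {expr_to_vhdl(stmt.value)};")
            return "\n".join(lines)

        print(function_to_vhdl("def mac(a, b, c):\n    p = a * b\n    acc = p + c\n"))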

  2. All-Optical Implementation of the Ant Colony Optimization Algorithm.

    PubMed

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I; Soci, Cesare

    2016-01-01

    We report all-optical implementation of the optimization algorithm for the famous "ant colony" problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098
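
    As a software analogue of what the optical network computes physically, the sketch below runs classical ant colony optimization on a tiny two-route graph; the graph, evaporation rate, and reinforcement rule are illustrative choices, not parameters of the optical experiment.

        import random

        edges = {("A", "B"): 1.0, ("B", "D"): 1.0,    # short route A-B-D, length 2
                 ("A", "C"): 1.0, ("C", "D"): 3.0}    # long route  A-C-D, length 4
        graph = {}
        for (u, v), w in edges.items():
            graph.setdefault(u, []).append((v, w))

        pheromone = {e: 1.0 for e in edges}
        rng = random.Random(1)

        def walk(start, goal):
            """One ant walks start -> goal, choosing edges by pheromone over length."""
            node, path, length = start, [], 0.0
            while node != goal:
                options = graph[node]
                weights = [pheromone[(node, v)] / w for v, w in options]
                v, w = rng.choices(options, weights=weights)[0]
                path.append((node, v)); length += w; node = v
            return path, length

        for _ in range(200):                      # colony iterations
            path, length = walk("A", "D")
            for e in pheromone:
                pheromone[e] *= 0.9               # evaporation of the "chemical" trail
            for e in path:
                pheromone[e] += 1.0 / length      # shorter tours deposit more per edge
        print(max(pheromone, key=pheromone.get))  # an edge on the short A-B-D route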

  3. All-Optical Implementation of the Ant Colony Optimization Algorithm

    PubMed Central

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-01-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098

  4. All-Optical Implementation of the Ant Colony Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-05-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems.

  5. HAL/S-FC compiler system specifications

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This document specifies the informational interfaces within the HAL/S-FC compiler, and between the compiler and the external environment. This Compiler System Specification is for the HAL/S-FC compiler and its associated run time facilities which implement the full HAL/S language. The HAL/S-FC compiler is designed to operate stand-alone on any compatible IBM 360/370 computer and within the Software Development Laboratory (SDL) at NASA/JSC, Houston, Texas.

  6. Implementing size-optimal discrete neural networks requires analog circuitry

    SciTech Connect

    Beiu, V.

    1998-12-01

    This paper starts by overviewing results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions the authors show that implementing Boolean functions can be done using neurons having an identity transfer function. Because in this case the size of the network is minimized, it follows that size-optimal solutions for implementing Boolean functions can be obtained using analog circuitry. Conclusions and several comments on the required precision end the paper.

  7. Implementation of modal optimization system of Subaru-188 adaptive optics

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Golota, Taras; Guyon, Olivier; Dinkins, Matthew; Oya, Shin; Colley, Stephen; Eldred, Michael; Watanabe, Makoto; Itoh, Meguru; Saito, Yoshihiko; Hayano, Yutaka; Takami, Hideki; Iye, Masanori

    2006-06-01

    Subaru AO-188 is a curvature adaptive optics system with 188 elements. It has been developed by NAOJ (National Astronomical Observatory of Japan) in recent years as an upgrade from the existing 36-element AO system currently in operation at the Subaru telescope. In this upgrade, the control scheme is also changed from zonal control to modal control. This paper presents the development and implementation of the modal optimization system for this new AO-188. We also introduce some special features and attempts in our implementation, such as consideration of resonance of the deformable mirror at the lower-order modes, and extension of the scheme to the optimization of the magnitude of the membrane mirror in the wavefront sensor. These are simple but useful enhancements that improve performance over the conservative configuration with conventional modal control, and they may also be useful in other extended operation modes or control schemes currently under research and development.

  8. Implementation of generalized optimality criteria in a multidisciplinary environment

    NASA Technical Reports Server (NTRS)

    Canfield, R. A.; Venkayya, V. B.

    1989-01-01

    A generalized optimality criterion method consisting of a dual problem solver combined with a compound scaling algorithm was implemented in the multidisciplinary design tool, ASTROS. This method enables, for the first time in a production design tool, the determination of a minimum weight design using thousands of independent structural design variables while simultaneously considering constraints on response quantities in several disciplines. Even for moderately large examples, the computational efficiency is improved significantly relative to the conventional approach.

  9. Trajectory optimization on multiprocessors - A comparison of three implementation strategies

    NASA Astrophysics Data System (ADS)

    Summerset, Twain K.; Chowkwanyun, Raymond M.

    The optimization of atmospheric flight vehicle trajectories can require the simulation of several thousand individual trajectories. Such a task can be extremely time consuming if simulating each trajectory requires numerically integrating a set of nonlinear differential equations. This traditional approach, which may require many hours' worth of analysis on a time-shared computer facility, is a bottleneck in space mission planning and limits the number of trajectory design options a mission planner can evaluate. To achieve marked reductions in trajectory design solution times, parallel optimization techniques are proposed. In this paper, three strategies for implementing trajectory optimization methods on multiprocessors will be compared. The comparisons will be illustrated through four trajectory design examples. In the first two examples, maximum reentry downrange and crossrange optimal control problems are posed for a generic maneuvering aerodynamic space vehicle. The third example is Troesch's problem, while the fourth example is the classic Brachistochrone problem. Each of the examples is posed as a two-point boundary value problem whose solution can be expressed as the solution to a set of nonlinear equations.

  10. Implementation of optimal phase-covariant cloning machines

    SciTech Connect

    Sciarrino, Fabio; De Martini, Francesco

    2007-07-15

    The optimal phase-covariant quantum cloning machine (PQCM) broadcasts the information associated to an input qubit into a multiqubit system, exploiting a partial a priori knowledge of the input state. This additional a priori information leads to a higher fidelity than for the universal cloning. The present article first analyzes different innovative schemes to implement the 1→3 PQCM. The method is then generalized to any 1→M machine for an odd value of M by a theoretical approach based on the general angular momentum formalism. Finally different experimental schemes based either on linear or nonlinear methods and valid for single photon polarization encoded qubits are discussed.

  11. Optimized evaporation technique for leachate treatment: Small scale implementation.

    PubMed

    Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz

    2016-04-01

    This paper introduces an optimized evaporation technique for leachate treatment. For this purpose and in order to study the feasibility and measure the effectiveness of the forced evaporation, three cuboidal steel tubs were designed and implemented. The first control-tub was installed at the ground level to monitor natural evaporation. Similarly, the second and the third tub, models under investigation, were installed respectively at the ground level (equipped-tub 1) and out of the ground level (equipped-tub 2), and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped-tubs was much accelerated with respect to the control-tub. It was accelerated five times in the winter period, where the evaporation rate was increased from a value of 0.37 mm/day to reach a value of 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times and it increased from a value of 3.06 mm/day to reach a value of 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively either under electric or solar energy supply, and will accelerate the evaporation rate from three to five times whatever the season temperature.

  12. Designing a stencil compiler for the Connection Machine model CM-5

    SciTech Connect

    Brickner, R.G.; Holian, K.; Thiagarajan, B.; Johnsson, S.L.

    1994-12-31

    In this paper the authors present the design of a stencil compiler for the Connection Machine system CM-5. The stencil compiler will optimize the data motion between processing nodes, minimize the data motion within a node, and minimize the data motion between registers and local memory in a node. The compiler will natively support two-dimensional stencils, but stencils in three dimensions will be automatically decomposed. Lower dimensional stencils are treated as degenerate stencils. The compiler will be integrated as part of the CM Fortran programming system. Much of the compiler code will be adapted from the CM-2/200 stencil compiler, which is part of CMSSL (the Connection Machine Scientific Software Library) Release 3.1 for the CM-2/200, and the compiler will be available as part of the Connection Machine Scientific Software Library (CMSSL) for the CM-5. In addition to setting down design considerations, they report on the implementation status of the stencil compiler. In particular, they discuss optimization strategies and status of code conversion from CM-2/200 to CM-5 architecture, and report on the measured performance of prototype target code which the compiler will generate.
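
    For reference, the sketch below writes out a typical two-dimensional 5-point (Jacobi) stencil in plain NumPy; this naive formulation is the kind of computation a stencil compiler reorganizes to minimize data motion between nodes, within a node, and between registers and memory, and it is not code from the CMSSL implementation.

        import numpy as np

        def jacobi_step(u):
            """One relaxation sweep over interior points with a 5-point stencil."""
            v = u.copy()
            v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                    u[1:-1, :-2] + u[1:-1, 2:])
            return v

        u = np.zeros((64, 64))
        u[0, :] = 1.0                  # fixed boundary value on one edge
        for _ in range(100):
            u = jacobi_step(u)
        print(round(float(u[32, 32]), 4))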

  13. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Compiler organization is discussed, including overall compiler structure, internal data transfer, compiler development, and code optimization. The user, system, and SDL interfaces are described, along with compiler system requirements. Run-time software support package and restrictions and dependencies are also considered of the HAL/S-FC system.

  14. NONMEM version III implementation on a VAX 9000: a DCL procedure for single-step execution and the unrealized advantage of a vectorizing FORTRAN compiler.

    PubMed

    Vielhaber, J P; Kuhlman, J V; Barrett, J S

    1993-06-01

    There is great interest within the FDA, academia, and the pharmaceutical industry to provide more detailed information about the time course of drug concentration and effect in subjects receiving a drug as part of their overall therapy. Advocates of this effort expect the eventual goal of these endeavors to provide labeling which reflects the experience of drug administration to the entire population of potential recipients. The set of techniques which have been thus far applied to this task has been defined as population approach methodologies. While a consensus view on the usefulness of these techniques is not likely to be formed in the near future, most pharmaceutical companies or individuals who provide kinetic/dynamic support for drug development programs are investigating population approach methods. A major setback in this investigation has been the shortage of computational tools to analyze population data. One such algorithm, NONMEM, supplied by the NONMEM Project Group of the University of California, San Francisco has been widely used and remains the most accessible computational tool to date. The program is distributed to users as FORTRAN 77 source code with instructions for platform customization. Given the memory and compiler requirements of this algorithm and the intensive matrix manipulation required for run convergence and parameter estimation, this program's performance is largely determined by the platform and the FORTRAN compiler used to create the NONMEM executable. Benchmark testing on a VAX 9000 with Digital's FORTRAN (v. 1.2) compiler suggests that this is an acceptable platform. Due to excessive branching within the loops of the NONMEM source code, the vector processing capabilities of the KV900-AA vector processor actually decrease performance. A DCL procedure is given to provide single step execution of this algorithm.

  15. Process compilation methods for thin film devices

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammed Hasanuz

    This doctoral thesis presents the development of a systematic method of automatic generation of fabrication processes (or process flows) for thin film devices starting from schematics of the device structures. This new top-down design methodology combines formal mathematical flow construction methods with a set of library-specific available resources to generate flows compatible with a particular laboratory. Because this methodology combines laboratory resource libraries with a logical description of thin film device structure and generates a set of sequential fabrication processing instructions, this procedure is referred to as process compilation, in analogy to the procedure used for compilation of computer programs. Basically, the method developed uses a partially ordered set (poset) representation of the final device structure which describes the order between its various components expressed in the form of a directed graph. Each of these components is essentially fabricated "one at a time" in a sequential fashion. If the directed graph is acyclic, the sequence in which these components are fabricated is determined from the poset linear extensions, and the component sequence is finally expanded into the corresponding process flow. This graph-theoretic process flow construction method is powerful enough to formally prove the existence and multiplicity of flows, thus creating a design space D suitable for optimization. The cardinality |D| for a device with N components can be large, with a worst case of |D| ≤ (N-1)!, yielding in general a combinatorial explosion of solutions. The number of solutions is hence controlled through a priori estimates of |D| and condensation (i.e., reduction) of the device component graph. The mathematical method has been implemented in a set of algorithms that are part of the software tool MISTIC (Michigan Synthesis Tools for Integrated Circuits). MISTIC is a planar process compiler that generates
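    The core idea of deriving fabrication sequences as linear extensions of a component poset can be illustrated with a brute-force sketch (hypothetical component names; this is not MISTIC code, and a real tool would prune the design space rather than enumerate all (N-1)! candidates):

```python
from itertools import permutations

def linear_extensions(components, before):
    """Enumerate all linear extensions of a partial order.  `before` is a set
    of (a, b) pairs meaning component a must be fabricated before component b.
    Each extension is one admissible fabrication sequence; their number is the
    size of the design space."""
    for order in permutations(components):
        pos = {c: i for i, c in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in before):
            yield order

# Toy device: substrate before both layers, layer1 before its via.
comps = ["substrate", "layer1", "layer2", "via"]
constraints = {("substrate", "layer1"), ("substrate", "layer2"), ("layer1", "via")}
flows = list(linear_extensions(comps, constraints))
print(len(flows), "candidate process flows")
```

    Condensing the component graph, as described above, shrinks the number of admissible orders before any enumeration takes place.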

  16. Compiler-assisted static checkpoint insertion

    NASA Technical Reports Server (NTRS)

    Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.

    1992-01-01

    This paper describes a compiler-assisted approach for static checkpoint insertion. Instead of fixing the checkpoint location before program execution, a compiler-enhanced polling mechanism is utilized to maintain both the desired checkpoint intervals and reproducible checkpoint locations. The technique has been implemented in a GNU CC compiler for Sun 3 and Sun 4 (Sparc) processors. Experiments demonstrate that the approach provides for stable checkpoint intervals and reproducible checkpoint placements with performance overhead comparable to a previously presented compiler-assisted dynamic scheme (CATCH) utilizing the system clock.
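    A minimal sketch of the polling idea follows (a Python stand-in; the actual work modifies GNU CC to insert the poll at compile time): the poll sits at a reproducible program point, and a checkpoint is written only when the desired interval has elapsed, so intervals stay near the target while the checkpointed location is always the same.

```python
import pickle, time

CHECKPOINT_INTERVAL = 5.0          # desired seconds between checkpoints
_last = time.monotonic()

def maybe_checkpoint(state, path="ckpt.pkl"):
    """Polling point inserted at a reproducible location (e.g. a loop
    back-edge).  A checkpoint is taken only if the desired interval has
    elapsed since the previous one."""
    global _last
    if time.monotonic() - _last >= CHECKPOINT_INTERVAL:
        with open(path, "wb") as f:
            pickle.dump(state, f)
        _last = time.monotonic()

# Example: a long-running loop with a compiler-inserted polling point.
total = 0
for i in range(10_000_000):
    total += i
    if i % 100_000 == 0:           # coarse poll frequency chosen at compile time
        maybe_checkpoint({"i": i, "total": total})
```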

  17. Testing-Based Compiler Validation for Synchronous Languages

    NASA Technical Reports Server (NTRS)

    Garoche, Pierre-Loic; Howar, Falk; Kahsai, Temesghen; Thirioux, Xavier

    2014-01-01

    In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test-suite with high behavioral coverage and geared towards discovery of faults for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.

  18. Implementation and optimization of portable standard LISP for the CRAY

    SciTech Connect

    Anderson, J.W.; Kessler, R.R.; Galway, W.F.

    1986-01-01

    Portable Standard LISP (PSL), a dialect of LISP developed at the University of Utah, has been implemented on the CRAY-1s and CRAY X-MPs at the Los Alamos National Laboratory and at the National Magnetic Fusion Energy Computer Center at Lawrence Livermore National Laboratory. This implementation was developed using a highly portable model and then tuned for the Cray architecture. The speed of the resulting system is quite impressive, and the environment is very good for symbolic processing. 5 refs.

  19. Implementation and optimization of portable standard LISP for the Cray

    SciTech Connect

    Anderson, J.W.; Kessler, R.R.; Galway, W.F.

    1987-01-01

    Portable Standard LISP (PSL), a dialect of LISP developed at the University of Utah, has been implemented on the CRAY-1s and CRAY X-MPs at the Los Alamos National Laboratory and at the National Magnetic Fusion Energy Computer Center at Lawrence Livermore National Laboratory. This implementation was developed using a highly portable model and then tuned for the Cray architecture. The speed of the resulting system is quite impressive, and the environment is very good for symbolic processing. 5 refs., 6 tabs.

  20. Spacelab user implementation assessment study. Volume 2: Concept optimization

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The integration and checkout activities of Spacelab payloads consist of two major sets of tasks: support functions, and test and operations. The support functions are definitized, and the optimized approach for the accomplishment of these functions is delineated. Comparable data are presented for the test and operations activities.

  1. Optimal q-Markov COVER for finite precision implementation

    NASA Technical Reports Server (NTRS)

    Williamson, Darrell; Skelton, Robert E.

    1989-01-01

    The existing q-Markov COVER realization theory does not take into account the problems of arithmetic errors due to both the quantization of states and coefficients of the reduced order model. All q-Markov COVERs allow some freedom in the choice of parameters. Here, researchers exploit this freedom in the existing theory to optimize the models with respect to these finite wordlength effects.

  2. Livermore Compiler Analysis Loop Suite

    SciTech Connect

    Hornung, R. D.

    2013-03-01

    LCALS is designed to evaluate compiler optimizations and performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate floating-point performance of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assessing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of 24 loop kernels from the older Livermore Loop suites, plus various others representative of loops found in current production application codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, and which loops are run and their lengths. It generates timing statistics for analysis and comparison of variants of individual loops. Also, it is easy to add loops to the suite as desired.
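    The flavor of such a loop-kernel timing framework can be sketched as follows (illustrative Python only; LCALS itself is a C++ suite, and the kernel below merely imitates the style of the first Livermore loop):

```python
import time, statistics

def time_kernel(kernel, n_samples=10):
    """Run a loop kernel several times and collect timing statistics,
    roughly in the spirit of LCALS's configurable loop sampling."""
    samples = []
    for _ in range(n_samples):
        t0 = time.perf_counter()
        kernel()
        samples.append(time.perf_counter() - t0)
    return min(samples), statistics.mean(samples), max(samples)

def hydro_like_kernel(n=100_000, q=0.5, r=0.3, t=0.2):
    # Simplified "hydro fragment"-style loop kernel.
    x = [0.0] * n
    y = [float(k) for k in range(n + 11)]
    for k in range(n):
        x[k] = q + y[k] * (r * y[k + 10] + t * y[k + 11])
    return x

print(time_kernel(hydro_like_kernel))
```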

  3. Implementing quantum gates by optimal control with doubly exponential convergence.

    PubMed

    de Fouquieres, Pierre

    2012-03-16

    We introduce a novel algorithm for the task of coherently controlling a quantum mechanical system to implement any chosen unitary dynamics. It performs faster than existing state of the art methods by 1 to 3 orders of magnitude (depending on which one we compare to), particularly for quantum information processing purposes. This substantially enhances the ability to both study the control capabilities of physical systems within their coherence times, and constrain solutions for control tasks to lie within experimentally feasible regions. Natural extensions of the algorithm are also discussed. PMID:22540447

  4. Optimization and implementation of the smart joint actuator

    NASA Astrophysics Data System (ADS)

    Manzo, Justin; Garcia, Ephrahim

    2008-03-01

    A new actuator system is being developed at the Cornell Laboratory of Intelligent Material Systems to address the problems of dynamic self-actuated shape change. This low profile actuator, known as the 'smart joint', is capable of maintaining rigidity in its nominal configuration, but can be actively strained to induce rotation at flexure joints. The joint is energetically efficient, only requiring power consumption during active morphing maneuvers used to move between shapes. The composite beam mechanism uses shape memory alloy (SMA) for strain actuation, with shape memory polymer (SMP) providing actively tailored rigidity due to its thermally varying properties. The first phase of the actuator development was modeling of the generic composite structure, proving analytically and computationally that the joint can produce useful work. The next phase focuses on optimization of this joint structure and usage, including ideal layering configurations and thicknesses in order to maximize various metrics specific to particular applications. Heuristic optimization using the simulated annealing algorithm is employed to best determine the structure of the joint at various scaling ratios, layering structures, and with varying external loading taken into account. The results are briefly compared to finite element models.

  5. A controller based on Optimal Type-2 Fuzzy Logic: systematic design, optimization and real-time implementation.

    PubMed

    Fayek, H M; Elamvazuthi, I; Perumal, N; Venkatesh, B

    2014-09-01

    A computationally-efficient systematic procedure to design an Optimal Type-2 Fuzzy Logic Controller (OT2FLC) is proposed. The main scheme is to optimize the gains of the controller using Particle Swarm Optimization (PSO), then optimize only two parameters per type-2 membership function using a Genetic Algorithm (GA). The proposed OT2FLC was implemented in real-time to control the position of a DC servomotor, which is part of a robotic arm. The performance judgments were carried out based on the Integral Absolute Error (IAE), as well as the computational cost. Various type-2 defuzzification methods were investigated in real-time. A comparative analysis with an Optimal Type-1 Fuzzy Logic Controller (OT1FLC) and a PI controller demonstrated OT2FLC's superiority, which is evident in handling uncertainty and imprecision induced in the system by means of noise and disturbances.

  6. Economic Implementation and Optimization of Secondary Oil Recovery

    SciTech Connect

    Cary D. Brock

    2006-01-09

    The St Mary West Barker Sand Unit (SMWBSU or Unit) located in Lafayette County, Arkansas was unitized for secondary recovery operations in 2002 followed by installation of a pilot injection system in the fall of 2003. A second downdip water injection well was added to the pilot project in 2005 and 450,000 barrels of saltwater has been injected into the reservoir sand to date. Daily injection rates have been improved over initial volumes by hydraulic fracture stimulation of the reservoir sand in the injection wells. Modifications to the injection facilities are currently being designed to increase water injection rates for the pilot flood. A fracture treatment on one of the production wells resulted in a seven-fold increase of oil production. Recent water production and increased oil production in a producer closest to the pilot project indicates possible response to the water injection. The reservoir and wellbore injection performance data obtained during the pilot project will be important to the secondary recovery optimization study for which the DOE grant was awarded. The reservoir characterization portion of the modeling and simulation study is in progress by Strand Energy project staff under the guidance of University of Houston Department of Geosciences professor Dr. Janok Bhattacharya and University of Texas at Austin Department of Petroleum and Geosystems Engineering professor Dr. Larry W. Lake. A geologic and petrophysical model of the reservoir is being constructed from geophysical data acquired from core, well log and production performance histories. Possible use of an outcrop analog to aid in three dimensional, geostatistical distribution of the flow unit model developed from the wellbore data will be investigated. The reservoir model will be used for full-field history matching and subsequent fluid flow simulation based on various injection schemes including patterned water flooding, addition of alkaline surfactant-polymer (ASP) to the injected water

  7. An Extensible Open-Source Compiler Infrastructure for Testing

    SciTech Connect

    Quinlan, D; Ur, S; Vuduc, R

    2005-12-09

    Testing forms a critical part of the development process for large-scale software, and there is growing need for automated tools that can read, represent, analyze, and transform the application's source code to help carry out testing tasks. However, the support required to compile applications written in common general purpose languages is generally inaccessible to the testing research community. In this paper, we report on an extensible, open-source compiler infrastructure called ROSE, which is currently in development at Lawrence Livermore National Laboratory. ROSE specifically targets developers who wish to build source-based tools that implement customized analyses and optimizations for large-scale C, C++, and Fortran90 scientific computing applications (on the order of a million lines of code or more). However, much of this infrastructure can also be used to address problems in testing, and ROSE is by design broadly accessible to those without a formal compiler background. This paper details the interactions between testing of applications and the ways in which compiler technology can aid in the understanding of those applications. We emphasize the particular aspects of ROSE, such as support for the general analysis of whole programs, that are particularly well-suited to the testing research community and the scale of the problems that community solves.

  8. Array-Pattern-Match Compiler for Opportunistic Data Analysis

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A computer program has been written to facilitate real-time sifting of scientific data as they are acquired to find data patterns deemed to warrant further analysis. The patterns in question are of a type denoted array patterns, which are specified by nested parenthetical expressions. [One example of an array pattern is ((>3) 0 (not=1)): this pattern matches a vector of at least three elements, the first of which exceeds 3, the second of which is 0, and the third of which does not equal 1.] This program accepts a high-level description of a static array pattern and compiles a highly optimized and compact program that determines whether any given data array matches that pattern. The compiler implemented by this program is independent of the target language, so that as new languages are used to write code that processes scientific data, they can easily be supported by the compiler. This program runs on a variety of different computing platforms. It must be run in conjunction with any one of a number of Lisp compilers that are available commercially or as shareware.
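    A minimal interpretive sketch of the array-pattern idea (hypothetical syntax using Python tuples; the actual program compiles patterns into optimized target-language code rather than interpreting them):

```python
def match_pattern(pattern, data):
    """Interpret an array pattern of the kind described above, e.g.
    (('>', 3), 0, ('not=', 1)) matches a sequence of at least three elements
    whose first element exceeds 3, whose second is 0, and whose third is
    not 1."""
    if len(data) < len(pattern):
        return False
    for p, d in zip(pattern, data):
        if isinstance(p, tuple):
            op, val = p
            ok = {'>': d > val, '<': d < val,
                  '=': d == val, 'not=': d != val}[op]
            if not ok:
                return False
        elif p != d:
            return False
    return True

print(match_pattern((('>', 3), 0, ('not=', 1)), [5, 0, 7]))   # True
print(match_pattern((('>', 3), 0, ('not=', 1)), [2, 0, 7]))   # False
```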

  9. Mechanical systems: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation of several mechanized systems is presented. The articles are contained in three sections: robotics; industrial mechanical systems, including several on linear and rotary systems; and mechanical control systems, such as brakes and clutches.

  10. Evaluation of a multicore-optimized implementation for tomographic reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768

  11. Development and implementation of a rail current optimization program

    SciTech Connect

    King, T.L.; Dharamshi, R.; Kim, K.; Zhang, J.; Tompkins, M.W.; Anderson, M.A.; Feng, Q.

    1997-01-01

    Efforts are underway to automate the operation of a railgun hydrogen pellet injector for fusion reactor refueling. A plasma armature is employed to avoid the friction produced by a sliding metal armature and, in particular, to prevent high-Z impurities from entering the tokamak. High currents are used to achieve high accelerations, resulting in high plasma temperatures. Consequently, the plasma armature ablates and accumulates material from the pellet and gun barrel. This increases inertial and viscous drag, lowering acceleration. A railgun model has been developed to compute the acceleration in the presence of these losses. In order to quantify these losses, the ablation coefficient, {alpha}, and drag coefficient, C{sub d}, must be determined. These coefficients are estimated based on the pellet acceleration. The sensitivity of acceleration to {alpha} and C{sub d} has been calculated using the model. Once {alpha} and C{sub d} have been determined, their values are applied to the model to compute the appropriate current pulse width. An optimization program was written in LabVIEW software to carry out this procedure. This program was then integrated into the existing code used to operate the railgun system. Preliminary results obtained after test firing the gun indicate that the program computes reasonable values for {alpha} and C{sub d} and calculates realistic pulse widths.

  12. Livermore Compiler Analysis Loop Suite

    2013-03-01

    LCALS is designed to evaluate compiler optimizations and performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate floating-point performance of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assessing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of 24 loop kernels from the older Livermore Loop suites, plus various others representative of loops found in current production application codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, and which loops are run and their lengths. It generates timing statistics for analysis and comparison of variants of individual loops. Also, it is easy to add loops to the suite as desired.

  13. HAL/S-FC and HAL/S-360 compiler system program description

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The compiler is a large multi-phase design and can be broken into four phases: Phase 1 inputs the source language and does a syntactic and semantic analysis generating the source listing, a file of instructions in an internal format (HALMAT) and a collection of tables to be used in subsequent phases. Phase 1.5 massages the code produced by Phase 1, performing machine independent optimization. Phase 2 inputs the HALMAT produced by Phase 1 and outputs machine language object modules in a form suitable for the OS-360 or FCOS linkage editor. Phase 3 produces the SDF tables. The four phases described are written in XPL, a language specifically designed for compiler implementation. In addition to the compiler, there is a large library containing all the routines that can be explicitly called by the source language programmer plus a large collection of routines for implementing various facilities of the language.

  14. An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments

    NASA Astrophysics Data System (ADS)

    Kang, Yuncheol; Prabhu, Vittal

    The science of preventing youth problems has significantly advanced in developing evidence-based prevention programs (EBPs) by using randomized clinical trials. Effective EBPs can reduce delinquency, aggression, violence, bullying, and substance abuse among youth. Unfortunately, the outcomes of EBPs implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose a Markov Decision Process (MDP) for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.
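    As a hedged illustration of the modeling approach, the following value-iteration sketch uses a made-up three-state implementation-fidelity model with two support actions; the states, transition probabilities, and rewards are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical 3-state EBP implementation model (low/medium/high fidelity)
# and 2 actions (routine support vs. intensive coaching); numbers are
# purely illustrative.
P = np.array([  # P[a, s, s'] transition probabilities
    [[0.7, 0.2, 0.1], [0.3, 0.5, 0.2], [0.1, 0.4, 0.5]],   # routine support
    [[0.4, 0.4, 0.2], [0.1, 0.5, 0.4], [0.0, 0.3, 0.7]],   # intensive coaching
])
R = np.array([[0.0, 1.0, 2.0],          # reward of landing in each state
              [-0.5, 0.5, 1.5]])        # coaching is costlier but more effective
gamma = 0.9

V = np.zeros(3)
for _ in range(200):                    # value iteration
    Q = (P * (R[:, None, :] + gamma * V)).sum(axis=2)   # Q[a, s]
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)               # best action per state
print("value:", V, "policy:", policy)
```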

  15. Selected photographic techniques, a compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A selection has been made of methods, devices, and techniques developed in the field of photography during implementation of space and nuclear research projects. These items include many adaptations, variations, and modifications to standard hardware and practice, and should prove interesting to both amateur and professional photographers and photographic technicians. This compilation is divided into two sections. The first section presents techniques and devices that have been found useful in making photolab work simpler, more productive, and higher in quality. Section two deals with modifications to and special applications for existing photographic equipment.

  16. Fault-Tree Compiler Program

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
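    The computation FTC performs can be sketched, under the usual assumption of independent basic events, as follows (Python illustration only; FTC itself is implemented in FORTRAN and Pascal and uses a more careful solution technique):

```python
from itertools import combinations
from math import prod

def gate(kind, probs, m=None):
    """Probability of a gate's output event given independent input-event
    probabilities, for the five gate types listed above."""
    if kind == "AND":
        return prod(probs)
    if kind == "OR":
        return 1.0 - prod(1.0 - p for p in probs)
    if kind == "XOR":            # exactly one input event occurs
        return sum(p * prod(1.0 - q for j, q in enumerate(probs) if j != i)
                   for i, p in enumerate(probs))
    if kind == "INVERT":
        return 1.0 - probs[0]
    if kind == "M_OF_N":         # at least m of the n input events occur
        n = len(probs)
        return sum(prod(probs[i] if i in idx else 1.0 - probs[i]
                        for i in range(n))
                   for k in range(m, n + 1)
                   for idx in combinations(range(n), k))
    raise ValueError(kind)

# Top = OR( AND(A, B), 2-of-3(C, D, E) )
top = gate("OR", [gate("AND", [1e-3, 2e-3]),
                  gate("M_OF_N", [1e-2, 5e-3, 2e-2], m=2)])
print(f"P(top event) = {top:.3e}")
```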

  17. Optical implementations of the optimal phase-covariant quantum cloning machine

    SciTech Connect

    Fiurasek, Jaromir

    2003-05-01

    We propose two simple implementations of the optimal symmetric 1{yields}2 phase-covariant cloning machine for qubits. The first scheme is designed for qubits encoded into polarization states of photons and it involves a mixing of two photons on an unbalanced beam splitter. This scheme is probabilistic and the cloning succeeds with the probability 1/3. In the second setup, the qubits are represented by the states of Rydberg atoms and the cloning is accomplished by the resonant interaction of the atoms with a microwave field confined in a high-Q cavity. This latter approach allows for deterministic implementation of the optimal cloning transformation.

  18. Teleportation scheme implementing the universal optimal quantum cloning machine and the universal NOT gate.

    PubMed

    Ricci, M; Sciarrino, F; Sias, C; De Martini, F

    2004-01-30

    By a significant modification of the standard protocol of quantum state teleportation, two processes "forbidden" by quantum mechanics in their exact form, the universal NOT gate and the universal optimal quantum cloning machine, have been implemented contextually and optimally by a fully linear method. In particular, the first experimental demonstration of the tele-UNOT gate, a novel quantum information protocol, has been reported. The experimental results are found in full agreement with theory.

  19. An implementation of particle swarm optimization to evaluate optimal under-voltage load shedding in competitive electricity markets

    NASA Astrophysics Data System (ADS)

    Hosseini-Bioki, M. M.; Rashidinejad, M.; Abdollahi, A.

    2013-11-01

    Load shedding is a crucial issue in power systems especially under restructured electricity environment. Market-driven load shedding in reregulated power systems associated with security as well as reliability is investigated in this paper. A technoeconomic multi-objective function is introduced to reveal an optimal load shedding scheme considering maximum social welfare. The proposed optimization problem includes maximum GENCOs and loads' profits as well as maximum loadability limit under normal and contingency conditions. Particle swarm optimization (PSO) as a heuristic optimization technique, is utilized to find an optimal load shedding scheme. In a market-driven structure, generators offer their bidding blocks while the dispatchable loads will bid their price-responsive demands. An independent system operator (ISO) derives a market clearing price (MCP) while rescheduling the amount of generating power in both pre-contingency and post-contingency conditions. The proposed methodology is developed on a 3-bus system and then is applied to a modified IEEE 30-bus test system. The obtained results show the effectiveness of the proposed methodology in implementing the optimal load shedding satisfying social welfare by maintaining voltage stability margin (VSM) through technoeconomic analyses.
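    For reference, a minimal particle swarm optimization loop of the kind used here looks like the following (generic sketch on an unconstrained objective; the paper's actual objective is the constrained, technoeconomic load-shedding function described above):

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal PSO: particles move under inertia, attraction to their
    personal best, and attraction to the global best."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = pso(lambda z: float(np.sum(z ** 2)), dim=4)
print(best, val)
```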

  20. Power-Aware Compiler Controllable Chip Multiprocessor

    NASA Astrophysics Data System (ADS)

    Shikano, Hiroaki; Shirako, Jun; Wada, Yasutaka; Kimura, Keiji; Kasahara, Hironori

    A power-aware compiler controllable chip multiprocessor (CMP) is presented and its performance and power consumption are evaluated with the optimally scheduled advanced multiprocessor (OSCAR) parallelizing compiler. The CMP is equipped with power control registers that change clock frequency and power supply voltage to functional units including processor cores, memories, and an interconnection network. The OSCAR compiler carries out coarse-grain task parallelization of programs and reduces power consumption using architectural power control support and the compiler's power saving scheme. The performance evaluation shows that MPEG-2 encoding on the proposed CMP with four CPUs results in 82.6% power reduction in real-time execution mode with a deadline constraint on its sequential execution time. Furthermore, MP3 encoding on a heterogeneous CMP with four CPUs and four accelerators results in 53.9% power reduction at 21.1-fold speed-up in performance against its sequential execution in the fastest execution mode.

  1. Optimization of Optical Systems Using Genetic Algorithms: a Comparison Among Different Implementations of The Algorithm

    NASA Astrophysics Data System (ADS)

    López-Medina, Mario E.; Vázquez-Montiel, Sergio; Herrera-Vázquez, Joel

    2008-04-01

    The Genetic Algorithms, GAs, are a method of global optimization that we use in the optimization stage of optical system design. In the case of optical design and optimization, the efficiency and convergence speed of GAs are related to the merit function, crossover operator, and mutation operator. In this study we present a comparison between several genetic algorithm implementations using different optical systems, like an achromatic cemented doublet, an air-spaced doublet, and telescopes. We do the comparison varying the type of design parameters and the number of parameters to be optimized. We also implement the GAs using discrete parameters with binary chains and continuous parameters using real numbers in the chromosome, analyzing the differences in the time taken to find the solution and the precision in the results between discrete and continuous parameters. Additionally, we use different merit functions to optimize the same optical system. We present the obtained results in tables, graphics, and a detailed example; and from the comparison we conclude which is the best way to implement GAs for the design and optimization of optical systems. The programs developed for this work were made using the C programming language and OSLO for the simulation of the optical systems.
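    A minimal real-parameter GA of the kind compared in the paper can be sketched as below (illustrative Python with a stand-in merit function; the authors' implementations are in C and evaluate optical merit functions through OSLO):

```python
import numpy as np

def real_coded_ga(merit, dim, pop_size=40, gens=100,
                  bounds=(-1.0, 1.0), mut_sigma=0.1):
    """Minimal real-parameter GA: tournament selection, blend crossover,
    Gaussian mutation.  Binary-chain encodings would instead crossover and
    mutate bit strings that are decoded into parameter values."""
    lo, hi = bounds
    pop = np.random.uniform(lo, hi, (pop_size, dim))
    for _ in range(gens):
        fit = np.array([merit(ind) for ind in pop])
        new = []
        while len(new) < pop_size:
            # binary tournament selection of two parents
            a, b = (min(np.random.randint(pop_size, size=2), key=lambda i: fit[i])
                    for _ in range(2))
            alpha = np.random.rand(dim)                    # blend crossover
            child = alpha * pop[a] + (1 - alpha) * pop[b]
            child += np.random.normal(0, mut_sigma, dim)   # Gaussian mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([merit(ind) for ind in pop])
    return pop[fit.argmin()], fit.min()

best, val = real_coded_ga(lambda z: float(np.sum((z - 0.3) ** 2)), dim=3)
print(best, val)
```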

  2. Metallurgical processing: A compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The items in this compilation, all relating to metallurgical processing, are presented in two sections. The first section includes processes which are general in scope and applicable to a variety of metals or alloys. The second describes the processes that concern specific metals and their alloys.

  3. Compilation of non-contemporaneous constraints

    SciTech Connect

    Wray, R.E. III; Laird, J.E.; Jones, R.M.

    1996-12-31

    Hierarchical execution of domain knowledge is a useful approach for intelligent, real-time systems in complex domains. In addition, well-known techniques for knowledge compilation allow the reorganization of knowledge hierarchies into more efficient forms. However, these techniques have been developed in the context of systems that work in static domains. Our investigations indicate that it is not straightforward to apply knowledge compilation methods for hierarchical knowledge to systems that generate behavior in dynamic environments. One particular problem involves the compilation of non-contemporaneous constraints. This problem arises when a training instance dynamically changes during execution. After defining the problem, we analyze several theoretical approaches that address non-contemporaneous constraints. We have implemented the most promising of these alternatives within Soar, a software architecture for performance and learning. Our results demonstrate that the proposed solutions eliminate the problem in some situations and suggest that knowledge compilation methods are appropriate for interactive environments.

  4. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer that is precise (within the limits of double-precision floating-point arithmetic) to five digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.

  5. Optimal environmental management strategy and implementation for groundwater contamination prevention and restoration.

    PubMed

    Wang, Mingyu

    2006-04-01

    An innovative management strategy is proposed for optimized and integrated environmental management for regional or national groundwater contamination prevention and restoration, allied with consideration of sustainable development. This management strategy accounts for the availability of limited resources, human health and ecological risks from groundwater contamination, costs for groundwater protection measures, beneficial uses and values from groundwater protection, and sustainable development. Six different categories of costs are identified with regard to groundwater prevention and restoration. In addition, different environmental impacts from groundwater contamination, including human health and ecological risks, are individually taken into account. System optimization principles are implemented to support decision-making on the optimal allocation of the available resources or budgets to different existing contaminated sites and projected contamination sites for a maximal risk reduction. Established management constraints, such as budget limitations under different categories of costs, are satisfied at the optimal solution. A stepwise optimization process is proposed in which the first step is to select optimally a limited number of sites where remediation or prevention measures will be taken, from all the existing contaminated and projected contamination sites, based on a total regionally or nationally available budget in a certain time frame such as 10 years. Then, several optimization steps determine the year-by-year optimal distributions of the available yearly budgets for those selected sites. A hypothetical case study is presented to demonstrate a practical implementation of the management strategy. Several issues pertaining to groundwater contamination exposure and risk assessments and remediation cost evaluations are briefly discussed for adequately understanding implementations of the management strategy.

  6. An optimized implementation of a fault-tolerant clock synchronization circuit

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    1995-01-01

    A fault-tolerant clock synchronization circuit was designed and tested. A comparison to a previous design and the procedure followed to achieve the current optimization are included. The report also includes a description of the system and the results of tests performed to study the synchronization and fault-tolerant characteristics of the implementation.

  7. An optimized ultrasound digital beamformer with dynamic focusing implemented on FPGA.

    PubMed

    Almekkawy, Mohamed; Xu, Jingwei; Chirala, Mohan

    2014-01-01

    We present a resource-optimized dynamic digital beamformer for an ultrasound system based on a field-programmable gate array (FPGA). A comprehensive 64-channel receive beamformer with full dynamic focusing is embedded in the Altera Arria V FPGA chip. To improve spatial and contrast resolution, full dynamic beamforming is implemented by a novel method with resource optimization. This was conceived using the implementation of the delay summation through a bulk (coarse) delay and fractional (fine) delay. The sampling frequency is 40 MHz and the beamformer includes a 240 MHz polyphase filter that enhances the temporal resolution of the system while relaxing the Analog-to-Digital converter (ADC) bandwidth requirement. The results indicate that our 64-channel dynamic beamformer architecture is amenable for a low power FPGA-based implementation in a portable ultrasound system.
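    The coarse/fine delay decomposition at the heart of such a beamformer can be illustrated in software as follows (numpy sketch with linear interpolation standing in for the polyphase fine-delay filter; this is not the FPGA design):

```python
import numpy as np

def delay_and_sum(rf, delays_samples):
    """Dynamic-focus delay-and-sum: each channel's focusing delay is split
    into a bulk integer-sample shift and a fractional part applied by
    interpolation before the channels are summed."""
    n_ch, n_samp = rf.shape
    out = np.zeros(n_samp)
    t = np.arange(n_samp)
    for ch in range(n_ch):
        bulk = np.floor(delays_samples[ch]).astype(int)   # coarse delay
        frac = delays_samples[ch] - bulk                  # fine delay
        src = t - bulk
        valid = (src >= 0) & (src < n_samp - 1)
        # fine delay via linear interpolation between adjacent samples
        out[valid] += ((1 - frac[valid]) * rf[ch, src[valid]]
                       + frac[valid] * rf[ch, src[valid] + 1])
    return out

rf = np.random.randn(64, 2048)                  # 64 channels of RF samples
delays = np.random.uniform(0, 20, (64, 2048))   # per-channel dynamic focusing delays
print(delay_and_sum(rf, delays).shape)
```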

  8. Implementing nonprojective measurements via linear optics: An approach based on optimal quantum-state discrimination

    SciTech Connect

    Loock, Peter van; Nemoto, Kae; Munro, William J.; Raynal, Philippe; Luetkenhaus, Norbert

    2006-06-15

    We discuss the problem of implementing generalized measurements [positive operator-valued measures (POVMs)] with linear optics, either based upon a static linear array or including conditional dynamics. In our approach, a given POVM shall be identified as a solution to an optimization problem for a chosen cost function. We formulate a general principle: the implementation is only possible if a linear-optics circuit exists for which the quantum mechanical optimum (minimum) is still attainable after dephasing the corresponding quantum states. The general principle enables us, for instance, to derive a set of necessary conditions for the linear-optics implementation of the POVM that realizes the quantum mechanically optimal unambiguous discrimination of two pure nonorthogonal states. This extends our previous results on projection measurements and the exact discrimination of orthogonal states.

  9. A novel implementation of method of optimality criterion in synthesizing spacecraft structures with natural frequency constraints

    NASA Technical Reports Server (NTRS)

    Wang, Bo Ping; Chu, F. H.

    1989-01-01

    In the design of spacecraft structures, fine tuning the structure to achieve minimum weight with natural frequency constraints is a time consuming process. Here, a novel implementation of the method of optimality criterion (OC) is developed. In this new implementation of OC, the free vibration analysis results are used to compute the eigenvalue sensitivity data required for the formulation. Specifically, the modal elemental strain and kinetic energies are used. Additionally, normalized design parameters are introduced as a second level linking that allows design variables of different values to be linked together. With the use of this novel formulation, synthesis of structures with natural frequency constraint can be carried out manually using modal analysis results. Design examples are presented to illustrate this novel implementation of the optimality criterion method.

  10. Metallurgy: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A compilation on the technical uses of various metallurgical processes is presented. Descriptions are given of the mechanical properties of various alloys, ranging from TAZ-813 at 2200 F to investment cast alloy 718 at -320 F. Methods are also described for analyzing some of the constituents of various alloys from optical properties of carbide precipitates in Rene 41 to X-ray spectrographic analysis of the manganese content of high chromium steels.

  11. The Study of Cross-layer Optimization for Wireless Rechargeable Sensor Networks Implemented in Coal Mines.

    PubMed

    Ding, Xu; Shi, Lei; Han, Jianghong; Lu, Jingting

    2016-01-28

    Wireless sensor networks deployed in coal mines could help companies provide workers working in coal mines with more qualified working conditions. With the underground information collected by sensor nodes at hand, the underground working conditions could be evaluated more precisely. However, sensor nodes may tend to malfunction due to their limited energy supply. In this paper, we study the cross-layer optimization problem for wireless rechargeable sensor networks implemented in coal mines, of which the energy could be replenished through the newly-brewed wireless energy transfer technique. The main results of this article are two-fold: firstly, we obtain the optimal relay nodes' placement according to the minimum overall energy consumption criterion through the Lagrange dual problem and KKT conditions; secondly, the optimal strategies for recharging locomotives and wireless sensor networks are acquired by solving a cross-layer optimization problem. The cyclic nature of these strategies is also manifested through simulations in this paper.

  12. The Study of Cross-layer Optimization for Wireless Rechargeable Sensor Networks Implemented in Coal Mines

    PubMed Central

    Ding, Xu; Shi, Lei; Han, Jianghong; Lu, Jingting

    2016-01-01

    Wireless sensor networks deployed in coal mines could help companies provide workers working in coal mines with more qualified working conditions. With the underground information collected by sensor nodes at hand, the underground working conditions could be evaluated more precisely. However, sensor nodes may tend to malfunction due to their limited energy supply. In this paper, we study the cross-layer optimization problem for wireless rechargeable sensor networks implemented in coal mines, of which the energy could be replenished through the newly-brewed wireless energy transfer technique. The main results of this article are two-fold: firstly, we obtain the optimal relay nodes’ placement according to the minimum overall energy consumption criterion through the Lagrange dual problem and KKT conditions; secondly, the optimal strategies for recharging locomotives and wireless sensor networks are acquired by solving a cross-layer optimization problem. The cyclic nature of these strategies is also manifested through simulations in this paper. PMID:26828500

  13. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
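    The selection idea, stripped to its essentials, is to benchmark each implementation across the input dimensions of interest and then generate dispatch code from the collected data; the sketch below (hypothetical function names, ordinary Python timing rather than a massively parallel machine) illustrates that flow.

```python
import time

def benchmark(implementations, input_sizes, make_input, repeats=3):
    """Collect performance data for each implementation across one input
    dimension and record the fastest choice per size."""
    best = {}
    for n in input_sizes:
        data = make_input(n)
        times = {}
        for name, fn in implementations.items():
            t0 = time.perf_counter()
            for _ in range(repeats):
                fn(data)
            times[name] = time.perf_counter() - t0
        best[n] = min(times, key=times.get)
    return best

impls = {"builtin_sort": sorted,
         "copy_then_sort": lambda xs: list(sorted(xs))}   # stand-in variants
table = benchmark(impls, [10, 1000, 100000], lambda n: list(range(n, 0, -1)))

def dispatch(xs):
    """Selection code driven by the collected performance data: call the
    implementation that was fastest at the nearest benchmarked size."""
    n = min(table, key=lambda k: abs(k - len(xs)))
    return impls[table[n]](xs)
```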

  14. Optimizing local protocols for implementing bipartite nonlocal unitary gates using prior entanglement and classical communication

    SciTech Connect

    Cohen, Scott M.

    2010-06-15

    We present a method of optimizing recently designed protocols for implementing an arbitrary nonlocal unitary gate acting on a bipartite system. These protocols use only local operations and classical communication with the assistance of entanglement, and they are deterministic while also being 'one-shot', in that they use only one copy of an entangled resource state. The optimization minimizes the amount of entanglement needed, and also the amount of classical communication, and it is often the case that less of each of these resources is needed than with an alternative protocol using two-way teleportation.

  15. Fault-Tree Compiler

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    Fault-Tree Compiler (FTC) program, is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.

  16. HAL/S-360 compiler test activity report

    NASA Technical Reports Server (NTRS)

    Helmers, C. T.

    1974-01-01

    The levels of testing employed in verifying the HAL/S-360 compiler were as follows: (1) typical applications program case testing; (2) functional testing of the compiler system and its generated code; and (3) machine oriented testing of compiler implementation on operational computers. Details of the initial test plan and subsequent adaptation are reported, along with complete test results for each phase which examined the production of object codes for every possible source statement.

  17. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression.

    PubMed

    Jacob, J Augustin; Kumar, N Senthil

    2015-01-01

    A novel optimal structure for implementing 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. Optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation.

  18. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression

    PubMed Central

    Jacob, J. Augustin; Kumar, N. Senthil

    2015-01-01

    A novel optimal structure for implementing 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. Optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation. PMID:26601120

  19. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  20. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression.

    PubMed

    Jacob, J Augustin; Kumar, N Senthil

    2015-01-01

    A novel optimal structure for implementing 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method proves that the least resources are utilized for the integer set that has shorter bit values. Optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better when compared to other integer sets in terms of resource utilization and power dissipation. PMID:26601120

  1. Implementation of a multiblock sensitivity analysis method in numerical aerodynamic shape optimization

    NASA Technical Reports Server (NTRS)

    Lacasse, James M.

    1995-01-01

    A multiblock sensitivity analysis method is applied in a numerical aerodynamic shape optimization technique. The Sensitivity Analysis Domain Decomposition (SADD) scheme which is implemented in this study was developed to reduce the computer memory requirements resulting from the aerodynamic sensitivity analysis equations. Discrete sensitivity analysis offers the ability to compute quasi-analytical derivatives in a more efficient manner than traditional finite-difference methods, which tend to be computationally expensive and prone to inaccuracies. The direct optimization procedure couples CFD analysis based on the two-dimensional thin-layer Navier-Stokes equations with a gradient-based numerical optimization technique. The linking mechanism is the sensitivity equation derived from the CFD discretized flow equations, recast in adjoint form, and solved using direct matrix inversion techniques. This investigation is performed to demonstrate an aerodynamic shape optimization technique on a multiblock domain and its applicability to complex geometries. The objectives are accomplished by shape optimizing two aerodynamic configurations. First, the shape optimization of a transonic airfoil is performed to investigate the behavior of the method in highly nonlinear flows and the effect of different grid blocking strategies on the procedure. Secondly, shape optimization of a two-element configuration in subsonic flow is completed. Cases are presented for this configuration to demonstrate the effect of simultaneously reshaping interfering elements. The aerodynamic shape optimization is shown to produce supercritical type airfoils in the transonic flow from an initially symmetric airfoil. Multiblocking affects the path of optimization while providing similar results at the conclusion. Simultaneous reshaping of elements is shown to be more effective than individual element reshaping due to the inclusion of mutual interference effects.

  2. Atomic mass compilation 2012

    SciTech Connect

    Pfeiffer, B.; Venkataramaniah, K.; Czok, U.; Scheidenberger, C.

    2014-03-15

    Atomic mass reflects the total binding energy of all nucleons in an atomic nucleus. Compilations and evaluations of atomic masses and derived quantities, such as neutron or proton separation energies, are indispensable tools for research and applications. In the last decade, the field has evolved rapidly after the advent of new production and measuring techniques for stable and unstable nuclei resulting in substantial ameliorations concerning the body of data and their precision. Here, we present a compilation of atomic masses comprising the data from the evaluation of 2003 as well as the results of new measurements performed. The relevant literature in refereed journals and reports as far as available, was scanned for the period beginning 2003 up to and including April 2012. Overall, 5750 new data points have been collected. Recommended values for the relative atomic masses have been derived and a comparison with the 2003 Atomic Mass Evaluation has been performed. This work has been carried out in collaboration with and as a contribution to the European Nuclear Structure and Decay Data Network of Evaluations.

  3. Proof-Carrying Code with Correct Compilers

    NASA Technical Reports Server (NTRS)

    Appel, Andrew W.

    2009-01-01

    In the late 1990s, proof-carrying code was able to produce machine-checkable safety proofs for machine-language programs even though (1) it was impractical to prove correctness properties of source programs and (2) it was impractical to prove correctness of compilers. But now it is practical to prove some correctness properties of source programs, and it is practical to prove correctness of optimizing compilers. We can produce more expressive proof-carrying code, that can guarantee correctness properties for machine code and not just safety. We will construct program logics for source languages, prove them sound w.r.t. the operational semantics of the input language for a proved-correct compiler, and then use these logics as a basis for proving the soundness of static analyses.

  4. Optimization of FIR Digital Filters Using a Real Parameter Parallel Genetic Algorithm and Implementations.

    NASA Astrophysics Data System (ADS)

    Xu, Dexiang

    This dissertation presents a novel method of designing finite word length Finite Impulse Response (FIR) digital filters using a Real Parameter Parallel Genetic Algorithm (RPPGA). This algorithm is derived from basic Genetic Algorithms which are inspired by natural genetics principles. Both experimental results and theoretical studies in this work reveal that the RPPGA is a suitable method for determining the optimal or near optimal discrete coefficients of finite word length FIR digital filters. Performance of RPPGA is evaluated by comparing specifications of filters designed by other methods with filters designed by RPPGA. The parallel and spatial structures of the algorithm result in faster and more robust optimization than basic genetic algorithms. A filter designed by RPPGA is implemented in hardware to attenuate high frequency noise in a data acquisition system for collecting seismic signals. These studies may lead to more applications of the Real Parameter Parallel Genetic Algorithms in Electrical Engineering.

  5. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    SciTech Connect

    Tian, Zhen E-mail: Xun.Jia@UTSouthwestern.edu Folkerts, Michael; Tan, Jun; Jia, Xun E-mail: Xun.Jia@UTSouthwestern.edu Jiang, Steve B. E-mail: Xun.Jia@UTSouthwestern.edu; Peng, Fei

    2015-06-15

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, a GPU’s relatively small memory cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of PP and MP problems are implemented on CPU or a single GPU due to their modest problem scale and computational loads. A Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is
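    The host-side partitioning step described above can be sketched with scipy (illustration only; the beam-angle grouping and matrix sizes are invented, and the transfer of each CSR piece to its GPU is omitted):

```python
import numpy as np
import scipy.sparse as sp

def split_ddc_by_angle(ddc_coo, beamlet_angle, n_gpus=4):
    """Split a dose-deposition-coefficient matrix (voxels x beamlets), held
    in COO format on the host, into per-GPU CSR submatrices whose columns
    are grouped by beam angle."""
    angle_groups = np.array_split(np.unique(beamlet_angle), n_gpus)
    csc = ddc_coo.tocsc()
    pieces = []
    for group in angle_groups:
        cols = np.where(np.isin(beamlet_angle, group))[0]
        pieces.append(csc[:, cols].tocsr())   # one CSR block per GPU
    return pieces

n_vox, n_blt = 5000, 720
ddc = sp.random(n_vox, n_blt, density=0.01, format="coo")
angles = np.repeat(np.arange(72), n_blt // 72)   # hypothetical 72 beam angles
sub = split_ddc_by_angle(ddc, angles, n_gpus=4)
print([m.shape for m in sub])
```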

  6. Parallel incremental compilation. Doctoral thesis

    SciTech Connect

    Gafter, N.M.

    1990-06-01

    The time it takes to compile a large program has been a bottleneck in the software development process. When an interactive programming environment with an incremental compiler is used, compilation speed becomes even more important, but existing incremental compilers are very slow for some types of program changes. We describe a set of techniques that enable incremental compilation to exploit fine-grained concurrency in a shared-memory multi-processor and achieve asymptotic improvement over sequential algorithms. Because parallel non-incremental compilation is a special case of parallel incremental compilation, the design of a parallel compiler is a corollary of our result. Instead of running the individual phases concurrently, our design specifies compiler phases that are mutually sequential. However, each phase is designed to exploit fine-grained parallelism. By allowing each phase to present its output as a complete structure rather than as a stream of data, we can apply techniques such as parallel prefix and parallel divide-and-conquer, and we can construct applicative data structures to achieve sublinear execution time. Parallel algorithms for each phase of a compiler are presented to demonstrate that a complete incremental compiler can achieve execution time that is asymptotically less than sequential algorithms.

  7. Galileo Outreach Compilation

    NASA Astrophysics Data System (ADS)

    1998-09-01

    This NASA JPL (Jet Propulsion Laboratory) video production is a compilation of the best short movies and computer simulation/animations of the Galileo spacecraft's journey to Jupiter. A limited number of actual shots are presented of Jupiter and its natural satellites. Most of the video is comprised of computer animations of the spacecraft's trajectory, encounters with the Galilean satellites Io, Europa and Ganymede, as well as their atmospheric and surface structures. Computer animations of plasma wave observations of Ganymede's magnetosphere, a surface gravity map of Io, the Galileo/Io flyby, the Galileo space probe orbit insertion around Jupiter, and actual shots of Jupiter's Great Red Spot are presented. Panoramic views of our Earth (from orbit) and moon (from orbit) as seen from Galileo as well as actual footage of the Space Shuttle/Galileo liftoff and Galileo's space probe separation are also included.

  8. The optimized gradient method for full waveform inversion and its spectral implementation

    NASA Astrophysics Data System (ADS)

    Wu, Zedong; Alkhalifah, Tariq

    2016-06-01

    At the heart of the full waveform inversion (FWI) implementation is wavefield extrapolation, and specifically its accuracy and cost. To obtain accurate, dispersion free wavefields, the extrapolation for modelling is often expensive. Combining an efficient extrapolation with a novel gradient preconditioning can render an FWI implementation that efficiently converges to an accurate model. We, specifically, recast the extrapolation part of the inversion in terms of its spectral components for both data and gradient calculation. This admits dispersion free wavefields even at large extrapolation time steps, which improves the efficiency of the inversion. An alternative spectral representation of the depth axis in terms of sine functions allows us to impose a free surface boundary condition, which reflects our medium boundaries more accurately. Using a newly derived perfectly matched layer formulation for this spectral implementation, we can define a finite model with absorbing boundaries. In order to reduce the nonlinearity in FWI, we propose a multiscale conditioning of the objective function through combining the different directional components of the gradient to optimally update the velocity. Through solving a simple optimization problem, it specifically admits the smoothest approximate update while guaranteeing its ascending direction. An application to the Marmousi model demonstrates the capability of the proposed approach and justifies our assertions with respect to cost and convergence.
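
    The following toy sketch shows the spectral-extrapolation idea for a 1-D acoustic wave equation: the spatial derivative is applied in the wavenumber domain via the FFT, so it is dispersion-free even on a coarse grid. The sine-basis depth axis, spectral PML, and gradient conditioning of the paper are not reproduced; all parameters are illustrative.

      # Toy 1-D pseudospectral extrapolation: spectral Laplacian, leapfrog in time.
      import numpy as np

      nx, dx, dt, nt = 256, 10.0, 1e-3, 500
      c = np.full(nx, 2000.0)                       # constant velocity model (m/s)
      k = 2 * np.pi * np.fft.fftfreq(nx, d=dx)      # angular wavenumbers

      p_prev = np.zeros(nx)
      p = np.exp(-0.001 * (np.arange(nx) * dx - nx * dx / 2) ** 2)  # initial pulse

      for _ in range(nt):
          lap = np.fft.ifft(-(k ** 2) * np.fft.fft(p)).real   # spectral d2/dx2
          p_next = 2 * p - p_prev + (c * dt) ** 2 * lap        # second-order time step
          p_prev, p = p, p_next

      print(p.max(), p.min())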

  9. Implementation of an ANCF beam finite element for dynamic response optimization of elastic manipulators

    NASA Astrophysics Data System (ADS)

    Vohar, B.; Kegl, M.; Ren, Z.

    2008-12-01

    Theoretical and practical aspects of an absolute nodal coordinate formulation (ANCF) beam finite element implementation are considered in the context of dynamic transient response optimization of elastic manipulators. The proposed implementation is based on the introduction of new nodal degrees of freedom, which is achieved by an adequate nonlinear mapping between the original and new degrees of freedom. This approach preserves the mechanical properties of the ANCF beam, but converts it into a conventional finite element so that its nodal degrees of freedom are initially always equal to zero and never depend explicitly on the design variables. Consequently, the sensitivity analysis formulas can be derived in the usual manner, except that the introduced nonlinear mapping has to be taken into account. Moreover, the adjusted element can also be incorporated into general finite element analysis and optimization software in the conventional way. The introduced design variables are related to the cross-section of the beam, to the shape of the (possibly) skeletal structure of the manipulator and to the drive functions. The layered cross-section approach and the design element technique are utilized to parameterize the shape of individual elements and the whole structure. A family of implicit time integration methods is adopted for the response and sensitivity analysis. Based on this assumption, the corresponding sensitivity formulas are derived. Two numerical examples illustrate the performance of the proposed element implementation.

  10. Voyager Outreach Compilation

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This NASA JPL (Jet Propulsion Laboratory) video presents a collection of the best videos that have been published of the Voyager mission. Computer animation/simulations comprise the largest portion of the video and include outer planetary magnetic fields, outer planetary lunar surfaces, and the Voyager spacecraft trajectory. Voyager visited the four outer planets: Jupiter, Saturn, Uranus, and Neptune. The video contains some live shots of Jupiter (actual), the Earth's moon (from orbit), Saturn (actual), Neptune (actual) and Uranus (actual), but is mainly comprised of computer animations of these planets and their moons. Some of the individual short videos that are compiled are entitled: The Solar System; Voyage to the Outer Planets; A Tour of the Solar System; and the Neptune Encounter. Computerized simulations of Viewing Neptune from Triton, Diving over Neptune to Meet Triton, and Catching Triton in its Retrograde Orbit are included. Several animations of Neptune's atmosphere, rotation and weather features as well as significant discussion of the planet's natural satellites are also presented.

  11. Implementation of natural frequency analysis and optimality criterion design. [computer technique for structural analysis

    NASA Technical Reports Server (NTRS)

    Levy, R.; Chai, K.

    1978-01-01

    A description is presented of an effective optimality criterion computer design approach for member size selection to improve frequency characteristics for moderately large structure models. It is shown that the implementation of the simultaneous iteration method within a natural frequency structural design optimization provides a method which is more efficient in isolating the lowest natural frequency modes than the frequently applied Stodola method. Additional computational advantages are derived by using previously converged eigenvectors at the start of the iterations during the second and the following design cycles. Vectors with random components can be used at the first design cycle, which, in relation to the entire computer time for the design program, results in only a moderate computational penalty.
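
    A hedged sketch of the simultaneous (subspace) iteration idea is given below for the generalized eigenproblem K x = omega^2 M x: a block of vectors is repeatedly multiplied by the inverse of K applied to M and Rayleigh-Ritz projected, which isolates the lowest modes; the starting block can be random on the first design cycle and the previously converged vectors on later cycles, mirroring the reuse strategy described above. The matrices are synthetic and the code is not from the reference.

      # Simultaneous (subspace) iteration for the lowest natural frequencies.
      import numpy as np
      from scipy.linalg import eigh, lu_factor, lu_solve

      def lowest_modes(K, M, X0, iters=50):
          lu = lu_factor(K)                       # factor K once per design cycle
          X = X0
          for _ in range(iters):
              Y = lu_solve(lu, M @ X)             # block inverse iteration
              Kr, Mr = Y.T @ K @ Y, Y.T @ M @ Y   # Rayleigh-Ritz projection
              w2, Q = eigh(Kr, Mr)                # small generalized eigenproblem
              X = Y @ Q                           # updated Ritz vectors
          return np.sqrt(w2), X                   # natural frequencies, mode shapes

      n, m = 200, 4
      A = np.random.default_rng(1).standard_normal((n, n))
      K = A @ A.T + n * np.eye(n)                 # synthetic SPD "stiffness"
      M = np.eye(n)                               # lumped "mass"
      X0 = np.random.default_rng(2).standard_normal((n, m))   # random start vectors
      freqs, X = lowest_modes(K, M, X0)
      print(freqs)                                # lowest four frequencies (rad/s)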

  12. Parameterized CAD techniques implementation for the fatigue behaviour optimization of a service chamber

    NASA Astrophysics Data System (ADS)

    Sánchez, H. T.; Estrems, M.; Franco, P.; Faura, F.

    2009-11-01

    In recent years, the market of heat exchangers is increasingly demanding new products in short cycle time, which means that both the design and manufacturing stages must be extremely reduced. The design stage can be reduced by means of CAD-based parametric design techniques. The methodology presented in this proceeding is based on the optimized control of geometric parameters of a service chamber of a heat exchanger by means of the Application Programming Interface (API) provided by the Solidworks CAD package. Using this implementation, a set of different design configurations of the service chamber made of stainless steel AISI 316 are studied by means of the FE method. As a result of this study, a set of knowledge rules based on the fatigue behaviour are constructed and integrated into the design optimization process.

  13. Optimization of coal product structure in coal plant design expert system and its computer programming implementation

    SciTech Connect

    Yaqun, H.; Shan, L.; Yali, K.; Maixi, L.

    1999-07-01

    The optimization of coal product structure is a main task in coal preparation flowsheet design. The paper studies the scheme of coal product structure optimization in a coal plant design expert system. By comparing three fitted mathematical models of the raw coal washability curve and six models of the distribution curve, which simulate gravity coal separation, the optimum ones are obtained. Based on these models, taking coal product profit as the objective function and using the method of generalized Lagrange operators to constrain the yield and ash content of the coal product, the optimum flowsheet for coal preparation is obtained with the Zangwill method, which optimizes the coal product structure. This provides an efficient theoretical basis for defining the technical plan in coal preparation plant design. The paper also describes the programming and implementation of coal product structure optimization in the coal plant design expert system using object-oriented programming. The overall structure of the expert system, its knowledge representation, explanation and reasoning mechanisms, and knowledge learning mechanism are also described.
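
    As an illustration of the constrained-profit formulation described above, the sketch below maximizes a toy coal-product profit subject to yield and ash-content limits. The washability and distribution-curve models are replaced by made-up stand-in functions, and SciPy's SLSQP solver stands in for the Lagrange/Zangwill procedure of the paper.

      # Illustrative only: maximize a toy profit over the separation density d,
      # subject to a minimum yield and a maximum ash content.
      import numpy as np
      from scipy.optimize import minimize

      def yield_frac(d):          # hypothetical yield vs. separation density
          return 1.0 - np.exp(-1.5 * (d - 1.3))

      def ash_content(d):         # hypothetical ash (%) vs. separation density
          return 8.0 + 12.0 * (d - 1.3)

      def profit(x):              # revenue from yield, price penalty for ash
          d = x[0]
          return yield_frac(d) * (100.0 - 3.0 * ash_content(d))

      res = minimize(lambda x: -profit(x), x0=[1.7], method="SLSQP",
                     bounds=[(1.3, 2.2)],
                     constraints=[{"type": "ineq", "fun": lambda x: yield_frac(x[0]) - 0.40},   # yield >= 40 %
                                  {"type": "ineq", "fun": lambda x: 14.0 - ash_content(x[0])}]) # ash <= 14 %
      print(res.x, profit(res.x))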

  14. Optimization, tolerance analysis and implementation of a Stokes polarimeter based on the conical refraction phenomenon.

    PubMed

    Peinado, Alba; Lizana, Angel; Turpín, Alejandro; Iemmi, Claudio; Kalkandjiev, Todor K; Mompart, Jordi; Campos, Juan

    2015-03-01

    Recently, we introduced the basic concepts behind a new polarimeter device based on conical refraction (CR), which presents several appealing features compared to standard polarimeters. To name some of them, CR polarimeters retrieve the polarization state of an input light beam with a snapshot measurement, allow for substantially enhancing the data redundancy without increasing the measuring time, and avoid instrumental errors owing to rotating elements or phase-to-voltage calibration typical of dynamic devices. In this article, we present a comprehensive study of the optimization, robustness and parameter tolerances of CR-based polarimeters. In addition, a particular CR-based polarimetric architecture is experimentally implemented, and some concerns and recommendations are provided. Finally, the implemented polarimeter is experimentally tested by measuring different states of polarization, including fully and partially polarized light.

  15. An optimal controller for an electric ventricular-assist device: theory, implementation, and testing

    NASA Technical Reports Server (NTRS)

    Klute, G. K.; Tasch, U.; Geselowitz, D. B.

    1992-01-01

    This paper addresses the development and testing of an optimal position feedback controller for the Penn State electric ventricular-assist device (EVAD). The control law is designed to minimize the expected value of the EVAD's power consumption for a targeted patient population. The closed-loop control law is implemented on an Intel 8096 microprocessor and in vitro test runs show that this controller improves the EVAD's efficiency by 15-21%, when compared with the performance of the currently used feedforward control scheme.

  16. Reduction Optimal Trinomials for Efficient Software Implementation of the ηT Pairing

    NASA Astrophysics Data System (ADS)

    Nakajima, Toshiya; Izu, Tetsuya; Takagi, Tsuyoshi

    The ηT pairing for supersingular elliptic curves over GF(3^m) has attracted attention because of its computational efficiency. Since most of the computation in the ηT pairing consists of GF(3^m) multiplications, it is important to improve the speed of the multiplication when implementing the ηT pairing. In this paper we investigate software implementation of GF(3^m) multiplication and propose using irreducible trinomials x^m+ax^k+b over GF(3) such that k is a multiple of w, where w is the bit length of a word on the targeted CPU. We call these trinomials “reduction optimal trinomials (ROTs).” ROTs actually exist for several m's and for the typical values w=16 and 32. We list them for extension degrees m=97, 167, 193, 239, 317, and 487. These m's are derived from security considerations. Using ROTs, we are able to implement more efficient modulo operations (reductions) for GF(3^m) multiplication than in cases in which other types of irreducible trinomials are used (e.g., trinomials with a minimum k for each m). The reason is that for ROTs, the number of shift operations on multiple-precision data is reduced to less than half compared with cases using other trinomials. Our implementation results show that reduction routines specialized for ROTs are 20-30% faster on a 32-bit CPU and approximately 40% faster on a 16-bit CPU compared with programs using irreducible trinomials with general k.
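
    The reduction step itself can be sketched as follows for a trinomial f(x) = x^m + a*x^k + b over GF(3); when k is a multiple of the word size, the two shifted additions below become whole-word moves, which is the saving ROTs exploit. The parameters in the example are small and hypothetical (not one of the paper's m's), and the word-level optimization itself is not modeled.

      # Reduce a GF(3)[x] polynomial modulo f(x) = x^m + a*x^k + b.
      # Coefficient-array version: shows the algebra only, not word-level shifts.
      def reduce_mod_trinomial(coeffs, m, k, a, b):
          """coeffs[i] is the coefficient of x^i, values in {0, 1, 2}."""
          c = list(coeffs) + [0] * max(0, 2 * m - len(coeffs))
          for i in range(len(c) - 1, m - 1, -1):        # eliminate degrees >= m
              if c[i]:
                  t = c[i]
                  c[i] = 0
                  # x^m = -a*x^k - b  (mod f), and -v = 3 - v in GF(3)
                  c[i - m + k] = (c[i - m + k] - a * t) % 3
                  c[i - m]     = (c[i - m] - b * t) % 3
          return c[:m]

      # Example with small, hypothetical parameters: f(x) = x^7 + x^4 + 2
      m, k, a, b = 7, 4, 1, 2
      product = [0] * 13
      product[12] = 1                                    # the polynomial x^12
      print(reduce_mod_trinomial(product, m, k, a, b))   # [0, 0, 2, 0, 0, 1, 1]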

  17. Sequential Principal Component Analysis - An Optimal and Hardware-Implementable Transform for Image Compression

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.

    2009-01-01

    This paper presents the JPL-developed Sequential Principal Component Analysis (SPCA) algorithm for feature extraction / image compression, based on a "dominant-term selection" unsupervised learning technique that requires an order of magnitude less computation and has a simpler architecture than state-of-the-art gradient-descent techniques. The algorithm is inherently amenable to a compact, low-power, high-speed VLSI hardware embodiment. The paper compares the lossless image compression performance of JPL's SPCA algorithm with the state-of-the-art JPEG2000, widely used due to its simplified hardware implementability. JPEG2000 is not an optimal data compression technique because its transform characteristics are fixed regardless of the data structure. On the other hand, the conventional Principal Component Analysis transform (PCA-transform) is a data-dependent transform. However, it is not easy to implement PCA in compact VLSI hardware because of its high computational and architectural complexity. In contrast, JPL's "dominant-term selection" SPCA algorithm allows, for the first time, a compact, low-power hardware implementation of the powerful PCA algorithm. This paper presents a direct comparison of JPL's SPCA versus JPEG2000, incorporating Huffman and arithmetic coding for completeness of the data compression operation. The simulation results show that JPL's SPCA algorithm is superior, as an optimal data-dependent transform, to the state-of-the-art JPEG2000. When implemented in hardware, this technique is projected to be ideally suited to future NASA missions for autonomous on-board image data processing to improve communication bandwidth.
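
    For context, the sketch below applies a conventional data-dependent PCA transform to 8x8 image blocks and keeps only the dominant components; it illustrates the kind of transform that SPCA approximates, but the JPL "dominant-term selection" learning rule itself is not reproduced here.

      # Conventional block PCA transform (illustration only, not the SPCA algorithm).
      import numpy as np

      rng = np.random.default_rng(0)
      image = rng.integers(0, 256, size=(128, 128)).astype(float)

      # collect 8x8 blocks as row vectors
      blocks = np.array([image[r:r + 8, c:c + 8].ravel()
                         for r in range(0, 128, 8) for c in range(0, 128, 8)])
      mean = blocks.mean(axis=0)
      centered = blocks - mean

      # PCA basis from the block covariance (data-dependent, unlike a fixed wavelet basis)
      cov = centered.T @ centered / len(centered)
      eigvals, eigvecs = np.linalg.eigh(cov)
      basis = eigvecs[:, ::-1]                      # descending variance order

      coeffs = centered @ basis                     # transform coefficients
      kept = 16                                     # keep the 16 dominant terms
      approx = coeffs[:, :kept] @ basis[:, :kept].T + mean
      print(float(np.abs(blocks - approx).mean()))  # mean reconstruction error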

  18. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  19. Optimizing the implementation of the target motion sampling temperature treatment technique - How fast can it get?

    SciTech Connect

    Tuomas, V.; Jaakko, L.

    2013-07-01

    This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections at target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of the TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In a HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors even as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
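
    The rejection-sampling idea can be illustrated with the toy snippet below, which uses a made-up cross-section shape: a target energy is sampled, the 0 K cross section is evaluated at the relative energy, and the collision is accepted with probability sigma(E_rel)/sigma_majorant. Tightening the majorant, as discussed above, raises the acceptance rate and lowers the overhead factor; none of the constants correspond to Serpent 2, and the kinematics are deliberately simplified.

      # Toy rejection sampling with a majorant cross section (made-up physics).
      import math, random

      def sigma_0K(E):                      # hypothetical 0 K total cross section
          return 5.0 + 50.0 / math.sqrt(E)  # simple 1/v-like shape

      def sample_collision(E_neutron, kT, sigma_maj, rng):
          while True:
              # crude thermal target energy sample (magnitude only)
              E_target = rng.expovariate(1.0 / kT)
              E_rel = max(abs(E_neutron - E_target), 1e-3)   # toy "relative" energy
              # accept with probability sigma(E_rel) / majorant
              if rng.random() < sigma_0K(E_rel) / sigma_maj:
                  return E_rel

      rng = random.Random(1)
      sigma_maj = sigma_0K(1e-3)            # conservative majorant over allowed E_rel
      samples = [sample_collision(1.0, 0.025, sigma_maj, rng) for _ in range(5)]
      print(samples)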

  20. Formulation for a practical implementation of electromagnetic induction coils optimized using stream functions

    NASA Astrophysics Data System (ADS)

    Reed, Mark A.; Scott, Waymond R.

    2016-05-01

    Continuous-wave (CW) electromagnetic induction (EMI) systems used for subsurface sensing typically employ separate transmit and receive coils placed in close proximity. The closeness of the coils is desirable for both packaging and object pinpointing; however, the coils must have as little mutual coupling as possible. Otherwise, the signal from the transmit coil will couple into the receive coil, making target detection difficult or impossible. Additionally, mineralized soil can be a significant problem when attempting to detect small amounts of metal because the soil effectively couples the transmit and receive coils. Optimization of wire coils to improve their performance is difficult but can be made possible through a stream-function representation and the use of partially convex forms. Examples of such methods have been presented previously, but these methods did not account for certain practical issues with coil implementation. In this paper, the power constraint introduced into the optimization routine is modified so that it does not penalize areas of high current. It does this by representing the coils as plates carrying surface currents and adjusting the sheet resistance to be inversely proportional to the current, which is a good approximation for a wire-wound coil. Example coils are then optimized for minimum mutual coupling, maximum sensitivity, and minimum soil response at a given height with both the earlier, constant sheet resistance and the new representation. The two sets of coils are compared both to each other and other common coil types to show the method's viability.

  1. Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine

    NASA Astrophysics Data System (ADS)

    Harris, E.

    Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine E. N. Harris, Lockheed Martin Space Systems, Denver, CO and George W. Morgenthaler, U. of Colorado at Boulder History: A team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company has developed innovative virtual prototyping simulation solutions for ground processing and real-time visualization of design and planning of aerospace missions over the past 6 years. At the University of Colorado, a team of 3-D visualization experts is developing the science of 3-D visualization and immersive visualization at the newly founded BP Center for Visualization, which began operations in October, 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from Today's 3-D Engineering Simulations to Tomorrow's 3-D IVE Mission Planning, Simulation and Optimization Techniques: 3-D IVEs and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the IVE at the BP Center using virtual simulation for clearance checks, collision detection, ergonomics and reachability analyses to develop fabrication and processing flows for spacecraft and launch vehicle ground support operations and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations and Mars Polar Lander visual work instructions. Long-range goals include future international human and robotic space exploration missions such as the development of a Mars

  2. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
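
    A NumPy model of the thread-decomposition idea is sketched below for a decimate-by-M FIR filter: each decimated output is produced by its own finite convolution "thread" and matches the result of filtering at the full rate and keeping every M-th sample. This models only the arithmetic, not the FPGA mapping or tap-allocation strategy of the paper.

      # Thread decomposition of a decimating FIR: one independent dot product
      # per decimated output, equivalent to filter-then-downsample.
      import numpy as np

      rng = np.random.default_rng(3)
      h = rng.standard_normal(16)            # FIR taps
      x = rng.standard_normal(256)           # input signal
      M = 4                                  # decimation factor

      # reference: filter at full rate, then downsample
      full_rate = np.convolve(x, h)[: len(x)]
      reference = full_rate[::M]

      def output_thread(n):                  # n-th decimated output sample
          taps = []
          for k in range(len(h)):
              idx = n * M - k
              taps.append(x[idx] if 0 <= idx < len(x) else 0.0)
          return float(np.dot(h, taps))

      threads = np.array([output_thread(n) for n in range(len(reference))])
      print(np.allclose(threads, reference))  # True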

  3. Optical Implementation of the Optimal Universal and Phase-Covariant Quantum Cloning Machines

    NASA Astrophysics Data System (ADS)

    Ye, Liu; Song, Xue-Ke; Yang, Jie; Yang, Qun; Ma, Yang-Cheng

    Quantum cloning relates to the security of quantum computation and quantum communication. In this paper, firstly we propose a feasible unified scheme to implement optimal 1 → 2 universal, 1 → 2 asymmetric and symmetric phase-covariant cloning, and 1 → 2 economical phase-covariant quantum cloning machines only via a beam splitter. Then 1 → 3 economical phase-covariant quantum cloning machines also can be realized by adding another beam splitter in context of linear optics. The scheme is based on the interference of two photons on a beam splitter with different splitting ratios for vertical and horizontal polarization components. It is shown that under certain condition, the scheme is feasible by current experimental technology.

  4. Learning from colleagues about healthcare IT implementation and optimization: lessons from a medical informatics listserv.

    PubMed

    Adams, Martha B; Kaplan, Bonnie; Sobko, Heather J; Kuziemsky, Craig; Ravvaz, Kourosh; Koppel, Ross

    2015-01-01

    Communication among medical informatics communities can suffer from fragmentation across multiple forums, disciplines, and subdisciplines; variation among journals, vocabularies and ontologies; cost and distance. Online communities help overcome these obstacles, but may become onerous when listservs are flooded with cross-postings. Rich and relevant content may be ignored. The American Medical Informatics Association successfully addressed these problems when it created a virtual meeting place by merging the membership of four working groups into a single listserv known as the "Implementation and Optimization Forum." A communication explosion ensued, with thousands of interchanges, hundreds of topics, commentaries from "notables," neophytes, and students--many from different disciplines, countries, traditions. We discuss the listserv's creation, illustrate its benefits, and examine its lessons for others. We use examples from the lively, creative, deep, and occasionally conflicting discussions of user experiences--interchanges about medication reconciliation, open source strategies, nursing, ethics, system integration, and patient photos in the EMR--all enhancing knowledge, collegiality, and collaboration.

  5. Welding and joining: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation is presented of NASA-developed technology in welding and joining. Topics discussed include welding equipment, techniques in welding, general bonding, joining techniques, and clamps and holding fixtures.

  6. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  7. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.

    2000-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  8. A Concept and Implementation of Optimized Operations of Airport Surface Traffic

    NASA Technical Reports Server (NTRS)

    Jung, Yoon C.; Hoang, Ty; Montoya, Justin; Gupta, Gautam; Malik, Waqar; Tobias, Leonard

    2010-01-01

    This paper presents a new concept of optimized surface operations at busy airports to improve the efficiency of taxi operations, as well as reduce environmental impacts. The suggested system architecture consists of the integration of two decoupled optimization algorithms. The Spot Release Planner provides sequence and timing advisories to tower controllers for releasing departure aircraft into the movement area to reduce taxi delay while achieving maximum throughput. The Runway Scheduler provides take-off sequence and arrival runway crossing sequence to the controllers to maximize the runway usage. The description of a prototype implementation of this integrated decision support tool for the airport control tower controllers is also provided. The prototype decision support tool was evaluated through a human-in-the-loop experiment, where both the Spot Release Planner and Runway Scheduler provided advisories to the Ground and Local Controllers. Initial results indicate the average number of stops made by each departure aircraft in the departure runway queue was reduced by more than half when the controllers were using the advisories, which resulted in reduced taxi times in the departure queue.

  9. Final report: Compiled MPI. Cost-Effective Exascale Application Development

    SciTech Connect

    Gropp, William Douglas

    2015-12-21

    This is the final report on Compiled MPI: Cost-Effective Exascale Application Development, and summarizes the results under this project. The project investigated runtime environments that improve the performance of MPI (Message-Passing Interface) programs; work at Illinois in the last period of this project looked at optimizing data accesses expressed with MPI datatypes.

  10. Implementation and optimization of ultrasound signal processing algorithms on mobile GPU

    NASA Astrophysics Data System (ADS)

    Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong

    2014-03-01

    A general-purpose graphics processing unit (GPGPU) has been used to improve computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to handle 3D games and videos at high frame rates on Full HD or HD resolution displays. This paper proposes a method to implement ultrasound signal processing on a mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) with programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, the shader design was optimized and the load was shared between the vertex and fragment shaders. The beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance is evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing the PSNR against a MATLAB gold standard with the same signal path. The CNR was also analyzed to verify the method. From the evaluations, the proposed mobile GPU-based processing method shows no significant difference from the MATLAB processing (i.e., PSNR<52.51 dB). Comparable CNR results were obtained from both processing methods (i.e., 11.31). With the mobile GPU implementation, a frame rate of 57.6 Hz was achieved. The total execution time was 17.4 ms, which was shorter than the acquisition time (i.e., 34.4 ms). These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on a smartphone.

  11. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, how this approach can be used to derive models of different precision and abstraction is illustrated, and models are tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  12. TUNE: Compiler-Directed Automatic Performance Tuning

    SciTech Connect

    Hall, Mary

    2014-09-18

    This project has developed compiler-directed performance tuning technology targeting the Cray XT4 Jaguar system at Oak Ridge, which has multi-core Opteron nodes with SSE-3 SIMD extensions, and the Cray XE6 Hopper system at NERSC. To achieve this goal, we combined compiler technology for model-guided empirical optimization for memory hierarchies with SIMD code generation, which have been developed by the PIs over the past several years. We examined DOE Office of Science applications to identify performance bottlenecks and apply our system to computational kernels that operate on dense arrays. Our goal for this performance-tuning technology has been to yield hand-tuned levels of performance on DOE Office of Science computational kernels, while allowing application programmers to specify their computations at a high level without requiring manual optimization. Overall, we aim to make our technology for SIMD code generation and memory hierarchy optimization a crucial component of high-productivity Petaflops computing through a close collaboration with the scientists in national laboratories.

  13. Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader

    2004-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing technologies (RC) such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low-cost compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity for the NASA Solar Viewing Interferometer Prototype (SVIP), using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2, and N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capability for future space exploration missions based on on-board image processing
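
    As a functional reference for the kind of kernel being offloaded, the snippet below runs a 2-D FFT (FFT2), computes a centered magnitude spectrum of the kind used for fringe analysis, and applies the inverse transform on an image-sized array with NumPy; it is only a software baseline, not the DSP/FPGA implementation.

      # Software baseline for the FFT2 kernel discussed above.
      import time
      import numpy as np

      image = np.random.default_rng(0).standard_normal((1024, 1024))

      t0 = time.perf_counter()
      spectrum = np.fft.fft2(image)                    # forward 2-D FFT (FFT2)
      magnitude = np.abs(np.fft.fftshift(spectrum))    # centered magnitude spectrum
      recovered = np.fft.ifft2(spectrum).real          # inverse transform
      t1 = time.perf_counter()

      print(np.allclose(recovered, image), magnitude.max(), f"{(t1 - t0) * 1e3:.1f} ms")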

  14. Weighted Implementation of Suboptimal Paths (WISP): An Optimized Algorithm and Tool for Dynamical Network Analysis.

    PubMed

    Van Wart, Adam T; Durrant, Jacob; Votapka, Lane; Amaro, Rommie E

    2014-02-11

    Allostery can occur by way of subtle cooperation among protein residues (e.g., amino acids) even in the absence of large conformational shifts. Dynamical network analysis has been used to model this cooperation, helping to computationally explain how binding to an allosteric site can impact the behavior of a primary site many ångstroms away. Traditionally, computational efforts have focused on the most optimal path of correlated motions leading from the allosteric to the primary active site. We present a program called Weighted Implementation of Suboptimal Paths (WISP) capable of rapidly identifying additional suboptimal pathways that may also play important roles in the transmission of allosteric signals. Aside from providing signal redundancy, suboptimal paths traverse residues that, if disrupted through pharmacological or mutational means, could modulate the allosteric regulation of important drug targets. To demonstrate the utility of our program, we present a case study describing the allostery of HisH-HisF, an amidotransferase from Thermotoga maritima. WISP and its VMD-based graphical user interface (GUI) can be downloaded from http://nbcr.ucsd.edu/wisp.
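
    A generic illustration of enumerating optimal and suboptimal paths in a weighted residue network is sketched below with NetworkX. Edge weights of the form -log(|correlation|) and the toy correlation values are assumptions made for this example; WISP's own data structures and algorithm are described in the paper and not reproduced here.

      # Enumerate the optimal path and the first few suboptimal paths in a toy
      # weighted residue network.
      import math
      import itertools
      import networkx as nx

      # toy correlation values between five "residues"
      correlations = {(0, 1): 0.9, (1, 2): 0.8, (0, 3): 0.6,
                      (3, 2): 0.7, (1, 3): 0.5, (2, 4): 0.85}

      G = nx.Graph()
      for (i, j), c in correlations.items():
          G.add_edge(i, j, weight=-math.log(abs(c)))   # strong correlation -> short edge

      source, sink = 0, 4
      paths = nx.shortest_simple_paths(G, source, sink, weight="weight")  # best first
      for path in itertools.islice(paths, 3):          # optimal path + two suboptimal ones
          length = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))
          print(path, round(length, 3))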

  15. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization.

    PubMed

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
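
    As a stand-in for the iterative deconvolution described above, the sketch below runs Richardson-Lucy updates to recover a 1-D signal blurred by a known kernel; the ordered-subset scheme, the 3-D PET system matrix, and the measured motion data of the paper are not reproduced.

      # Richardson-Lucy iterative deconvolution of a 1-D signal with a known kernel.
      import numpy as np

      truth = np.zeros(200)
      truth[60:80] = 1.0                            # simple "activity" distribution
      kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
      kernel /= kernel.sum()                        # known blur kernel

      blurred = np.convolve(truth, kernel, mode="same") + 1e-3

      estimate = np.full_like(truth, blurred.mean())
      for _ in range(50):                           # Richardson-Lucy iterations
          predicted = np.convolve(estimate, kernel, mode="same")
          ratio = blurred / np.maximum(predicted, 1e-12)
          estimate *= np.convolve(ratio, kernel[::-1], mode="same")

      print(float(np.abs(estimate - truth).mean()))  # mean absolute error of the estimate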

  16. Optimizing Societal Benefit using a Systems Engineering Approach for Implementation of the GEOSS Space Segment

    NASA Technical Reports Server (NTRS)

    Killough, Brian D., Jr.; Sandford, Stephen P.; Cecil, L DeWayne; Stover, Shelley; Keith, Kim

    2008-01-01

    The Group on Earth Observations (GEO) is driving a paradigm shift in the Earth Observation community, refocusing Earth observing systems on GEO Societal Benefit Areas (SBA). Over the short history of space-based Earth observing systems most decisions have been made based on improving our scientific understanding of the Earth with the implicit assumption that this would serve society well in the long run. The space agencies responsible for developing the satellites used for global Earth observations are typically science driven. The innovation of GEO is the call for investments by space agencies to be driven by global societal needs. This paper presents the preliminary findings of an analysis focused on the observational requirements of the GEO Energy SBA. The analysis was performed by the Committee on Earth Observation Satellites (CEOS) Systems Engineering Office (SEO) which is responsible for facilitating the development of implementation plans that have the maximum potential for success while optimizing the benefit to society. The analysis utilizes a new taxonomy for organizing requirements, assesses the current gaps in spacebased measurements and missions, assesses the impact of the current and planned space-based missions, and presents a set of recommendations.

  17. Overcoming obstacles in the implementation of factorial design for assay optimization.

    PubMed

    Shaw, Robert; Fitzek, Martina; Mouchet, Elizabeth; Walker, Graeme; Jarvis, Philip

    2015-03-01

    Factorial experimental design (FED) is a powerful approach for efficient optimization of robust in vitro assays-it enables cost and time savings while also improving the quality of assays. Although it is a well-known technique, there can be considerable barriers to overcome to fully exploit it within an industrial or academic organization. The article describes a tactical roll out of FED to a scientist group through: training which demystifies the technical components and concentrates on principles and examples; a user-friendly Excel-based tool for deconvoluting plate data; output which focuses on graphical display of data over complex statistics. The use of FED historically has generally been in conjunction with automated technology; however we have demonstrated a much broader impact of FED on the assay development process. The standardized approaches we have rolled out have helped to integrate FED as a fundamental part of assay development best practice because it can be used independently of the automation and vendor-supplied software. The techniques are applicable to different types of assay, both enzyme and cell, and can be used flexibly in manual and automated processes. This article describes the application of FED for a cellular assay. The challenges of selling FED concepts and rolling out to a wide bioscience community together with recommendations for good working practices and effective implementation are discussed. The accessible nature of these approaches means FED can be used by industrial as well as academic users.
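
    A minimal two-level full-factorial layout with main-effect estimates, the kind of design-of-experiments calculation that such a roll-out supports, is sketched below; the factor names, levels, and response model are hypothetical.

      # Two-level full factorial design (2^3 runs) with simple main-effect estimates.
      import itertools
      import random

      factors = {"serum_pct": (1, 10), "cell_density": (2000, 8000), "incubation_h": (24, 48)}
      runs = [dict(zip(factors, combo))
              for combo in itertools.product(*factors.values())]   # 2^3 = 8 runs

      random.seed(0)
      def measure(run):            # stand-in assay response with noise
          return (0.5 * run["serum_pct"] + 0.001 * run["cell_density"]
                  + 0.05 * run["incubation_h"] + random.gauss(0, 0.2))

      results = [(run, measure(run)) for run in runs]

      # main effect = mean(high-level responses) - mean(low-level responses)
      for name, (low, high) in factors.items():
          hi = [y for run, y in results if run[name] == high]
          lo = [y for run, y in results if run[name] == low]
          print(name, round(sum(hi) / len(hi) - sum(lo) / len(lo), 2))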

  18. Learning from colleagues about healthcare IT implementation and optimization: lessons from a medical informatics listserv.

    PubMed

    Adams, Martha B; Kaplan, Bonnie; Sobko, Heather J; Kuziemsky, Craig; Ravvaz, Kourosh; Koppel, Ross

    2015-01-01

    Communication among medical informatics communities can suffer from fragmentation across multiple forums, disciplines, and subdisciplines; variation among journals, vocabularies and ontologies; cost and distance. Online communities help overcome these obstacles, but may become onerous when listservs are flooded with cross-postings. Rich and relevant content may be ignored. The American Medical Informatics Association successfully addressed these problems when it created a virtual meeting place by merging the membership of four working groups into a single listserv known as the "Implementation and Optimization Forum." A communication explosion ensued, with thousands of interchanges, hundreds of topics, commentaries from "notables," neophytes, and students--many from different disciplines, countries, traditions. We discuss the listserv's creation, illustrate its benefits, and examine its lessons for others. We use examples from the lively, creative, deep, and occasionally conflicting discussions of user experiences--interchanges about medication reconciliation, open source strategies, nursing, ethics, system integration, and patient photos in the EMR--all enhancing knowledge, collegiality, and collaboration. PMID:25486893

  19. Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers

    NASA Technical Reports Server (NTRS)

    Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj

    1995-01-01

    The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler based on the PTD representation allowed symbolic array sizes, affine loop bounds and array subscripts, and variable number of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.
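
    For reference, the small model below shows the block-cyclic mapping that such a compiler must manipulate symbolically: which processor owns a given global array index and what the corresponding local index is. It illustrates the distribution itself, not PARADIGM's code.

      # Block-cyclic ownership and local-index mapping for a 1-D distribution.
      def owner_and_local(global_idx, block_size, num_procs):
          block = global_idx // block_size
          proc = block % num_procs                 # blocks dealt round-robin
          local_block = block // num_procs         # how many of this proc's blocks precede it
          local_idx = local_block * block_size + global_idx % block_size
          return proc, local_idx

      # distribute 16 elements with block size 2 over 3 processors
      for g in range(16):
          print(g, owner_and_local(g, block_size=2, num_procs=3))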

  20. Optimizing revenue cycle performance before, during, and after an EHR implementation.

    PubMed

    Schuler, Margaret; Berkebile, Jane; Vallozzi, Amanda

    2016-06-01

    An electronic health record implementation brings risks of adverse revenue cycle activity. Hospitals and health systems can mitigate that risk by taking a proactive, three-phase approach: identify potential issues prior to implementation; create teams to oversee operations during implementation; and hold regular meetings after implementation to ensure the system is running smoothly. PMID:27451570

  1. Scheme for the implementation of 1 → 3 optimal phase-covariant quantum cloning in ion-trap systems

    NASA Astrophysics Data System (ADS)

    Yang, Rong-Can; Li, Hong-Cai; Lin, Xiu; Huang, Zhi-Ping; Xie, Hong

    2008-03-01

    This paper proposes a scheme for the implementation of 1 → 3 optimal phase-covariant quantum cloning with trapped ions. In the present protocol, the required time for the whole procedure is short due to the resonant interaction, which is important in view of decoherence. Furthermore, the scheme is feasible based on current technologies.

  2. Scheme for Implementation of Ancillary-Free 1 → 3 Optimal Phase-Covariant Quantum Cloning with Trapped Ions

    NASA Astrophysics Data System (ADS)

    Yang, Rong-Can; Li, Hong-Cai; Lin, Xiu; Huang, Zhi-Ping; Xie, Hong

    2008-06-01

    We propose a simple scheme for the implementation of the ancillary-free 1 → 3 optimal phase-covariant quantum cloning for x-y equatorial qubits in ion-trap system. In the scheme, the vibrational mode is only virtually excited, which is very important in view of decoherence. The present proposal can be realized based on current available technologies.

  3. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing

    SciTech Connect

    Amarasinghe, Saman

    2015-03-27

    This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.

  4. Cables and connectors: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A technological compilation on devices and techniques for various types of electrical cables and connections is presented. Data are reported under three sections: flat conductor cable technology, newly developed electrical connectors, and miscellaneous articles and information on cables and connector techniques.

  5. 1988 Bulletin compilation and index

    SciTech Connect

    1989-02-01

    This document is published to provide current information about the national program for managing spent fuel and high-level radioactive waste. This document is a compilation of issues from the 1988 calendar year. A table of contents and one index have been provided to assist in finding information.

  6. Compiler validates units and dimensions

    NASA Technical Reports Server (NTRS)

    Levine, F. E.

    1980-01-01

    Software added to compiler for automated test system for Space Shuttle decreases computer run errors by providing offline validation of engineering units used system command programs. Validation procedures are general, though originally written for GOAL, a free-form language that accepts "English-like" statements, and may be adapted to other programming languages.

  7. Yes! An object-oriented compiler compiler (YOOCC)

    SciTech Connect

    Avotins, J.; Mingins, C.; Schmidt, H.

    1995-12-31

    Grammar-based processor generation is one of the most widely studied areas in language processor construction. However, there have been very few approaches to date that reconcile object-oriented principles, processor generation, and an object-oriented language. Pertinent here also is that developing a processor using the Eiffel Parse libraries currently requires far too much time to be expended on tasks that can be automated. For these reasons, we have developed YOOCC (Yes! an Object-Oriented Compiler Compiler), which produces a processor framework from a grammar using an enhanced version of the Eiffel Parse libraries, incorporating the ideas hypothesized by Meyer, and Grape and Walden, as well as many others. Various essential changes have been made to the Eiffel Parse libraries. Examples are presented to illustrate the development of a processor using YOOCC, and it is concluded that the Eiffel Parse libraries are now not only an intelligent, but also a productive option for processor construction.

  8. Pragmatic Randomized Optimal Platelet and Plasma Ratios (PROPPR) Trial: Design, rationale and implementation

    PubMed Central

    Baraniuk, Sarah; Tilley, Barbara C.; del Junco, Deborah J.; Fox, Erin E.; van Belle, Gerald; Wade, Charles E.; Podbielski, Jeanette M.; Beeler, Angela M.; Hess, John R.; Bulger, Eileen M.; Schreiber, Martin A.; Inaba, Kenji; Fabian, Timothy C.; Kerby, Jeffrey D.; Cohen, Mitchell J.; Miller, Christopher N.; Rizoli, Sandro; Scalea, Thomas M.; O’Keeffe, Terence; Brasel, Karen J.; Cotton, Bryan A.; Muskat, Peter; Holcomb, John B.

    2014-01-01

    Background Forty percent of in-hospital deaths among injured patients involve massive truncal hemorrhage. These deaths may be prevented with rapid hemorrhage control and improved resuscitation techniques. The Pragmatic Randomized Optimal Platelet and Plasma Ratios (PROPPR) Trial was designed to determine if there is a difference in mortality between subjects who received different ratios of FDA approved blood products. This report describes the design and implementation of PROPPR. Study Design PROPPR was designed as a randomized, two-group, Phase III trial conducted in subjects with the highest level of trauma activation and predicted to have a massive transfusion. Subjects at 12 North American level 1 trauma centers were randomized into one of two standard transfusion ratio interventions: 1:1:1 or 1:1:2, (plasma, platelets, and red blood cells). Clinical data and serial blood samples were collected under Exception from Informed Consent (EFIC) regulations. Co-primary mortality endpoints of 24 hours and 30 days were evaluated. Results Between August 2012 and December 2013, 680 patients were randomized. The overall median time from admission to randomization was 26 minutes. PROPPR enrolled at higher than expected rates with fewer than expected protocol deviations. Conclusion PROPPR is the largest randomized study to enroll severely bleeding patients. This study showed that rapidly enrolling and successfully providing randomized blood products to severely injured patients in an EFIC study is feasible. PROPPR was able to achieve these goals by utilizing a collaborative structure and developing successful procedures and design elements that can be part of future trauma studies. PMID:24996573

  9. Optimizing the Physical Implementation of an Eddy-covariance System to Minimize Flow Distortion

    NASA Astrophysics Data System (ADS)

    Durden, D.; Zulueta, R. C.; Durden, N. P.; Metzger, S.; Luo, H.; Duvall, B.

    2015-12-01

    The eddy-covariance technique is widely applied to observe the exchange of energy and scalars between the earth's surface and its atmosphere. In practice, fast (≥10 Hz) sonic anemometry and enclosed infrared gas spectroscopy are used to determine fluctuations in the 3-D wind vector and trace gas concentrations, respectively. Here, two contradicting requirements need to be fulfilled: (i) the sonic anemometer and trace gas analyzer should sample the same air volume, while (ii) the presence of the gas analyzer should not affect the wind field measured by the 3-D sonic anemometer. To determine the optimal positioning of these instruments with respect to each other, a trade-off study was performed. Theoretical formulations were used to determine a range of positions between the sonic anemometer and the gas analyzer that minimize the sum of (i) decorrelation error and (ii) wind blocking error. Subsequently, the blocking error induced by the presence of the gas sampling system was experimentally tested for a range of wind directions to verify the model-predicted placement: In a controlled environment the sonic anemometer was placed in the directed flow from a fan outfitted with a large shroud, with and without the presence of the enclosed gas analyzer and its sampling system. Blocking errors were enhanced by up to 10% for wind directions deviating ≥130° from frontal, when the flow was coming from the side where the enclosed gas analyzer was mounted. Consequently, we suggest a lateral position of the enclosed gas analyzer towards the aerodynamic wake of the tower, as data from this direction is likely affected by tower-induced flow distortion already. Ultimately, this physical implementation of the sonic anemometer and enclosed gas analyzer resulted in decorrelation and blocking errors ≤5% for ≥70% of all wind directions. These findings informed the design of the National Ecological Observatory Network's (NEON) eddy-covariance system, which is currently being

  10. Obtaining correct compile results by absorbing mismatches between data types representations

    DOEpatents

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2016-10-04

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
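
    The sketch below illustrates, in Python and with invented node layouts, the mechanism the claim describes: converting nodes via a data-type conversion table and falling back to a special error node that stores the offending token so that unparsing can emit the original first-language source.

      # Illustrative only -- the node representation, table contents, and function
      # names are assumptions, not the patent's implementation.
      TYPE_TABLE = {"Int": "int32", "Float": "float64"}    # lang-A type -> lang-B type

      class BNode:
          def __init__(self, kind, value=None, children=()):
              self.kind, self.value, self.children = kind, value, list(children)

      def convert(a_node):
          kind, value, children = a_node               # lang-A node: (kind, value, children)
          if kind == "VarDecl":
              b_type = TYPE_TABLE.get(value)
              if b_type is None:                       # conversion (compilation) error
                  return BNode("ErrorNode", value=value)   # store the error token
              return BNode("VarDecl", value=b_type,
                           children=[convert(c) for c in children])
          return BNode(kind, value, [convert(c) for c in children])

      def unparse(node):
          if node.kind == "ErrorNode":
              return node.value                        # emit original lang-A source text
          return "%s(%s)" % (node.kind, node.value)

      print(unparse(convert(("VarDecl", "Int", []))))      # VarDecl(int32)
      print(unparse(convert(("VarDecl", "Decimal", []))))  # Decimal  (error token)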

  11. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.

    1993-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.

  12. A robust scheme for implementing optimal economical phase-covariant quantum cloning with quantum-dot spins in optical microcavities

    NASA Astrophysics Data System (ADS)

    Jin, Zhao; Ji, Yan-Qiang; Zhu, Ai-Dong; Wang, Hong-Fu; Zhang, Shou

    2014-03-01

    We present a scheme to implement an optimal symmetric 1→2 economical phase-covariant quantum cloning machine (EPCCM) with quantum dot (QD) spins in optical microcavities by using a photon as a data bus. The EPCCM deterministically copies the quantum states on the southern or northern Bloch hemisphere from one QD spin to two with an optimal fidelity. By analyzing the fidelity of quantum cloning we confirm that it is robust against the dissipation caused by cavity decay, side leakage, and dipole decay. For a strong coupling regime, the cloning fidelity approaches a stable optimal bound. Even in a weak coupling regime, it can also achieve a satisfactory high value close to the optimal bound.

  13. Cables and connectors: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A compilation is presented that reflects the uses, adaptation, maintenance, and service innovations derived from problem solutions in the space R and D programs, both in-house and by NASA and AEC contractors. Data cover: (1) technology relevant to the employment of flat conductor cables and their adaptation to and within conventional systems, (2) connectors and various adaptations, and (3) maintenance and service technology, and shop hints useful in the installation and care of cables and connectors.

  14. Electronic control circuits: A compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A compilation of technical R and D information on circuits and modular subassemblies is presented as a part of a technology utilization program. Fundamental design principles and applications are given. Electronic control circuits discussed include: anti-noise circuit; ground protection device for bioinstrumentation; temperature compensation for operational amplifiers; hybrid gatling capacitor; automatic signal range control; integrated clock-switching control; and precision voltage tolerance detector.

  15. Utilizing object-oriented design to build advanced optimization strategies with generic implementation

    SciTech Connect

    Eldred, M.S.; Hart, W.E.; Bohnhoff, W.J.; Romero, V.J.; Hutchinson, S.A.; Salinger, A.G.

    1996-08-01

    The benefits of applying optimization to computational models are well known, but the range of their widespread application to date has been limited. This effort attempts to extend the disciplinary areas to which optimization algorithms may be readily applied through the development and application of advanced optimization strategies capable of handling the computational difficulties associated with complex simulation codes. Towards this goal, a flexible software framework is under continued development for the application of optimization techniques to broad classes of engineering applications, including those with high computational expense and nonsmooth, nonconvex design space features. Object-oriented software design with C++ has been employed as a tool in providing a flexible, extensible, and robust multidisciplinary toolkit for use with computationally intensive simulations. In this paper, demonstrations of advanced optimization strategies using the software are presented in the hybridization and parallel processing research areas. Performance of the advanced strategies is compared with a benchmark nonlinear programming optimization.
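
    A minimal Python analogue of the object-oriented idea (the toolkit itself is C++ and far richer): a common Optimizer interface lets different strategies be applied to the same simulation-backed objective without changing the application code. Class and method names here are assumptions for illustration.

      import random
      from abc import ABC, abstractmethod

      class Optimizer(ABC):
          def __init__(self, objective, bounds):
              self.objective, self.bounds = objective, bounds
          @abstractmethod
          def minimize(self, iterations=200): ...

      class RandomSearch(Optimizer):
          def minimize(self, iterations=200):
              best_x, best_f = None, float("inf")
              for _ in range(iterations):
                  x = [random.uniform(lo, hi) for lo, hi in self.bounds]
                  f = self.objective(x)
                  if f < best_f:
                      best_x, best_f = x, f
              return best_x, best_f

      class CoordinateDescent(Optimizer):
          def minimize(self, iterations=200, step=0.1):
              x = [(lo + hi) / 2 for lo, hi in self.bounds]
              f = self.objective(x)
              for _ in range(iterations):
                  for i in range(len(x)):
                      for delta in (step, -step):
                          trial = list(x); trial[i] += delta
                          ft = self.objective(trial)
                          if ft < f:
                              x, f = trial, ft
              return x, f

      sphere = lambda v: sum(t * t for t in v)
      for strategy in (RandomSearch, CoordinateDescent):   # strategies are interchangeable
          print(strategy.__name__, strategy(sphere, [(-5, 5)] * 3).minimize()[1])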

  16. Developing an Onboard Traffic-Aware Flight Optimization Capability for Near-Term Low-Cost Implementation

    NASA Technical Reports Server (NTRS)

    Wing, David J.; Ballin, Mark G.; Koczo, Stefan, Jr.; Vivona, Robert A.; Henderson, Jeffrey M.

    2013-01-01

    The concept of Traffic Aware Strategic Aircrew Requests (TASAR) combines Automatic Dependent Surveillance Broadcast (ADS-B) IN and airborne automation to enable user-optimal in-flight trajectory replanning and to increase the likelihood of Air Traffic Control (ATC) approval for the resulting trajectory change request. TASAR is designed as a near-term application to improve flight efficiency or other user-desired attributes of the flight while not impacting and potentially benefiting ATC. Previous work has indicated the potential for significant benefits for each TASAR-equipped aircraft. This paper will discuss the approach to minimizing TASAR's cost for implementation and accelerating readiness for near-term implementation.

  17. Implementation of reactive and predictive real-time control strategies to optimize dry stormwater detention ponds

    NASA Astrophysics Data System (ADS)

    Gaborit, Étienne; Anctil, François; Vanrolleghem, Peter A.; Pelletier, Geneviève

    2013-04-01

    Dry detention ponds have been widely implemented in the U.S.A. (National Research Council, 1993) and Canada (Shammaa et al. 2002) to mitigate the impacts of urban runoff on receiving water bodies. The aim of such structures is to allow a temporary retention of the water during rainfall events, decreasing runoff velocities and volumes (by infiltration in the pond) as well as providing some water quality improvement from sedimentation. The management of dry detention ponds currently relies on static control through a fixed pre-designed limitation of their maximum outflow (Middleton and Barrett 2008), for example via a proper choice of their outlet pipe diameter. Because these ponds are designed for large storms, typically 1- or 2-hour duration rainfall events with return periods between 5 and 100 years, one of their main drawbacks is that they generally offer almost no retention for smaller rainfall events (Middleton and Barrett 2008), which are by definition much more common. Real-Time Control (RTC) has a high potential for optimizing retention time (Marsalek 2005) because it allows adopting operating strategies that are flexible and hence more suitable to the prevailing fluctuating conditions than static control. For dry ponds, this would basically imply adapting the outlet opening percentage to maximize water retention time, while being able to open it completely for severe storms. This study developed several enhanced RTC scenarios of a dry detention pond located at the outlet of a small urban catchment near Québec City, Canada, following the previous work of Muschalla et al. (2009). The catchment's runoff quantity and TSS concentration were simulated by a SWMM5 model with an improved wash-off formulation. The control procedures rely on rainfall detection and measures of the pond's water height for the reactive schemes, and on rainfall forecasts in addition to these variables for the predictive schemes. The automatic reactive control schemes implemented
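
    For flavor, the fragment below sketches one possible reactive control rule of the kind described; the thresholds, the linear ramp, and the trickle opening are invented for the example and are not the study's controller. The intent is simply to keep the outlet nearly closed to retain small events and open it fully when the measured level signals a severe storm.

      def outlet_opening(water_level_m, rain_detected, h_low=0.3, h_high=1.5):
          """Return the outlet valve opening as a fraction in [0, 1]."""
          if water_level_m >= h_high:              # severe storm: release at full capacity
              return 1.0
          if not rain_detected and water_level_m < h_low:
              return 0.05                          # near-closed: maximize retention/settling
          # ramp linearly between the two thresholds during ordinary events
          return min(1.0, max(0.05, (water_level_m - h_low) / (h_high - h_low)))

      print(outlet_opening(0.8, rain_detected=True))   # partially open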

  18. Linear optical implementation of ancilla-free 1→3 optimal phase covariant quantum cloning machines for the equatorial qubits

    NASA Astrophysics Data System (ADS)

    Zou, Xubo; Mathis, W.

    2005-08-01

    We propose experimental schemes to implement ancilla-free 1→3 optimal phase covariant quantum cloning machines for x-y and x-z equatorial qubits by interfering a polarized photon, which we wish to clone, with different light resources at a six-port symmetric beam splitter. The scheme requires linear optical elements and three-photon coincidence detection, and is feasible with current experimental technology.

  19. Linear optical implementation of ancilla-free 1→3 optimal phase covariant quantum cloning machines for the equatorial qubits

    SciTech Connect

    Zou Xubo; Mathis, W.

    2005-08-15

    We propose experimental schemes to implement ancilla-free 1→3 optimal phase covariant quantum cloning machines for x-y and x-z equatorial qubits by interfering a polarized photon, which we wish to clone, with different light resources at a six-port symmetric beam splitter. The scheme requires linear optical elements and three-photon coincidence detection, and is feasible with current experimental technology.

  20. Optimizing State Policy Implementation: The Case of the Scientific Based Research Components of the NCLB Act

    ERIC Educational Resources Information Center

    Mohammed, Shereeza; Pisapia, John; Walker, David A.

    2009-01-01

    A hypothesized model of state implementation of federal policy was extracted from empirical studies to discover the strategies states can use to gain compliance more cost effectively. Sixteen factors were identified and applied to the implementation of the Scientific Based Research provisions of the No Child Left Behind Act. Data collected from…

  1. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated that the problem dimension and corresponding matrices can become large for practical engineering problems. For large order systems, this algorithm represents a process that is impractical on standard workstations. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
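
    To give a concrete sense of the kind of computation the synthesis reduces to, the snippet below solves a simple Lyapunov-type linear matrix inequality with the cvxpy modeling package; it is not the paper's full-information synthesis LMI, and the system matrix is an arbitrary stable example.

      import numpy as np
      import cvxpy as cp

      A = np.array([[-1.0, 2.0],
                    [ 0.0, -3.0]])
      n = A.shape[0]
      P = cp.Variable((n, n), symmetric=True)
      eps = 1e-6
      constraints = [P >> eps * np.eye(n),                 # P positive definite
                     A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov inequality
      prob = cp.Problem(cp.Minimize(0), constraints)       # pure feasibility problem
      prob.solve()
      print(prob.status, np.round(P.value, 3))

    The cost of such semidefinite programs grows quickly with the matrix dimension, which is consistent with the workstation-versus-supercomputer comparison reported in the abstract.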

  2. Branch recovery with compiler-assisted multiple instruction retry

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Li, C.-C.; Fuchs, W. K.; Hwu, W.-M.

    1992-01-01

    In processing systems where rapid recovery from transient faults is important, schemes for multiple instruction rollback recovery may be appropriate. Multiple instruction retry has been implemented in hardware by researchers and also in mainframe computers. This paper extends compiler-assisted instruction retry to a broad class of code execution failures. Five benchmarks were used to measure the performance penalty of hazard resolution. Results indicate that the enhanced pure software approach can produce performance penalties consistent with existing hardware techniques. A combined compiler/hardware resolution strategy is also described and evaluated. Experimental results indicate a lower performance penalty than with either a totally hardware or totally software approach.

  3. Rooted-tree network for optimal non-local gate implementation

    NASA Astrophysics Data System (ADS)

    Vyas, Nilesh; Saha, Debashis; Panigrahi, Prasanta K.

    2016-09-01

    A general quantum network for implementing non-local control-unitary gates between remote parties at minimal entanglement cost is shown to be a rooted-tree structure. Starting from a five-party scenario, we demonstrate the local implementation of a simultaneous class of control-unitary (Hermitian) and multiparty control-unitary gates in an arbitrary n-party network. Previously established networks turn out to be special cases of this general construct.

  4. Rooted-tree network for optimal non-local gate implementation

    NASA Astrophysics Data System (ADS)

    Vyas, Nilesh; Saha, Debashis; Panigrahi, Prasanta K.

    2016-06-01

    A general quantum network for implementing non-local control-unitary gates between remote parties at minimal entanglement cost is shown to be a rooted-tree structure. Starting from a five-party scenario, we demonstrate the local implementation of a simultaneous class of control-unitary (Hermitian) and multiparty control-unitary gates in an arbitrary n-party network. Previously established networks turn out to be special cases of this general construct.

  5. 14 CFR § 1203.302 - Compilation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 5 2014-01-01 2014-01-01 false Compilation. § 1203.302 Section § 1203.302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302 Compilation. A compilation of items that are...

  6. A new module for constrained multi-fragment geometry optimization in internal coordinates implemented in the MOLCAS package.

    PubMed

    Vysotskiy, Victor P; Boström, Jonas; Veryazov, Valera

    2013-11-15

    A parallel procedure for an effective optimization of the relative position and orientation between two or more fragments has been implemented in the MOLCAS program package. By design, the procedure does not perturb the electronic structure of the system under study. The original composite system is divided into frozen fragments, and the internal coordinates linking those fragments are the only optimized parameters. The procedure is capable of handling fully independent (no border atoms) fragments as well as fragments connected by covalent bonds. In the framework of the procedure, the optimization of the relative position and orientation of the fragments is carried out in internal "Z-matrix" coordinates using numerical derivatives. The total number of required single-point energy evaluations scales with the number of fragments rather than with the total number of atoms in the system. The accuracy and the performance of the procedure have been studied by test calculations for a representative set of two- and three-fragment molecules with artificially distorted structures. The developed approach exhibits robust and smooth convergence to the reference optimal structures. As only a few internal coordinates are varied during the procedure, the proposed constrained fragment geometry optimization can be afforded even for high-level ab initio methods like CCSD(T) and CASPT2. This capability has been demonstrated by applying the method to two larger cases, CCSD(T) and CASPT2 calculations on a positively charged benzene lithium complex and on the oxygen molecule interacting with an iron porphyrin molecule, respectively.
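
    The toy Python sketch below (not MOLCAS, and in 2-D with a stand-in pair potential) shows the central idea: each mobile fragment is rigid, so only the internal coordinates that place it are optimized, and the derivatives are taken numerically, independent of how many atoms each fragment contains.

      import numpy as np
      from scipy.optimize import minimize

      frag_A = np.array([[0.0, 0.0], [1.0, 0.0]])          # frozen fragment
      frag_B_local = np.array([[0.0, 0.0], [0.0, 1.0]])    # mobile fragment, local frame

      def place(params):                                   # 2 translations + 1 rotation
          tx, ty, theta = params
          c, s = np.cos(theta), np.sin(theta)
          R = np.array([[c, -s], [s, c]])
          return frag_B_local @ R.T + np.array([tx, ty])

      def energy(params):                                  # toy Lennard-Jones-like energy
          B = place(params)
          e = 0.0
          for a in frag_A:
              for b in B:
                  r = np.linalg.norm(a - b)
                  e += (1.0 / r) ** 12 - 2.0 * (1.0 / r) ** 6
          return e

      # Only three parameters are varied regardless of fragment size; the gradient
      # is obtained by finite differences, as in the abstract.
      res = minimize(energy, x0=[3.0, 0.5, 0.2], method="BFGS")
      print(res.x, res.fun)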

  7. A new module for constrained multi-fragment geometry optimization in internal coordinates implemented in the MOLCAS package.

    PubMed

    Vysotskiy, Victor P; Boström, Jonas; Veryazov, Valera

    2013-11-15

    A parallel procedure for an effective optimization of the relative position and orientation between two or more fragments has been implemented in the MOLCAS program package. By design, the procedure does not perturb the electronic structure of the system under study. The original composite system is divided into frozen fragments, and the internal coordinates linking those fragments are the only optimized parameters. The procedure is capable of handling fully independent (no border atoms) fragments as well as fragments connected by covalent bonds. In the framework of the procedure, the optimization of the relative position and orientation of the fragments is carried out in internal "Z-matrix" coordinates using numerical derivatives. The total number of required single-point energy evaluations scales with the number of fragments rather than with the total number of atoms in the system. The accuracy and the performance of the procedure have been studied by test calculations for a representative set of two- and three-fragment molecules with artificially distorted structures. The developed approach exhibits robust and smooth convergence to the reference optimal structures. As only a few internal coordinates are varied during the procedure, the proposed constrained fragment geometry optimization can be afforded even for high-level ab initio methods like CCSD(T) and CASPT2. This capability has been demonstrated by applying the method to two larger cases, CCSD(T) and CASPT2 calculations on a positively charged benzene lithium complex and on the oxygen molecule interacting with an iron porphyrin molecule, respectively. PMID:24006272

  8. Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Alewine, Neal Jon

    1993-01-01

    Multiple instruction rollback (MIR) is a technique to provide rapid recovery from transient processor failures and was implemented in hardware by researchers and also in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations were conducted which indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.

  9. Optimizing Blocking and Nonblocking Reduction Operations for Multicore Systems: Hierarchical Design and Implementation

    SciTech Connect

    Gorentla Venkata, Manjunath; Shamis, Pavel; Graham, Richard L; Ladd, Joshua S; Sampath, Rahul S

    2013-01-01

    Many scientific simulations, using the Message Passing Interface (MPI) programming model, are sensitive to the performance and scalability of reduction collective operations such as MPI Allreduce and MPI Reduce. These operations are the most widely used abstractions to perform mathematical operations over all processes that are part of the simulation. In this work, we propose a hierarchical design to implement the reduction operations on multicore systems. This design aims to improve the efficiency of reductions by 1) tailoring the algorithms and customizing the implementations for various communication mechanisms in the system, 2) providing the ability to configure the depth of hierarchy to match the system architecture, and 3) providing the ability to independently progress each level of this hierarchy. Using this design, we implement MPI Allreduce and MPI Reduce operations (and their nonblocking variants MPI Iallreduce and MPI Ireduce) for all message sizes, and evaluate them on multiple architectures including InfiniBand and Cray XT5. We leverage and enhance our existing infrastructure, Cheetah, which is a framework for implementing hierarchical collective operations, to implement these reductions. The experimental results show that the Cheetah reduction operations outperform production-grade MPI implementations such as Open MPI default, Cray MPI, and MVAPICH2, demonstrating their efficiency, flexibility and portability. On InfiniBand systems, with a microbenchmark, a 512-process Cheetah nonblocking Allreduce and Reduce achieves a speedup of 23x and 10x, respectively, compared to the default Open MPI reductions. The blocking variants of the reduction operations also show similar performance benefits. A 512-process nonblocking Cheetah Allreduce achieves a speedup of 3x, compared to the default MVAPICH2 Allreduce implementation. On a Cray XT5 system, a 6144-process Cheetah Allreduce outperforms the Cray MPI by 145%. The evaluation with an application kernel, Conjugate
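
    A plain mpi4py sketch of the two-level idea (not the Cheetah framework, and with an invented node size): reduce within each node, allreduce across the node leaders, then broadcast the result back down.

      from mpi4py import MPI

      world = MPI.COMM_WORLD
      rank = world.Get_rank()

      ranks_per_node = 4                                   # assumed node size for the example
      node_comm = world.Split(color=rank // ranks_per_node, key=rank)
      is_leader = node_comm.Get_rank() == 0
      leader_comm = world.Split(color=0 if is_leader else 1, key=rank)

      value = float(rank)                                  # each rank's contribution
      node_sum = node_comm.reduce(value, op=MPI.SUM, root=0)   # level 1: intra-node

      if is_leader:
          total = leader_comm.allreduce(node_sum, op=MPI.SUM)  # level 2: across leaders
      else:
          total = None
      total = node_comm.bcast(total, root=0)               # fan the result back out
      print(rank, total)

    Each level can then use the communication mechanism best suited to it (shared memory within a node, the network between nodes), which is the point of the hierarchical design.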

  10. Efficient implementation and application of the artificial bee colony algorithm to low-dimensional optimization problems

    NASA Astrophysics Data System (ADS)

    von Rudorff, Guido Falk; Wehmeyer, Christoph; Sebastiani, Daniel

    2014-06-01

    We adapt a swarm-intelligence-based optimization method (the artificial bee colony algorithm, ABC) to enhance its parallel scaling properties and to improve the escaping behavior from deep local minima. Specifically, we apply the approach to the geometry optimization of Lennard-Jones clusters. We illustrate the performance and the scaling properties of the parallelization scheme for several system sizes (5-20 particles). Our main findings are specific recommendations for ranges of the parameters of the ABC algorithm which yield maximal performance for Lennard-Jones clusters and Morse clusters. The suggested parameter ranges for these different interaction potentials turn out to be very similar; thus, we believe that our reported values are fairly general for the ABC algorithm applied to chemical optimization problems.
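
    For readers unfamiliar with the method, the compact sketch below shows the serial artificial bee colony loop on a toy objective (not the paper's Lennard-Jones geometry optimization, and without its parallelization); the employed and onlooker phases are merged for brevity.

      import random

      def abc_minimize(f, dim, bounds, n_sources=10, limit=20, iters=200):
          lo, hi = bounds
          X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
          F = [f(x) for x in X]
          trials = [0] * n_sources
          for _ in range(iters):
              for i in range(n_sources):
                  j = random.randrange(dim)
                  k = random.choice([p for p in range(n_sources) if p != i])
                  cand = X[i][:]
                  cand[j] += random.uniform(-1, 1) * (X[i][j] - X[k][j])
                  cand[j] = min(hi, max(lo, cand[j]))
                  fc = f(cand)
                  if fc < F[i]:
                      X[i], F[i], trials[i] = cand, fc, 0      # improved food source
                  else:
                      trials[i] += 1
                  if trials[i] > limit:                        # scout phase: abandon source
                      X[i] = [random.uniform(lo, hi) for _ in range(dim)]
                      F[i], trials[i] = f(X[i]), 0
          best = min(range(n_sources), key=lambda i: F[i])
          return X[best], F[best]

      sphere = lambda v: sum(t * t for t in v)
      print(abc_minimize(sphere, dim=5, bounds=(-5, 5))[1])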

  11. Compiling global name-space programs for distributed execution

    NASA Technical Reports Server (NTRS)

    Koelbel, Charles; Mehrotra, Piyush

    1990-01-01

    Distributed memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for the analysis of a high level source program and its translation to a set of independently executing tasks communicating via messages. If the compiler has enough information, this translation can be carried out at compile-time. Otherwise run-time code is generated to implement the required data movement. The analysis required in both situations is described and the performance of the generated code on the Intel iPSC/2 is presented.

  12. Novel coagulation factor concentrates: issues relating to their clinical implementation and pharmacokinetic assessment for optimal prophylaxis in haemophilia patients.

    PubMed

    Ljung, R; Auerswald, G; Benson, G; Jetter, A; Jiménez-Yuste, V; Lambert, T; Morfini, M; Remor, E; Sørensen, B; Salek, S Z

    2013-07-01

    Prophylaxis is considered the optimal treatment regimen for patients with severe haemophilia, and may be especially important in the prevention of joint disease. Novel coagulation factor concentrates with prolonged half-lives promise to improve patient treatment by enabling prophylaxis with less frequent dosing. With the call to individualize therapy in haemophilia, there is growing awareness of the need to use pharmacokinetic (PK) assessments to tailor prophylaxis. However, for new factor concentrates, it is not yet known which PK values will be most informative for optimizing prophylaxis. This topic was explored at the Eighth Zurich Haemophilia Forum. On the basis of our clinical experience and a discussion of the literature, we report key issues relating to the PK assessment of new coagulation factors and include suggestions on the implementation of PK data to optimize therapy. As both inter- and intra-individual variability in factor half-life have been reported, we suggest that frequent PK assessments should be conducted. However, to diminish the burden of more frequent sampling, sparser sampling strategies and the use of population modelling should be considered. Guidelines on how to assay new factor concentrates, and which PK parameters should be measured, are needed. Concerns were raised regarding the possibility of breakthrough bleeding, and current thinking on how to prevent breakthrough bleeding may no longer be appropriate. Finally, as treatment adherence may be more important to ensure that a therapeutic level of a new coagulation factor concentrate is maintained, behavioural techniques could be implemented to help to improve treatment adherence.

  13. Implementation of a Low-Thrust Trajectory Optimization Algorithm for Preliminary Design

    NASA Technical Reports Server (NTRS)

    Sims, Jon A.; Finlayson, Paul A.; Rinderle, Edward A.; Vavrina, Matthew A.; Kowalkowski, Theresa D.

    2006-01-01

    A tool developed for the preliminary design of low-thrust trajectories is described. The trajectory is discretized into segments and a nonlinear programming method is used for optimization. The tool is easy to use, has robust convergence, and can handle many intermediate encounters. In addition, the tool has a wide variety of features, including several options for objective function and different low-thrust propulsion models (e.g., solar electric propulsion, nuclear electric propulsion, and solar sail). High-thrust, impulsive trajectories can also be optimized.

  14. Approximate knowledge compilation: The first order case

    SciTech Connect

    Val, A. del

    1996-12-31

    Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) We present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm. (2) We show that both ground algorithms can be lifted to the first order case preserving their correctness for approximate compilation.

  15. Distributed memory compiler design for sparse problems

    NASA Technical Reports Server (NTRS)

    Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema

    1991-01-01

    A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.
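
    The essence of such runtime support is the inspector/executor pattern. The Python sketch below (with an invented block distribution and index array) shows an inspector classifying irregular accesses and building a gather schedule, and an executor running the loop against local and ghost data; in the real library the gather is a message-passing step.

      import numpy as np

      nprocs, my_rank, n_global = 4, 1, 16
      block = n_global // nprocs
      my_lo, my_hi = my_rank * block, (my_rank + 1) * block
      x_local = np.arange(my_lo, my_hi, dtype=float)       # this rank's block of x

      index = np.array([5, 2, 7, 12, 6])                   # irregular accesses x[index[i]]

      # inspector: find off-processor elements and which rank owns them
      off_proc = sorted({int(g) for g in index if not (my_lo <= g < my_hi)})
      schedule = {g: g // block for g in off_proc}
      print("gather from:", schedule)                      # e.g. {2: 0, 12: 3}

      # executor: pretend the gather has filled a ghost buffer, then run the loop
      ghost = {g: float(g) for g in off_proc}              # stand-in for received values
      def fetch(g):
          return x_local[g - my_lo] if my_lo <= g < my_hi else ghost[g]
      y = np.array([fetch(int(g)) for g in index])
      print(y)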

  16. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: flexible granularity model, which allows a compromise between two extreme granularity models; communication model, which is capable of precisely describing the interprocessor communication timings and patterns; loop type detection strategy, which identifies different types of loops; critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with some associated communication costs; and loop allocation strategy, which realizes optimum overlapped operations between computation and communication of the system. Using these models, several sample routines of the AIR3D package are examined and tested. It may be noted that the automatically generated codes are highly parallelized to provide the maximum degree of parallelism, obtaining speedups on up to a 28- to 32-processor system. A comparison of parallel codes for both the existing and proposed communication models is performed and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.

  17. Exploration of Optimization Options for Increasing Performance of a GPU Implementation of a Three-dimensional Bilateral Filter

    SciTech Connect

    Bethel, E. Wes; Bethel, E. Wes

    2012-01-06

    This report explores using GPUs as a platform for performing high performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x as compared to a single-threaded CPU implementation.
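
    As a reference for what is being tuned, the brute-force NumPy sketch below computes a small 3D bilateral filter on the CPU; the sigma values and window radius are illustrative, and a GPU version would map the triple loop onto thread blocks.

      import numpy as np

      def bilateral3d(vol, radius=1, sigma_s=1.0, sigma_r=0.1):
          out = np.empty_like(vol)
          zz, yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1, -radius:radius + 1]
          spatial = np.exp(-(xx**2 + yy**2 + zz**2) / (2 * sigma_s**2))
          pad = np.pad(vol, radius, mode="edge")
          for z in range(vol.shape[0]):
              for y in range(vol.shape[1]):
                  for x in range(vol.shape[2]):
                      patch = pad[z:z + 2*radius + 1,
                                  y:y + 2*radius + 1,
                                  x:x + 2*radius + 1]
                      rng = np.exp(-((patch - vol[z, y, x])**2) / (2 * sigma_r**2))
                      w = spatial * rng                    # edge-preserving weights
                      out[z, y, x] = (w * patch).sum() / w.sum()
          return out

      noisy = np.random.rand(8, 8, 8).astype(np.float32)
      print(bilateral3d(noisy).shape)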

  18. Orbital-Optimized Second-Order Perturbation Theory with Density-Fitting and Cholesky Decomposition Approximations: An Efficient Implementation.

    PubMed

    Bozkaya, Uğur

    2014-06-10

    An efficient implementation of the orbital-optimized second-order perturbation theory with the density-fitting (DF-OMP2) and Cholesky decomposition (CD-OMP2) approaches is presented. The DF-OMP2 method is applied to a set of alkanes, conjugated dienes, and noncovalent interaction complexes to compare the computational cost with the conventional orbital-optimized MP2 (OMP2) [Bozkaya, U.; Turney, J. M.; Yamaguchi, Y.; Schaefer, H. F.; Sherrill, C. D. J. Chem. Phys. 2011, 135, 104103] and the orbital-optimized MP2 with the resolution of the identity approach (OO-RI-MP2) [Neese, F.; Schwabe, T.; Kossmann, S.; Schirmer, B.; Grimme, S. J. Chem. Theory Comput. 2009, 5, 3060-3073]. Our results demonstrate that the DF-OMP2 method provides substantially lower computational costs than OMP2 and OO-RI-MP2. Further application results show that the orbital-optimized methods are very beneficial for the computation of open-shell noncovalent interactions. Considering both computational efficiency and the accuracy of the DF-OMP2 method, we conclude that DF-OMP2 is very promising for the study of weak interactions in open-shell molecular systems.

  19. A real time, FEM based optimal control algorithm and its implementation using parallel processing hardware (transputers) in a microprocessor environment

    NASA Technical Reports Server (NTRS)

    Patten, William Neff

    1989-01-01

    There is an evident need to discover a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and, or uncertain, model dynamics. The development of a generic controller design tool for tough-to-control systems is reported. The method utilizes a moving grid, time infinite element based solution of the necessary conditions that describe an optimal controller for a system. The technique produces a discrete feedback controller. Real time laboratory experiments are now being conducted to demonstrate the viability of the method. The algorithm that results is being implemented in a microprocessor environment. Critical computational tasks are accomplished using a low cost, on-board, multiprocessor (INMOS T800 Transputers) and parallel processing. Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.

  20. Heuristic optimization methods for run-time intensive models (Dynamically Dimensioned Search, Particle Swarm Optimization, GA) - a comparison of performance and parallel implementation using R

    NASA Astrophysics Data System (ADS)

    Francke, Till; Bronster, Axel; Shoemaker, Christine A.

    2010-05-01

    Calibrating complex hydrological models faces two major challenges: firstly, extended models, especially when spatially distributed, encompass a large number of parameters with different (and possibly a-priori unknown) sensitivity. Due to the usually rough surface of the objective function, this aggravates the risk of an algorithm converging in a local optimum. Thus, gradient-based optimization methods are often bound to fail without a very good prior estimate. Secondly, despite growing computational power, it is not uncommon that models of large extent in space or time take several minutes to run, which severely restricts the total number of model evaluations under given computational and time resources. While various heuristic methods successfully address the first challenge, they tend to conflict with the second challenge due to the increased number of evaluations necessary. In that context we analyzed three methods (Dynamically Dimensioned Search / DDS, Particle Swarm Optimization / PSO, Genetic Algorithms / GA). We performed tests with common "synthetic" objective functions and a calibration of the hydrological model WASA-SED with different numbers of parameters. When looking at the reduction of the objective function within few (i.e., < 1000) evaluations, the methods generally perform in the order (best to worst) DDS-PSO-GA. Only at a larger number of evaluations can GA excel. To speed up optimization, we executed DDS and PSO as parallel applications, i.e. using multiple CPUs and/or computers. The parallelisation has been implemented in the ppso package for the free computation environment R. Special focus has been laid on the options to resume interrupted optimization runs and visualize progress.
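
    Of the three methods, DDS is the simplest to state; the sketch below gives a minimal Python version on a toy objective (the study itself used the R packages mentioned in the abstract). Its key trait is that the expected number of perturbed dimensions shrinks as the evaluation budget is consumed, moving from global to local search.

      import math, random

      def dds(f, bounds, max_evals=1000, r=0.2):
          x = [random.uniform(lo, hi) for lo, hi in bounds]
          fx = f(x)
          for i in range(1, max_evals):
              p = 1.0 - math.log(i) / math.log(max_evals)      # perturbation probability
              dims = ([d for d in range(len(x)) if random.random() < p]
                      or [random.randrange(len(x))])
              cand = x[:]
              for d in dims:
                  lo, hi = bounds[d]
                  cand[d] += random.gauss(0, r * (hi - lo))
                  if cand[d] < lo: cand[d] = lo + (lo - cand[d])   # reflect at bounds
                  if cand[d] > hi: cand[d] = hi - (cand[d] - hi)
                  cand[d] = min(hi, max(lo, cand[d]))
              fc = f(cand)
              if fc < fx:                                      # greedy acceptance
                  x, fx = cand, fc
          return x, fx

      sphere = lambda v: sum(t * t for t in v)
      print(dds(sphere, [(-10, 10)] * 8)[1])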

  1. Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices.

    PubMed

    Marin, Leandro; Pawlowski, Marcin Piotr; Jara, Antonio

    2015-01-01

    The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communications. The building blocks of the Internet of Things are devices manufactured by various producers and are designed to fulfil different needs. There would be no common hardware platform that could be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized elliptic curve cryptography algorithms that address the security issues in the heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol. PMID:26343677
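
    The key negotiation rests on an elliptic-curve Diffie-Hellman exchange. The toy sketch below shows that exchange over a deliberately tiny curve so the arithmetic is visible; the curve parameters and secret scalars are illustrative only, and a real deployment would use a standardized curve with a hardened, constant-time implementation such as the ones the paper optimizes.

      P_MOD, A, B = 97, 2, 3               # toy curve y^2 = x^3 + 2x + 3 over GF(97)
      G = (3, 6)                           # generator point on that curve

      def point_add(P, Q):
          if P is None: return Q
          if Q is None: return P
          (x1, y1), (x2, y2) = P, Q
          if x1 == x2 and (y1 + y2) % P_MOD == 0:
              return None                  # point at infinity
          if P == Q:
              lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
          else:
              lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
          x3 = (lam * lam - x1 - x2) % P_MOD
          return (x3, (lam * (x1 - x3) - y1) % P_MOD)

      def scalar_mult(k, P):               # double-and-add
          R = None
          while k:
              if k & 1:
                  R = point_add(R, P)
              P = point_add(P, P)
              k >>= 1
          return R

      a_priv, b_priv = 13, 29              # the two devices' secret scalars
      a_pub, b_pub = scalar_mult(a_priv, G), scalar_mult(b_priv, G)
      assert scalar_mult(a_priv, b_pub) == scalar_mult(b_priv, a_pub)
      print(scalar_mult(a_priv, b_pub))    # shared secret point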

  2. Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices.

    PubMed

    Marin, Leandro; Pawlowski, Marcin Piotr; Jara, Antonio

    2015-08-28

    The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communications. The building blocks of the Internet of Things are devices manufactured by various producers and are designed to fulfil different needs. There would be no common hardware platform that could be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized elliptic curve cryptography algorithms that address the security issues in the heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol.

  3. Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices

    PubMed Central

    Marin, Leandro; Piotr Pawlowski, Marcin; Jara, Antonio

    2015-01-01

    The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communications. The building blocks of the Internet of Things are devices manufactured by various producers and are designed to fulfil different needs. There would be no common hardware platform that could be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized elliptic curve cryptography algorithms that address the security issues in the heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol. PMID:26343677

  4. Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark

    SciTech Connect

    Williams, Samuel; Kalamkar, Dhiraj; Singh, Amik; Deshpande, Anand M.; Straalen, Brian Van; Smelyanskiy, Mikhail; Almgren, Ann; Dubey, Pradeep; Shalf, John; Oliker, Leonid

    2012-12-01

    Multigrid methods are widely used to accelerate the convergence of iterative solvers for linear systems used in a number of different application areas. In this report, we describe miniGMG, our compact geometric multigrid benchmark designed to proxy the multigrid solves found in AMR applications. We explore optimization techniques for geometric multigrid on existing and emerging multicore systems including the Opteron-based Cray XE6, Intel Sandy Bridge and Nehalem-based Infiniband clusters, as well as manycore-based architectures including NVIDIA's Fermi and Kepler GPUs and Intel's Knights Corner (KNC) co-processor. This report examines a variety of novel techniques including communication-aggregation, threaded wavefront-based DRAM communication-avoiding, dynamic threading decisions, SIMDization, and fusion of operators. We quantify performance through each phase of the V-cycle for both single-node and distributed-memory experiments and provide detailed analysis for each class of optimization. Results show our optimizations yield significant speedups across a variety of subdomain sizes while simultaneously demonstrating the potential of multi- and manycore processors to dramatically accelerate single-node performance. However, our analysis also indicates that improvements in networks and communication will be essential to reap the potential of manycore processors in large-scale multigrid calculations.
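
    For orientation, the NumPy sketch below is the untuned reference algorithm such a benchmark proxies, reduced to a V-cycle for a 1D Poisson problem (weighted Jacobi smoothing, full-weighting restriction, linear interpolation); miniGMG itself operates on 3D subdomains with the optimizations described in the abstract.

      import numpy as np

      def smooth(u, f, h, sweeps=2, omega=2.0/3.0):        # weighted Jacobi for -u'' = f
          for _ in range(sweeps):
              u[1:-1] = ((1 - omega) * u[1:-1]
                         + omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1]))
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
          return r

      def restrict(r):                                     # full weighting
          return np.concatenate(([0.0],
              0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2], [0.0]))

      def prolong(ec, n_fine):                             # linear interpolation
          e = np.zeros(n_fine)
          e[::2] = ec
          e[1::2] = 0.5 * (ec[:-1] + ec[1:])
          return e

      def v_cycle(u, f, h):
          if len(u) <= 3:                                  # coarsest grid: solve directly
              u[1] = 0.5 * (u[0] + u[2] + h*h*f[1])
              return u
          u = smooth(u, f, h)                              # pre-smoothing
          rc = restrict(residual(u, f, h))
          ec = v_cycle(np.zeros_like(rc), rc, 2*h)         # coarse-grid correction
          u += prolong(ec, len(u))
          return smooth(u, f, h)                           # post-smoothing

      N = 65
      h = 1.0 / (N - 1)
      x = np.linspace(0.0, 1.0, N)
      f = np.pi**2 * np.sin(np.pi * x)                     # exact solution: sin(pi*x)
      u = np.zeros(N)
      for _ in range(10):
          u = v_cycle(u, f, h)
      print(np.max(np.abs(u - np.sin(np.pi * x))))         # down to discretization error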

  5. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
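
    A small Python sketch of the population idea for a distance-4 (SECDED) code over GF(2) is given below; it removes from the candidate set every vector equal to an already-chosen column or to the XOR of two chosen columns (which would break the "any d-1 = 3 columns linearly independent" requirement), then selects one and repeats. The selection rule and parameters are illustrative, not the patent's.

      from itertools import combinations

      def populate_columns(r=6, n_columns=10):
          candidates = set(range(1, 2 ** r))         # all nonzero vectors in GF(2)^r
          chosen = []
          while len(chosen) < n_columns:
              # filter: drop vectors that would break 3-column linear independence
              banned = set(chosen) | {a ^ b for a, b in combinations(chosen, 2)}
              candidates -= banned
              if not candidates:
                  break                              # fewer columns than requested fit
              pick = min(candidates)                 # any selection rule could be used here
              chosen.append(pick)
              candidates.discard(pick)
          return chosen

      cols = populate_columns()
      print([format(c, "06b") for c in cols])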

  6. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  7. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  8. Implementation and optimization of the thin-strut formalism in THREDE

    NASA Astrophysics Data System (ADS)

    Holland, R.; Simpson, L.

    1980-12-01

    The paper deals with the derivation of the thin-strut formalism and its implementation in THREDE, a three-dimensional EMP time-domain finite-difference code, to permit inclusion of arbitrary fine wires in THREDE without reducing the mesh size to the wire size. Thin-strut THREDE calculations are compared with analytic EMP solutions for a linear dipole antenna. An error of 7% is fundamental to the basic THREDE approximations.

  9. Implementation of a Surgical Safety Checklist: Interventions to Optimize the Process and Hints to Increase Compliance

    PubMed Central

    Sendlhofer, Gerald; Mosbacher, Nina; Karina, Leitgeb; Kober, Brigitte; Jantscher, Lydia; Berghold, Andrea; Pregartner, Gudrun; Brunner, Gernot; Kamolz, Lars Peter

    2015-01-01

    Background A surgical safety checklist (SSC) was implemented and routinely evaluated within our hospital. The purpose of this study was to analyze compliance, knowledge of and satisfaction with the SSC to determine further improvements. Methods The implementation of the SSC was observed in a pilot unit. After roll-out into each operating theater, compliance with the SSC was routinely measured. To assess subjective and objective knowledge, as well as satisfaction with the SSC implementation, an online survey (N = 891) was performed. Results During two test runs in a piloting unit, 305 operations were observed, 175 in test run 1 and 130 in test run 2. The SSC was used in 77.1% of all operations in test run 1 and in 99.2% in test run 2. Within used SSCs, completion rates were 36.3% in test run 1 and 1.6% in test run 2. After roll-out, three unannounced audits took place and showed that the SSC was used in 95.3%, 91.9% and 89.9%. Within used SSCs, completion rates decreased from 81.7% to 60.6% and 53.2%. In 2014, 164 (18.4%) operating team members responded to the online survey, 160 of which were included in the analysis. 146 (91.3%) consultants and nursing staff reported to use the SSC regularly in daily routine. Conclusion These data show that the implementation of new tools such as the adapted WHO SSC needs constant supervision and instruction until it becomes self-evident and accepted. Further efforts, consisting mainly of hands-on leadership and training are necessary. PMID:25658317

  10. Scalar and Parallel Optimized Implementation of the Direct Simulation Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Dietrich, Stefan; Boyd, Iain D.

    1996-07-01

    This paper describes a new concept for the implementation of the direct simulation Monte Carlo (DSMC) method. It uses a localized data structure based on a computational cell to achieve high performance, especially on workstation processors, which can also be used in parallel. Since the data structure makes it possible to freely assign any cell to any processor, a domain decomposition can be found with equal calculation load on each processor while maintaining minimal communication among the nodes. Further, the new implementation strictly separates physical modeling, geometrical issues, and organizational tasks to achieve high maintainability and to simplify future enhancements. Three example flow configurations are calculated with the new implementation to demonstrate its generality and performance. They include a flow through a diverging channel using an adapted unstructured triangulated grid, a flow around a planetary probe, and an internal flow in a contactor used in plasma physics. The results are validated either by comparison with results obtained from other simulations or by comparison with experimental data. High performance on an IBM SP2 system is achieved if problem size and number of parallel processors are adapted accordingly. On 400 nodes, DSMC calculations with more than 100 million particles are possible.

  11. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as the gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filters' coefficients is also proposed, where we focused on the implementation and the enhancement of the filters' parameters by taking the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the brain-web database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal to noise ratio, execution time, and the number of floating point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques, recently published in the literature. PMID:27084318
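
    For reference, the small NumPy sketch below shows the weighting scheme non-local means is built on, in 2D and without any of the paper's parallel or shared-memory optimizations; the patch size, search window, and filtering parameter h are illustrative.

      import numpy as np

      def nlm(img, patch=1, search=3, h=0.15):
          pad = np.pad(img, patch + search, mode="reflect")
          out = np.zeros_like(img)
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  ci, cj = i + patch + search, j + patch + search
                  ref = pad[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
                  weights, values = [], []
                  for di in range(-search, search + 1):
                      for dj in range(-search, search + 1):
                          ni, nj = ci + di, cj + dj
                          cand = pad[ni - patch:ni + patch + 1, nj - patch:nj + patch + 1]
                          d2 = np.mean((ref - cand) ** 2)
                          weights.append(np.exp(-d2 / (h * h)))   # patch-similarity weight
                          values.append(pad[ni, nj])
                  w = np.array(weights)
                  out[i, j] = np.dot(w, values) / w.sum()
          return out

      noisy = 0.5 + 0.1 * np.random.randn(16, 16)
      print(float(np.abs(nlm(noisy) - 0.5).mean()))    # smaller than the input deviation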

  12. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as the gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filters' coefficients is also proposed, where we focused on the implementation and the enhancement of the filters' parameters by taking the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the brain-web database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal to noise ratio, execution time, and the number of floating point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques, recently published in the literature.

  13. Expected treatment dose construction and adaptive inverse planning optimization: Implementation for offline head and neck cancer adaptive radiotherapy

    SciTech Connect

    Yan Di; Liang Jian

    2013-02-15

    Adaptive treatment modification can be implemented by including the expected treatment dose in the adaptive inverse planning optimization. The retrospective evaluation results demonstrate that, utilizing the weekly adaptive inverse planning optimization, the dose distribution of head and neck cancer treatment can be largely improved.

  14. A survey of compiler development aids. [concerning lexical, syntax, and semantic analysis

    NASA Technical Reports Server (NTRS)

    Buckles, B. P.; Hodges, B. C.; Hsia, P.

    1977-01-01

    A theoretical background was established for the compilation process by dividing it into five phases and explaining the concepts and algorithms that underpin each. The five selected phases were lexical analysis, syntax analysis, semantic analysis, optimization, and code generation. Graph theoretical optimization techniques were presented, and approaches to code generation were described for both one-pass and multipass compilation environments. Following the initial tutorial sections, more than 20 tools that were developed to aid in the process of writing compilers were surveyed. Eight of the more recent compiler development aids were selected for special attention - SIMCMP/STAGE2, LANG-PAK, COGENT, XPL, AED, CWIC, LIS, and JOCIT. The impact of compiler development aids was assessed, some of their shortcomings were noted, and some of the areas of research currently in progress were inspected.

  15. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratios in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed outputs at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters with lengths of N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and activated when the first
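
    The NumPy sketch below contrasts the naive decimator with the per-output ("thread") view described here, using invented coefficients and sizes: each retained output is computed as its own independent finite convolution, so none of the discarded outputs are ever evaluated, and the two formulations agree numerically.

      import numpy as np

      def naive_decimate(x, taps, M):
          full = np.convolve(x, taps)[: len(x)]      # compute every output...
          return full[::M]                           # ...then discard all but 1/M of them

      def thread_decimate(x, taps, M):
          N = len(taps)
          xp = np.concatenate([np.zeros(N - 1), x])  # zero history before x[0]
          outputs = []
          for n in range(0, len(x), M):              # one "thread" per kept output
              window = xp[n:n + N][::-1]             # x[n], x[n-1], ..., x[n-N+1]
              outputs.append(np.dot(taps, window))
          return np.array(outputs)

      x = np.random.randn(64)
      taps = np.ones(8) / 8.0
      print(np.allclose(naive_decimate(x, taps, M=4), thread_decimate(x, taps, M=4)))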

  16. Design and implementation of a delay-optimized universal programmable routing circuit for FPGAs

    NASA Astrophysics Data System (ADS)

    Fang, Wu; Huowen, Zhang; Jinmei, Lai; Yuan, Wang; Liguang, Chen; Lei, Duan; Jiarong, Tong

    2009-06-01

    This paper presents a universal field programmable gate array (FPGA) programmable routing circuit, focusing primarily on delay optimization. Under the precondition of preserving the routing resources' flexibility and routability, the number of programmable interconnect points (PIP) is reduced, and a multiplexer (MUX) plus a BUFFER structure is adopted as the programmable switch. Also, the method of offset lines and the method of complementary hanged end-lines are applied to the TILE routing circuit and the I/O routing circuit, respectively. All of the above features ensure that the whole FPGA chip is highly repeatable, and the signal delay is uniform and predictable over the total chip. Meanwhile, the BUFFER driver is optimized to decrease the signal delay by up to 5%. The proposed routing circuit is applied to the Fudan programmable device (FDP) FPGA, which has been taped out with an SMIC 0.18-μm logic 1P6M process. The test result shows that the programmable routing resource works correctly, and the signal delay over the chip is highly uniform and predictable.

  17. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.

    1995-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.

  18. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility that allows rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  19. Toward a Fundamental Theory of Optimal Feature Selection: Part II - Implementation and Computational Complexity.

    PubMed

    Morgera, S D

    1987-01-01

    Certain algorithms and their computational complexity are examined for use in a VLSI implementation of the real-time pattern classifier described in Part I of this work. The most computationally intensive processing is found in the classifier training mode wherein subsets of the largest and smallest eigenvalues and associated eigenvectors of the input data covariance pair must be computed. It is shown that if the matrix of interest is centrosymmetric and the method for eigensystem decomposition is operator-based, the problem architecture assumes a parallel form. Such a matrix structure is found in a wide variety of pattern recognition and speech and signal processing applications. Each of the parallel channels requires only two specialized matrix-arithmetic modules. These modules may be implemented as linear arrays of processing elements having at most O(N) elements where N is the input data vector dimension. The computations may be done in O(N) time steps. This compares favorably to O(N^3) operations for a conventional, or general, rotation-based eigensystem solver and even the O(2N^2) operations using an approach incorporating the fast Levinson algorithm for a matrix of Toeplitz structure since the underlying matrix in this work does not possess a Toeplitz structure. Some examples are provided on the convergence of a conventional iterative approach and a novel two-stage iterative method for eigensystem decomposition.
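
    The parallel structure alluded to above follows from a general property of symmetric centrosymmetric matrices: the eigenproblem decouples into two independent half-size problems, one per channel. The sketch below is a generic numpy illustration under that assumption, not the paper's operator-based modules; it builds such a matrix and checks that the eigenvalues of the two half-size blocks reproduce those of the full matrix.

```python
import numpy as np

def split_centrosymmetric(A):
    """Split a symmetric centrosymmetric matrix A (even order 2m) into two
    independent m x m symmetric eigenproblems."""
    m = A.shape[0] // 2
    J = np.fliplr(np.eye(m))             # exchange (flip) matrix
    B = A[:m, :m]                        # top-left block
    C = A[:m, m:]                        # top-right block
    return B + C @ J, B - C @ J          # the two decoupled blocks

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m = 4
    B = rng.standard_normal((m, m)); B = (B + B.T) / 2
    C = rng.standard_normal((m, m))
    J = np.fliplr(np.eye(m))
    C = (C + J @ C.T @ J) / 2            # enforce C = J C^T J
    A = np.block([[B, C], [C.T, J @ B @ J]])   # symmetric, centrosymmetric
    P, Q = split_centrosymmetric(A)
    ev_full = np.sort(np.linalg.eigvalsh(A))
    ev_split = np.sort(np.concatenate([np.linalg.eigvalsh(P), np.linalg.eigvalsh(Q)]))
    assert np.allclose(ev_full, ev_split)
```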

  20. Optimization and Implementation of Scaling-Free CORDIC-Based Direct Digital Frequency Synthesizer for Body Care Area Network Systems

    PubMed Central

    Juang, Ying-Shen; Ko, Lu-Ting; Chen, Jwu-E.; Sung, Tze-Yun; Hsin, Hsi-Chin

    2012-01-01

    Coordinate rotation digital computer (CORDIC) is an efficient algorithm for computations of trigonometric functions. Scaling-free-CORDIC is one of the famous CORDIC implementations with advantages of speed and area. In this paper, a novel direct digital frequency synthesizer (DDFS) based on scaling-free CORDIC is presented. The proposed multiplier-less architecture with small ROM and pipeline data path has advantages of high data rate, high precision, high performance, and less hardware cost. The design procedure with performance and hardware analysis for optimization has also been given. It is verified by Matlab simulations and then implemented with field programmable gate array (FPGA) by Verilog. The spurious-free dynamic range (SFDR) is over 86.85 dBc, and the signal-to-noise ratio (SNR) is more than 81.12 dB. The scaling-free CORDIC-based architecture is suitable for VLSI implementations for the DDFS applications in terms of hardware cost, power consumption, SNR, and SFDR. The proposed DDFS is very suitable for medical instruments and body care area network systems. PMID:23251230
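
    For orientation, the following minimal sketch shows the standard rotation-mode CORDIC iteration that underlies phase-to-amplitude conversion in a CORDIC-based DDFS: shift-and-add style updates plus a precomputed gain. This is the conventional variant with explicit gain compensation, not the scaling-free architecture proposed in the paper, and the iteration count is an assumption for illustration.

```python
import math

def cordic_sin_cos(theta, iterations=24):
    """Rotation-mode CORDIC: rotate the unit vector by `theta` (radians,
    |theta| <= ~1.74) using only scaled adds; the constant gain K is
    compensated once up front (the scaling-free variant avoids this)."""
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
        z -= d * math.atan(2.0 ** (-i))
    return y, x                           # (sin(theta), cos(theta))

if __name__ == "__main__":
    # In a DDFS, theta would come from the phase accumulator each clock.
    s, c = cordic_sin_cos(0.6)
    assert abs(s - math.sin(0.6)) < 1e-6 and abs(c - math.cos(0.6)) < 1e-6
```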

  1. Optimization and implementation of scaling-free CORDIC-based direct digital frequency synthesizer for body care area network systems.

    PubMed

    Juang, Ying-Shen; Ko, Lu-Ting; Chen, Jwu-E; Sung, Tze-Yun; Hsin, Hsi-Chin

    2012-01-01

    Coordinate rotation digital computer (CORDIC) is an efficient algorithm for computations of trigonometric functions. Scaling-free-CORDIC is one of the famous CORDIC implementations with advantages of speed and area. In this paper, a novel direct digital frequency synthesizer (DDFS) based on scaling-free CORDIC is presented. The proposed multiplier-less architecture with small ROM and pipeline data path has advantages of high data rate, high precision, high performance, and less hardware cost. The design procedure with performance and hardware analysis for optimization has also been given. It is verified by Matlab simulations and then implemented with field programmable gate array (FPGA) by Verilog. The spurious-free dynamic range (SFDR) is over 86.85 dBc, and the signal-to-noise ratio (SNR) is more than 81.12 dB. The scaling-free CORDIC-based architecture is suitable for VLSI implementations for the DDFS applications in terms of hardware cost, power consumption, SNR, and SFDR. The proposed DDFS is very suitable for medical instruments and body care area network systems.

  2. Optimization of the Coupled Cluster Implementation in NWChem on Petascale Parallel Architectures

    SciTech Connect

    Anisimov, Victor; Bauer, Gregory H.; Chadalavada, Kalyana; Olson, Ryan M.; Glenski, Joseph W.; Kramer, William T.; Apra, Edoardo; Kowalski, Karol

    2014-09-04

    The coupled cluster singles and doubles (CCSD) algorithm has been optimized in the NWChem software package. This modification alleviated the communication bottleneck and provided a 2- to 5-fold speedup in the CCSD iteration time, depending on the problem size and available memory. Sustained 0.60 petaflop/sec performance on a CCSD(T) calculation has been obtained on NCSA Blue Waters. This number includes all stages of the calculation from initialization to termination, the iterative computation of single and double excitations, and the perturbative accounting for triple excitations. In the perturbative-triples section alone, the computation maintained a 1.18 petaflop/sec performance level. CCSD computations have been performed on the Guanine-Cytosine deoxydinucleotide monophosphate (GC-dDMP) to probe the conformational energy difference of a DNA single strand in the A- and B-conformations. The computation revealed a significant discrepancy between CCSD and classical force fields in the prediction of the relative energy of the A- and B-conformations of GC-dDMP.

  3. Modified a* Algorithm Implementation in the Routing Optimized for Use in Geospatial Information Systems

    NASA Astrophysics Data System (ADS)

    Ayazi, S. M.; Mashhorroudi, M. F.; Ghorbani, M.

    2014-10-01

    Among the main issues in the theory of geometric networks in spatial information systems is the problem of finding the shortest routing path between two points. In this paper, graph theory and the A* algorithm are applied to transport management in order to review an optimal method for finding the shortest path under a shortest-time condition. A graph is constructed from the network of pathways, modelling the physical and phased areas, and the shortest routes are selected using a modified A* algorithm. In the proposed method, node selection examines the angle formed by the candidate node, the next node, and the desired destination node. The advantage of this method is that, by eliminating some routes, the route-calculation time is reduced.
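
    As a point of reference, a minimal textbook A* implementation on a weighted graph with a straight-line-distance heuristic is sketched below; the angle-based node-selection modification proposed in the paper is not reproduced, and the toy graph and coordinates are assumptions introduced here.

```python
import heapq

def a_star(graph, coords, start, goal):
    """A* shortest path. `graph` maps node -> {neighbor: edge_cost};
    `coords` maps node -> (x, y) for the straight-line heuristic,
    which keeps the estimate admissible."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    open_heap = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        for nbr, cost in graph[node].items():
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(open_heap, (ng + h(nbr), ng, nbr, path + [nbr]))
    return float("inf"), []

if __name__ == "__main__":
    coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
    graph = {"A": {"B": 1.0, "C": 1.6}, "B": {"C": 1.0, "D": 2.2},
             "C": {"D": 1.0}, "D": {}}
    print(a_star(graph, coords, "A", "D"))   # -> (2.6, ['A', 'C', 'D'])
```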

  4. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and reconfigurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.

  5. Different Scalable Implementations of Collision and Streaming for Optimal Computational Performance of Lattice Boltzmann Simulations

    NASA Astrophysics Data System (ADS)

    Geneva, Nicholas; Wang, Lian-Ping

    2015-11-01

    In the past 25 years, the mesoscopic lattice Boltzmann method (LBM) has become an increasingly popular approach to simulate incompressible flows including turbulent flows. While LBM solves more solution variables compared to the conventional CFD approach based on the macroscopic Navier-Stokes equation, it also offers opportunities for more efficient parallelization. In this talk we will describe several different algorithms that have been developed over the past 10 plus years, which can be used to represent the two core steps of LBM, collision and streaming, more effectively than standard approaches. The application of these algorithms spans LBM simulations ranging from basic channel to particle laden flows. We will cover the essential detail on the implementation of each algorithm for simple 2D flows, to the challenges one faces when using a given algorithm for more complex simulations. The key is to explore the best use of data structure and cache memory. Two basic data structures will be discussed and the importance of effective data storage to maximize a CPU's cache will be addressed. The performance of a 3D turbulent channel flow simulation using these different algorithms and data structures will be compared along with important hardware related issues.
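
    A minimal reference implementation of the two core steps is sketched below: a D2Q9 BGK collision followed by a periodic streaming step, written in plain numpy. It illustrates only the baseline algorithm; the cache-oriented data layouts and fused collision-streaming variants discussed in the talk are not reproduced, and the grid size and relaxation time are placeholders.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

def equilibrium(rho, u):
    """BGK equilibrium distribution for each of the 9 directions."""
    eu = np.tensordot(E, u, axes=([1], [0]))             # (9, nx, ny)
    usq = u[0] ** 2 + u[1] ** 2
    return W[:, None, None] * rho * (1 + 3 * eu + 4.5 * eu ** 2 - 1.5 * usq)

def collide_and_stream(f, tau):
    """One LBM time step: BGK collision followed by periodic streaming."""
    rho = f.sum(axis=0)
    u = np.tensordot(E.T, f, axes=([1], [0])) / rho       # (2, nx, ny)
    f += -(f - equilibrium(rho, u)) / tau                  # collision
    for i in range(9):                                     # streaming
        f[i] = np.roll(f[i], shift=(E[i, 0], E[i, 1]), axis=(0, 1))
    return f

if __name__ == "__main__":
    nx, ny, tau = 32, 32, 0.8
    f = equilibrium(np.ones((nx, ny)), np.zeros((2, nx, ny)))
    for _ in range(100):
        f = collide_and_stream(f, tau)
    assert np.allclose(f.sum(axis=0), 1.0)                 # mass is conserved
```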

  6. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  7. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  8. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  9. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  10. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  11. Compilation for critically constrained knowledge bases

    SciTech Connect

    Schrag, R.

    1996-12-31

    We show that many "critically constrained" Random 3SAT knowledge bases (KBs) can be compiled into disjunctive normal form easily by using a variant of the "Davis-Putnam" proof procedure. From these compiled KBs we can answer all queries about entailment of conjunctive normal formulas, also easily - compared to a "brute-force" approach to approximate knowledge compilation into unit clauses for the same KBs. We exploit this fact to develop an aggressive hybrid approach which attempts to compile a KB exactly until a given resource limit is reached, then falls back to approximate compilation into unit clauses. The resulting approach handles all of the critically constrained Random 3SAT KBs with average savings of an order of magnitude over the brute-force approach.

  12. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging. This is because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions that are used in kriging have global support. Our implementations combine, utilize, and enhance a number of different
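
    To make the cost argument concrete, the sketch below solves the exact ordinary kriging system for a single query point with an exponential covariance model. It is a generic illustration rather than the authors' covariance-tapering implementation, and the covariance model, range, and point count are placeholders; the point is the dense (n+1) x (n+1) solve that becomes infeasible for large n.

```python
import numpy as np

def ordinary_kriging(points, values, query, cov):
    """Ordinary kriging with a global covariance model: solve the dense
    (n+1) x (n+1) system for the weights plus the Lagrange multiplier that
    enforces unbiasedness, then take the weighted sum of observed values."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, :n] = A[:n, n] = 1.0            # unbiasedness constraint
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = cov(np.linalg.norm(points - query, axis=-1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)            # O(n^3): why large n is infeasible
    return w[:n] @ values

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = rng.uniform(0, 10, size=(200, 2))
    vals = np.sin(pts[:, 0]) + np.cos(pts[:, 1])
    expo = lambda d: np.exp(-d / 2.0)    # exponential covariance, range 2
    print(ordinary_kriging(pts, vals, np.array([5.0, 5.0]), expo))
```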

  13. Optimizing the business and IT relationship--a structured approach to implementing a business relationship management framework.

    PubMed

    Mohrmann, Gregg; Kraatz, Drew; Sessa, Bonnie

    2009-01-01

    The relationship between the business and the IT organization is an area where many healthcare providers experience challenges. IT is often perceived as a service provider rather than a partner in delivering quality patient care. Organizations are finding that building a stronger partnership between business and IT leads to increased understanding and appreciation of the technology, process changes and services that can enhance the delivery of care and maximize organizational success. This article will provide a detailed description of valuable techniques for optimizing the healthcare organization's business and IT relationship; considerations on how to implement those techniques; and a description of the key benefits an organization should realize. Using a case study of a healthcare provider that leveraged these techniques, the article will show how an organization can promote this paradigm shift and create a tighter integration between the business and IT.

  14. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  15. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    PubMed

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits. PMID:26227212
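
    One of the optimization protocols mentioned, simulated annealing, can be sketched in a few lines. The version below is a generic realization of the accept-worse-moves-with-shrinking-probability idea with a toy one-dimensional objective; the schedule, objective, and parameter values are assumptions, not the control unit's actual implementation.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
    """Minimize `cost` by accepting worse neighbors with a probability that
    shrinks as the temperature is lowered."""
    x, fx, t = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

if __name__ == "__main__":
    random.seed(0)
    cost = lambda v: (v - 3.2) ** 2 + 0.5 * math.sin(5 * v)  # bumpy 1-D objective
    neighbor = lambda v: v + random.gauss(0.0, 0.5)
    print(simulated_annealing(cost, neighbor, x0=0.0))
```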

  16. Analysis, optimization, and implementation of a hybrid DS/FFH spread-spectrum technique for smart grid communications

    SciTech Connect

    Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; Kuruganti, Teja; Smith, Stephen F.; Djouadi, Seddik M.

    2015-03-12

    In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error performance of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.
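
    The waveform structure described above (direct-sequence spreading of each data bit combined with several frequency hops within that bit) can be illustrated with a toy complex-baseband generator. The sketch below is purely illustrative: the chip counts, hop frequencies, and sample rates are placeholders, not the parameters of the FPGA transceiver described in the paper.

```python
import numpy as np

def ds_ffh_baseband(bits, pn_chips, hop_freqs, hops_per_bit, fs, chip_rate):
    """Toy hybrid DS/FFH waveform: each data bit is spread by the PN chip
    sequence, and the chips of that single bit are transmitted on
    `hops_per_bit` different hop frequencies (fast frequency hopping)."""
    sps = int(fs / chip_rate)                       # samples per chip
    chips_per_hop = len(pn_chips) // hops_per_bit
    signal, hop_idx = [], 0
    for b in bits:
        symbol = 1.0 if b else -1.0
        spread = symbol * pn_chips                  # DS spreading of one bit
        for h in range(hops_per_bit):               # FFH within the bit
            f = hop_freqs[hop_idx % len(hop_freqs)]
            hop_idx += 1
            seg = np.repeat(spread[h * chips_per_hop:(h + 1) * chips_per_hop], sps)
            t = np.arange(len(seg)) / fs
            signal.append(seg * np.exp(2j * np.pi * f * t))
    return np.concatenate(signal)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pn = rng.choice([-1.0, 1.0], size=16)           # 16 chips per bit
    x = ds_ffh_baseband(bits=[1, 0, 1], pn_chips=pn,
                        hop_freqs=[2e3, 5e3, 8e3, 11e3],
                        hops_per_bit=4, fs=64e3, chip_rate=8e3)
    print(len(x))                                    # 3 bits * 16 chips * 8 samples
```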

  17. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    PubMed

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.

  18. Compiling Planning into Scheduling: A Sketch

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Crawford, James M.; Smith, David E.

    2004-01-01

    Although there are many approaches for compiling a planning problem into a static CSP or a scheduling problem, current approaches essentially preserve the structure of the planning problem in the encoding. In this paper, we present a fundamentally different encoding that more accurately resembles a scheduling problem. We sketch the approach and argue, based on an example, that it is possible to automate the generation of such an encoding for problems with certain properties and thus produce a compiler of planning into scheduling problems. Furthermore we argue that many NASA problems exhibit these properties and that such a compiler would provide benefits to both theory and practice.

  19. The paradigm compiler: Mapping a functional language for the connection machine

    NASA Technical Reports Server (NTRS)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.

  20. Automating Visualization Service Generation with the WATT Compiler

    NASA Astrophysics Data System (ADS)

    Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.

    2007-12-01

    As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (ruby, perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the Soap 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials. Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web

  1. A Compilation of Internship Reports - 2012

    SciTech Connect

    Stegman M.; Morris, M.; Blackburn, N.

    2012-08-08

    This compilation documents all research projects undertaken by the 2012 summer Department of Energy - Workforce Development for Teachers and Scientists interns during their internship program at Brookhaven National Laboratory.

  2. Extension of Alvis compiler front-end

    NASA Astrophysics Data System (ADS)

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr

    2015-12-01

    Alvis is a formal modelling language that enables verification of distributed concurrent systems. The semantics of an Alvis model is expressed as an LTS graph (labelled transition system). Execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameters' types and operations on them. Thanks to the compiler's modular construction, many aspects of the compilation of an Alvis model may be modified. Providing new plugins for the Alvis Compiler that support languages like Java or C makes it possible to use these languages as a part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.

  3. Electronic circuits for communications systems: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The compilation of electronic circuits for communications systems is divided into thirteen basic categories, each representing an area of circuit design and application. The compilation items are moderately complex and, as such, would appeal to the applications engineer. However, the rationale for the selection criteria was tailored so that the circuits would reflect fundamental design principles and applications, with an additional requirement for simplicity whenever possible.

  4. Analysis, optimization, and implementation of a hybrid DS/FFH spread-spectrum technique for smart grid communications

    DOE PAGES

    Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; Kuruganti, Teja; Smith, Stephen F.; Djouadi, Seddik M.

    2015-03-12

    In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error performance of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.

  5. Implementation of CFD modeling in the performance assessment and optimization of secondary clarifiers: the PVSC case study.

    PubMed

    Xanthos, S; Ramalingam, K; Lipke, S; McKenna, B; Fillos, J

    2013-01-01

    The water industry and especially the wastewater treatment sector has come under steadily increasing pressure to optimize their existing and new facilities to meet their discharge limits and reduce overall cost. Gravity separation of solids, producing clarified overflow and thickened solids underflow, has long been one of the principal separation processes used in treating secondary effluent. Final settling tanks (FSTs) are a central link in the treatment process and often act as the limiting step to the maximum solids handling capacity when high throughput requirements need to be met. The Passaic Valley Sewerage Commission (PVSC) is interested in using a computational fluid dynamics (CFD) modeling approach to explore any further FST retrofit alternatives to sustain significantly higher plant influent flows, especially under wet weather conditions. In particular, there is interest in modifying and/or upgrading/optimizing the existing FSTs to handle flows in the range of 280-720 million gallons per day (MGD) (12.25-31.55 m(3)/s) in compliance with the plant's effluent discharge limits for total suspended solids (TSS). The CFD model development for this specific plant will be discussed, 2D and 3D simulation results will be presented, and initial results of a sensitivity study between two FST effluent weir structure designs will be reviewed at a flow of 550 MGD (∼24 m(3)/s) and 1,800 mg/L MLSS (mixed liquor suspended solids). The latter will provide useful information in determining whether the existing retrofit of one of the FSTs would enable compliance under wet weather conditions and warrants further consideration for implementing it in the remaining FSTs.

  6. Implementation of CFD modeling in the performance assessment and optimization of secondary clarifiers: the PVSC case study.

    PubMed

    Xanthos, S; Ramalingam, K; Lipke, S; McKenna, B; Fillos, J

    2013-01-01

    The water industry and especially the wastewater treatment sector has come under steadily increasing pressure to optimize their existing and new facilities to meet their discharge limits and reduce overall cost. Gravity separation of solids, producing clarified overflow and thickened solids underflow, has long been one of the principal separation processes used in treating secondary effluent. Final settling tanks (FSTs) are a central link in the treatment process and often act as the limiting step to the maximum solids handling capacity when high throughput requirements need to be met. The Passaic Valley Sewerage Commission (PVSC) is interested in using a computational fluid dynamics (CFD) modeling approach to explore any further FST retrofit alternatives to sustain significantly higher plant influent flows, especially under wet weather conditions. In particular, there is interest in modifying and/or upgrading/optimizing the existing FSTs to handle flows in the range of 280-720 million gallons per day (MGD) (12.25-31.55 m(3)/s) in compliance with the plant's effluent discharge limits for total suspended solids (TSS). The CFD model development for this specific plant will be discussed, 2D and 3D simulation results will be presented, and initial results of a sensitivity study between two FST effluent weir structure designs will be reviewed at a flow of 550 MGD (∼24 m(3)/s) and 1,800 mg/L MLSS (mixed liquor suspended solids). The latter will provide useful information in determining whether the existing retrofit of one of the FSTs would enable compliance under wet weather conditions and warrants further consideration for implementing it in the remaining FSTs. PMID:24225088

  7. Implementation of an optimal stomatal conductance model in the Australian Community Climate Earth Systems Simulator (ACCESS1.3b)

    NASA Astrophysics Data System (ADS)

    Kala, J.; De Kauwe, M. G.; Pitman, A. J.; Lorenz, R.; Medlyn, B. E.; Wang, Y.-P.; Lin, Y.-S.; Abramowitz, G.

    2015-07-01

    We implement a new stomatal conductance model, based on the optimality approach, within the Community Atmosphere Biosphere Land Exchange (CABLE) land surface model. Coupled land-atmosphere simulations are then performed using CABLE within the Australian Community Climate and Earth Systems Simulator (ACCESS) with prescribed sea surface temperatures. As in most land surface models, the default stomatal conductance scheme only accounts for differences in model parameters in relation to the photosynthetic pathway, but not in relation to plant functional types. The new scheme allows model parameters to vary by plant functional type, based on a global synthesis of observations of stomatal conductance under different climate regimes over a wide range of species. We show that the new scheme reduces the latent heat flux from the land surface over the boreal forests during the Northern Hemisphere summer by 0.5 to 1.0 mm day-1. This leads to warmer daily maximum and minimum temperatures by up to 1.0 °C and warmer extreme maximum temperatures by up to 1.5 °C. These changes generally improve the climate model's climatology and improve existing biases by 10-20 %. The change in the surface energy balance also affects net primary productivity and the terrestrial carbon balance. We conclude that the improvements in the global climate model which result from the new stomatal scheme, constrained by a global synthesis of experimental data, provide a valuable advance in the long-term development of the ACCESS modelling system.
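
    A minimal sketch of the optimality-based stomatal conductance form commonly associated with Medlyn et al. (2011), which we take to be the basis of the new scheme, is given below. The g1 parameter varies by plant functional type in the implementation described above; the parameter values used here are placeholders rather than the CABLE parameter set.

```python
def stomatal_conductance(A, ca, vpd, g0=0.0, g1=4.0):
    """Optimal stomatal conductance (Medlyn-type form, an assumed stand-in
    for the new scheme):  gs = g0 + 1.6 * (1 + g1 / sqrt(D)) * A / ca
    A   : net assimilation rate (umol m-2 s-1)
    ca  : CO2 concentration at the leaf surface (umol mol-1)
    vpd : vapour pressure deficit D (kPa); g1 (kPa^0.5) varies by plant
          functional type in the new scheme -- the value here is a placeholder.
    Returns stomatal conductance to water vapour (mol m-2 s-1)."""
    return g0 + 1.6 * (1.0 + g1 / vpd ** 0.5) * A / ca

if __name__ == "__main__":
    # Higher VPD closes stomata, reducing the latent heat flux as in the text.
    print(stomatal_conductance(A=12.0, ca=400.0, vpd=1.0))
    print(stomatal_conductance(A=12.0, ca=400.0, vpd=2.5))
```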

  8. System-on-chip architecture and validation for real-time transceiver optimization: APC implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Suarez, Hernan; Zhang, Yan R.

    2015-05-01

    New radar applications need to perform complex algorithms and process a large quantity of data to generate useful information for the users. This situation has motivated the search for better processing solutions that include low-power high-performance processors, efficient algorithms, and high-speed interfaces. In this work, a hardware implementation of adaptive pulse compression for real-time transceiver optimization is presented, based on a System-on-Chip architecture for Xilinx devices. This study also evaluates the performance of dedicated coprocessors as hardware accelerator units to speed up and improve the computation of computing-intensive tasks such as matrix multiplication and matrix inversion, which are essential units to solve the covariance matrix. The tradeoffs between latency and hardware utilization are also presented. Moreover, the system architecture takes advantage of the embedded processor, which is interconnected with the logic resources through the high-performance AXI buses, to perform floating-point operations, control the processing blocks, and communicate with an external PC through a customized software interface. The overall system functionality is demonstrated and tested for real-time operations using a Ku-band testbed together with a low-cost channel emulator for different types of waveforms.

  9. Compiling software for a hierarchical distributed processing system

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendents; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendent.

  10. Compiled MPI: Cost-Effective Exascale Applications Development

    SciTech Connect

    Bronevetsky, G; Quinlan, D; Lumsdaine, A; Hoefler, T

    2012-04-10

    's lifetime. It includes: (1) New set of source code annotations, inserted either manually or automatically, that will clarify the application's use of MPI to the compiler infrastructure, enabling greater accuracy where needed; (2) A compiler transformation framework that leverages these annotations to transform the original MPI source code to improve its performance and scalability; (3) Novel MPI runtime implementation techniques that will provide a rich set of functionality extensions to be used by applications that have been transformed by our compiler; and (4) A novel compiler analysis that leverages simple user annotations to automatically extract the application's communication structure and synthesize most complex code annotations.

  11. Compiling high-level languages for configurable computers: applying lessons from heterogeneous processing

    NASA Astrophysics Data System (ADS)

    Weaver, Glen E.; Weems, Charles C.; McKinley, Kathryn S.

    1996-10-01

    Configurable systems offer increased performance by providing hardware that matches the computational structure of a problem. This hardware is currently programmed with CAD tools and explicit library calls. To attain widespread acceptance, configurable computing must become transparently accessible from high-level programming languages, but the changeable nature of the target hardware presents a major challenge to traditional compiler technology. A compiler for a configurable computer should optimize the use of functions embedded in hardware and schedule hardware reconfigurations. The hurdles to be overcome in achieving this capability are similar in some ways to those facing compilation for heterogeneous systems. For example, current traditional compilers have neither an interface to accept new primitive operators nor a mechanism for applying optimizations to new operators. We are building a compiler for heterogeneous computing, called Scale, which replaces the traditional monolithic compiler architecture with a flexible framework. Scale has three main parts: translation director, compilation library, and a persistent store which holds our intermediate representation as well as other data structures. The translation director exploits the framework's flexibility by using architectural information to build a plan to direct each compilation. The translation library serves as a toolkit for use by the translation director. Our compiler intermediate representation, Score, facilitates the addition of new IR nodes by distinguishing features used in defining nodes from properties on which transformations depend. In this paper, we present an overview of the Scale architecture and its capabilities for dealing with heterogeneity, followed by a discussion of how those capabilities apply to problems in configurable computing. We then address aspects of configurable computing that are likely to require extensions to our approach and propose some extensions.

  12. Adoption and implementation of a computer-delivered HIV/STD risk-reduction intervention for African American adolescent females seeking services at county health departments: implementation optimization is urgently needed.

    PubMed

    DiClemente, Ralph J; Bradley, Erin; Davis, Teaniese L; Brown, Jennifer L; Ukuku, Mary; Sales, Jessica M; Rose, Eve S; Wingood, Gina M

    2013-06-01

    Although group-delivered HIV/sexually transmitted disease (STD) risk-reduction interventions for African American adolescent females have proven efficacious, they require significant financial and staffing resources to implement and may not be feasible in personnel- and resource-constrained public health clinics. We conducted a study assessing adoption and implementation of an evidence-based HIV/STD risk-reduction intervention that was translated from a group-delivered modality to a computer-delivered modality to facilitate use in county public health departments. Usage of the computer-delivered intervention was low across 8 participating public health clinics. Further investigation is needed to optimize implementation by identifying, understanding, and surmounting barriers that hamper timely and efficient implementation of technology-delivered HIV/STD risk-reduction interventions in county public health clinics.

  13. Extension of Alvis compiler front-end

    SciTech Connect

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr E-mail: mszpyrka@agh.edu.pl

    2015-12-31

    Alvis is a formal modelling language that enables verification of distributed concurrent systems. The semantics of an Alvis model is expressed as an LTS graph (labelled transition system). Execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameters’ types and operations on them. Thanks to the compiler’s modular construction, many aspects of the compilation of an Alvis model may be modified. Providing new plugins for the Alvis Compiler that support languages like Java or C makes it possible to use these languages as a part of Alvis instead of Haskell. The paper presents the compiler’s internal model and describes how the default specification language can be altered by new plugins.

  14. Compilation of data on elementary particles

    SciTech Connect

    Trippe, T.G.

    1984-09-01

    The most widely used data compilation in the field of elementary particle physics is the Review of Particle Properties. The origin, development and current state of this compilation are described with emphasis on the features which have contributed to its success: active involvement of particle physicists; critical evaluation and review of the data; completeness of coverage; regular distribution of reliable summaries including a pocket edition; heavy involvement of expert consultants; and international collaboration. The current state of the Review and new developments such as providing interactive access to the Review's database are described. Problems and solutions related to maintaining a strong and supportive relationship between compilation groups and the researchers who produce and use the data are discussed.

  15. A small evaluation suite for Ada compilers

    NASA Technical Reports Server (NTRS)

    Wilke, Randy; Roy, Daniel M.

    1986-01-01

    After completing a small Ada pilot project (OCC simulator) for the Multi Satellite Operations Control Center (MSOCC) at Goddard last year, the use of Ada to develop OCCs was recommended. To help MSOCC transition toward Ada, a suite of about 100 evaluation programs was developed which can be used to assess Ada compilers. These programs compare the overall quality of the compilation system, compare the relative efficiencies of the compilers and the environments in which they work, and compare the size and execution speed of generated machine code. Another goal of the benchmark software was to provide MSOCC system developers with rough timing estimates for the purpose of predicting performance of future systems written in Ada.

  16. Machine tools and fixtures: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    As part of NASA's Technology Utilization Program, a compilation was made of technological developments regarding machine tools, jigs, and fixtures that have been produced, modified, or adapted to meet requirements of the aerospace program. The compilation is divided into three sections that include: (1) a variety of machine tool applications that offer easier and more efficient production techniques; (2) methods, techniques, and hardware that aid in the setup, alignment, and control of machines and machine tools to further quality assurance in finished products; and (3) jigs, fixtures, and adapters that are ancillary to basic machine tools and aid in realizing their greatest potential.

  17. COMPILATION OF CURRENT HIGH ENERGY PHYSICS EXPERIMENTS

    SciTech Connect

    Wohl, C.G.; Kelly, R.L.; Armstrong, F.E.; Horne, C.P.; Hutchinson, M.S.; Rittenberg, A.; Trippe, T.G.; Yost, G.P.; Addis, L.; Ward, C.E.W.; Baggett, N.; Goldschmidt-Clermong, Y.; Joos, P.; Gelfand, N.; Oyanagi, Y.; Grudtsin, S.N.; Ryabov, Yu.G.

    1981-05-01

    This is the fourth edition of our compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about April 1981, and (2) had not completed taking of data by 1 January 1977. We emphasize that only approved experiments are included.

  18. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
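
    The dual-version idea can be illustrated conceptually (here in Python rather than compiler-generated code): keep an aggressively optimized and a conservatively compiled variant of the same routine, run the aggressive one, and roll back to the conservative one when the speculative optimization surfaces a new exception. All names and the failure-count heuristic below are illustrative assumptions, not the mechanism claimed in the patent.

```python
def make_guarded(aggressive, conservative):
    """Return a callable that tries the aggressively optimized version and
    falls back to the conservative version if it raises; after repeated
    failures it stops trying the aggressive path (a crude 'prediction')."""
    state = {"failures": 0}

    def guarded(*args):
        if state["failures"] < 3:                 # predictive switch-off
            try:
                return aggressive(*args)
            except Exception:
                state["failures"] += 1            # roll back this invocation
        return conservative(*args)

    return guarded

# Hypothetical example: the "aggressive" version hoists a division above a
# guard and can raise ZeroDivisionError; the conservative version checks first.
def scaled_sum_aggressive(xs, d):
    inv = 1.0 / d                                  # speculatively hoisted divide
    return sum(x * inv for x in xs)

def scaled_sum_conservative(xs, d):
    return 0.0 if d == 0 else sum(x / d for x in xs)

if __name__ == "__main__":
    f = make_guarded(scaled_sum_aggressive, scaled_sum_conservative)
    print(f([1.0, 2.0, 3.0], 2.0))   # aggressive path succeeds -> 3.0
    print(f([1.0, 2.0, 3.0], 0.0))   # aggressive path raises -> fallback 0.0
```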

  19. A ROSE-based OpenMP 3.0 Research Compiler Supporting Multiple Runtime Libraries

    SciTech Connect

    Liao, C; Quinlan, D; Panas, T

    2010-01-25

    OpenMP is a popular and evolving programming model for shared-memory platforms. It relies on compilers for optimal performance and to target modern hardware architectures. A variety of extensible and robust research compilers are key to OpenMP's sustainable success in the future. In this paper, we present our efforts to build an OpenMP 3.0 research compiler for C, C++, and Fortran; using the ROSE source-to-source compiler framework. Our goal is to support OpenMP research for ourselves and others. We have extended ROSE's internal representation to handle all of the OpenMP 3.0 constructs and facilitate their manipulation. Since OpenMP research is often complicated by the tight coupling of the compiler translations and the runtime system, we present a set of rules to define a common OpenMP runtime library (XOMP) on top of multiple runtime libraries. These rules additionally define how to build a set of translations targeting XOMP. Our work demonstrates how to reuse OpenMP translations across different runtime libraries. This work simplifies OpenMP research by decoupling the problematic dependence between the compiler translations and the runtime libraries. We present an evaluation of our work by demonstrating an analysis tool for OpenMP correctness. We also show how XOMP can be defined using both GOMP and Omni and present comparative performance results against other OpenMP compilers.

  20. Proving Correctness for Pointer Programs in a Verifying Compiler

    NASA Technical Reports Server (NTRS)

    Kulczycki, Gregory; Singh, Amrinder

    2008-01-01

    This research describes a component-based approach to proving the correctness of programs involving pointer behavior. The approach supports modular reasoning and is designed to be used within the larger context of a verifying compiler. The approach consists of two parts. When a system component requires the direct manipulation of pointer operations in its implementation, we implement it using a built-in component specifically designed to capture the functional and performance behavior of pointers. When a system component requires pointer behavior via a linked data structure, we ensure that the complexities of the pointer operations are encapsulated within the data structure and are hidden to the client component. In this way, programs that rely on pointers can be verified modularly, without requiring special rules for pointers. The ultimate objective of a verifying compiler is to prove-with as little human intervention as possible-that proposed program code is correct with respect to a full behavioral specification. Full verification for software is especially important for an agency like NASA that is routinely involved in the development of mission critical systems.

  1. Electronic switches and control circuits: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The innovations in this updated series of compilations dealing with electronic technology represents a carefully selected collection of items on electronic switches and control circuits. Most of the items are based on well-known circuit design concepts that have been simplified or refined to meet NASA's demanding requirement for reliability, simplicity, fail-safe characteristics, and the capability of withstanding environmental extremes.

  2. How to compile a curriculum vitae.

    PubMed

    Fish, J

    The previous article in this series tackled the best way to apply for a job. Increasingly, employers request a curriculum vitae as part of the application process. This article aims to assist you in compiling a c.v. by discussing its essential components and content.

  3. Compilation of information on melter modeling

    SciTech Connect

    Eyler, L.L.

    1996-03-01

    The objective of the task described in this report is to compile information on modeling capabilities for the High-Temperature Melter and the Cold Crucible Melter and issue a modeling capabilities letter report summarizing existing modeling capabilities. The report is to include strategy recommendations for future modeling efforts to support the High Level Waste (HLW) melter development.

  4. Heat Transfer and Thermodynamics: a Compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A compilation is presented for the dissemination of information on technological developments which have potential utility outside the aerospace and nuclear communities. Studies include theories and mechanical considerations in the transfer of heat and the thermodynamic properties of matter and the causes and effects of certain interactions.

  5. Safety and maintenance engineering: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A compilation is presented for the dissemination of information on technological developments which have potential utility outside the aerospace and nuclear communities. Safety of personnel engaged in the handling of hazardous materials and equipment, protection of equipment from fire, high wind, or careless handling by personnel, and techniques for the maintenance of operating equipment are reported.

  6. Multiple Literacies. A Compilation for Adult Educators.

    ERIC Educational Resources Information Center

    Hull, Glynda A.; Mikulecky, Larry; St. Clair, Ralf; Kerka, Sandra

    Recent developments have broadened the definition of literacy to multiple literacies--bodies of knowledge, skills, and social practices with which we understand, interpret, and use the symbol systems of our culture. This compilation looks at the various literacies as the application of critical abilities to several domains of importance to adult…

  7. The dc power circuits: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A compilation of reports concerning power circuits is presented for the dissemination of aerospace information to the general public as part of the NASA Technology Utilization Program. The descriptions for the electronic circuits are grouped as follows: dc power supplies, power converters, current-voltage power supply regulators, overload protection circuits, and dc constant current power supplies.

  8. Runtime support and compilation methods for user-specified data distributions

    NASA Technical Reports Server (NTRS)

    Ponnusamy, Ravi; Saltz, Joel; Choudhury, Alok; Hwang, Yuan-Shin; Fox, Geoffrey

    1993-01-01

    This paper describes two new ideas by which an HPF compiler can deal with irregular computations effectively. The first mechanism invokes a user specified mapping procedure via a set of compiler directives. The directives allow use of program arrays to describe graph connectivity, spatial location of array elements, and computational load. The second mechanism is a simple conservative method that in many cases enables a compiler to recognize that it is possible to reuse previously computed information from inspectors (e.g. communication schedules, loop iteration partitions, information that associates off-processor data copies with on-processor buffer locations). We present performance results for these mechanisms from a Fortran 90D compiler implementation.

  9. The Fault Tree Compiler (FTC): Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1989-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m-of-n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precisely (within the limits of double-precision floating-point arithmetic) to a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
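
    For illustration only (assuming statistically independent basic events and ignoring the m-of-n gate; this is not the FTC's actual solution technique), the top-event probability of a small tree with these gate types can be computed recursively:

      # Recursive top-event probability for a toy fault tree with independent
      # basic events. Gate semantics follow the types listed in the abstract.
      def prob(node, p):
          kind = node[0]
          if kind == "basic":
              return p[node[1]]
          if kind == "AND":
              result = 1.0
              for child in node[1]:
                  result *= prob(child, p)
              return result
          if kind == "OR":
              none_occur = 1.0
              for child in node[1]:
                  none_occur *= 1.0 - prob(child, p)
              return 1.0 - none_occur
          if kind == "XOR":                     # exactly one of two inputs occurs
              a, b = (prob(child, p) for child in node[1])
              return a * (1.0 - b) + b * (1.0 - a)
          if kind == "INVERT":
              return 1.0 - prob(node[1], p)
          raise ValueError(f"unknown gate type: {kind}")

      tree = ("OR", [("AND", [("basic", "pump"), ("basic", "valve")]),
                     ("XOR", [("basic", "relay"), ("basic", "sensor")])])
      probabilities = {"pump": 1e-3, "valve": 2e-3, "relay": 1e-4, "sensor": 5e-4}
      print(prob(tree, probabilities))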

  10. System, apparatus and methods to implement high-speed network analyzers

    SciTech Connect

    Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E

    2015-11-10

    Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such a dispatcher facility can also be used as a cache of policies, wherein if a policy is found, then packet manipulations associated with the policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.
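
    A minimal sketch of the kind of byte-level DFA such signatures compile down to (hypothetical signature and a textbook construction, not the patented generation method or its min-max allocation criteria):

      # Hypothetical example: build a DFA over bytes that reaches its accept
      # state when the input contains the signature b"EVIL".
      def make_dfa(pattern: bytes):
          states = len(pattern)
          alphabet = set(pattern)
          table = []
          for state in range(states):
              row = {}
              for b in alphabet:
                  # longest prefix of `pattern` that is a suffix of what has been seen
                  seen = pattern[:state] + bytes([b])
                  k = min(state + 1, states)
                  while k > 0 and seen[-k:] != pattern[:k]:
                      k -= 1
                  row[b] = k
              table.append(row)
          return table

      def matches(table, data: bytes, accept_state: int) -> bool:
          state = 0
          for b in data:
              state = table[state].get(b, 0)    # bytes outside the signature reset to 0
              if state == accept_state:
                  return True
          return False

      dfa = make_dfa(b"EVIL")
      print(matches(dfa, b"...payload EVIL payload...", len(b"EVIL")))   # True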

  11. Compilation of tRNA sequences.

    PubMed

    Sprinzl, M; Grueter, F; Spelzhaus, A; Gauss, D H

    1980-01-11

    This compilation presents in a small space the tRNA sequences so far published. The numbering of tRNAPhe from yeast is used following the rules proposed by the participants of the Cold Spring Harbor Meeting on tRNA 1978 (1,2; Fig. 1). This numbering allows comparisons with the three dimensional structure of tRNAPhe. The secondary structure of tRNAs is indicated by specific underlining. In the primary structure a nucleoside followed by a nucleoside in brackets or a modification in brackets denotes that both types of nucleosides can occupy this position. Part of a sequence in brackets designates a piece of sequence not unambiguously analyzed. Rare nucleosides are named according to the IUPAC-IUB rules (for complicated rare nucleosides and their identification see Table 1); those with lengthy names are given with the prefix x and specified in the footnotes. Footnotes are numbered according to the coordinates of the corresponding nucleoside and are indicated in the sequence by an asterisk. The references are restricted to the citation of the latest publication in those cases where several papers deal with one sequence. For additional information the reader is referred either to the original literature or to other tRNA sequence compilations (3-7). Mutant tRNAs are dealt with in a compilation by J. Celis (8). The compilers would welcome any information by the readers regarding missing material or erroneous presentation. On the basis of this numbering system computer printed compilations of tRNA sequences in a linear form and in cloverleaf form are in preparation. PMID:6986608

  12. Compilation of tRNA sequences.

    PubMed

    Gauss, D H; Grüter, F; Sprinzl, M

    1979-01-01

    This compilation presents in a small space the tRNA sequences so far published in order to enable rapid orientation and comparison. The numbering of tRNAPhe from yeast is used as has been done earlier (1) but following the rules proposed by the participants of the Cold Spring Harbor Meeting on tRNA 1978 (2) (Fig. 1). This numbering allows comparisons with the three dimensional structure of tRNAPhe, the only structure known from X-ray analysis. The secondary structure of tRNAs is indicated by specific underlining. In the primary structure a nucleoside followed by a nucleoside in brackets or a modification in brackets denotes that both types of nucleosides can occupy this position. Part of a sequence in brackets designates a piece of sequence not unambiguously analyzed. Rare nucleosides are named according to the IUPAC-IUB rules (for some more complicated rare nucleosides and their identification see Table 1); those with lengthy names are given with the prefix x and specified in the footnotes. Footnotes are numbered according to the coordinates of the corresponding nucleoside and are indicated in the sequence by an asterisk. The references are restricted to the citation of the latest publication in those cases where several papers deal with one sequence. For additional information the reader is referred either to the original literature or to other tRNA sequence compilations (3--7). Mutant tRNAs are dealt with in a separate compilation prepared by J. Celis (see below). The compilers would welcome any information by the readers regarding missing material or erroneous presentation. On the basis of this numbering system computer printed compilations of tRNA sequences in a linear form and in cloverleaf form are in preparation. PMID:424282

  13. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories. Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  14. Implementation of an optimal stomatal conductance scheme in the Australian Community Climate Earth Systems Simulator (ACCESS1.3b)

    NASA Astrophysics Data System (ADS)

    Kala, J.; De Kauwe, M. G.; Pitman, A. J.; Lorenz, R.; Medlyn, B. E.; Wang, Y.-P.; Lin, Y.-S.; Abramowitz, G.

    2015-12-01

    We implement a new stomatal conductance scheme, based on the optimality approach, within the Community Atmosphere Biosphere Land Exchange (CABLEv2.0.1) land surface model. Coupled land-atmosphere simulations are then performed using CABLEv2.0.1 within the Australian Community Climate and Earth Systems Simulator (ACCESSv1.3b) with prescribed sea surface temperatures. As in most land surface models, the default stomatal conductance scheme only accounts for differences in model parameters in relation to the photosynthetic pathway but not in relation to plant functional types. The new scheme allows model parameters to vary by plant functional type, based on a global synthesis of observations of stomatal conductance under different climate regimes over a wide range of species. We show that the new scheme reduces the latent heat flux from the land surface over the boreal forests during the Northern Hemisphere summer by 0.5-1.0 mm day-1. This leads to warmer daily maximum and minimum temperatures by up to 1.0 °C and warmer extreme maximum temperatures by up to 1.5 °C. These changes generally improve the climate model's climatology of warm extremes and improve existing biases by 10-20 %. The bias in minimum temperatures is however degraded but, overall, this is outweighed by the improvement in maximum temperatures as there is a net improvement in the diurnal temperature range in this region. In other regions such as parts of South and North America where ACCESSv1.3b has known large positive biases in both maximum and minimum temperatures (~ 5 to 10 °C), the new scheme degrades this bias by up to 1 °C. We conclude that, although several large biases remain in ACCESSv1.3b for temperature extremes, the improvements in the global climate model over large parts of the boreal forests during the Northern Hemisphere summer which result from the new stomatal scheme, constrained by a global synthesis of experimental data, provide a valuable advance in the long-term development
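
    The optimality-based scheme referred to is of the general form proposed by Medlyn and colleagues (quoted here from the general literature, not from the paper itself, so the exact formulation used in CABLE may differ):

      g_s \approx g_0 + 1.6 \left( 1 + \frac{g_1}{\sqrt{D}} \right) \frac{A}{C_a}

    where A is the net assimilation rate, D the vapour pressure deficit, C_a the ambient CO2 concentration, and g_1 the fitted slope parameter that the new scheme allows to vary by plant functional type.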

  15. A quantum logic network for implementing optimal symmetric universal and phase-covariant telecloning of a bipartite entangled state

    NASA Astrophysics Data System (ADS)

    Meng, Fanyu; Zhu, Aidong

    2008-10-01

    A quantum logic network to implement quantum telecloning is presented in this paper. The network includes two parts: the first part is used to create the telecloning channel and the second part to teleport the state. It can be used not only to implement universal telecloning for a bipartite entangled state which is completely unknown, but also to implement the phase-covariant telecloning for one that is partially known. Furthermore, the network can also be used to construct a tele-triplicator. It can easily be implemented in experiment because only single- and two-qubit operations are used in the network.

  16. Optimizing parallel reduction operations

    SciTech Connect

    Denton, S.M.

    1995-06-01

    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
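
    As a sketch of why those mathematical properties matter (Python used for illustration; this is not Sisal syntax), an associative and commutative user-defined reduction can be evaluated as a tree of independently computed partial results, while a non-associative operator forces the sequential order:

      # Illustration: a tree reduction over partial results, which is the shape
      # that lets the combine step run concurrently when the operator is
      # associative and commutative.
      from functools import reduce

      def tree_reduce(op, values, identity):
          parts = list(values) or [identity]
          while len(parts) > 1:
              # each adjacent pair could be combined by a different worker
              parts = [op(parts[i], parts[i + 1]) if i + 1 < len(parts) else parts[i]
                       for i in range(0, len(parts), 2)]
          return parts[0]

      data = list(range(1, 9))
      add = lambda a, b: a + b
      sub = lambda a, b: a - b
      print(tree_reduce(add, data, 0) == reduce(add, data))   # True: sum is associative
      print(tree_reduce(sub, data, 0), reduce(sub, data))     # differ: order matters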

  17. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    SciTech Connect

    Nataf, J.M.; Winkelmann, F.

    1992-09-01

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
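
    The flavor of such a symbolic interface can be illustrated with SymPy in Python (this stands in for, and is not, SPARK's actual computer-algebra and Lex/Yacc toolchain): an equation entered in symbolic form is solved for an unknown and turned into executable solution code.

      # Illustration only (SymPy, not SPARK's symbolic interface): a symbolic
      # heat-flow relation is solved for one unknown and compiled to a callable.
      import sympy as sp

      q, T1, T2, R = sp.symbols("q T1 T2 R")
      equation = sp.Eq(q, (T1 - T2) / R)        # equation entered in symbolic form

      solution = sp.solve(equation, T2)[0]      # symbolic "solution code" for T2
      solve_T2 = sp.lambdify((q, T1, R), solution)

      print(solution)                           # the generated expression, e.g. T1 - R*q
      print(solve_T2(50.0, 300.0, 2.0))         # 200.0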

  19. Compilation of DNA sequences of Escherichia coli

    PubMed Central

    Kröger, Manfred

    1989-01-01

    We have compiled the DNA sequence data for E.coli K12 available from the GENBANK and EMBO databases and over a period of several years independently from the literature. We have introduced all available genetic map data and have arranged the sequences accordingly. As far as possible the overlaps are deleted, and a total of 940,449 individual bp had been determined by the beginning of 1989. This corresponds to a total of 19.92% of the entire E.coli chromosome consisting of about 4,720 kbp. This number may actually be higher by some extra 2% derived from the sequence of lysogenic bacteriophage lambda and the various insertion sequences. This compilation may become available in machine-readable form from one of the international databanks in the future. PMID:2654890

  20. Compilation of requests for nuclear data

    SciTech Connect

    Not Available

    1981-03-01

    A request list for nuclear data which was produced from a computerized data file by the National Nuclear Data Center is presented. The request list is given by target nucleus (isotope) and then reaction type. The purpose of the compilation is to summarize the current needs of US Nuclear Energy programs and other applied technologies for nuclear data. Requesters are identified by laboratory, last name, and sponsoring US government agency. (WHK)

  1. 1991 OCRWM bulletin compilation and index

    SciTech Connect

    1992-05-01

    The OCRWM Bulletin is published by the Department of Energy, Office of Civilian Radioactive Waste Management, to provide current information about the national program for managing spent fuel and high-level radioactive waste. The document is a compilation of issues from the 1991 calendar year. A table of contents and an index have been provided to reference information contained in this year's Bulletins.

  2. Nuclear Data Compilation for Beta Decay Isotope

    NASA Astrophysics Data System (ADS)

    Olmsted, Susan; Kelley, John; Sheu, Grace

    2015-10-01

    The Triangle Universities Nuclear Laboratory nuclear data group works with the Nuclear Structure and Decay Data network to compile and evaluate data for use in nuclear physics research and applied technologies. Teams of data evaluators search through the literature and examine the experimental values for various nuclear structure parameters. The present activity focused on reviewing all available literature to determine the most accurate half-life values for beta unstable isotopes in the A = 3-20 range. This analysis will eventually be folded into the ENSDF (Evaluated Nuclear Structure Data File). By surveying an accumulated compilation of reference articles, we gathered all of the experimental half-life values for the beta decay nuclides. We then used the Visual Averaging Library, a data evaluation software package, to find half-life values using several different averaging techniques. Ultimately, we found recommended half-life values for most of the mentioned beta decay isotopes, and updated web pages on the TUNL webpage to reflect these evaluations. To summarize, we compiled and evaluated literature reports on experimentally determined half-lives. Our findings have been used to update information given on the TUNL Nuclear Data Evaluation group website. This was an REU project with Triangle Universities Nuclear Laboratory.
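
    One of the standard averaging techniques such evaluations rely on is an inverse-variance weighted mean; a small illustration with made-up measurements (not evaluated data) follows:

      # Inverse-variance weighted mean of independent half-life measurements.
      # The values below are made-up illustrative numbers, not evaluated data.
      def weighted_mean(values, sigmas):
          weights = [1.0 / s**2 for s in sigmas]
          mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
          sigma = (1.0 / sum(weights)) ** 0.5
          return mean, sigma

      half_lives_ms = [838.0, 842.0, 839.5]     # hypothetical measurements
      sigmas_ms     = [3.0, 5.0, 2.5]           # one-sigma uncertainties
      print(weighted_mean(half_lives_ms, sigmas_ms))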

  3. Affordable Design: A Methodology to Implement Process-Based Manufacturing Cost into the Traditional Performance-Focused Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Bao, Han P.; Samareh, J. A.

    2000-01-01

    The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.
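
    A toy sketch of the idea (hypothetical cost and performance models solved with SciPy, not the authors' framework): cost becomes the objective while a performance requirement enters as a constraint.

      # Toy sketch: minimize a made-up manufacturing-plus-assembly cost model of
      # a wing subject to a performance (lift margin) constraint, rather than
      # optimizing performance alone. All models and numbers are illustrative.
      import numpy as np
      from scipy.optimize import minimize

      def cost(x):                      # x = [span_m, part_count]
          span, parts = x
          return 1200.0 * span + 350.0 * parts     # material + assembly cost

      def lift_margin(x):               # >= 0 when the performance requirement is met
          span, parts = x
          return 0.9 * span - 0.02 * parts - 8.0

      result = minimize(cost, x0=np.array([12.0, 60.0]),
                        bounds=[(8.0, 20.0), (20.0, 120.0)],
                        constraints=[{"type": "ineq", "fun": lift_margin}])
      print(result.x, cost(result.x))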

  4. A compiler and validator for flight operations on NASA space missions

    NASA Astrophysics Data System (ADS)

    Fonte, Sergio; Politi, Romolo; Capria, Maria Teresa; Giardino, Marco; De Sanctis, Maria Cristina

    2016-07-01

    In NASA missions, the management and programming of the flight systems is performed with a specific scripting language, the SASF (Spacecraft Activity Sequence File). In order to check the syntax and grammar, a compiler is needed that flags any errors found in the sequence file produced for an instrument on board the flight system. In our experience on the Dawn mission, we developed VIRV (VIR Validator), a tool that checks the syntax and grammar of SASF, runs a simulation of VIR acquisitions, and finds any violations of the flight rules in the sequences produced. The project of a SASF compiler (SSC - Spacecraft Sequence Compiler) is now ready for a new implementation: generalization to different NASA missions. In fact, VIRV is a compiler for a dialect of SASF that includes VIR commands as part of the SASF language. Our goal is to produce a general compiler for SASF, in which every instrument has a library that can be introduced into the compiler. The SSC can analyze a SASF, produce a log of events, perform a simulation of the instrument acquisition, and check the flight rules for the selected instrument. The output of the program can be produced in GRASS GIS format and may help the operator analyze the geometry of the acquisition.

  5. Current status of the HAL/S compiler on the Modcomp classic 7870 computer

    NASA Technical Reports Server (NTRS)

    Lytle, P. J.

    1981-01-01

    A brief history of the HAL/S language, including the experience of other users of the language at the Jet Propulsion Laboratory, is presented. The current status of the compiler, as implemented on the Modcomp Classic 7870 computer, and future applications in the Deep Space Network (DSN) are discussed. The primary applications in the DSN will be in the Mark IVA network.

  6. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849
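
    A rough sketch of the proportional update at the heart of such a method (simplified, and not the authors' published MATLAB programs): each element's density in the next iteration is set proportional to its current stress, rescaled to the target amount of material.

      # Simplified proportional update: densities proportional to element stress
      # (raised to an exponent), rescaled to the target volume fraction and
      # clipped to physical bounds. Illustrative only.
      import numpy as np

      def proportional_update(stress, target_fraction, x_min=0.01, exponent=1.0):
          share = stress ** exponent
          x = share / share.sum() * target_fraction * stress.size
          return np.clip(x, x_min, 1.0)

      element_stress = np.array([5.0, 1.0, 0.5, 3.0, 0.2])   # made-up stresses
      print(proportional_update(element_stress, target_fraction=0.4))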

  7. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB.

    PubMed

    Biyikli, Emre; To, Albert C

    2015-01-01

    A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849

  9. Digital circuits for computer applications: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The innovations in this updated series of compilations dealing with electronic technology represent a carefully selected collection of digital circuits which have direct application in computer oriented systems. In general, the circuits have been selected as representative items of each section and have been included on their merits of having universal applications in digital computers and digital data processing systems. As such, they should have wide appeal to the professional engineer and scientist who encounter the fundamentals of digital techniques in their daily activities. The circuits are grouped as digital logic circuits, analog to digital converters, and counters and shift registers.

  10. Dual compile strategy for parallel heterogeneous execution.

    SciTech Connect

    Smith, Tyler Barratt; Perry, James Thomas

    2012-06-01

    The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second and custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine is working on the application while the other engine is monitoring and checking in parallel instead of both applications (and engines) performing the same work at the same time.

  11. HAL/S-360 compiler system specification

    NASA Technical Reports Server (NTRS)

    Johnson, A. E.; Newbold, P. N.; Schulenberg, C. W.; Avakian, A. E.; Varga, S.; Helmers, P. H.; Helmers, C. T., Jr.; Hotz, R. L.

    1974-01-01

    A three phase language compiler is described which produces IBM 360/370 compatible object modules and a set of simulation tables to aid in run time verification. A link edit step augments the standard OS linkage editor. A comprehensive run time system and library provide the HAL/S operating environment, error handling, a pseudo real time executive, and an extensive set of mathematical, conversion, I/O, and diagnostic routines. The specifications of the information flow and content for this system are also considered.

  12. Non-vitamin K antagonist oral anticoagulants and atrial fibrillation guidelines in practice: barriers to and strategies for optimal implementation

    PubMed Central

    Camm, A. John; Pinto, Fausto J.; Hankey, Graeme J.; Andreotti, Felicita; Hobbs, F.D. Richard

    2015-01-01

    Stroke is a leading cause of morbidity and mortality worldwide. Atrial fibrillation (AF) is an independent risk factor for stroke, increasing the risk five-fold. Strokes in patients with AF are more likely than other embolic strokes to be fatal or cause severe disability and are associated with higher healthcare costs, but they are also preventable. Current guidelines recommend that all patients with AF who are at risk of stroke should receive anticoagulation. However, despite this guidance, registry data indicate that anticoagulation is still widely underused. With a focus on the 2012 update of the European Society of Cardiology (ESC) guidelines for the management of AF, the Action for Stroke Prevention alliance writing group have identified key reasons for the suboptimal implementation of the guidelines at a global, regional, and local level, with an emphasis on access restrictions to guideline-recommended therapies. Following identification of these barriers, the group has developed an expert consensus on strategies to augment the implementation of current guidelines, including practical, educational, and access-related measures. The potential impact of healthcare quality measures for stroke prevention on guideline implementation is also explored. By providing practical guidance on how to improve implementation of the ESC guidelines, or region-specific modifications of these guidelines, the aim is to reduce the potentially devastating impact that stroke can have on patients, their families and their carers. PMID:26116685

  13. 6 CFR 9.51 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    6 CFR 9.51, Domestic Security; DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY; RESTRICTIONS UPON LOBBYING; Agency Reports; § 9.51 Semi-annual compilation. (a) The head of each agency shall collect and compile...

  14. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  15. 32 CFR 28.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    32 CFR 28.600, National Defense; ... REGULATIONS; NEW RESTRICTIONS ON LOBBYING; Agency Reports; § 28.600 Semi-annual compilation. (a) The head of each... compilations to the Secretary of the Senate and the Clerk of the House of Representatives. (h) Agencies...

  16. 6 CFR 9.51 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    6 CFR 9.51, Domestic Security; DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY; RESTRICTIONS UPON LOBBYING; Agency Reports; § 9.51 Semi-annual compilation. (a) The head of each agency shall collect and compile...

  17. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft-error-tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  18. Clover: Compiler directed lightweight soft error resilience

    SciTech Connect

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft-error-tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  19. On-line re-optimization of prostate IMRT plan for adaptive radiation therapy: A feasibility study and implementation

    NASA Astrophysics Data System (ADS)

    Thongphiew, Danthai

    Prostate cancer is a disease that affected approximately 200,000 men in the United States in 2006. Radiation therapy is a non-invasive and highly effective treatment option for this disease. The goal of radiation therapy is to deliver the prescription dose to the tumor (prostate) while sparing the surrounding healthy organs (e.g., bladder, rectum, and femoral heads). One limitation of radiation therapy is organ position and shape variation from day to day. These variations can be as large as half an inch. The conventional solution to this problem is to include margins around the target when planning the treatment. The development of image-guided radiation therapy allows in-room correction, which potentially eliminates patient setup error; however, the uncertainty due to organ deformation remains. Re-optimizing a plan takes about half an hour, which makes online correction infeasible. A technique for online re-optimization of intensity-modulated radiation therapy is developed for adaptive radiation therapy of prostate cancer. The technique can correct for both organ position and shape changes within a few minutes. The proposed technique involves (1) 3D on-board imaging of the daily anatomy, (2) registering the daily images with the original planning CT images and mapping the original dose distribution to the daily anatomy, and (3) real-time re-optimization of the plan. Finally, the leaf sequences are calculated for treatment delivery. The feasibility of this online adaptive radiation therapy scheme was evaluated with clinical cases. The results demonstrate that it is feasible to perform online re-optimization of the original plan when a large position or shape variation occurs.

  20. The Union3 Supernova Ia Compilation

    NASA Astrophysics Data System (ADS)

    Rubin, David; Aldering, Greg Scott; Amanullah, Rahman; Barbary, Kyle H.; Bruce, Adam; Chappell, Greta; Currie, Miles; Dawson, Kyle S.; Deustua, Susana E.; Doi, Mamoru; Fakhouri, Hannah; Fruchter, Andrew S.; Gibbons, Rachel A.; Goobar, Ariel; Hsiao, Eric; Huang, Xiaosheng; Ihara, Yutaka; Kim, Alex G.; Knop, Robert A.; Kowalski, Marek; Krechmer, Evan; Lidman, Chris; Linder, Eric; Meyers, Joshua; Morokuma, Tomoki; Nordin, Jakob; Perlmutter, Saul; Ripoche, Pascal; Ruiz-Lapuente, Pilar; Rykoff, Eli S.; Saunders, Clare; Spadafora, Anthony L.; Suzuki, Nao; Takanashi, Naohiro; Yasuda, Naoki; Supernova Cosmology Project

    2016-01-01

    High-redshift supernovae observed with the Hubble Space Telescope (HST) are crucial for constraining any time variation in dark energy. In a forthcoming paper (Rubin+, in prep), we will present a cosmological analysis incorporating existing supernovae with improved calibrations, and new HST-observed supernovae (six above z=1). We combine these data with current literature data, and fit them using SALT2-4 to create the Union3 Supernova compilation. We build on the Unified Inference for Type Ia cosmologY (UNITY) framework (Rubin+ 2015b), incorporating non-linear light-curve width and color relations, a model for unexplained dispersion, an outlier model, and a redshift-dependent host-mass correction.

  1. An Innovative Compiler For Programming And Designing Real-Time Signal Processors

    NASA Astrophysics Data System (ADS)

    Petruschka, Orni; Torng, H. C.

    1986-04-01

    Real-time signal processing tasks impose stringent requirements on computing systems. One approach to satisfying these demands is to employ intelligently interconnected multiple arithmetic units, such as multipliers, adders, logic units and others, to implement concurrent computations. Two problems emerge: 1) Programming: Programs with wide instruction words have to be developed to exercise the multiple arithmetic units fully and efficiently to meet the real-time processing loads; 2) Design: With a given set of real-time signal processing tasks, design procedures are needed to specify multiple arithmetic units and their interconnection schemes for the processor. This paper presents a compiler which provides a solution to the programming and design problems. The compiler that has been developed translates blocks of RISC-like instructions into programs of wide microinstructions; each of these microinstructions initiates many concurrently executable operations. In so doing, we seek to achieve the maximum utilization of execution resources and to complete processing tasks in minimum time. The compiler is based on an innovative "Dispatch Stack" concept, and has been applied to program Floating Point Systems (FPS) processors; the resulting programs for computing inner products and other signal processing tasks are as good as those obtained by laborious hand-compilation. We will then show that the compiler developed for programming can be used advantageously to design real-time signal processing systems with multiple arithmetic units.
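
    A toy sketch of the packing problem such a compiler solves (a greedy list-scheduling illustration; the paper's "Dispatch Stack" algorithm is not reproduced here): fill each wide microinstruction with operations whose inputs are already available and whose functional unit is still free in that word.

      # Toy list scheduling: pack RISC-like operations into wide instruction
      # words, one operation per functional unit per word, respecting data
      # dependences. Illustrates the goal, not the Dispatch Stack itself.
      ops = [  # (name, functional unit, inputs, output)
          ("load a", "mem", [],         "a"),
          ("load b", "mem", [],         "b"),
          ("mul t",  "mul", ["a", "b"], "t"),
          ("load c", "mem", [],         "c"),
          ("add s",  "add", ["t", "c"], "s"),
      ]

      available, scheduled, words = set(), set(), []
      while len(scheduled) < len(ops):
          word, used_units, produced = [], set(), []
          for name, unit, inputs, output in ops:
              ready = name not in scheduled and all(v in available for v in inputs)
              if ready and unit not in used_units:      # one op per unit per word
                  word.append(name)
                  used_units.add(unit)
                  scheduled.add(name)
                  produced.append(output)
          available.update(produced)                    # results visible next word
          words.append(word)

      for i, word in enumerate(words):
          print(f"word {i}: {word}")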

  2. Roughness parameter optimization using Land Parameter Retrieval Model and Soil Moisture Deficit: Implementation using SMOS brightness temperatures

    NASA Astrophysics Data System (ADS)

    Srivastava, Prashant K.; O'Neill, Peggy; Han, Dawei; Rico-Ramirez, Miguel A.; Petropoulos, George P.; Islam, Tanvir; Gupta, Manika

    2015-04-01

    Roughness parameterization is necessary for nearly all soil moisture retrieval algorithms such as single or dual channel algorithms, L-band Microwave Emission of the Biosphere (LMEB), Land Parameter Retrieval Model (LPRM), etc. At present, roughness parameters can be obtained either by field experiments, although obtaining field measurements all over the globe is nearly impossible, or by using a land-cover-based look-up table, which is not always accurate everywhere for individual fields. From a catalogue of models available in the technical literature, the LPRM model was used here because of its robust nature and applicability to a wide range of frequencies. LPRM needs several parameters for soil moisture retrieval; in particular, roughness parameters (h and Q) are important for calculating reflectivity. In this study, the h and Q parameters are optimized using the soil moisture deficit (SMD) estimated from the probability distributed model (PDM) and Soil Moisture and Ocean Salinity (SMOS) brightness temperatures, following the Levenberg-Marquardt (LM) algorithm, over the Brue catchment in the southwest of England, U.K. The catchment is predominantly pasture land with moderate topography. The PDM-based SMD is used as it is calibrated and validated using locally available ground-based information, suitable for large-scale areas such as catchments. The optimal h and Q parameters are determined by maximizing the correlation between SMD and LPRM-retrieved soil moisture. After optimization, the values of h and Q are found to be 0.32 and 0.15, respectively. To test the usefulness of the estimated roughness parameters, a separate set of SMOS data is used for soil moisture retrieval with the LPRM model and the optimized roughness parameters. The overall analysis indicates a satisfactory result when compared against the SMD information. This work provides quantitative values of roughness parameters suitable for large-scale applications. The
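
    A schematic of the calibration loop (hypothetical forward model and synthetic data; SciPy's Levenberg-Marquardt least-squares routine stands in for the actual LPRM/SMD implementation): h and Q are adjusted so the retrieved series best matches the reference series.

      # Schematic only: fit roughness parameters (h, Q) so a stand-in retrieval
      # model matches a reference soil-moisture series. The forward model and
      # data are synthetic, not LPRM or the Brue catchment observations.
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(0)
      tb = rng.uniform(240.0, 280.0, size=50)              # brightness temperatures (K)
      reference_sm = 0.45 - 0.001 * (tb - 240.0) + rng.normal(0.0, 0.005, 50)

      def retrieved_sm(params, tb):
          h, q = params
          roughness = np.exp(-h) * (1.0 - q)               # stand-in roughness relation
          return 0.45 - 0.001 * roughness * (tb - 240.0) / 0.9

      def residuals(params):
          return retrieved_sm(params, tb) - reference_sm

      fit = least_squares(residuals, x0=[0.5, 0.1], method="lm")
      print("optimized h, Q:", fit.x)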

  3. Compilation of requests for nuclear data

    SciTech Connect

    Weston, L.W.; Larson, D.C.

    1993-02-01

    This compilation represents the current needs for nuclear data measurements and evaluations as expressed by interested fission and fusion reactor designers, medical users of nuclear data, nuclear data evaluators, CSEWG members and other interested parties. The requests and justifications are reviewed by the Data Request and Status Subcommittee of CSEWG as well as most of the general CSEWG membership. The basic format and computer programs for the Request List were produced by the National Nuclear Data Center (NNDC) at Brookhaven National Laboratory. The NNDC produced the Request List for many years. The Request List is compiled from a computerized data file. Each request has a unique isotope, reaction type, requestor and identifying number. The first two digits of the identifying number are the year in which the request was initiated. Every effort has been made to restrict the notations to those used in common nuclear physics textbooks. Most requests are for individual isotopes as are most ENDF evaluations, however, there are some requests for elemental measurements. Each request gives a priority rating which will be discussed in Section 2, the neutron energy range for which the request is made, the accuracy requested in terms of one standard deviation, and the requested energy resolution in terms of one standard deviation. Also given is the requestor with the comments which were furnished with the request. The addresses and telephone numbers of the requestors are given in Appendix 1. ENDF evaluators who may be contacted concerning evaluations are given in Appendix 2. Experimentalists contemplating making one of the requested measurements are encouraged to contact both the requestor and evaluator who may provide valuable information. This is a working document in that it will change with time. New requests or comments may be submitted to the editors or a regular CSEWG member at any time.

  4. ROSE: Compiler Support for Object-Oriented Frameworks

    SciTech Connect

    Quinlan, D.

    1999-11-17

    ROSE is a preprocessor generation tool for the support of compile-time performance optimizations in Overture. The Overture framework is an object-oriented environment for solving partial differential equations in two and three space dimensions. It is a collection of C++ libraries that enables the use of finite difference and finite volume methods at a level that hides the details of the associated data structures. Overture can be used to solve problems in complicated, moving geometries using the method of overlapping grids. It has support for grid generation, difference operators, boundary conditions, database access and graphics. In this paper we briefly present Overture, and discuss our approach toward performance within Overture and the A++P++ array class abstractions upon which Overture depends; this work represents some of the newest work in Overture. The results we present show that the abstractions represented within Overture and the A++P++ array class library can be used to obtain application codes with performance equivalent to that of optimized C and Fortran 77. ROSE, the preprocessor generation tool, is general in its application to any object-oriented framework or application and is not specific to Overture.

  5. Development and implementation of optimal filtering in a Virtex FPGA for the upgrade of the ATLAS LAr calorimeter readout

    NASA Astrophysics Data System (ADS)

    Stärz, S.

    2012-12-01

    In the context of upgraded read-out systems for the Liquid-Argon Calorimeters of the ATLAS detector, modified front-end, back-end, and trigger electronics are foreseen for operation in the high-luminosity phase of the LHC. Accuracy and efficiency of the energy measurement and reliability of pile-up suppression are essential when processing the detector raw data in real time. Several digital filter algorithms are investigated for their performance in extracting energies from incoming detector signals and for the needs of the future trigger system. The implementation of fast, resource-economizing, parameter-driven filter algorithms in a modern Virtex FPGA is presented.

  6. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.

  7. Bio-inspired feedback-circuit implementation of discrete, free energy optimizing, winner-take-all computations.

    PubMed

    Genewein, Tim; Braun, Daniel A

    2016-06-01

    Bayesian inference and bounded rational decision-making require the accumulation of evidence or utility, respectively, to transform a prior belief or strategy into a posterior probability distribution over hypotheses or actions. Crucially, this process cannot be simply realized by independent integrators, since the different hypotheses and actions also compete with each other. In continuous time, this competitive integration process can be described by a special case of the replicator equation. Here we investigate simple analog electric circuits that implement the underlying differential equation under the constraint that we only permit a limited set of building blocks that we regard as biologically interpretable, such as capacitors, resistors, voltage-dependent conductances and voltage- or current-controlled current and voltage sources. The appeal of these circuits is that they intrinsically perform normalization without requiring an explicit divisive normalization. However, even in idealized simulations, we find that these circuits are very sensitive to internal noise as they accumulate error over time. We discuss in how far neural circuits could implement these operations that might provide a generic competitive principle underlying both perception and action. PMID:27023096
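
    For reference, the competitive integration process described above corresponds to a special case of the replicator equation, whose standard form is (generic notation, not quoted from the paper):

      \dot{x}_i = x_i \Bigl( f_i(x) - \sum_j x_j \, f_j(x) \Bigr)

    where x_i is the probability (or share) assigned to hypothesis or action i and f_i its accumulated evidence or utility; subtracting the population average is what yields the intrinsic normalization mentioned above.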

  8. An optimal scheme for numerical evaluation of Eshelby tensors and its implementation in a MATLAB package for simulating the motion of viscous ellipsoids in slow flows

    NASA Astrophysics Data System (ADS)

    Qu, Mengmeng; Jiang, Dazhi; Lu, Lucy X.

    2016-11-01

    To address the multiscale deformation and fabric development in Earth's ductile lithosphere, micromechanics-based self-consistent homogenization is commonly used to obtain macroscale rheological properties from properties of constituent elements. The homogenization is heavily based on the solution of an Eshelby viscous inclusion in a linear viscous medium and the extension of the solution to nonlinear viscous materials. The homogenization requires repeated numerical evaluation of Eshelby tensors for constituent elements and becomes ever more computationally challenging as the elements are deformed to more elongate or flattened shapes. In this paper, we develop an optimal scheme for evaluating Eshelby tensors, using a combination of a product Gaussian quadrature and the Lebedev quadrature. We first establish, through numerical experiments, an empirical relationship between the inclusion shape and the computational time it takes to evaluate its Eshelby tensors. We then use the relationship to develop an optimal scheme for selecting the most efficient quadrature to obtain the Eshelby tensors. The optimal scheme is applicable to general homogenizations. In this paper, it is implemented in a MATLAB package for investigating the evolution of solitary rigid or deformable inclusions and the development of shape preferred orientations in multi-inclusion systems during deformation. The MATLAB package, upgrading an earlier effort written in MathCad, can be downloaded online.

  9. Implementation and Optimization of DFT-D/COSab with Respect to Basis Set and Functional: Application to Polar Processes of Furfural Derivatives in Solution.

    PubMed

    Peverati, Roberto; Baldridge, Kim K

    2009-10-13

    The implementation, optimization, and performance of DFT-D, including the effects of solvation, have been tested on applications of polar processes in solution, where dispersion and hydrogen bonding are known to be involved. Solvent effects are included using our ab initio continuum solvation strategy, COSab, a conductor-like continuum solvation model, modified for ab initio in the quantum chemistry program GAMESS. Structure and properties are investigated across various functionals to evaluate their ability to properly model dispersion and solvation effects. The commonly used S22 set with accurate interaction energies of organic complexes has been used for parametrization studies of dispersion parameters and relevant solvation parameters. Dunning's correlation-consistent basis sets, cc-pVnZ (n = D, T), are used in the optimization, together with the Grimme B97-D exchange-correlation functional. Both water (ε = 78.4) and ether (ε = 4.33) environments are considered. Optimized semiempirical dispersion correction parameters and solvent extent radii are proposed for several functionals. We find that special parametrization of the semiempirical dispersion correction when used together in the DFT-D/COSab approach is not necessary. The global performance is quite acceptable in terms of chemical accuracy and suggests that this approach is a reliable as well as economical method for evaluation of solvent effects in systems with dispersive interactions. The resulting theory is applied to a group of push-pull pyrrole systems to illustrate the effects of donor/acceptor and solvation on their conformational and energetic properties.

  10. Soil erosion evaluation in a rapidly urbanizing city (Shenzhen, China) and implementation of spatial land-use optimization.

    PubMed

    Zhang, Wenting; Huang, Bo

    2015-03-01

    Soil erosion has become a pressing environmental concern worldwide. In addition to such natural factors as slope, rainfall, vegetation cover, and soil characteristics, land-use changes, a direct reflection of human activities, also exert a huge influence on soil erosion. In recent years, such dramatic changes, in conjunction with the increasing trend toward urbanization worldwide, have led to severe soil erosion. Against this backdrop, geographic information system-assisted research on the effects of land-use changes on soil erosion has become increasingly common, producing a number of meaningful results. In most of these studies, however, even when the spatial and temporal effects of land-use changes are evaluated, knowledge of how the resulting data can be used to formulate sound land-use plans is generally lacking. At the same time, land-use decisions are driven by social, environmental, and economic factors and thus cannot be made solely with the goal of controlling soil erosion. To address these issues, a genetic algorithm (GA)-based multi-objective optimization (MOO) approach has been proposed to find a balance among various land-use objectives, including soil erosion control, to achieve sound land-use plans. GA-based MOO offers decision-makers and land-use planners a set of Pareto-optimal solutions from which to choose. Shenzhen, a fast-developing Chinese city that has long suffered from severe soil erosion, is selected as a case study area to validate the efficacy of the GA-based MOO approach for controlling soil erosion. Based on the MOO results, three multiple land-use objectives are proposed for Shenzhen: (1) to minimize soil erosion, (2) to minimize the incompatibility of neighboring land-use types, and (3) to minimize the cost of changes to the status quo. In addition to these land-use objectives, several constraints are also defined: (1) the provision of sufficient built-up land to accommodate a growing population, (2) restrictions on the development of

  11. An integrated runtime and compile-time approach for parallelizing structured and block structured applications

    NASA Technical Reports Server (NTRS)

    Agrawal, Gagan; Sussman, Alan; Saltz, Joel

    1993-01-01

    Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion was described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented. The library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results to demonstrate the efficacy of our approach are presented. A multiblock Navier-Stokes solver template and a multigrid code were experimented with. Our experimental results show that our primitives have low runtime communication overheads. Further, the compiler-parallelized codes perform within 20 percent of the code parallelized by manually inserting calls to the runtime library.
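    A minimal sketch of the runtime-library idea described above is given below: an inspector analyses index information once to build a communication schedule, and an executor reuses that schedule for every subsequent data exchange. The function names and data layout are hypothetical illustrations, not the actual library interface.

```python
# Inspector/executor sketch for gathering off-processor mesh values (illustrative only).
from collections import defaultdict

def inspector(global_indices_needed, ownership):
    """Build a communication schedule: for each owning 'process',
    the list of global indices it must send us."""
    schedule = defaultdict(list)
    for g in global_indices_needed:
        schedule[ownership[g]].append(g)
    return dict(schedule)

def executor(schedule, distributed_data):
    """Gather off-processor values using a previously built schedule.
    Here 'distributed_data' maps process id -> {global index: value}."""
    ghost = {}
    for owner, indices in schedule.items():
        for g in indices:
            ghost[g] = distributed_data[owner][g]   # stands in for a message receive
    return ghost

# Toy usage: 8 mesh points block-distributed over 2 "processes".
ownership = {g: g // 4 for g in range(8)}
data = {0: {g: 10 * g for g in range(4)}, 1: {g: 10 * g for g in range(4, 8)}}
sched = inspector([2, 5, 6], ownership)     # built once by the runtime/compiler
print(executor(sched, data))                # reused for each time step
```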

  12. OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study

    SciTech Connect

    Lee, Seyong; Vetter, Jeffrey S

    2014-01-01

    Directive-based, accelerator programming models such as OpenACC have arisen as an alternative solution to program emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity in the SHC systems incurs several challenges in terms of portability and productivity. This paper presents an open-sourced OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in the directive-based accelerator programming. This paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, this paper demonstrates the efficacy of OpenARC as a research framework for directive-based programming study, by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of the unified memory and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while it serves as a high-level research framework.

  13. Compiling quantum algorithms for architectures with multi-qubit gates

    NASA Astrophysics Data System (ADS)

    Martinez, Esteban A.; Monz, Thomas; Nigg, Daniel; Schindler, Philipp; Blatt, Rainer

    2016-06-01

    In recent years, small-scale quantum information processors have been realized in multiple physical architectures. These systems provide a universal set of gates that allow one to implement any given unitary operation. The decomposition of a particular algorithm into a sequence of these available gates is not unique. Thus, the fidelity of the implementation of an algorithm can be increased by choosing an optimized decomposition into available gates. Here, we present a method to find such a decomposition, where a small-scale ion trap quantum information processor is used as an example. We demonstrate a numerical optimization protocol that minimizes the number of required multi-qubit entangling gates by design. Furthermore, we adapt the method for state preparation, and quantum algorithms including in-sequence measurements.

  14. Implementation and performance of the pseudoknot problem in sisal

    SciTech Connect

    Feo, J.; Ivory, M.

    1994-12-01

    The Pseudoknot Problem is an application from molecular biology that computes all possible three-dimensional structures of one section of a nucleic acid molecule. The problem spans two important application domains: it includes a deterministic, backtracking search algorithm and floating-point intensive computations. Recently, the application has been used to compare and to contrast functional languages. In this paper, we describe a sequential and parallel implementation of the problem in Sisal. We present a method for writing recursive, floating-point intensive applications in Sisal that preserves performance and parallelism. We discuss compiler optimizations, runtime execution, and performance on several multiprocessor systems.

  15. Language abstractions for low level optimization techniques

    NASA Astrophysics Data System (ADS)

    Dévai, Gergely; Gera, Zoltán; Kelemen, Zoltán

    2012-09-01

    In the case of performance-critical applications, programmers are often forced to write code at a low abstraction level. This leads to programs that are hard to develop and maintain because the program text is cluttered with low-level optimization tricks and is far from the algorithm it implements. Even though compilers are smart nowadays and provide the user with many automatically applied optimizations, practice shows that in some cases it is hopeless to optimize the program automatically without the programmer's knowledge. A complementary approach is to let the programmer fine-tune the program while providing language features that make the optimization easier. These are language abstractions that make optimization techniques explicit without adding too much syntactic noise to the program text. This paper presents such language abstractions for two well-known optimizations: bitvectors and SIMD (Single Instruction Multiple Data). The language features are implemented in the embedded domain-specific language Feldspar, which is specifically tailored for digital signal processing applications. While we present these language elements as part of Feldspar, the ideas behind them are general enough to be applied in other language definition projects as well.
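    As a hedged illustration (not Feldspar code, which is a Haskell-embedded DSL), the sketch below shows the flavour of such an abstraction: the user-visible interface stays close to the algorithm, while the implementation packs bits into words so that whole-word operations stand in for per-bit loops.

```python
# A small bitvector abstraction: readable interface, word-level implementation.
class BitVector:
    __slots__ = ("n", "bits")

    def __init__(self, n):
        self.n = n
        self.bits = 0                      # Python ints act as arbitrary-width words

    def set(self, i):
        self.bits |= 1 << i

    def test(self, i):
        return (self.bits >> i) & 1 == 1

    def count(self):
        return bin(self.bits).count("1")   # population count on the packed word

    def __and__(self, other):
        out = BitVector(self.n)
        out.bits = self.bits & other.bits  # one word-level op instead of a per-bit loop
        return out

a, b = BitVector(64), BitVector(64)
for i in (1, 3, 5, 7):
    a.set(i)
for i in (3, 7, 9):
    b.set(i)
print((a & b).count())   # -> 2, with no explicit per-bit loop in user code
```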

  16. Qcompiler: Quantum compilation with the CSD method

    NASA Astrophysics Data System (ADS)

    Chen, Y. G.; Wang, J. B.

    2013-03-01

    In this paper, we present a general quantum computation compiler, which maps any given quantum algorithm to a quantum circuit consisting of a sequential set of elementary quantum logic gates based on recursive cosine-sine decomposition. The resulting quantum circuit diagram is provided by directly linking the package output written in LaTeX to Qcircuit.tex. We illustrate the use of the Qcompiler package through various examples with full details of the derived quantum circuits. Besides its accuracy, generality and simplicity, Qcompiler produces quantum circuits with a significantly reduced number of gates when the systems under study have a high degree of symmetry. Catalogue identifier: AENX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 4321 No. of bytes in distributed program, including test data, etc.: 50943 Distribution format: tar.gz Programming language: Fortran. Computer: Any computer with a Fortran compiler. Operating system: Linux, Mac OS X 10.5 (and later). RAM: Depends on the size of the unitary matrix to be decomposed Classification: 4.15. External routines: Lapack (http://www.netlib.org/lapack/) Nature of problem: Decompose any given unitary operation into a quantum circuit with only elementary quantum logic gates. Solution method: This package decomposes an arbitrary unitary matrix, by applying the CSD algorithm recursively, into a series of block-diagonal matrices, which can then be readily associated with elementary quantum gates to form a quantum circuit. Restrictions: The only limitation is imposed by the available memory on the user's computer. Additional comments: This package is applicable for any arbitrary unitary matrices, both real and complex. If the
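    One level of the recursive cosine-sine decomposition at the heart of Qcompiler can be sketched in a few lines. The example below is not the Fortran package itself; it assumes SciPy's scipy.linalg.cossin (available in SciPy 1.5 and later) as the CSD kernel.

```python
# One level of cosine-sine decomposition of a two-qubit unitary (illustrative only).
import numpy as np
from scipy.linalg import cossin          # assumed available, SciPy >= 1.5
from scipy.stats import unitary_group

U = unitary_group.rvs(4, random_state=0)     # a random 4x4 (two-qubit) unitary
u, cs, vdh = cossin(U, p=2, q=2)             # U = u @ cs @ vdh
assert np.allclose(u @ cs @ vdh, U)

# u and vdh are block-diagonal (here two 2x2 blocks, i.e. single-qubit gates);
# cs is the cosine-sine middle factor, which maps to controlled rotations.
# A compiler of this kind applies the step recursively to the blocks until
# only elementary gates remain.
print(np.round(np.abs(cs), 3))
```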

  17. Implementations of the optimal multigrid algorithm for the cell-centered finite difference on equilateral triangular grids

    SciTech Connect

    Ewing, R.E.; Saevareid, O.; Shen, J.

    1994-12-31

    A multigrid algorithm for the cell-centered finite difference on equilateral triangular grids for solving second-order elliptic problems is proposed. This finite difference is a four-point star stencil in a two-dimensional domain and a five-point star stencil in a three-dimensional domain. According to the authors' analysis, the advantages of this finite difference are that it is an O(h{sup 2})-order accurate numerical scheme for both the solution and derivatives on equilateral triangular grids, the structure of the scheme is perhaps the simplest, and its corresponding multigrid algorithm is easily constructed with an optimal convergence rate. They are interested in relaxation of the equilateral triangular grid condition to certain general triangular grids and the application of this multigrid algorithm as a numerically reasonable preconditioner for the lowest-order Raviart-Thomas mixed triangular finite element method. Numerical test results are presented to demonstrate their analytical results and to investigate the applications of this multigrid algorithm on general triangular grids.
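    The multigrid machinery itself is standard. As a hedged, much-simplified illustration (a 1-D Poisson problem on a uniform grid, not the paper's cell-centered scheme on equilateral triangular grids), a recursive V-cycle looks like this:

```python
# Two-grid/V-cycle correction scheme for u'' = f on a uniform 1-D grid (illustrative only).
import numpy as np

def relax(u, f, h, sweeps=3):
    for _ in range(sweeps):                      # weighted Jacobi smoothing
        u[1:-1] = 0.8 * 0.5 * (u[:-2] + u[2:] - h * h * f[1:-1]) + 0.2 * u[1:-1]
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    u = relax(u, f, h)
    if u.size <= 3:                              # coarsest grid: smoothing is enough
        return u
    r = residual(u, f, h)
    rc = r[::2].copy()                           # restriction by injection
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])         # linear interpolation back to fine grid
    return relax(u + e, f, h)

n = 65                                           # grid size of the form 2**k + 1
x = np.linspace(0.0, 1.0, n)
f = np.sin(np.pi * x)                            # u'' = f with u(0) = u(1) = 0
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, x[1] - x[0])
print(np.abs(residual(u, f, x[1] - x[0])).max())
```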

  18. A new algorithm for computing theory prime implicates compilations

    SciTech Connect

    Marquis, P.; Sadaoui, S.

    1996-12-31

    We present a new algorithm (called TPI/BDD) for computing the theory prime implicates compilation of a knowledge base {Sigma}. In contrast to many compilation algorithms, TPI/BDD does not require the prime implicates of {Sigma} to be generated. Since their number can easily be exponential in the size of {Sigma}, TPI/BDD can save a lot of computation. Thanks to TPI/BDD, we can now conceive of compiling knowledge bases that were impossible to compile before.

  19. Ground Operations Aerospace Language (GOAL). Volume 2: Compiler

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The principal elements and functions of the Ground Operations Aerospace Language (GOAL) compiler are presented. The technique used to transcribe the syntax diagrams into machine processable format for use by the parsing routines is described. An explanation of the parsing technique used to process GOAL source statements is included. The compiler diagnostics and the output reports generated during a GOAL compilation are explained. A description of the GOAL program package is provided.

  20. 5 CFR 9701.524 - Compilation and publication of data.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Labor-Management Relations § 9701.524 Compilation...

  1. 5 CFR 9701.524 - Compilation and publication of data.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Labor-Management Relations § 9701.524 Compilation...

  2. Model compilation for real-time planning and diagnosis with feedback

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2005-01-01

    This paper describes MEXEC, an implemented micro executive that compiles a device model that can have feedback into a structure for subsequent evaluation. This system computes both the most likely current device mode from n sets of sensor measurements and the n-1 step reconfiguration plan that is most likely to result in reaching a target mode - if such a plan exists. A user tunes the system by increasing n to improve system capability at the cost of real-time performance.

  3. [The tasks of the Federal Service for the implementation of the legislation of the Russian Federation aimed at optimizing the compliance and enforcement].

    PubMed

    Onishchenko, G G

    2011-01-01

    The priority of the Federal Service's activity in protecting consumer rights and human welfare is to execute a number of recently endorsed basic documents directed towards observing the legislation on the optimization of supervision and control activities. An urgent task for the agencies and bodies of the Russian Inspectorate for the Protection of Consumer Rights and Human Welfare is to implement measures on the vocational guidance and pre-university preparation of schoolchildren and on assisting their entrance into the medical prophylaxis faculties of higher medical educational establishments within the framework of targeted enrollment. Interaction with civil society has recently been intensified, which is required to ensure the transparency of the Service's work, to enhance its efficiency, and to optimize supervision. Public reception rooms have been set up, whose function is to receive citizens, representatives of legal persons, and individual employers concerning matters of sanitary-and-epidemiological well-being, protection of the rights of consumers and the consumer market, and the activities of the agencies and bodies of the Russian Inspectorate for the Protection of Consumer Rights and Human Welfare. Improving the activities of the agencies and bodies of the Service will require a set of complex tasks to be accomplished in the immediate future. The end result will depend on how competently, responsibly, and cooperatively the appropriate measures are carried out in all the agencies of the Federal Service. PMID:21513054

  4. Cross-Compiler for Modeling Space-Flight Systems

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    Ripples is a computer program that makes it possible to specify arbitrarily complex space-flight systems in an easy-to-learn, high-level programming language and to have the specification automatically translated into LibSim, which is a text-based computing language in which such simulations are implemented. LibSim is a very powerful simulation language, but learning it takes considerable time, and it requires that models of systems and their components be described at a very low level of abstraction. To construct a model in LibSim, it is necessary to go through a time-consuming process that includes modeling each subsystem, including defining its fault-injection states, input and output conditions, and the topology of its connections to other subsystems. Ripples makes it possible to describe the same models at a much higher level of abstraction, thereby enabling the user to build models faster and with fewer errors. Ripples can be executed on a variety of computers and operating systems, and can be supplied in either source code or binary form. It must be run in conjunction with a Lisp compiler.

  5. 41 CFR 105-69.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Semi-annual compilation. 105-69.600 Section 105-69.600 Public Contracts and Property Management Federal Property Management... Administration 69-NEW RESTRICTIONS ON LOBBYING Agency Reports § 105-69.600 Semi-annual compilation. (a) The...

  6. 40 CFR 34.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34.600 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE NEW RESTRICTIONS ON LOBBYING Agency Reports § 34.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B) and, on May 31 and November...

  7. 45 CFR 604.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION NEW RESTRICTIONS ON LOBBYING Agency Reports § 604.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see Appendix B) and, on May 31 and November 30 of each...

  8. 45 CFR 604.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION NEW RESTRICTIONS ON LOBBYING Agency Reports § 604.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see Appendix B) and, on May 31 and November 30 of each...

  9. 45 CFR 604.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION NEW RESTRICTIONS ON LOBBYING Agency Reports § 604.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B) and, on May 31 and November 30 of each...

  10. 45 CFR 604.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION NEW RESTRICTIONS ON LOBBYING Agency Reports § 604.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see Appendix B) and, on May 31 and November 30 of each...

  11. 45 CFR 604.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION NEW RESTRICTIONS ON LOBBYING Agency Reports § 604.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see Appendix B) and, on May 31 and November 30 of each...

  12. 38 CFR 45.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Semi-annual compilation. 45.600 Section 45.600 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS (CONTINUED) NEW RESTRICTIONS ON LOBBYING Agency Reports § 45.600 Semi-annual compilation. (a) The head...

  13. 7 CFR 1.21 - Compilation of new records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false Compilation of new records. 1.21 Section 1.21 Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Official Records § 1.21 Compilation of new records. Nothing in 5 U.S.C. 552 or this subpart requires that any agency create a...

  14. 7 CFR 1.21 - Compilation of new records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 1 2013-01-01 2013-01-01 false Compilation of new records. 1.21 Section 1.21 Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Official Records § 1.21 Compilation of new records. Nothing in 5 U.S.C. 552 or this subpart requires that any agency create a...

  15. 10 CFR 1045.46 - Classification by association or compilation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Classification by association or compilation. 1045.46 Section 1045.46 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND....46 Classification by association or compilation. (a) If two pieces of unclassified information...

  16. 10 CFR 1045.46 - Classification by association or compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Classification by association or compilation. 1045.46 Section 1045.46 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND....46 Classification by association or compilation. (a) If two pieces of unclassified information...

  17. Applying Loop Optimizations to Object-oriented Abstractions Through General Classification of Array Semantics

    SciTech Connect

    Yi, Q; Quinlan, D

    2004-03-05

    Optimizing compilers have a long history of applying loop transformations to C and Fortran scientific applications. However, such optimizations are rare in compilers for object-oriented languages such as C++ or Java, where loops operating on user-defined types are left unoptimized due to their unknown semantics. Our goal is to reduce the performance penalty of using high-level object-oriented abstractions. We propose an approach that allows explicit communication between programmers and compilers. We have extended the traditional Fortran loop optimizations with an open interface. Through this interface, we have developed techniques to automatically recognize and optimize user-defined array abstractions. In addition, we have developed an adapted constant-propagation algorithm to automatically propagate properties of abstractions. We have implemented these techniques in a C++ source-to-source translator and have applied them to optimize several kernels written using an array-class library. Our experimental results show that using our approach, applications using high-level abstractions can achieve comparable, and in some cases superior, performance to that achieved by efficient low-level hand-written codes.
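    To make the payoff concrete, here is a hedged sketch (in Python, not the paper's C++ translator) of what recognizing array-abstraction semantics buys: once the optimizer may assume elementwise, alias-free semantics, an expression like A = B + C * D can be evaluated in a single fused loop with no temporary arrays.

```python
# Lazy/fused evaluation of an array-class expression (illustrative only).
import numpy as np

class Lazy:
    """Deferred elementwise expression over arrays; assumes pure, alias-free semantics."""
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args
    def __add__(self, other):
        return Lazy(lambda a, b: a + b, self, other)
    def __mul__(self, other):
        return Lazy(lambda a, b: a * b, self, other)
    def evaluate(self, i):
        vals = [a.evaluate(i) if isinstance(a, Lazy) else a for a in self.args]
        return self.fn(*vals)

class Array(Lazy):
    def __init__(self, data):
        self.data = np.asarray(data, dtype=float)
    def evaluate(self, i):
        return self.data[i]
    def assign(self, expr):
        for i in range(len(self.data)):      # one fused loop, no temporaries for B + C*D
            self.data[i] = expr.evaluate(i)

B, C, D = (Array(np.arange(4.0)) for _ in range(3))
A = Array(np.zeros(4))
A.assign(B + C * D)                          # computes A[i] = B[i] + C[i]*D[i] elementwise
print(A.data)
```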

  18. A special purpose silicon compiler for designing supercomputing VLSI systems

    NASA Technical Reports Server (NTRS)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  19. ROSE: The Design of a General Tool for the Independent Optimization of Object-Oriented Frameworks

    SciTech Connect

    Davis, K.; Philip, B.; Quinlan, D.

    1999-05-18

    ROSE represents a programmable preprocessor for the highly aggressive optimization of C++ object-oriented frameworks. A fundamental feature of ROSE is that it preserves the semantics, the implicit meaning, of the object-oriented framework's abstractions throughout the optimization process, permitting the framework's abstractions to be recognized and optimizations to capitalize upon the added value of the framework's true meaning. In contrast, a C++ compiler only sees the semantics of the C++ language and thus is severely limited in what optimizations it can introduce. The use of the semantics of the framework's abstractions avoids program analysis that would be incapable of recapturing the framework's full semantics from those of the C++ language implementation of the application or framework. For example, no level of program analysis within the C++ compiler could be expected to recognize the use of adaptive mesh refinement and introduce optimizations based upon such information. Since ROSE is programmable, additional specialized program analysis is possible, which then complements the semantics of the framework's abstractions. Enabling an optimization mechanism to use the high level semantics of the framework's abstractions together with a programmable level of program analysis (e.g. dependence analysis), at the level of the framework's abstractions, allows for the design of high performance object-oriented frameworks with uniquely tailored sophisticated optimizations far beyond the limits of contemporary serial FORTRAN 77, C or C++ language compiler technology. In short, faster, more highly aggressive optimizations are possible. The resulting optimizations are literally driven by the framework's definition of its abstractions. Since the abstractions within a framework are of third party design the optimizations are similarly of third party design, specifically independent of the compiler and the applications that use the framework. The interface to ROSE is

  20. NACRE: A European Compilation of Reaction rates for Astrophysics

    SciTech Connect

    Angulo, Carmen

    1999-11-16

    We report on the program and results of the NACRE network (Nuclear Astrophysics Compilation of REaction rates). We have compiled low-energy cross section data for 86 charged-particle induced reactions involving light (1{<=}Z{<=}14) nuclei. The corresponding Maxwellian-averaged thermonuclear reaction rates are calculated in the temperature range from 10{sup 6} K to 10{sup 10} K. The web site http://pntpm.ulb.ac.be/nacre.htm, including the cross section data base and the reaction rates, allows users to browse electronically all the information on the reactions studied in this compilation.

  1. NACRE: A European Compilation of Reaction Rates for Astrophysics

    SciTech Connect

    Carmen Angulo

    1999-12-31

    We report on the program and results of the NACRE network (Nuclear Astrophysics Compilation of Reaction rates). We have compiled low-energy cross section data for 86 charged-particle induced reactions involving light (1 {<=} Z {<=} 14) nuclei. The corresponding Maxwellian-averaged thermonuclear reaction rates are calculated in the temperature range from 10{sup 6} K to 10{sup 10} K. The web site, http://pntpm.ulb.ac.be/nacre.htm, including the cross section data base and the reaction rates, allows users to browse electronically all the information on the reactions studied in this compilation.

  2. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The functional requirements to be met by the HAL/S-FC compiler, and the hardware and software compatibilities between the compiler system and the environment in which it operates are defined. Associated runtime facilities and the interface with the Software Development Laboratory are specified. The construction of the HAL/S-FC system as functionally separate units and the interfaces between those units is described. An overview of the system's capabilities is presented and the hardware/operating system requirements are specified. The computer-dependent aspects of the HAL/S-FC are also specified. Compiler directives are included.

  3. A Multiprocessor SoC Architecture with Efficient Communication Infrastructure and Advanced Compiler Support for Easy Application Development

    NASA Astrophysics Data System (ADS)

    Urfianto, Mohammad Zalfany; Isshiki, Tsuyoshi; Khan, Arif Ullah; Li, Dongju; Kunieda, Hiroaki

    This paper presents a Multiprocessor System-on-Chip (MPSoC) architecture used as an execution platform for the new C-language based MPSoC design framework we are currently developing. The MPSoC architecture is based on an existing SoC platform with a commercial RISC core acting as the host CPU. We extend the existing SoC with a multiprocessor-array block that is used as the main engine to run parallel applications modeled in our design framework. Utilizing several optimizations provided by our compiler, efficient communication between processing elements with minimum overhead is implemented. A host-interface is designed to integrate the existing RISC core with the multiprocessor-array. The experimental results show that an efficacious integration is achieved, proving that the designed communication module can be used to efficiently incorporate off-the-shelf processors as processing elements for MPSoC architectures designed using our framework.

  4. Optimizing HIV pre-exposure prophylaxis implementation among men who have sex with men in a large urban centre: a dynamic modelling study

    PubMed Central

    MacFadden, Derek R; Tan, Darrell H; Mishra, Sharmistha

    2016-01-01

    Introduction Once-daily tenofovir/emtricitabine-based pre-exposure prophylaxis (PrEP) can reduce HIV acquisition in men who have sex with men (MSM) by 44% in the iPrEx trial, and by up to 99% with high adherence. We examined the potential population-level impact and cost-effectiveness of different PrEP implementation strategies. Methods We developed a dynamic, stochastic compartmental model of HIV transmission among the estimated 57,400 MSM in Toronto, Canada. Parameterization was performed using local epidemiologic data. Strategies examined included (1) uniform PrEP delivery versus targeting the highest risk decile of MSM (with varying coverage proportions); (2) increasing PrEP efficacy as a surrogate of adherence (44% to 99%); and (3) varying HIV test frequency (once monthly to once yearly). Outcomes included HIV infections averted and the incremental cost ($CAD) per incremental quality-adjusted-life-year (QALY) gained over 20 years. Results Use of PrEP among all HIV-uninfected MSM at 25, 50, 75 and 100% coverage prevented 1970, 3427, 4317, and 4581 infections, respectively, with cost/QALY increasing from $500,000 to $800,000 CAD. Targeted PrEP for the highest risk MSM at 25, 50, 75 and 100% coverage prevented 1166, 2154, 2816, and 3012 infections, respectively, with cost/QALY ranging from $35,000 to $70,000 CAD. Maximizing PrEP efficacy, in a scenario of 25% coverage of high-risk MSM with PrEP, prevented 1540 infections with a cost/QALY of $15,000 CAD. HIV testing alone (every 3 months) averted 898 infections with a cost savings of $4,000 CAD per QALY. Conclusions The optimal implementation strategy for PrEP over the next 20 years at this urban centre is to target high-risk MSM and to maximize efficacy by supporting PrEP adherence. A large health benefit of PrEP implementation could come from engaging undiagnosed HIV-infected individuals into care.
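    The modelling approach can be illustrated with a hedged, deterministic toy version. The authors' model is stochastic and far more detailed; every parameter value below, including the initial prevalence and transmission rate, is an illustrative assumption rather than Toronto data.

```python
# Toy deterministic compartmental model showing how PrEP coverage and efficacy enter.
def simulate(years=20, dt=0.01, N=57400, beta=0.08, prevalence=0.15,
             coverage=0.25, efficacy=0.44):
    S = N * (1 - prevalence) * (1 - coverage)   # susceptible, not on PrEP
    P = N * (1 - prevalence) * coverage         # susceptible, on PrEP
    I = N * prevalence                          # infected
    infections = 0.0
    for _ in range(int(years / dt)):
        force = beta * I / N                    # force of infection
        new_S = force * S * dt
        new_P = force * (1 - efficacy) * P * dt # PrEP reduces acquisition by 'efficacy'
        S, P, I = S - new_S, P - new_P, I + new_S + new_P
        infections += new_S + new_P
    return infections

base = simulate(coverage=0.0)
print(f"infections averted by 25% coverage: {base - simulate(coverage=0.25):.0f}")
```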

  5. Implementing the Global Plan to Stop TB, 2011–2015 – Optimizing Allocations and the Global Fund’s Contribution: A Scenario Projections Study

    PubMed Central

    Korenromp, Eline L.; Glaziou, Philippe; Fitzpatrick, Christopher; Floyd, Katherine; Hosseini, Mehran; Raviglione, Mario; Atun, Rifat; Williams, Brian

    2012-01-01

    control programs to implement a more optimal investment approach focusing on highest-impact populations and interventions. PMID:22719954

  6. Optimizing HIV pre-exposure prophylaxis implementation among men who have sex with men in a large urban centre: a dynamic modelling study

    PubMed Central

    MacFadden, Derek R; Tan, Darrell H; Mishra, Sharmistha

    2016-01-01

    Introduction Once-daily tenofovir/emtricitabine-based pre-exposure prophylaxis (PrEP) can reduce HIV acquisition in men who have sex with men (MSM) by 44% in the iPrEx trial, and by up to 99% with high adherence. We examined the potential population-level impact and cost-effectiveness of different PrEP implementation strategies. Methods We developed a dynamic, stochastic compartmental model of HIV transmission among the estimated 57,400 MSM in Toronto, Canada. Parameterization was performed using local epidemiologic data. Strategies examined included (1) uniform PrEP delivery versus targeting the highest risk decile of MSM (with varying coverage proportions); (2) increasing PrEP efficacy as a surrogate of adherence (44% to 99%); and (3) varying HIV test frequency (once monthly to once yearly). Outcomes included HIV infections averted and the incremental cost ($CAD) per incremental quality-adjusted-life-year (QALY) gained over 20 years. Results Use of PrEP among all HIV-uninfected MSM at 25, 50, 75 and 100% coverage prevented 1970, 3427, 4317, and 4581 infections, respectively, with cost/QALY increasing from $500,000 to $800,000 CAD. Targeted PrEP for the highest risk MSM at 25, 50, 75 and 100% coverage prevented 1166, 2154, 2816, and 3012 infections, respectively, with cost/QALY ranging from $35,000 to $70,000 CAD. Maximizing PrEP efficacy, in a scenario of 25% coverage of high-risk MSM with PrEP, prevented 1540 infections with a cost/QALY of $15,000 CAD. HIV testing alone (every 3 months) averted 898 infections with a cost savings of $4,000 CAD per QALY. Conclusions The optimal implementation strategy for PrEP over the next 20 years at this urban centre is to target high-risk MSM and to maximize efficacy by supporting PrEP adherence. A large health benefit of PrEP implementation could come from engaging undiagnosed HIV-infected individuals into care. PMID:27665722

  7. Optimized pre-processing input plane GPU implementation of an optical face recognition technique using a segmented phase only composite filter

    NASA Astrophysics Data System (ADS)

    Ouerhani, Y.; Jridi, M.; Alfalou, A.; Brosseau, C.

    2013-02-01

    The key outcome of this work is to propose and validate a fast and robust correlation scheme for face recognition applications. The robustness of this fast correlator is ensured by an adapted pre-processing step for the target image allowing us to minimize the impact of its (possibly noisy and varying) amplitude spectrum information. A segmented composite filter is optimized, at the very outset of its fabrication, by weighting each reference with a specific coefficient which is proportional to the occurrence probability. A hierarchical classification procedure (called two-level decision tree learning approach) is also used in order to speed up the recognition procedure. Experimental results validating our approach are obtained with a prototype based on a GPU implementation of the all-numerical correlator using the NVIDIA GeForce 8400GS processor and test samples from the Pointing Head Pose Image Database (PHPID); e.g. true recognition rates larger than 85% with a run time lower than 120 ms have been obtained using fixed images from the PHPID, and true recognition rates larger than 77% using a real video sequence at 2 frames per second when the database contains 100 persons. Moreover, it has been shown experimentally that the use of a more recent GPU processor such as the NVIDIA Quadro FX 770M can perform recognition at 4 frames per second with the same size of database.
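    A hedged sketch of the underlying correlation step (pure NumPy, not the authors' GPU or segmented composite-filter implementation) shows where the phase-oriented pre-processing enters: the reference's amplitude spectrum is discarded so that the correlation peak is driven by phase information.

```python
# Phase-only-filter correlation sketch; images and the peak metric are toy stand-ins.
import numpy as np

def phase_only_correlation(target, reference):
    T = np.fft.fft2(target)
    R = np.fft.fft2(reference)
    pof = np.conj(R) / (np.abs(R) + 1e-12)      # keep only the phase of the reference
    plane = np.fft.ifft2(T * pof)               # correlation plane
    return np.abs(plane).max() / (np.abs(plane).sum() + 1e-12)  # crude peak-sharpness score

rng = np.random.default_rng(1)
face, other = rng.random((64, 64)), rng.random((64, 64))
# The matching pair should yield the sharper (larger) correlation peak.
print(phase_only_correlation(face, face), phase_only_correlation(other, face))
```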

  8. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper are either computational algorithms or procedural implementations developed in Matlab that simulate agent-based models; they are run on clusters, which serve as a high-performance computing platform for parallel execution. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  9. Worldwide dataset of glacier thickness observations compiled by literature review

    NASA Astrophysics Data System (ADS)

    Naegeli, Kathrin; Gärtner-Roer, Isabelle; Hagg, Wilfried; Huss, Matthias; Machguth, Horst; Zemp, Michael

    2013-04-01

    The volume of glaciers and ice caps is still poorly known, although it is expected to contribute significantly to changes in the hydrological cycle and global sea level rise over the next decades. Studies presenting worldwide estimations are mostly based on modelling and scaling approaches and are usually calibrated with only a few measurements. Direct investigations of glacier thickness, a crucial parameter for ice volume calculations, are rather sparse but nevertheless available from all around the globe. This study presents a worldwide compilation of glacier thickness observation data. Literature review revealed mean and/or maximum thickness values from 442 glaciers and ice caps, elevation band information and point measurements for 10 and 14 glaciers, respectively. The result is a dataset containing glaciers and ice caps with areas ranging from smaller than 0.1 km2 (e.g. Pizolgletscher, Switzerland) to larger than 10'000 km2 (e.g. Agassiz Ice Cap, Canada), mean ice thicknesses between 4 m (Blaueis, Germany) and 550 m (Aletschgletscher, Switzerland), and 64 values for ice masses with entries from different years. Thickness values are derived from various observation methods and cover a survey period between 1923 and 2011. A major advantage of the database is the included metadata, giving information about specific fields, such as the mean thickness value of Aletschgletscher, which is only valid for the investigation area Konkordiaplatz and not over the entire glacier. The relatively small collection of records in the two more detailed database levels reflects the poor availability of such data. For modelling purposes, where ice thicknesses are implemented to derive ice volumes, this database provides essential information about glacier and ice cap characteristics and enables the comparison between various approaches. However, the dataset offers a great variety of locations, thicknesses and surface areas of glaciers and ice caps and can therefore help to compare

  10. Solid state technology: A compilation. [on semiconductor devices

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A compilation, covering selected solid state devices developed and integrated into systems by NASA to improve performance, is presented. Data are also given on device shielding in hostile radiation environments.

  11. Materials: A compilation. [considering metallurgy, polymers, insulation, and coatings

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technical information is provided for the properties and fabrication of metals and alloys, as well as for polymeric materials, such as lubricants, coatings, and insulation. Available patent information is included in the compilation.

  12. Compilation of giant electric dipole resonances built on excited states

    SciTech Connect

    Schiller, A. . E-mail: schiller@nscl.msu.edu; Thoennessen, M.

    2007-07-15

    Giant Electric Dipole Resonance (GDR) parameters for {gamma} decay to excited states with finite spin and temperature are compiled. Over 100 original works have been reviewed and from some 70 of them, about 350 sets of hot GDR parameters for different isotopes, excitation energies, and spin regions have been extracted. All parameter sets have been brought onto a common footing by calculating the equivalent Lorentzian parameters. The current compilation is complementary to an earlier compilation by Samuel S. Dietrich and Barry L. Berman (At. Data Nucl. Data Tables 38 (1988) 199-338) on ground-state photo-neutron and photo-absorption cross sections and their Lorentzian parameters. A comparison of the two may help shed light on the evolution of GDR parameters with temperature and spin. The present compilation is current as of July 2006.

  13. Rehosting and retargeting an Ada compiler: A design study

    NASA Technical Reports Server (NTRS)

    Robinson, Ray

    1986-01-01

    The goal of this study was to develop a plan for rehosting and retargeting the Air Force Armaments Lab Ada cross compiler. This compiler was validated in September 1985 using ACVC 1.6, is written in Pascal, is hosted on a CDC Cyber 170, and is targeted to an embedded Zilog Z8002. The study was performed to determine the feasibility, cost, time, and tasks required to rehost the compiler on a DEC VAX 11/78x and retarget it to an embedded U.S. Navy AN/UYK-44 computer. Major tasks identified were rehosting the compiler front end, rewriting the back end (code generator), translating the run time environment from Z8002 assembly language to AN/UYK-44 assembly language, and developing a library manager.

  14. An implementation of SISAL for distributed-memory architectures

    SciTech Connect

    Beard, P.C.

    1995-06-01

    This thesis describes a new implementation of the implicitly parallel functional programming language SISAL, for massively parallel processor supercomputers. The Optimizing SISAL Compiler (OSC), developed at Lawrence Livermore National Laboratory, was originally designed for shared-memory multiprocessor machines and has been adapted to distributed-memory architectures. OSC has been relatively portable between shared-memory architectures, because they are architecturally similar, and OSC generates portable C code. However, distributed-memory architectures are not standardized -- each has a different programming model. Distributed-memory SISAL depends on a layer of software that provides a portable, distributed, shared-memory abstraction. This layer is provided by Split-C, a dialect of the C programming language developed at U.C. Berkeley, which has demonstrated good performance on distributed-memory architectures. Split-C provides important capabilities for good performance: support for program-specific distributed data structures, and split-phase memory operations. Distributed data structures help achieve good memory locality, while split-phase memory operations help tolerate the longer communication latencies inherent in distributed-memory architectures. The distributed-memory SISAL compiler and run-time system take advantage of these capabilities. The result of these efforts is a compiler that runs identically on the Thinking Machines Connection Machine (CM-5) and the Meiko Computing Surface (CS-2).

  15. Compiler writing system detail design specification. Volume 2: Component specification

    NASA Technical Reports Server (NTRS)

    Arthur, W. J.

    1974-01-01

    The logic modules and data structures composing the Meta-translator module are described. This module is responsible for the actual generation of the executable language compiler as a function of the input Meta-language. Machine definitions are also processed and are placed as encoded data on the compiler library data file. The transformation of intermediate language into target language object text is described.

  16. On search guide phrase compilation for recommending home medical products.

    PubMed

    Luo, Gang

    2010-01-01

    To help people find desired home medical products (HMPs), we developed an intelligent personal health record (iPHR) system that can automatically recommend HMPs based on users' health issues. Using nursing knowledge, we pre-compile a set of "search guide" phrases that provides semantic translation from words describing health issues to their underlying medical meanings. Then iPHR automatically generates queries from those phrases and uses them and a search engine to retrieve HMPs. To avoid missing relevant HMPs during retrieval, the compiled search guide phrases need to be comprehensive. Such compilation is a challenging task because nursing knowledge updates frequently and contains numerous details scattered in many sources. This paper presents a semi-automatic tool facilitating such compilation. Our idea is to formulate the phrase compilation task as a multi-label classification problem. For each newly obtained search guide phrase, we first use nursing knowledge and information retrieval techniques to identify a small set of potentially relevant classes with corresponding hints. Then a nurse makes the final decision on assigning this phrase to proper classes based on those hints. We demonstrate the effectiveness of our techniques by compiling search guide phrases from an occupational therapy textbook.
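    The candidate-generation step described above can be sketched as follows. The classes, seed phrases, and token-overlap scoring are hypothetical stand-ins (the paper relies on nursing knowledge and information-retrieval techniques rather than this toy similarity), and the final multi-label assignment is still left to a nurse.

```python
# Toy candidate-class ranking for a new search-guide phrase (illustrative only).
CLASSES = {                                   # hypothetical class -> seed phrases
    "mobility aids": ["difficulty walking", "unsteady gait", "uses walker"],
    "bathing safety": ["hard to get in bathtub", "fear of slipping in shower"],
    "low vision": ["cannot read small print", "poor eyesight"],
}

def tokens(text):
    return set(text.lower().split())

def rank_candidates(phrase, top_k=2):
    scores = []
    for cls, seeds in CLASSES.items():
        overlap = max(len(tokens(phrase) & tokens(s)) for s in seeds)
        scores.append((overlap, cls))
    # keep the top classes with any overlap; a nurse reviews these hints
    return [cls for overlap, cls in sorted(scores, reverse=True)[:top_k] if overlap > 0]

print(rank_candidates("trouble walking to the shower"))
```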

  17. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  18. LISP on a reduced-instruction-set processor: characterization and optimization

    SciTech Connect

    Steenkiste, P.A.

    1987-01-01

    As a result of advances in compiler technology, almost all programs are written in high-level languages, and the effectiveness of a computer architecture is determined by its suitability as a compiler target. This central role of compilers in the use of computers has led computer architects to study the implementation of high-level language programs. This thesis presents measurements for a set of Portable Standard LISP programs that were executed on a reduced-instruction-set processor (MIPS-X), examining what instructions LISP uses at the assembly level, and how much time is spent on the most-common primitive LISP operations. This information makes it possible to determine which operations are time-critical and to evaluate how well architectural features address these operations. Based on these data, three areas for optimization are proposed: the implementation of the tags used for run-time type checking, reducing the cost of procedure calls, and interprocedural register allocation. A number of methods to implement tags, both with and without hardware support, are presented, and the performance of the different implementation strategies is compared.

  19. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.

    PubMed

    Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures that locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.

  20. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.

    PubMed

    Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures that locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures. PMID:26529746

  1. Real-time robot deliberation by compilation and monitoring of anytime algorithms

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo

    1994-01-01

    Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.

  2. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    PubMed

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as Intel Fortran compiler (ifc/efc) 7.1 and PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options of the compiler enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler has performed better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is further accelerated by 2.6% compared to that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than the IA32, which is consistent with the SpecFPrate2000 benchmarks.

  3. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    PubMed

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as Intel Fortran compiler (ifc/efc) 7.1 and PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options of the compiler enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler has performed better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the efficiency in the CL2 mode is further accelerated by 2.6% compared to that of the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than the IA32, which is consistent with the SpecFPrate2000 benchmarks. PMID:15032545

  4. SPUR Lisp: Design and implementation

    SciTech Connect

    Zorn, B.; Hilfinger, P.; Ho, K.; Larus, J.

    1987-01-01

    This document describes SPUR Lisp, a Common Lisp superset designed and implemented at U.C. Berkeley. Function calling sequences, system data structures, memory management policies, etc. are all described in detail. Reasons for the more important decisions are given. SPUR Lisp is implemented on BARB, a software simulator for SPUR hardware. In addition to describing the design of SPUR Lisp, this paper provides documentation for the BARB simulator, the SPUR Lisp compiler, and associated tools.

  5. A compilation of global bio-optical in situ data for ocean-colour satellite applications

    NASA Astrophysics Data System (ADS)

    Valente, André; Sathyendranath, Shubha; Brotas, Vanda; Groom, Steve; Grant, Michael; Taberner, Malcolm; Antoine, David; Arnone, Robert; Balch, William M.; Barker, Kathryn; Barlow, Ray; Bélanger, Simon; Berthon, Jean-François; Beşiktepe, Şükrü; Brando, Vittorio; Canuti, Elisabetta; Chavez, Francisco; Claustre, Hervé; Crout, Richard; Frouin, Robert; García-Soto, Carlos; Gibb, Stuart W.; Gould, Richard; Hooker, Stanford; Kahru, Mati; Klein, Holger; Kratzer, Susanne; Loisel, Hubert; McKee, David; Mitchell, Brian G.; Moisan, Tiffany; Muller-Karger, Frank; O'Dowd, Leonie; Ondrusek, Michael; Poulton, Alex J.; Repecaud, Michel; Smyth, Timothy; Sosik, Heidi M.; Twardowski, Michael; Voss, Kenneth; Werdell, Jeremy; Wernand, Marcel; Zibordi, Giuseppe

    2016-06-01

    A compiled set of in situ data is important to evaluate the quality of ocean-colour satellite-data records. Here we describe the data compiled for the validation of the ocean-colour products from the ESA Ocean Colour Climate Change Initiative (OC-CCI). The data were acquired from several sources (MOBY, BOUSSOLE, AERONET-OC, SeaBASS, NOMAD, MERMAID, AMT, ICES, HOT, GeP&CO), span between 1997 and 2012, and have a global distribution. Observations of the following variables were compiled: spectral remote-sensing reflectances, concentrations of chlorophyll a, spectral inherent optical properties and spectral diffuse attenuation coefficients. The data were from multi-project archives acquired via the open internet services or from individual projects, acquired directly from data providers. Methodologies were implemented for homogenisation, quality control and merging of all data. No changes were made to the original data, other than averaging of observations that were close in time and space, elimination of some points after quality control and conversion to a standard format. The final result is a merged table designed for validation of satellite-derived ocean-colour products and available in text format. Metadata of each in situ measurement (original source, cruise or experiment, principal investigator) were preserved throughout the work and made available in the final table. Using all the data in a validation exercise increases the number of matchups and enhances the representativeness of different marine regimes. By making available the metadata, it is also possible to analyse each set of data separately. The compiled data are available at doi:10.1594/PANGAEA.854832 (Valente et al., 2015).
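
    The merging step described above (averaging observations that are close in time and space) can be sketched as follows; this is an illustrative Python outline with hypothetical field names and co-location thresholds, not the actual OC-CCI processing code.

      from datetime import timedelta

      # Illustrative sketch (not the OC-CCI processing code): group in situ records
      # that fall within assumed co-location thresholds and average them.
      TIME_WINDOW = timedelta(hours=1)   # assumed temporal threshold
      DIST_DEG = 0.01                    # assumed spatial threshold, degrees

      def close(a, b):
          # True if two records are close in both time and space.
          return (abs(a["time"] - b["time"]) <= TIME_WINDOW
                  and abs(a["lat"] - b["lat"]) <= DIST_DEG
                  and abs(a["lon"] - b["lon"]) <= DIST_DEG)

      def merge(records):
          # Greedily group neighbouring records and average a measured variable.
          merged, used = [], set()
          for i, ref in enumerate(records):
              if i in used:
                  continue
              idx = [i] + [j for j in range(i + 1, len(records))
                           if j not in used and close(ref, records[j])]
              used.update(idx)
              group = [records[j] for j in idx]
              merged.append({
                  "time": ref["time"],
                  "lat": sum(g["lat"] for g in group) / len(group),
                  "lon": sum(g["lon"] for g in group) / len(group),
                  "chl_a": sum(g["chl_a"] for g in group) / len(group),  # hypothetical field name
              })
          return merged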

  6. Compilation of a standardised international folate database for EPIC.

    PubMed

    Nicolas, Geneviève; Witthöft, Cornelia M; Vignat, Jérôme; Knaze, Viktoria; Huybrechts, Inge; Roe, Mark; Finglas, Paul; Slimani, Nadia

    2016-02-15

    This paper describes the methodology applied for compiling an "international end-user" folate database. This work benefits from the unique dataset offered by the European Prospective Investigation into Cancer and Nutrition (EPIC) (N=520,000 subjects in 23 centres). Compilation was done in four steps: (1) identify folate-free foods, then find folate values for (2) folate-rich foods common across EPIC countries, (3) the remaining "common" foods, and (4) "country-specific" foods. Compiled folate values were concurrently standardised in terms of unit, mode of expression and chemical analysis, using information in national food composition tables (FCT). 43-70% of total folate values were documented as measured by microbiological assay. Foods reported in EPIC were either matched directly to FCT foods, or treated as recipes or weighted averages. This work has produced the first standardised folate dataset in Europe, which was used to calculate folate intakes in EPIC; a prerequisite to study the relation between folate intake and diseases.

  7. Compilation of current high-energy-physics experiments

    SciTech Connect

    Wohl, C.G.; Kelly, R.L.; Armstrong, F.E.

    1980-04-01

    This is the third edition of a compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and ten participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about January 1980, and (2) had not completed taking data by 1 January 1976.

  8. Automated Vulnerability Detection for Compiled Smart Grid Software

    SciTech Connect

    Prowell, Stacy J; Pleszkoch, Mark G; Sayre, Kirk D; Linger, Richard C

    2012-01-01

    While testing performed with proper experimental controls can provide scientifically quantifiable evidence that software does not contain unintentional vulnerabilities (bugs), it is insufficient to show that no intentional vulnerabilities exist, and impractical to certify devices for the expected long lifetimes of use. For both of these needs, rigorous analysis of the software itself is essential. Automated software behavior computation applies rigorous static software analysis methods based on function extraction (FX) to compiled software to detect vulnerabilities, intentional or unintentional, and to verify critical functionality. This analysis is based on the compiled firmware, takes into account machine precision, and does not rely on heuristics or approximations early in the analysis.

  9. [Medical translations and practical compilations: a necessary coincidence?].

    PubMed

    Boucher, Caroline; Dumas, Geneviève

    2012-01-01

    Fourteenth- and fifteenth-century medicine is characterised by a trickle-down effect which led to an increasing dissemination of knowledge in the vernacular. In this context, translations and compilations appear to be two similar endeavours aiming to provide access to contents pertaining to the particulars of medical practice. Nowhere is this phenomenon seen more clearly than in vernacular manuscripts on surgery. Our study proposes to compare for the first time two corpora of manuscripts of surgical compilations, in Middle French and Middle English respectively, in order to discuss form and matter in this type of book production.

  10. A DRAM compiler algorithm for high performance VLSI embedded memories

    NASA Technical Reports Server (NTRS)

    Eldin, A. G.

    1992-01-01

    In many applications, the limited density of the embedded SRAM does not allow integrating the memory on the same chip with other logic and functional blocks. In such cases, the embedded DRAM provides the optimum combination of very high density, low power, and high performance. For ASICs to take full advantage of this design strategy, an efficient and highly reliable DRAM compiler must be used. The embedded DRAM architecture, cell, and peripheral circuit design considerations and the algorithm of a high performance memory compiler are presented.

  11. 5 CFR 9701.524 - Compilation and publication of data.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 3 2014-01-01 2014-01-01 false Compilation and publication of data. 9701... publication of data. (a) The HSLRB must maintain a file of its proceedings and copies of all available... actions taken under § 9701.519. (b) All files maintained under paragraph (a) of this section must be...

  12. 5 CFR 9701.524 - Compilation and publication of data.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Compilation and publication of data. 9701... publication of data. (a) The HSLRB must maintain a file of its proceedings and copies of all available... actions taken under § 9701.519. (b) All files maintained under paragraph (a) of this section must be...

  13. 44 CFR 18.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Semi-annual compilation. 18.600 Section 18.600 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL NEW RESTRICTIONS ON LOBBYING Agency Reports § 18.600...

  14. 12 CFR 203.4 - Compilation of loan data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... required to report data on small business, small farm, and community development lending under CRA. Banks... 12 Banks and Banking 2 2010-01-01 2010-01-01 false Compilation of loan data. 203.4 Section 203.4... updates). (12)(i) For originated loans subject to Regulation Z, 12 CFR part 226, the difference...

  15. 14 CFR 1203.302 - Combination, interrelation or compilation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Combination, interrelation or compilation. 1203.302 Section 1203.302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302...

  16. 14 CFR 1203.302 - Combination, interrelation or compilation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Combination, interrelation or compilation. 1203.302 Section 1203.302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302...

  17. 14 CFR 1203.302 - Combination, interrelation or compilation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Combination, interrelation or compilation. 1203.302 Section 1203.302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302...

  18. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING... Secretary of the Senate and the Clerk of the House of Representatives a report containing a compilation of... Intelligence of the Senate, the Permanent Select Committee on Intelligence of the House of Representatives,...

  19. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING... Secretary of the Senate and the Clerk of the House of Representatives a report containing a compilation of... Intelligence of the Senate, the Permanent Select Committee on Intelligence of the House of Representatives,...

  20. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING... Secretary of the Senate and the Clerk of the House of Representatives a report containing a compilation of... Intelligence of the Senate, the Permanent Select Committee on Intelligence of the House of Representatives,...

  1. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING... Secretary of the Senate and the Clerk of the House of Representatives a report containing a compilation of... Intelligence of the Senate, the Permanent Select Committee on Intelligence of the House of Representatives,...

  2. A compilation of chase work characterizes this image, looking south, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    A compilation of chase work characterizes this image, looking south, in the niche which slightly separates E Building from R Building, on the north side - Department of Energy, Mound Facility, Electronics Laboratory Building (E Building), One Mound Road, Miamisburg, Montgomery County, OH

  3. Guidelines for the Compilation of Union Catalogues of Serials.

    ERIC Educational Resources Information Center

    International Federation of Library Associations and Institutions, London (England).

    Intended for use in countries planning the establishment of new union catalogs, as well as in countries with long established traditions of library resource sharing, this document provides guiding principles and outlines standard methods and practices for the compilation of union catalogs of serials. Following definitions of relevant terminology…

  4. Solubility data are compiled for metals in liquid zinc

    NASA Technical Reports Server (NTRS)

    Dillon, I. G.; Johnson, I.

    1967-01-01

    Available data is compiled on the solubilities of various metals in liquid zinc. The temperature dependence of the solubility data is expressed using the empirical straight line relationship existing between the logarithm of the solubility and the reciprocal of the absolute temperature.
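
    The empirical relationship mentioned above corresponds to a straight-line fit of the form $\log S = A - B/T$, where $S$ is the solubility of the metal in liquid zinc, $T$ the absolute temperature, and $A$ and $B$ empirical constants fitted for each metal (generic notation, not necessarily the symbols used in the original report).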

  5. 14 CFR 1203.302 - Combination, interrelation or compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Combination, interrelation or compilation. 1203.302 Section 1203.302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM Classification Principles and Considerations § 1203.302...

  6. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Semi-annual compilation. 146.600 Section 146.600 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW RESTRICTIONS ON LOBBYING.... (c) Information that involves intelligence matters shall be reported only to the Select Committee...

  7. Compilation of historical information of 300 Area facilities and activities

    SciTech Connect

    Gerber, M.S.

    1992-12-01

    This document is a compilation of historical information on 300 Area activities and facilities since their beginning. The 300 Area is shown as it looked in 1945, and a more recent (1985) view of the 300 Area is also provided.

  8. 45 CFR 1168.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Semi-annual compilation. 1168.600 Section 1168.600 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING Agency Reports § 1168.600...

  9. 45 CFR 1168.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 3 2011-10-01 2011-10-01 false Semi-annual compilation. 1168.600 Section 1168.600 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING Agency Reports § 1168.600...

  10. The Future of Work: Some Prospects and Perspectives. A Compilation.

    ERIC Educational Resources Information Center

    Cho, DaeYeon; Imel, Susan

    The question of what the future of work in the United States will be is examined in this publication using current information on trends and issues related to work, the economy, and the labor force. The compilation is intended to give an overview of selected aspects of the topic and provide information about other resources. In the first section,…

  11. Parallel compilation: A design and its application to SIMULA 67

    NASA Technical Reports Server (NTRS)

    Schwartz, R. L.

    1977-01-01

    A design is presented for a parallel compilation facility for the SIMULA 67 programming language. The proposed facility allows top-down, bottom-up, or parallel development and integration of program modules. An evaluation of the proposal and a discussion of its applicability to other languages are then given.

  12. Electronic circuits: A compilation. [for electronic equipment in telecommunication

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A compilation containing articles on newly developed electronic circuits and systems is presented. It is divided into two sections: (1) section 1 on circuits and techniques of particular interest in communications technology, and (2) section 2 on circuits designed for a variety of specific applications. The latest patent information available is also given. Circuit diagrams are shown.

  13. Investigating the Scope of an Advance Organizer for Compiler Concepts.

    ERIC Educational Resources Information Center

    Levine, Lawrence H.; Loerinc, Beatrice M.

    1985-01-01

    Investigates effectiveness of advance organizers for teaching functioning and use of compilers to undergraduate students in computer science courses. Two experimental groups used the advance organizer while two control groups did not. Findings indicate that an explicitly concept-directed organizer is effective in providing a framework for…

  14. 45 CFR 1168.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 3 2013-10-01 2013-10-01 false Semi-annual compilation. 1168.600 Section 1168.600 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING Agency Reports § 1168.600...

  15. 45 CFR 1168.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 3 2012-10-01 2012-10-01 false Semi-annual compilation. 1168.600 Section 1168.600 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING Agency Reports § 1168.600...

  16. 45 CFR 1168.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 3 2014-10-01 2014-10-01 false Semi-annual compilation. 1168.600 Section 1168.600 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING Agency Reports § 1168.600...

  17. Compiling Lisp for evaluation on a tightly coupled multiprocessor

    SciTech Connect

    Harrison, W.L. III

    1986-03-20

    The problem of compiling Lisp for efficient evaluation on a large, tightly coupled, shared memory multiprocessor is investigated. A representation for s-expressions which facilitates parallel evaluation is proposed, along with a sequence of transformations, to be applied to the functions comprising a Lisp program, which reveal and exploit parallelism. 26 refs., 170 figs.

  18. 5 CFR 9701.524 - Compilation and publication of data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....524 Section 9701.524 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM (DEPARTMENT OF HOMELAND SECURITY-OFFICE OF PERSONNEL MANAGEMENT) DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Labor-Management Relations § 9701.524 Compilation...

  19. Analysis of shared data structures for compile-time garbage collection in logic programs

    SciTech Connect

    Mulkers, A.; Bruynooghe, M. . Dept. Computerwetenschappen); Winsborough, W. )

    1990-01-01

    One of the central problems in program analysis for compile-time garbage collection is detecting the sharing of term substructure that can occur during program execution. We present an abstract domain for representing possibly shared structures and an abstract unification operation based on this domain. When supplied to an abstract interpretation framework, this domain induces a powerful analysis of shared structures. We show that the analysis is sound by relating the abstract domain and operation to variants of the concrete domain and operation (substitutions with term unification) that are augmented with information about the term structures shared in actual implementations. We show that these instrumented versions of the concrete domain and operation characterize the sharing that takes place in standard implementations. 22 refs., 3 figs.

  20. Bitwise identical compiling setup: prospective for reproducibility and reliability of Earth system modeling

    NASA Astrophysics Data System (ADS)

    Li, R.; Liu, L.; Yang, G.; Zhang, C.; Wang, B.

    2016-02-01

    Reproducibility and reliability are fundamental principles of scientific research. A compiling setup that includes a specific compiler version and compiler flags is an essential technical support for Earth system modeling. With the fast development of computer software and hardware, a compiling setup has to be updated frequently, which challenges the reproducibility and reliability of Earth system modeling. The existing results of a simulation using an original compiling setup may be irreproducible by a newer compiling setup because trivial round-off errors introduced by the change in compiling setup can potentially trigger significant changes in simulation results. Regarding the reliability, a compiler with millions of lines of code may have bugs that are easily overlooked due to the uncertainties or unknowns in Earth system modeling. To address these challenges, this study shows that different compiling setups can achieve exactly the same (bitwise identical) results in Earth system modeling, and a set of bitwise identical compiling setups of a model can be used across different compiler versions and different compiler flags. As a result, the original results can be more easily reproduced; for example, the original results with an older compiler version can be reproduced exactly with a newer compiler version. Moreover, this study shows that new test cases can be generated based on the differences of bitwise identical compiling setups between different models, which can help detect software bugs in the codes of models and compilers and finally improve the reliability of Earth system modeling.
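
    A minimal sketch of the underlying check, assuming the model writes its output to files: two compiling setups are bitwise identical for a test case if every common output file hashes to the same value. The directory names and file pattern below are hypothetical; this is not the tooling used in the study.

      import hashlib
      from pathlib import Path

      def sha256(path):
          # Hash the raw bytes of one output file.
          return hashlib.sha256(Path(path).read_bytes()).hexdigest()

      def bitwise_identical(dir_a, dir_b, pattern="*.nc"):
          # Compare every output file present in both run directories.
          files_a = {p.name: p for p in Path(dir_a).glob(pattern)}
          files_b = {p.name: p for p in Path(dir_b).glob(pattern)}
          common = sorted(set(files_a) & set(files_b))
          if not common:
              raise ValueError("no common output files to compare")
          return all(sha256(files_a[n]) == sha256(files_b[n]) for n in common)

      # Example: compare runs built with two different compiler versions or flags.
      # print(bitwise_identical("run_compiler_old", "run_compiler_new"))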

  1. Bitwise identical compiling setup: prospective for reproducibility and reliability of earth system modeling

    NASA Astrophysics Data System (ADS)

    Li, R.; Liu, L.; Yang, G.; Zhang, C.; Wang, B.

    2015-11-01

    Reproducibility and reliability are fundamental principles of scientific research. A compiling setup that includes a specific compiler version and compiler flags is an essential technical support for Earth system modeling. With the fast development of computer software and hardware, a compiling setup has to be updated frequently, which challenges the reproducibility and reliability of Earth system modeling. The existing results of a simulation using an original compiling setup may be irreproducible by a newer compiling setup because trivial round-off errors introduced by the change of compiling setup can potentially trigger significant changes in simulation results. Regarding the reliability, a compiler with millions of lines of code may have bugs that are easily overlooked due to the uncertainties or unknowns in Earth system modeling. To address these challenges, this study shows that different compiling setups can achieve exactly the same (bitwise identical) results in Earth system modeling, and a set of bitwise identical compiling setups of a model can be used across different compiler versions and different compiler flags. As a result, the original results can be more easily reproduced; for example, the original results with an older compiler version can be reproduced exactly with a newer compiler version. Moreover, this study shows that new test cases can be generated based on the differences of bitwise identical compiling setups between different models, which can help detect software bugs or risks in the codes of models and compilers and finally improve the reliability of Earth system modeling.

  2. Implementation of design of experiments for optimization of forced degradation conditions and development of a stability-indicating method for furosemide.

    PubMed

    Kurmi, Moolchand; Kumar, Sanjay; Singh, Bhupinder; Singh, Saranjit

    2014-08-01

    The study involved optimization of forced degradation conditions and development of a stability-indicating method (SIM) for furosemide employing the design of experiment (DoE) concept. The optimization of forced degradation conditions, especially hydrolytic and oxidative, was done by application of 2(n) full factorial designs, which helped to obtain the targeted 20-30% drug degradation and also enriched levels of degradation products (DPs). For the selective separation of the drug and its DPs for the development of SIM, DoE was applied in three different stages, i.e., primary parameter selection, secondary parameter screening and method optimization. For these three, IV-optimal, Taguchi orthogonal array and face-centred central composite designs were employed, respectively. The organic modifier, buffer pH, gradient time and initial hold time were selected as primary parameters. Initial and final organic modifier percentage, and flow rate came out as critical parameters during secondary parameter screening, which were further evaluated during method optimization. Based on DoE results, an optimized method was obtained wherein a total of twelve DPs were separated successfully. The study also exposed the degradation behaviour of the drug in different forced degradation conditions. PMID:24742772
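
    For illustration, a 2(n) full factorial design of the kind used above simply enumerates every combination of low/high levels of the chosen factors. The sketch below does this in Python for three hypothetical factors; the factor names and levels are invented examples, not the conditions used in the furosemide study.

      from itertools import product

      # Hypothetical forced-degradation factors, each at a low and a high level.
      factors = {
          "HCl_concentration_M": (0.1, 1.0),
          "temperature_C": (60, 80),
          "exposure_time_h": (8, 24),
      }

      # Enumerate all 2^3 = 8 level combinations of the full factorial design.
      runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
      for i, run in enumerate(runs, 1):
          print(i, run)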

  3. Compilation and development of K-6 aerospace materials for implementation in NASA spacelink electronic information system

    NASA Technical Reports Server (NTRS)

    Blake, Jean A.

    1987-01-01

    Spacelink is an electronic information service to be operated by the Marshall Space Flight Center. It will provide NASA news and educational resources including software programs that can be accessed by anyone with a computer and modem. Spacelink is currently being installed and will soon begin service. It will provide daily updates of NASA programs, information about NASA educational services, manned space flight, unmanned space flight, aeronautics, NASA itself, lesson plans and activities, and space program spinoffs. Lesson plans and activities were extracted from existing NASA publications on aerospace activities for the elementary school. These materials were arranged into 206 documents which have been entered into the Spacelink program for use in grades K-6.

  4. Compilation of VS30 Data for the United States

    USGS Publications Warehouse

    Yong, Alan; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Odum, Jack K.; Stephenson, William J.; Haefner, Scott

    2016-01-01

    VS30, the time-averaged shear-wave velocity (VS) to a depth of 30 meters, is a key index adopted by the earthquake engineering community to account for seismic site conditions. VS30 is typically based on geophysical measurements of VS derived from invasive and noninvasive techniques at sites of interest. Owing to cost considerations, as well as logistical and environmental concerns, VS30 data are sparse or not readily available for most areas. Where data are available, VS30 values are often assembled in assorted formats that are accessible from disparate and (or) impermanent Web sites. To help remedy this situation, we compiled VS30 measurements obtained by studies funded by the U.S. Geological Survey (USGS) and other governmental agencies. Thus far, we have compiled VS30 values for 2,997 sites in the United States, along with metadata for each measurement from government-sponsored reports, Web sites, and scientific and engineering journals. Most of the data in our VS30 compilation originated from publications directly reporting the work of field investigators. A small subset (less than 20 percent) of VS30 values was previously compiled by the USGS and other research institutions. Whenever possible, VS30 originating from these earlier compilations were crosschecked against published reports. Both downhole and surface-based VS30 estimates are represented in our VS30 compilation. Most of the VS30 data are for sites in the western contiguous United States (2,141 sites), whereas 786 VS30 values are for sites in the Central and Eastern United States; 70 values are for sites in other parts of the United States, including Alaska (15 sites), Hawaii (30 sites), and Puerto Rico (25 sites). An interactive map is hosted on the primary USGS Web site for accessing VS30 data (http://earthquake.usgs.gov/research/vs30/).
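
    For reference, $V_{S30}$ is conventionally defined as the travel-time-averaged shear-wave velocity over the top 30 m, $V_{S30} = 30\,\mathrm{m} \,\big/\, \sum_{i=1}^{n} (d_i / v_i)$, where $d_i$ and $v_i$ are the thickness and shear-wave velocity of the $i$-th layer within the upper 30 m.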

  5. Compilation of DNA sequences of Escherichia coli (update 1990)

    PubMed Central

    Kröger, Manfred; Wahl, Ralf; Rice, Peter

    1990-01-01

    We have compiled the DNA sequence data for E.coli available from the GENBANK and EMBL data libraries and, over a period of several years, independently from the literature. This second listing replaces the former listing and increases it by roughly one third. After deletion of all detected overlaps, a total of 1,248,696 individual bp had been determined by the beginning of 1990. This corresponds to 26.46% of the entire E.coli chromosome, which consists of about 4,720 kbp. This number may actually be higher by some extra 2% derived from the sequence of lysogenic bacteriophage lambda and various insertion sequences. This compilation is now available in machine-readable form from the EMBL data library. PMID:2185457
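
    The quoted coverage follows directly from the totals above: $1\,248\,696\ \mathrm{bp} \,/\, 4\,720\,000\ \mathrm{bp} \approx 0.2646$, i.e. about 26.46% of the chromosome.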

  6. The hinterland: compilation of nearby brown dwarfs and ultracool stars

    NASA Astrophysics Data System (ADS)

    Ramos, Christopher David

    This work is a compilation and analysis of ultracool dwarfs (UCDs) and brown dwarfs within 25 parsecs. It supplements the work of Stauffer et al. [2010] who updated the reputable and widely relied upon Third Catalog of Nearby Stars [Gliese & Jahreiß 1991] with revised coordinates and cross-matched each object with the 2MASS point source catalog [Cutri et al. 2003]. I began by incorporating newly discovered (post 1991) cool companions to Gliese-Jahreiß stars that had been previously undetectable. I then expanded the compilation to include isolated UCDs and other nearby systems with at least one UCD component. Multiple systems are a panacea for astrophysical problems: by applying Kepler's laws, the model-independent mass of brown dwarfs and low mass stars can be determined and hence serve to constrain theory. This work puts this data into context by exploring the history of brown dwarf theory and reviewing open questions concerning their nature.

  7. The NASA earth resources spectral information system: A data compilation

    NASA Technical Reports Server (NTRS)

    Leeman, V.; Earing, D.; Vincent, R. K.; Ladd, S.

    1971-01-01

    The NASA Earth Resources Spectral Information System and the information contained therein are described. It contains an ordered, indexed compilation of natural targets in the optical region from 0.3 to 45.0 microns. The data compilation includes approximately 100 rock and mineral, 2600 vegetation, 1000 soil, and 60 water spectral reflectance, transmittance, and emittance curves. Most of the data have been categorized by subject, and the curves in those subject areas have been plotted on a single graph. Those categories with too few curves and miscellaneous categories have been plotted as single-curve graphs. Each graph, composite or single, is fully titled to indicate curve source and is indexed by subject to facilitate user retrieval.

  8. Compiler analysis for irregular problems in FORTRAN D

    NASA Technical Reports Server (NTRS)

    Vonhanxleden, Reinhard; Kennedy, Ken; Koelbel, Charles; Das, Raja; Saltz, Joel

    1992-01-01

    We developed a dataflow framework which provides a basis for rigorously defining strategies to make use of runtime preprocessing methods for distributed memory multiprocessors. In many programs, several loops access the same off-processor memory locations. Our runtime support gives us a mechanism for tracking and reusing copies of off-processor data. A key aspect of our compiler analysis strategy is to determine when it is safe to reuse copies of off-processor data. Another crucial function of the compiler analysis is to identify situations which allow runtime preprocessing overheads to be amortized. This dataflow analysis will make it possible to effectively use the results of interprocedural analysis in our efforts to reduce interprocessor communication and the need for runtime preprocessing.

  9. Compilation of gallium resource data for bauxite deposits

    USGS Publications Warehouse

    Schulte, Ruth F.; Foley, Nora K.

    2014-01-01

    Gallium (Ga) concentrations for bauxite deposits worldwide have been compiled from the literature to provide a basis for research regarding the occurrence and distribution of Ga worldwide, as well as between types of bauxite deposits. In addition, this report is an attempt to bring together reported Ga concentration data into one database to supplement ongoing U.S. Geological Survey studies of critical mineral resources. The compilation of Ga data consists of location, deposit size, bauxite type and host rock, development status, major oxide data, trace element (Ga) data and analytical method(s) used to derive the data, and tonnage values for deposits within bauxite provinces and districts worldwide. The range in Ga concentrations for bauxite deposits worldwide is

  10. Oxygen isotope studies and compilation of isotopic dates from Antarctica

    SciTech Connect

    Grootes, P.M.; Stuiver, M.

    1986-01-01

    The Quaternary Isotope Laboratory, alone or in collaboration with other investigators, is currently involved in a number of oxygen-isotope studies mainly in Antarctica. Studies of a drill core from the South Pole, seasonal oxygen-18 signals preserved in the Dominion Range, isotope dating of the Ross Ice Shelf, oxygen-18 profiles of the Siple Coast, McMurdo Ice Shelf sampling, and a data compilation of radiometric dates from Antarctica are discussed.

  11. Compile-time estimation of communication costs in multicomputers

    NASA Technical Reports Server (NTRS)

    Gupta, Manish; Banerjee, Prithviraj

    1991-01-01

    An important problem facing numerous research projects on parallelizing compilers for distributed memory machines is that of automatically determining a suitable data partitioning scheme for a program. Any strategy for automatic data partitioning needs a mechanism for estimating the performance of a program under a given partitioning scheme, the most crucial part of which involves determining the communication costs incurred by the program. A methodology is described for estimating the communication costs at compile-time as functions of the numbers of processors over which various arrays are distributed. A strategy is described along with its theoretical basis, for making program transformations that expose opportunities for combining of messages, leading to considerable savings in the communication costs. For certain loops with regular dependences, the compiler can detect the possibility of pipelining, and thus estimate communication costs more accurately than it could otherwise. These results are of great significance to any parallelization system supporting numeric applications on multicomputers. In particular, they lay down a framework for effective synthesis of communication on multicomputers from sequential program references.
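
    A common first-order model for such estimates, given here only as a generic illustration and not necessarily the cost function used by the authors, charges each message a fixed startup cost plus a per-byte transfer cost: $T_{\mathrm{comm}} = \sum_{k} (t_s + m_k t_b)$, where $t_s$ is the message startup time, $m_k$ the size of message $k$, and $t_b$ the transfer time per byte; combining messages reduces the number of $t_s$ terms paid.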

  12. Compiling knowledge-based systems from KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBSs developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights on Ada as an artificial intelligence programming language, potential solutions of some of the engineering difficulties encountered in early work, and inspiration on future system development.

  13. The Optimizing Patient Transfers, Impacting Medical Quality, and Improving Symptoms: Transforming Institutional Care approach: preliminary data from the implementation of a Centers for Medicare and Medicaid Services nursing facility demonstration project.

    PubMed

    Unroe, Kathleen T; Nazir, Arif; Holtz, Laura R; Maurer, Helen; Miller, Ellen; Hickman, Susan E; La Mantia, Michael A; Bennett, Merih; Arling, Greg; Sachs, Greg A

    2015-01-01

    The Optimizing Patient Transfers, Impacting Medical Quality, and Improving Symptoms: Transforming Institutional Care (OPTIMISTIC) project aims to reduce avoidable hospitalizations of long-stay residents enrolled in 19 central Indiana nursing facilities. This clinical demonstration project, funded by the Centers for Medicare and Medicaid Services Innovations Center, places a registered nurse in each nursing facility to implement an evidence-based quality improvement program with clinical support from nurse practitioners. A description of the model is presented, and early implementation experiences during the first year of the project are reported. Important elements include better medical care through implementation of Interventions to Reduce Acute Care Transfers tools and chronic care management, enhanced transitional care, and better palliative care with a focus on systematic advance care planning. There were 4,035 long-stay residents in 19 facilities enrolled in OPTIMISTIC between February 2013 and January 2014. Root-cause analyses were performed for all 910 acute transfers of these long stay residents. Of these transfers, the project RN evaluated 29% as avoidable (57% were not avoidable and 15% were missing), and opportunities for quality improvement were identified in 54% of transfers. Lessons learned in early implementation included defining new clinical roles, integrating into nursing facility culture, managing competing facility priorities, communicating with multiple stakeholders, and developing a system for collecting and managing data. The success of the overall initiative will be measured primarily according to reduction in avoidable hospitalizations of long-stay nursing facility residents. PMID:25537789

  14. An investigation of the optimization of parameters affecting the implementation of Fourier transform spectroscopy at 20-500 micron from the C-141 airborne infrared observatory

    NASA Technical Reports Server (NTRS)

    Thompson, R. I.; Erickson, E. F.

    1976-01-01

    A program for 20-500 micron spectroscopy from the NASA flying C141 infrared observatory is being carried out with a Michelson interferometer. The parameters affecting the performance of the instrument are studied and an optimal configuration for high performance on the C-141 aircraft is recommended. As each parameter is discussed the relative merits of the two modes of mirror motion (rapid scan or step and integrate) are presented.

  15. The Delay Phenomenon: A Compilation of Knowledge across Specialties

    PubMed Central

    Hamilton, Kristy; Wolfswinkel, Erik M.; Weathers, William M.; Xue, Amy S.; Hatef, Daniel A.; Izaddoost, Shayan; Hollier, Larry H.

    2014-01-01

    Objective The purpose of this article is to review and integrate the available literature in different fields to gain a better understanding of the basic physiology and optimize vascular delay as a reconstructive surgery technique. Methods A broad search of the literature was performed using the Medline database. Two queries were performed using “vascular delay,” a search expected to yield perspectives from the field of plastic and reconstructive surgery, and “ischemic preconditioning,” (IPC) which was expected to yield research on the same topic in other fields. Results The combined searches yielded a total of 1824 abstracts. The “vascular delay” query yielded 76 articles from 1984 to 2011. The “ischemic preconditioning” query yielded 6534 articles, ranging from 1980 to 2012. The abstracts were screened for those from other specialties in addition to reconstructive surgery, analyzed potential or current uses of vascular delay in practice, or provided developments in understanding the pathophysiology of vascular delay. 70 articles were identified that met inclusion criteria and were applicable to vascular delay or ischemic preconditioning. Conclusion An understanding of IPC's implementation and mechanisms in other fields has beneficial implications for the field of reconstructive surgery in the context of the delay phenomenon. Despite an incomplete model of IPC's pathways, the anti-oxidative, anti-apoptotic and anti-inflammatory benefits of IPC are well recognized. The activation of angiogenic genes through IPC could allow for complex flap design, even in poorly vascularized regions. IPC's promotion of angiogenesis and reduction of endothelial dysfunction remain most applicable to reconstructive surgery in reducing graft-related complications and flap failure. PMID:25071876

  16. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations, by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
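
    As an illustration of one of the loop restructuring techniques listed above, the sketch below applies blocking (tiling) to a dense matrix-vector update. It is a generic Python illustration of the idea, not code from the study, and the block size is an arbitrary example.

      # Blocked (tiled) matrix-vector product: the inner loops work on small
      # blocks so that the data they touch stays resident in fast memory.
      def blocked_matvec(A, x, block=64):
          n = len(A)
          y = [0.0] * n
          for ii in range(0, n, block):          # row blocks
              for jj in range(0, n, block):      # column blocks
                  for i in range(ii, min(ii + block, n)):
                      s = y[i]
                      for j in range(jj, min(jj + block, n)):
                          s += A[i][j] * x[j]
                      y[i] = s
          return y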

  17. The Concert system - Compiler and runtime technology for efficient concurrent object-oriented programming

    NASA Technical Reports Server (NTRS)

    Chien, Andrew A.; Karamcheti, Vijay; Plevyak, John; Sahrawat, Deepak

    1993-01-01

    Concurrent object-oriented languages, particularly fine-grained approaches, reduce the difficulty of large scale concurrent programming by providing modularity through encapsulation while exposing large degrees of concurrency. Despite these programmability advantages, such languages have historically suffered from poor efficiency. This paper describes the Concert project whose goal is to develop portable, efficient implementations of fine-grained concurrent object-oriented languages. Our approach incorporates aggressive program analysis and program transformation with careful information management at every stage from the compiler to the runtime system. The paper discusses the basic elements of the Concert approach along with a description of the potential payoffs. Initial performance results and specific plans for system development are also detailed.

  18. YAPPA: a Compiler-Based Parallelization Framework for Irregular Applications on MPSoCs

    SciTech Connect

    Lovergine, Silvia; Tumeo, Antonino; Villa, Oreste; Ferrandi, Fabrizio

    2013-10-03

    Modern embedded systems include hundreds of cores. Because of the difficulty in providing a fast, coherent memory architecture, these systems usually rely on non-coherent, non-uniform memory architectures with private memories for each core. However, programming these systems poses significant challenges. The developer must extract large amounts of parallelism, while orchestrating communication among cores to optimize application performance. These issues become even more significant with irregular applications, which present data sets difficult to partition, unpredictable memory accesses, unbalanced control flow and fine grained communication. Hand-optimizing every single aspect is hard and time-consuming, and it often does not lead to the expected performance. There is a growing gap between such complex and highly-parallel architectures and the high level languages used to describe the specification, which were designed for simpler systems and do not consider these new issues. In this paper we introduce YAPPA (Yet Another Parallel Programming Approach), a compilation framework for the automatic parallelization of irregular applications on modern MPSoCs based on LLVM. We start by considering an efficient parallel programming approach for irregular applications on distributed memory systems. We then propose a set of transformations that can reduce the development and optimization effort. The results of our initial prototype confirm the correctness of the proposed approach.

  19. The Nippon Foundation / GEBCO Indian Ocean Bathymetric Compilation Project

    NASA Astrophysics Data System (ADS)

    Wigley, R. A.; Hassan, N.; Chowdhury, M. Z.; Ranaweera, R.; Sy, X. L.; Runghen, H.; Arndt, J. E.

    2014-12-01

    The Indian Ocean Bathymetric Compilation (IOBC) project, undertaken by Nippon Foundation / GEBCO Scholars, is focused on building a regional bathymetric data compilation of all publicly available bathymetric data within the Indian Ocean region from 30°N to 60° S and 10° to 147° E. One of the objectives of this project is the creation of a network of Nippon Foundation / GEBCO Scholars working together, derived from the thirty Scholars from fourteen nations bordering on the Indian Ocean who have graduated from the Postgraduate Certificate in Ocean Bathymetry (PCOB) training program at the University of New Hampshire. The IOBC project has provided students with a working example during their course work and has been used as a basis for student projects during their visits to another Laboratory at the end of their academic year. This multi-national, multi-disciplinary project team will continue to build on the skills gained during the PCOB program through additional training. The IOBC is being built using the methodology developed for the International Bathymetric Chart of the Southern Ocean (IBCSO) compilation (Arndt et al., 2013). This skill was transferred, through training workshops, to further support the ongoing development within the scholars' network. This capacity-building project is envisioned to connect other personnel from within all of the participating nations and organizations, resulting in additional capacity-building in this field of multi-resolution bathymetric grid generation in their home communities. An updated regional bathymetric map and grids of the Indian Ocean will be an invaluable tool for all fields of marine scientific research and resource management. In addition, it has implications for increased public safety by offering the best and most up-to-date depth data for modeling regional-scale oceanographic processes such as tsunami-wave propagation behavior, amongst others.

  20. Compilation of a Global GIS Crater Database for the Moon

    NASA Astrophysics Data System (ADS)

    Barlow, Nadine G.; Mest, S. C.; Gibbs, V. B.; Kinser, R. M.

    2012-10-01

    We are using primarily Lunar Reconnaissance Orbiter (LRO) information to compile a new global database of lunar impact craters 5 km in diameter and larger. Each crater’s information includes coordinates of the crater center (ULCN 2005), crater diameter (major and minor diameters if crater is elliptical), azimuthal angle of orientation if crater is elliptical, ejecta and interior morphologies if present, crater preservation state, geologic unit, floor depth, average rim height, central peak height and basal diameter if present, and elevation and elemental/mineralogy data of surroundings. LROC WAC images are used in ArcGIS to obtain crater diameters and central coordinates and LROC WAC and NAC images are used to classify interior and ejecta morphologies. Gridded and individual spot data from LOLA are used to obtain crater depths, rim heights, and central peak height and basal diameter. Crater preservational state is based on crater freshness as determined by the presence/absence of specific interior and ejecta morphologies and elevated crater rim together with the ratio of current crater depth to depth expected for fresh crater of identical size. The crater database currently contains data on over 15,000 craters covering 80% of the nearside and 15% of the farside. We also include information allowing cross-correlation of craters in our database with those in existing crater catalogs, including the ground-based “System of Lunar Craters” by Arthur et al. (1963-1966), the Lunar Orbiter/Apollo-based crater catalog compiled by Andersson and Whitaker (1982), and the Apollo-based morphometric crater database by Pike (1980). We find significant differences in crater diameter and classification between these earlier crater catalogs and our new compilation. Utilizing the capability of GIS to overlay different datasets, we will report on how specific crater features such as central peaks, wall terraces, and impact melt deposits correlate with parameters such as elevation

  1. Recent Efforts in Data Compilations for Nuclear Astrophysics

    SciTech Connect

    Dillmann, Iris

    2008-05-21

    Some recent efforts in compiling data for astrophysical purposes are introduced, which were discussed during a JINA-CARINA Collaboration meeting on 'Nuclear Physics Data Compilation for Nucleosynthesis Modeling' held at the ECT* in Trento/Italy from May 29th-June 3rd, 2007. The main goal of this collaboration is to develop an updated and unified nuclear reaction database for modeling a wide variety of stellar nucleosynthesis scenarios. Presently a large number of different reaction libraries (REACLIB) are used by the astrophysics community. The 'JINA Reaclib Database' on http://www.nscl.msu.edu/~nero/db/ aims to merge and fit the latest experimental stellar cross sections and reaction rate data of various compilations, e.g. NACRE and its extension for Big Bang nucleosynthesis, Caughlan and Fowler, Iliadis et al., and KADoNiS. The KADoNiS (Karlsruhe Astrophysical Database of Nucleosynthesis in Stars, http://nuclear-astrophysics.fzk.de/kadonis) project is an online database for neutron capture cross sections relevant to the s process. The present version v0.2 is already included in a REACLIB file from Basel University (http://download.nucastro.org/astro/reaclib). The present status of experimental stellar (n,gamma) cross sections in KADoNiS is shown. It contains recommended cross sections for 355 isotopes between 1H and 210Bi, over 80% of them deduced from experimental data. A "high priority list" for measurements and evaluations for light charged-particle reactions set up by the JINA-CARINA collaboration is presented. The central web access point to submit and evaluate new data is provided by the Oak Ridge group via the http://www.nucastrodata.org homepage. 'Workflow tools' aim to make the evaluation process transparent and allow users to follow the progress.

  2. Current trends in seasonal ice storage. [Compilation of projects

    SciTech Connect

    Gorski, A.J.

    1986-05-01

    This document is a compilation of modern research projects focused upon the use of naturally grown winter ice for summer cooling applications. Unlike older methods of ice-based cooling, in which ice was cut from rivers and lakes and transported to insulated icehouses, modern techniques grow ice directly in storage containers - by means of heat pipes, snow machines, and water sprays - at the site of application. This modern adaptation of an old idea was reinvented independently at several laboratories in the United States and Canada. Applications range from air conditioning and food storage to desalinization.

  3. Compilation of Henry's law constants, version 3.99

    NASA Astrophysics Data System (ADS)

    Sander, R.

    2014-11-01

    Many atmospheric chemicals occur in the gas phase as well as in liquid cloud droplets and aerosol particles. Therefore, it is necessary to understand the distribution between the phases. According to Henry's law, the equilibrium ratio between the abundances in the gas phase and in the aqueous phase is constant for a dilute solution. Henry's law constants of trace gases of potential importance in environmental chemistry have been collected and converted into a uniform format. The compilation contains 14775 values of Henry's law constants for 3214 species, collected from 639 references. It is also available on the internet at http://www.henrys-law.org.
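
    For reference, the Henry's law constant tabulated in such compilations expresses the equilibrium partitioning of a dilute solute between the aqueous and gas phases, e.g. in the solubility form $H = c_{\mathrm{aq}} / p$, where $c_{\mathrm{aq}}$ is the aqueous-phase concentration and $p$ the partial pressure in the gas phase (the exact variant and units differ between conventions, which is why the compilation converts all values to a uniform format).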

  4. Computer programs: Information retrieval and data analysis, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The items presented in this compilation are divided into two sections. Section one treats of computer usage devoted to the retrieval of information that affords the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.

  5. Solid phase microextraction coupled with comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry for high-resolution metabolite profiling in apples: implementation of structured separations for optimization of sample preparation procedure in complex samples.

    PubMed

    Risticevic, Sanja; DeEll, Jennifer R; Pawliszyn, Janusz

    2012-08-17

    Metabolomics currently represents one of the fastest growing high-throughput molecular analysis platforms that refer to the simultaneous and unbiased analysis of metabolite pools constituting a particular biological system under investigation. In response to the ever-increasing interest in development of reliable methods capable of obtaining a complete and accurate metabolomic snapshot for subsequent identification, quantification and profiling studies, the purpose of the current investigation is to test the feasibility of solid phase microextraction for advanced fingerprinting of volatile and semivolatile metabolites in complex samples. In particular, the current study is focussed on the development and optimization of solid phase microextraction (SPME) - comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC × GC-ToFMS) methodology for metabolite profiling of apples (Malus × domestica Borkh.). For the first time, GC × GC attributes in terms of molecular structure-retention relationships and utilization of two-dimensional separation space on orthogonal GC × GC setup were exploited in the field of SPME method optimization for complex sample analysis. Analytical performance data were assessed in terms of method precision when commercial coatings are employed in spiked metabolite aqueous sample analysis. The optimized method consisted of the implementation of direct immersion SPME (DI-SPME) extraction mode and its application to metabolite profiling of apples, and resulted in a tentative identification of 399 metabolites and the composition of a metabolite database far more comprehensive than those obtainable with classical one-dimensional GC approaches. Considering that specific metabolome constituents were for the first time reported in the current study, a valuable approach for future advanced fingerprinting studies in the field of fruit biology is proposed. The current study also intensifies the understanding of SPME

  6. Branch strategies - Modeling and optimization

    NASA Technical Reports Server (NTRS)

    Dubey, Pradeep K.; Flynn, Michael J.

    1991-01-01

    The authors provide a common platform for modeling different schemes for reducing the branch-delay penalty in pipelined processors as well as for evaluating the associated increase in instruction bandwidth. Their objective is twofold: to develop a model for different approaches to the branch problem and to help select an optimal strategy after taking into account the additional i-traffic generated by branch strategies. The model provides a flexible tool for comparing different branch strategies in terms of the reduction each offers in average branch delay and in terms of the associated cost of wasted instruction fetches. This additional criterion turns out to be a valuable consideration in choosing between two strategies that perform almost equally, and it provides better insight into the expected overall system performance. Simple compiler-support-based, low-implementation-cost strategies can be very effective under certain conditions. An active branch prediction scheme based on loop buffers can be as competitive as a branch-target-buffer based strategy.
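
    To make the flavour of such a model concrete, the following sketch computes an average branch delay and the extra instruction traffic from squashed fetches for a simple predict-and-fetch pipeline. The parameter names and the linear form are assumptions for illustration only, not the formulation used by the authors.

      def branch_cost(branch_freq, pred_accuracy, mispredict_penalty, fetch_width):
          """Average branch delay (cycles/instruction) and wasted instruction
          fetches per instruction for a simple predict-and-fetch pipeline.
          All parameters are hypothetical illustration values, not the paper's model."""
          mispredict_rate = branch_freq * (1.0 - pred_accuracy)
          avg_delay = mispredict_rate * mispredict_penalty            # cycles lost per instruction
          wasted = mispredict_rate * mispredict_penalty * fetch_width  # fetched-then-squashed instructions
          return avg_delay, wasted

      # Example: 20% branches, 85% prediction accuracy, 3-cycle bubble, 2-wide fetch
      print(branch_cost(0.20, 0.85, 3, 2))   # -> approximately (0.09, 0.18)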

  7. Bellman’s GAP—a language and compiler for dynamic programming in sequence analysis

    PubMed Central

    Sauthoff, Georg; Möhl, Mathias; Janssen, Stefan; Giegerich, Robert

    2013-01-01

    Motivation: Dynamic programming is ubiquitous in bioinformatics. Developing and implementing non-trivial dynamic programming algorithms is often error prone and tedious. Bellman’s GAP is a new programming system, designed to ease the development of bioinformatics tools based on the dynamic programming technique. Results: In Bellman’s GAP, dynamic programming algorithms are described in a declarative style by tree grammars, evaluation algebras and products formed thereof. This bypasses the design of explicit dynamic programming recurrences and yields programs that are free of subscript errors, modular and easy to modify. The declarative modules are compiled into C++ code that is competitive with carefully hand-crafted implementations. This article introduces the Bellman’s GAP system and its language, GAP-L. It then demonstrates the ease of development and the degree of re-use by creating variants of two common bioinformatics algorithms. Finally, it evaluates Bellman’s GAP as an implementation platform of ‘real-world’ bioinformatics tools. Availability: Bellman’s GAP is available under GPL license from http://bibiserv.cebitec.uni-bielefeld.de/bellmansgap. This Web site includes a repository of re-usable modules for RNA folding based on thermodynamics. Contact: robert@techfak.uni-bielefeld.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23355290
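
    The following toy sketch, written in Python rather than GAP-L, illustrates the separation the abstract describes: a small tree grammar enumerates candidate parses of an arithmetic string, and interchangeable evaluation algebras assign them different meanings (value versus parse count). It is an illustration of the algebraic style only, not the authors' system; a real implementation would also tabulate subproblems to obtain dynamic-programming efficiency.

      # Toy "algebraic dynamic programming" sketch in the spirit of Bellman's GAP.
      def parses(expr):
          """Tree grammar: yield all parse trees of an unparenthesised expression."""
          if expr.isdigit():
              yield ("val", int(expr))
              return
          for i, c in enumerate(expr):
              if c in "+*":
                  for left in parses(expr[:i]):
                      for right in parses(expr[i + 1:]):
                          yield (c, left, right)

      def evaluate(tree, algebra):
          """Apply an evaluation algebra to a parse tree."""
          op, *args = tree
          if op == "val":
              return algebra["val"](args[0])
          return algebra[op](evaluate(args[0], algebra), evaluate(args[1], algebra))

      # Two algebras over the same grammar: arithmetic value and parse counting.
      arithmetic = {"val": lambda x: x, "+": lambda a, b: a + b, "*": lambda a, b: a * b}
      count      = {"val": lambda x: 1, "+": lambda a, b: a * b, "*": lambda a, b: a * b}

      trees = list(parses("1+2*3"))
      print([evaluate(t, arithmetic) for t in trees])  # possible values: [7, 9]
      print(sum(evaluate(t, count) for t in trees))    # number of parse trees: 2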

  8. Optimizations for parallel object oriented frameworks

    SciTech Connect

    Basetti, F; Davis, K; Quinlan, D

    1998-09-22

    Application codes reliably underperform the advertised performance of existing architectures, and compilers have only limited mechanisms with which to effect the sophisticated transformations needed to arrest this trend. Compilers are forced to work within the broad semantics of the complete language specification and thus cannot guarantee the correctness of more sophisticated transformations. Object-oriented frameworks provide a level of tailoring of the C++ language to specific, albeit often restricted, contexts. But such frameworks traditionally rely upon the compiler for most performance-level optimization, often with disappointing results, since the compiler must work within the context of the full language rather than the restricted semantics of the abstractions introduced within the class library. No mechanism exists to express the restricted semantics of a class library to the compiler and effect correspondingly more sophisticated optimizations. In this paper, the authors explore a family of transformations/optimizations appropriate to object-oriented frameworks for scientific computing and present a preprocessor mechanism, ROSE, which delivers the more sophisticated transformations automatically from the use of abstractions represented within high-level object-oriented frameworks. They have found that these optimizations permit performance improvements over FORTRAN 77 by factors of three to four, sufficiently interesting to suggest that higher-level abstractions can carry greater semantics and that the greater semantics can be used to drive more sophisticated optimizations than are possible within lower-level languages.

  9. Memory management and compiler support for rapid recovery from failures in computer systems

    NASA Technical Reports Server (NTRS)

    Fuchs, W. K.

    1991-01-01

    This paper describes recent developments in the use of memory management and compiler technology to support rapid recovery from failures in computer systems. The techniques described include cache coherence protocols for user transparent checkpointing in multiprocessor systems, compiler-based checkpoint placement, compiler-based code modification for multiple instruction retry, and forward recovery in distributed systems utilizing optimistic execution.

  10. Interpretation, compilation and field verification procedures in the CARETS project

    USGS Publications Warehouse

    Alexander, Robert H.; De Forth, Peter W.; Fitzpatrick, Katherine A.; Lins, Harry F.; McGinty, Herbert K.

    1975-01-01

    The production of the CARETS map data base involved the development of a series of procedures for interpreting, compiling, and verifying data obtained from remote sensor sources. Level II land use mapping from high-altitude aircraft photography at a scale of 1:100,000 required production of a photomosaic mapping base for each of the 48, 50 x 50 km sheets, and the interpretation and coding of land use polygons on drafting film overlays. CARETS researchers also produced a series of 1970 to 1972 land use change overlays, using the 1970 land use maps and 1972 high-altitude aircraft photography. To enhance the value of the land use sheets, researchers compiled series of overlays showing cultural features, county boundaries and census tracts, surface geology, and drainage basins. In producing Level I land use maps from Landsat imagery, at a scale of 1:250,000, interpreters overlaid drafting film directly on Landsat color composite transparencies and interpreted on the film. They found that such interpretation involves pattern and spectral signature recognition. In studies using Landsat imagery, interpreters identified numerous areas of change but also identified extensive areas of "false change," where Landsat spectral signatures but not land use had changed.

  11. An empirical study of Fortran programs for parallelizing compilers

    SciTech Connect

    Shen, Z. ); Li, Z. ); Yew, P.C. . Center for Supercomputing Research and Development)

    1990-07-01

    In this paper, the authors report results from an empirical study of program characteristics that are important to parallelizing compiler writers, especially in the areas of data dependence analysis and program transformations. The state of the art in data dependence analysis and some parallel execution techniques are also examined. The major findings include the following: many subscripts contain symbolic terms with unknown values, and a few methods to determine their values at compile time are evaluated; array references with coupled subscripts appear quite frequently, and these subscripts must be handled simultaneously in a dependence test rather than separately as in current test algorithms; nonzero coefficients of loop indexes in most subscripts are simple, either 1 or -1, which allows an exact real-valued test to be as accurate as an exact integer-valued test for one-dimensional or two-dimensional arrays; and dependences with uncertain distance are rather common, largely because of the frequent appearance of symbolic terms with unknown values.
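
    As background for the dependence-testing terminology, the sketch below shows a textbook single-subscript GCD test; it is purely illustrative and is not one of the test algorithms evaluated in the study.

      from math import gcd

      def gcd_test(a, b, c):
          """Textbook GCD dependence test for a single subscript pair:
          may A[a*i + c1] and A[b*j + c2] refer to the same element for some
          integers i, j?  Dependence is possible only if gcd(a, b) divides
          the constant difference c = c2 - c1.  (Illustrative only.)"""
          g = gcd(abs(a), abs(b))
          return c % g == 0 if g != 0 else c == 0

      # A[2*i] vs A[2*j + 1]: gcd(2, 2) = 2 does not divide 1 -> no dependence possible
      print(gcd_test(2, 2, 1))   # False
      # A[i] vs A[j - 3]: coefficients of 1/-1 (the common case reported above)
      print(gcd_test(1, 1, -3))  # True (dependence cannot be ruled out)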

  12. Creep of water ices at planetary conditions: A compilation

    USGS Publications Warehouse

    Durham, W.B.; Kirby, S.H.; Stern, L.A.

    1997-01-01

    Many constitutive laws for the flow of ice have been published since the advent of the Voyager explorations of the outer solar system. Conflicting data have occasionally come from different laboratories, and refinement of experimental techniques has led to the publication of laws that supersede earlier ones. In addition, there are unpublished data from ongoing research that also amend the constitutive laws. Here we compile the most current laboratory-derived flow laws for water ice phases I, II, III, V, and VI, and ice I mixtures with hard particulates. The rheology of interest is mainly that of steady state, and the conditions reviewed are the pressures and temperatures applicable to the surfaces and interiors of icy moons of the outer solar system. Advances in grain-size-dependent creep in ices I and II as well as in phase transformations and metastability under differential stress are also included in this compilation. At laboratory strain rates the several ice polymorphs are rheologically distinct in terms of their stress, temperature, and pressure dependencies but, with the exception of ice III, have fairly similar strengths. Hard particulates strengthen ice I significantly only at high particulate volume fractions. Ice III has the potential for significantly affecting mantle dynamics because it is much weaker than the other polymorphs and its region of stability, which may extend metastably well into what is nominally the ice II field, is located near likely geotherms of large icy moons. Copyright 1997 by the American Geophysical Union.
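
    For orientation, a typical laboratory flow law takes the power-law creep form sketched below. This generic template is given only to show what the stress, temperature, pressure, and grain-size dependencies mentioned above look like; the individual compiled laws differ in their parameter values and in whether the grain-size and pressure terms appear.

      \[
        \dot{\varepsilon} \;=\; A\,\sigma^{n}\,d^{-p}
        \exp\!\left(-\frac{Q + PV^{*}}{RT}\right)
      \]
      % sigma: differential stress; d: grain size; Q: activation energy;
      % P: pressure; V*: activation volume; A, n, p: material constants.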

  13. Compilation of DNA sequences of Escherichia coli (update 1993).

    PubMed Central

    Kröger, M; Wahl, R; Rice, P

    1993-01-01

    We have compiled the DNA sequence data for E. coli available from the GENBANK and EMBL data libraries and, over a period of several years, independently from the literature. This is the fifth listing, replacing and substantially extending the former listings. However, in order to save space this printed version contains DNA sequence information only if it is publicly available in electronic form. The complete compilation, including a full set of genetic map data and the E. coli protein index, can be obtained in machine-readable form from the EMBL data library (ECD release 15) as part of the CD-ROM issue of the EMBL sequence database, released and updated every three months. After deletion of all detected overlaps, a total of 2,353,635 individual bp had been determined by the end of April 1993. This corresponds to 49.87% of the entire E. coli chromosome of about 4,720 kbp. This number may actually be higher by 9,161 bp derived from other strains of E. coli. PMID:8332520

  14. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

    In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering, and examine their effects on performance metrics including processing time and scalability. Safety-critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification of safety-critical systems; it involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.

  15. Compilation of DNA sequences of Escherichia coli (update 1992)

    PubMed Central

    Kröger, Manfred; Wahl, Ralf; Schachtel, Gabriel; Rice, Peter

    1992-01-01

    We have compiled the DNA sequence data for E. coli available from the GENBANK and EMBL data libraries and, over a period of several years, independently from the literature. This is the fourth listing, replacing and substantially extending the former listings. However, in order to save space this printed version contains DNA sequence information only if it is publicly available in electronic form. The complete compilation, including a full set of genetic map data and the E. coli protein index, can be obtained in machine-readable form from the EMBL data library (ECD release 10) or directly from the CD-ROM version of this supplement issue. After deletion of all detected overlaps, a total of 1,820,237 individual bp had been determined by the beginning of 1992. This corresponds to 38.56% of the entire E. coli chromosome of about 4,720 kbp. This number may actually be higher by some extra 2.5% derived from lysogenic bacteriophage lambda and various DNA sequences already received for other strains of E. coli. PMID:1598239

  16. An empirical study of FORTRAN programs for parallelizing compilers

    NASA Technical Reports Server (NTRS)

    Shen, Zhiyu; Li, Zhiyuan; Yew, Pen-Chung

    1990-01-01

    Some results are reported from an empirical study of program characteristics that are important to parallelizing compiler writers, especially in the areas of data dependence analysis and program transformations. The state of the art in data dependence analysis and some parallel execution techniques are examined. The major findings are as follows. Many subscripts contain symbolic terms with unknown values; a few methods of determining their values at compile time are evaluated. Array references with coupled subscripts appear quite frequently; these subscripts must be handled simultaneously in a dependence test, rather than being handled separately as in current test algorithms. Nonzero coefficients of loop indexes in most subscripts are found to be simple: they are either 1 or -1. This allows an exact real-valued test to be as accurate as an exact integer-valued test for one-dimensional or two-dimensional arrays. Dependences with uncertain distance are found to be rather common, and one of the main reasons is the frequent appearance of symbolic terms with unknown values.

  17. National Energy Strategy: A compilation of public comments; Interim Report

    SciTech Connect

    Not Available

    1990-04-01

    This Report presents a compilation of what the American people themselves had to say about problems, prospects, and preferences in energy. The Report draws on the National Energy Strategy public hearing record and accompanying documents. In all, 379 witnesses appeared at the hearings to exchange views with the Secretary, Deputy Secretary, and Deputy Under Secretary of Energy, and Cabinet officers of other Federal agencies. Written submissions came from more than 1,000 individuals and organizations. Transcripts of the oral testimony and question-and-answer (Q-and-A) sessions, as well as prepared statements submitted for the record and all other written submissions, form the basis for this compilation. Citations of these sources in this document use a system of identifying symbols explained below and in the accompanying box. The Report is organized into four general subject areas concerning: (1) efficiency in energy use, (2) the various forms of energy supply, (3) energy and the environment, and (4) the underlying foundations of science, education, and technology transfer. Each of these, in turn, is subdivided into sections addressing specific topics, such as (in the case of energy efficiency) energy use in the transportation, residential, commercial, and industrial sectors, respectively. 416 refs., 44 figs., 5 tabs.

  18. Designing of High-Volume PET/CT Facility with Optimal Reduction of Radiation Exposure to the Staff: Implementation and Optimization in a Tertiary Health Care Facility in India

    PubMed Central

    Jha, Ashish Kumar; Singh, Abhijith Mohan; Mithun, Sneha; Shah, Sneha; Agrawal, Archi; Purandare, Nilendu C.; Shetye, Bhakti; Rangarajan, Venkatesh

    2015-01-01

    Positron emission tomography (PET) has been in use for a few decades but with its fusion with computed tomography (CT) in 2001, the new PET/CT integrated system has become very popular and is now a key influential modality for patient management in oncology. However, along with its growing popularity, a growing concern of radiation safety among the radiation professionals has become evident. We have judiciously developed a PET/CT facility with optimal shielding, along with an efficient workflow to perform high volume procedures and minimize the radiation exposure to the staff and the general public by reducing unnecessary patient proximity to the staff and general public. PMID:26420990

  19. National Land Use Policy: Objectives, Components, Implementation.

    ERIC Educational Resources Information Center

    Soil Conservation Society of America, Ankeny, IA.

    Proceedings of a special conference sponsored by the Soil Conservation Society of America, are compiled in this report. The conference served as a forum for those involved in land use planning and implementation at all levels of government and private enterprise. Comments were directed to four main topics: (1) Objectives and Need for a National…

  20. Co-optimizing Generation and Transmission Expansion with Wind Power in Large-Scale Power Grids Implementation in the US Eastern Interconnection

    DOE PAGES

    You, Shutang; Hadley, Stanton W.; Shankar, Mallikarjun; Liu, Yilu

    2016-01-12

    This paper studies the generation and transmission expansion co-optimization problem with a high wind power penetration rate in the US Eastern Interconnection (EI) power grid. In this paper, the generation and transmission expansion problem for the EI system is modeled as a mixed-integer programming (MIP) problem. The paper also analyzes a time series generation method to capture the variation and correlation of both load and wind power across regions. The obtained series can easily be introduced into the expansion planning problem and then solved with existing MIP solvers. Simulation results show that the proposed planning model and series generation method can improve the expansion result significantly by modeling more detailed information about wind and load variation among regions in the US EI system. Moreover, the improved expansion plan that combines generation and transmission will aid system planners and policy makers in maximizing social welfare in large-scale power grids.

  1. Harmonic analysis and FPGA implementation of SHE controlled three phase CHB 11-level inverter in MV drives using deterministic and stochastic optimization techniques.

    PubMed

    Vesapogu, Joshi Manohar; Peddakotla, Sujatha; Kuppa, Seetha Rama Anjaneyulu

    2013-01-01

    With the advancements in semiconductor technology, high-power medium voltage (MV) drives are extensively used in numerous industrial applications. A challenging technical requirement of MV drives is to control a multilevel inverter (MLI) with low total harmonic distortion (%THD), satisfying the IEEE 519-1992 harmonic guidelines, and with low switching losses. Among all modulation control strategies for MLIs, the selective harmonic elimination (SHE) technique is one of the traditionally preferred control techniques at the fundamental switching frequency, with a better harmonic profile. On the other hand, the equations formed by the SHE technique are highly non-linear in nature and may have multiple, a single, or even no solution at a particular modulation index (MI). However, in some MV drive applications, operation over a range of MI is required. Providing analytical solutions for the SHE equations over the whole range of MI from 0 to 1 has been a challenging task for researchers. In this paper, an attempt is made to solve the SHE equations using deterministic and stochastic optimization methods, and a comparative harmonic analysis has been carried out. An effective algorithm that minimizes %THD with less computational effort than the other optimization algorithms is presented. To validate the effectiveness of the proposed MPSO technique, an experiment is carried out on a low-power prototype of a three-phase CHB 11-level inverter using an FPGA-based Xilinx Spartan-3A DSP controller. The experimental results prove that the MPSO technique successfully solves the SHE equations over the whole range of MI from 0 to 1, and the %THD obtained over the major range of MI also satisfies the IEEE 519-1992 harmonic guidelines. PMID:24010030
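
    For orientation, the sketch below writes the textbook SHE conditions and a %THD estimate for a five-angle (11-level cascaded H-bridge) waveform; a stochastic optimizer such as PSO would then minimize the residual norm at each modulation index. The normalization, the eliminated harmonic orders, and the example angles are assumptions for illustration, not values taken from the paper.

      import numpy as np

      def she_residual(theta, m):
          """Selective-harmonic-elimination residual for a 5-angle (11-level CHB)
          waveform: drive the fundamental to modulation index m and the 5th, 7th,
          11th and 13th harmonics to zero.  Textbook form, illustrative only.
          theta: 5 switching angles in radians, 0 < t1 < ... < t5 < pi/2."""
          s = 5  # number of H-bridge cells / switching angles
          fund = np.cos(theta).sum() / s - m
          harm = [np.cos(n * theta).sum() / s for n in (5, 7, 11, 13)]
          return np.array([fund, *harm])

      def thd(theta, upto=49):
          """%THD of the stepped waveform from odd, non-triplen harmonics up to
          `upto`, relative to the fundamental (triplens cancel line-to-line)."""
          v1 = np.cos(theta).sum()
          hs = [np.cos(n * theta).sum() / n for n in range(5, upto + 1, 2) if n % 3]
          return 100.0 * np.sqrt(np.sum(np.square(hs))) / abs(v1)

      # A stochastic optimizer (e.g. PSO) would search for angles minimising
      # norm(she_residual(theta, m)) for each modulation index m in (0, 1].
      theta0 = np.deg2rad([6.57, 18.94, 27.18, 45.14, 62.24])  # illustrative guess
      print(she_residual(theta0, 0.8), thd(theta0))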

  2. Harmonic analysis and FPGA implementation of SHE controlled three phase CHB 11-level inverter in MV drives using deterministic and stochastic optimization techniques.

    PubMed

    Vesapogu, Joshi Manohar; Peddakotla, Sujatha; Kuppa, Seetha Rama Anjaneyulu

    2013-01-01

    With the advancements in semiconductor technology, high-power medium voltage (MV) drives are extensively used in numerous industrial applications. A challenging technical requirement of MV drives is to control a multilevel inverter (MLI) with low total harmonic distortion (%THD), satisfying the IEEE 519-1992 harmonic guidelines, and with low switching losses. Among all modulation control strategies for MLIs, the selective harmonic elimination (SHE) technique is one of the traditionally preferred control techniques at the fundamental switching frequency, with a better harmonic profile. On the other hand, the equations formed by the SHE technique are highly non-linear in nature and may have multiple, a single, or even no solution at a particular modulation index (MI). However, in some MV drive applications, operation over a range of MI is required. Providing analytical solutions for the SHE equations over the whole range of MI from 0 to 1 has been a challenging task for researchers. In this paper, an attempt is made to solve the SHE equations using deterministic and stochastic optimization methods, and a comparative harmonic analysis has been carried out. An effective algorithm that minimizes %THD with less computational effort than the other optimization algorithms is presented. To validate the effectiveness of the proposed MPSO technique, an experiment is carried out on a low-power prototype of a three-phase CHB 11-level inverter using an FPGA-based Xilinx Spartan-3A DSP controller. The experimental results prove that the MPSO technique successfully solves the SHE equations over the whole range of MI from 0 to 1, and the %THD obtained over the major range of MI also satisfies the IEEE 519-1992 harmonic guidelines.

  3. Design and implementation of three-dimensional ring-scanning equipment for optimized measurements of near-infrared diffuse optical breast imaging

    NASA Astrophysics Data System (ADS)

    Yu, Jhao-Ming; Pan, Min-Cheng; Hsu, Ya-Fen; Chen, Liang-Yu; Pan, Min-Chun

    2015-07-01

    We propose and implement three-dimensional (3-D) ring-scanning equipment for near-infrared (NIR) diffuse optical imaging to screen breast tumors with the patient examined in a prone position. The equipment provides radial, circular, and vertical motion without compression of the breast tissue, thereby achieving 3-D scanning; furthermore, a flexible combination of illumination and detection channels can be configured for the required resolution. In particular, a rotation-sliding-and-moving mechanism was designed to guide the motion of the source and detection channels. Prior to machining and construction of the system, image reconstruction from synthesized data was simulated to show the feasibility of this 3-D NIR ring-scanning equipment; finally, the equipment was verified through phantom experiments. Rather than using a fixed configuration, this screening/diagnosis equipment offers the flexibility of optical-channel expansion for spatial resolution and dimensional freedom for scanning in reconstructing optical-property images.

  4. Modular implementation of a digital hardware design automation system

    NASA Astrophysics Data System (ADS)

    Masud, M.

    An automation system based on AHPL (A Hardware Programming Language) was developed. The project may be divided into three distinct phases: (1) upgrading of AHPL to make it more universally applicable; (2) implementation of a compiler for the language; and (3) illustration of how the compiler may be used to support several phases of design activities. Several new features were added to AHPL, including application-dependent parameters, multiple clocks, asynchronous results, functional registers, and primitive functions. The new language, called Universal AHPL, has been defined rigorously. The compiler design is modular. Parsing is done by an automatic parser generated from the SLR(1) BNF grammar of the language. The compiler produces two databases from the AHPL description of a circuit: the first is a tabular representation of the circuit, and the second is a detailed interconnection linked list. The two databases provide a means to interface the compiler to application-dependent CAD systems.

  5. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvements in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  6. Biogeochemical Controls on Mercury Methylation: A compilation of Data Across Fresh and Saltwater Wetlands

    NASA Astrophysics Data System (ADS)

    Gilmour, C.; Heyes, A.; Mitchell, C.; Krabbenhoft, D.; Orem, W.; Aiken, G.; Mason, R.

    2007-12-01

    Over the past decade, we have examined the biogeochemical controls on net methylmercury production across a number of wetland ecosystems, including salt marshes in Chesapeake Bay, the freshwater and estuarine Everglades, and a variety of boreal freshwater wetlands in Ontario. The balance between sulfate and sulfide is key to understanding Hg methylation rates among these ecosystems. Sulfate stimulates Hg-methylating sulfate-reducing bacteria (SRB), while sulfide creates charged mercury-sulfide complexes that are unavailable for uptake by SRB. Sulfate stimulation of methylation has been demonstrated in experimental studies that range from pure culture, to sediment and soil amendments, to large-scale field additions. Stimulation of methylation by sulfate has also been demonstrated in freshwater ecosystems impacted by sulfur pollution derived from atmospheric deposition, agriculture, and mining. This presentation compiles field and laboratory studies on the impact of sulfate and sulfide on MeHg production to create a simple, general model for the control of net Hg methylation in surface sediments and wetland soils that includes microbial activity (sulfate reduction rate), dissolved sulfide, dissolved organic matter, and soil organic matter. In particular, the model focuses on the balance between sulfate and sulfide, and the optimal concentrations of each for methylation across studies and ecosystems. Data to be presented include new information from high-sulfate and high-sulfide coastal ecosystems in Chesapeake Bay. Optimal sulfate concentrations for methylation appear to range widely among ecosystems, while optimal sulfide concentrations are more constant and often quite low, typically in the low micromolar range. However, recent data from estuarine and marine systems suggest that net methylation can proceed at somewhat higher sulfide concentrations when microbial activity is particularly high. By compiling these data, we can begin to

  7. Global Seismicity: Three New Maps Compiled with Geographic Information Systems

    NASA Technical Reports Server (NTRS)

    Lowman, Paul D., Jr.; Montgomery, Brian C.

    1996-01-01

    This paper presents three new maps of global seismicity compiled from NOAA digital data, covering the interval 1963-1998, with three different magnitude ranges (mb): greater than 3.5, less than 3.5, and all detectable magnitudes. A commercially available geographic information system (GIS) was used as the database manager. Epicenter locations were acquired from a CD-ROM supplied by the National Geophysical Data Center. A methodology is presented that can be followed by general users. The implications of the maps are discussed, including the limitations of conventional plate models, and the different tectonic behavior of continental vs. oceanic lithosphere. Several little-known areas of intraplate or passive margin seismicity are also discussed, possibly expressing horizontal compression generated by ridge push.

  8. COMPILATION OF LABORATORY SCALE ALUMINUM WASH AND LEACH REPORT RESULTS

    SciTech Connect

    HARRINGTON SJ

    2011-01-06

    This report compiles and analyzes all known wash and caustic leach laboratory studies. As further data are produced, this report will be updated. Included are aluminum mineralogical analysis results as well as a summation of the wash and leach procedures and results. Of the 177 underground storage tanks at Hanford, information was only available for five individual double-shell tanks, forty-one individual single-shell tanks (i.e., thirty-nine 100-series and two 200-series tanks), and twelve grouped tank wastes. Seven of the individual single-shell tank studies provided data for the percent of aluminum removal as a function of time for various caustic concentrations and leaching temperatures. It was determined that in most cases increased leaching temperature, caustic concentration, and leaching time leads to increased dissolution of leachable aluminum solids.

  9. A Conformance Test Suite for Arden Syntax Compilers and Interpreters.

    PubMed

    Wolf, Klaus-Hendrik; Klimek, Mike

    2016-01-01

    The Arden Syntax for Medical Logic Modules is a standardized and well-established programming language for representing medical knowledge. No public test suite exists for testing the compliance level of existing compilers and interpreters. This paper presents research on transforming the specification into a set of unit tests, represented in JUnit. It further reports on using the test suite to test four different Arden Syntax processors. The presented and compared results reveal the conformance status of the tested processors. Two examples describe how test-driven development of Arden Syntax processors can help increase compliance with the standard. Finally, some considerations on how an open-source test suite can improve the development and distribution of the Arden Syntax are presented. PMID:27577408

  10. A compilation of charged-particle induced thermonuclear reaction rates

    NASA Astrophysics Data System (ADS)

    Angulo, C.; Arnould, M.; Rayet, M.; Descouvemont, P.; Baye, D.; Leclercq-Willain, C.; Coc, A.; Barhoumi, S.; Aguer, P.; Rolfs, C.; Kunz, R.; Hammer, J. W.; Mayer, A.; Paradellis, T.; Kossionides, S.; Chronidou, C.; Spyrou, K.; degl'Innocenti, S.; Fiorentini, G.; Ricci, B.; Zavatarelli, S.; Providencia, C.; Wolters, H.; Soares, J.; Grama, C.; Rahighi, J.; Shotter, A.; Lamehi Rachti, M.

    1999-08-01

    Low-energy cross section data for 86 charged-particle induced reactions involving light (1 <= Z <= 14), mostly stable, nuclei are compiled. The corresponding Maxwellian-averaged thermonuclear reaction rates of relevance in astrophysical plasmas at temperatures in the range from 10^6 K to 10^10 K are calculated. These evaluations assume either that the target nuclei are in their ground state, or that the target states are thermally populated following a Maxwell-Boltzmann distribution, except in some cases involving isomeric states. Adopted values complemented with lower and upper limits of the rates are presented in tabular form. Analytical approximations to the adopted rates, as well as to the inverse/direct rate ratios, are provided.
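
    For readers unfamiliar with the quantity being tabulated, the standard definition of a Maxwellian-averaged reaction rate per particle pair is sketched below; the notation is generic and not necessarily identical to that used in the compilation.

      \[
        N_A\langle\sigma v\rangle \;=\;
        N_A\,\sqrt{\frac{8}{\pi\mu}}\,(k_B T)^{-3/2}
        \int_0^{\infty}\sigma(E)\,E\,
        \exp\!\left(-\frac{E}{k_B T}\right)\mathrm{d}E
      \]
      % mu: reduced mass of the interacting pair; sigma(E): cross section;
      % N_A: Avogadro's number; T: plasma temperature.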

  11. Compiler-Enhanced Incremental Checkpointing for OpenMP Applications

    SciTech Connect

    Bronevetsky, G; Marques, D; Pingali, K; Rugina, R; McKee, S A

    2008-01-21

    As modern supercomputing systems reach the peta-flop performance range, they grow in both size and complexity. This makes them increasingly vulnerable to failures from a variety of causes. Checkpointing is a popular technique for tolerating such failures, enabling applications to periodically save their state and restart computation after a failure. Although a variety of automated system-level checkpointing solutions are currently available to HPC users, manual application-level checkpointing remains more popular due to its superior performance. This paper improves performance of automated checkpointing via a compiler analysis for incremental checkpointing. This analysis, which works with both sequential and OpenMP applications, reduces checkpoint sizes by as much as 80% and enables asynchronous checkpointing.
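
    The sketch below illustrates the incremental idea in a runtime form: only blocks whose contents changed since the previous checkpoint are written. The paper's contribution is a compile-time analysis that identifies such regions without hashing; the hashing here is only a stand-in to make the example self-contained.

      import hashlib, pickle

      class IncrementalCheckpointer:
          """Minimal sketch of incremental checkpointing: only blocks whose content
          hash changed since the last checkpoint are written out again."""
          def __init__(self):
              self.last = {}                       # block id -> content hash

          def checkpoint(self, blocks):
              """blocks: dict mapping block id -> picklable state; returns the delta."""
              delta = {}
              for bid, state in blocks.items():
                  h = hashlib.sha1(pickle.dumps(state)).hexdigest()
                  if self.last.get(bid) != h:      # changed (or new) since last time
                      delta[bid] = state
                      self.last[bid] = h
              return delta                         # write only this delta to stable storage

      ckpt = IncrementalCheckpointer()
      print(len(ckpt.checkpoint({"a": [0] * 1000, "b": [1] * 1000})))  # 2 (full checkpoint)
      print(len(ckpt.checkpoint({"a": [0] * 1000, "b": [2] * 1000})))  # 1 (only "b" changed)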

  12. Compiler-Enhanced Incremental Checkpointing for OpenMP Applications

    SciTech Connect

    Bronevetsky, G; Marques, D; Pingali, K; McKee, S; Rugina, R

    2009-02-18

    As modern supercomputing systems reach the peta-flop performance range, they grow in both size and complexity. This makes them increasingly vulnerable to failures from a variety of causes. Checkpointing is a popular technique for tolerating such failures, enabling applications to periodically save their state and restart computation after a failure. Although a variety of automated system-level checkpointing solutions are currently available to HPC users, manual application-level checkpointing remains more popular due to its superior performance. This paper improves performance of automated checkpointing via a compiler analysis for incremental checkpointing. This analysis, which works with both sequential and OpenMP applications, significantly reduces checkpoint sizes and enables asynchronous checkpointing.

  13. Compilation of Sandia coal char combustion data and kinetic analyses

    SciTech Connect

    Mitchell, R.E.; Hurt, R.H.; Baxter, L.L.; Hardesty, D.R.

    1992-06-01

    An experimental project was undertaken to characterize the physical and chemical processes that govern the combustion of pulverized coal chars. The experimental endeavor establishes a database on the reactivities of coal chars as a function of coal type, particle size, particle temperature, gas temperature, and gas composition. The project also provides a better understanding of the mechanism of char oxidation and yields quantitative information on the release rates of nitrogen- and sulfur-containing species during char combustion. An accurate predictive engineering model of the overall char combustion process under technologically relevant conditions is a primary product of this experimental effort. This document summarizes the experimental effort, the approach used to analyze the data, and individual compilations of data and kinetic analyses for each of the parent coals investigated.

  14. Database compilation: hydrology of Lake Tiberias (Jordan Valley)

    NASA Astrophysics Data System (ADS)

    Shentsis, Izabela; Rosenthal, Eliyahu; Magri, Fabien

    2014-05-01

    A long-term series of water balance data spanning the last 50 years is compiled to gain insights into the hydrology of Lake Tiberias (LT) and the surrounding aquifers. This database is used within the framework of a German-Israeli-Jordanian project (DFG Ma4450-2) in which numerical modeling is applied to study the mechanisms of deep fluid transport processes affecting the Tiberias basin. The LT is the largest natural freshwater lake in Israel and is located in the northern part of the Dead Sea Rift. The behavior of the lake level results from the regional water balance, governed mainly by the interaction of two factors: (i) fluctuations of water inflow to the lake, and (ii) water exploitation in the adjacent aquifers and consumption from the lake (pumping, diversion, etc.). The replenishment of the lake occurs through drainage from the surrounding mountains (Galilee, Golan Heights) entering the lake through the Jordan River and secondary streams (85%), direct precipitation (11%), fresh and saline springs discharging along the shoreline, diversion from the Yarmouk River, and internal springs and seeps. The major losses occur through the National Water Carrier (ca. 44%), evaporation (38%), and local consumption and compensation to Jordan (in sum 12%). In spite of the increasing role of water exploitation, the natural inflow remains the dominant factor in the hydrological regime of Lake Tiberias. Additionally, series of natural yield to the LT are reconstructed from precipitation data measured in the Tiberias basin (1922-2012). The earlier period (1877-1921) is evaluated using long rainfall records at the Beirut and Nazareth stations (Middle East region). These data enable the LT yield to be used as a complex indicator of regional climate change. Though the data apply to the LT, this example shows the importance of a large database. Its compilation defines the correct set-up of joint methodologies such as numerical modeling and hydrochemical analyses aimed at understanding large

  15. Southwest Indian Ocean Bathymetric Compilation (swIOBC)

    NASA Astrophysics Data System (ADS)

    Jensen, L.; Dorschel, B.; Arndt, J. E.; Jokat, W.

    2014-12-01

    As a result of long-term scientific activities in the southwest Indian Ocean, an extensive amount of swath bathymetric data has accumulated in the AWI database. Using these data as a backbone, supplemented by additional bathymetric data sets and predicted bathymetry, we generate a comprehensive regional bathymetric data compilation for the southwest Indian Ocean. A high-resolution bathymetric chart of this region will support geological and climate research: identification of current-induced seabed structures will help in modelling oceanic currents and thus provide proxy information about the paleo-climate, and analysis of the sediment distribution will contribute to reconstructing the erosional history of Eastern Africa. The aim of swIOBC is to produce a homogeneous and seamless bathymetric grid with an associated meta-database and a corresponding map for the area from 5° to 39° S and 20° to 44° E. Currently, multibeam data with a track length of approximately 86,000 km are held in-house. In combination with external echosounding data this allows for the generation of a regional grid, significantly improving the existing, mostly satellite-altimetry-derived, bathymetric models. The collected data sets are heterogeneous in terms of age, acquisition system, background data, resolution, accuracy, and documentation. As a consequence, the production of a bathymetric grid requires special techniques and algorithms, which were already developed for the IBCAO (Jakobsson et al., 2012) and further refined for the IBCSO (Arndt et al., 2013). The new regional southwest Indian Ocean chart will be created based on these methods. Arndt, J.E., et al., 2013. The International Bathymetric Chart of the Southern Ocean (IBCSO) Version 1.0 - A new bathymetric compilation covering circum-Antarctic waters. GRL 40, 1-7, doi:10.1002/grl.50413. Jakobsson, M., et al., 2012. The International Bathymetric Chart of the Arctic Ocean (IBCAO) Version 3.0. GRL 39, L12609, doi:10.1029/2012GL052219.

  16. How to achieve performance portable code using OpenACC compiler directives?

    NASA Astrophysics Data System (ADS)

    Lapillonne, Xavier; Fuhrer, Oliver

    2014-05-01

    In view of adapting the weather and climate model COSMO to future architectures, a new version of the model capable of running on graphics processing units (GPUs) has been developed. A large part of the code has been ported using compiler directives based on the OpenACC programming model. In order to achieve the best performance on GPUs, several optimizations have been introduced for time-critical components, mostly in the so-called physical parameterizations. Some of these modifications unfortunately degrade performance on traditional CPUs. Being a large community code, the COSMO model is required to perform well on both hybrid and CPU-only supercomputers. The current practical solution is to have separate source files for GPU and CPU execution, which may in the long term create maintenance issues. Considering the physical parameterization responsible for the atmospheric radiative transfer computations, we first present the restructuring techniques necessary to achieve performance on the GPU. We then show that some parts of the code are compute bound on the CPU while memory bound on the GPU, leading to different requirements in terms of optimization. We finally discuss various solutions to achieve a portable and maintainable code, both in terms of possible improvements to the OpenACC standard and in terms of programming strategy.

  17. Compiling probabilistic, bio-inspired circuits on a field programmable analog array.

    PubMed

    Marr, Bo; Hasler, Jennifer

    2014-01-01

    A field programmable analog array (FPAA) is presented as an energy and computational efficiency engine: a mixed-mode processor for which functions can be compiled at significantly lower energy cost using probabilistic computing circuits. More specifically, it will be shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than in a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. From Bernoulli variables it is shown that exponentially distributed random variables, and random variables of an arbitrary distribution, can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach is extended to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199
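
    The software sketch below mirrors the chain of ideas in the abstract: exponential waiting times are approximated from a stream of Bernoulli trials, and those waiting times drive one step of a Gillespie-style stochastic simulation. The rates, step size, and toy birth-death model are assumptions for illustration; the paper realizes the equivalent computation in analog hardware.

      import random

      def exponential_from_bernoulli(lam, dt=1e-3):
          """Approximate an Exponential(lam) variate from Bernoulli trials with
          success probability lam*dt (a geometric waiting time); illustrative only."""
          t = 0.0
          while random.random() >= lam * dt:
              t += dt
          return t

      def gillespie_step(state, propensities, updates):
          """One Gillespie step: sample the waiting time from the total propensity
          and pick a reaction with probability proportional to its rate."""
          rates = [f(state) for f in propensities]
          total = sum(rates)
          if total == 0:
              return state, float("inf")
          tau = exponential_from_bernoulli(total)
          r, acc = random.random() * total, 0.0
          for rate, update in zip(rates, updates):
              acc += rate
              if r < acc:
                  return update(state), tau
          return updates[-1](state), tau

      # Toy birth-death process: X -> X+1 at rate 2, X -> X-1 at rate 0.1*X
      state, t = 10, 0.0
      for _ in range(5):
          state, tau = gillespie_step(state,
                                      [lambda x: 2.0, lambda x: 0.1 * x],
                                      [lambda x: x + 1, lambda x: x - 1])
          t += tau
      print(state, t)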

  18. Compiling probabilistic, bio-inspired circuits on a field programmable analog array

    PubMed Central

    Marr, Bo; Hasler, Jennifer

    2014-01-01

    A field programmable analog array (FPAA) is presented as an energy and computational efficiency engine: a mixed-mode processor for which functions can be compiled at significantly lower energy cost using probabilistic computing circuits. More specifically, it will be shown that the core computation of any dynamical system can be computed on the FPAA at significantly less energy per operation than in a digital implementation. A stochastic system that is dynamically controllable via voltage-controlled amplifier and comparator thresholds is implemented, which computes Bernoulli random variables. From Bernoulli variables it is shown that exponentially distributed random variables, and random variables of an arbitrary distribution, can be computed. The Gillespie algorithm is simulated to show the utility of this system by calculating the trajectory of a biological system computed stochastically with this probabilistic hardware, where over a 127X performance improvement over current software approaches is shown. The relevance of this approach is extended to any dynamical system. The initial circuits and ideas for this work were generated at the 2008 Telluride Neuromorphic Workshop. PMID:24847199

  19. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    SciTech Connect

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh; Manzano Franco, Joseph B.; Tumeo, Antonino

    2015-05-20

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics, and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.

  20. Library for Nonlinear Optimization

    2001-10-09

    OPT++ is a C++ object-oriented library for nonlinear optimization. This incorporates an improved implementation of an existing capability and two new algorithmic capabilities based on existing journal articles and freely available software.

  1. The development of a multi-target compiler-writing system for flight software development

    NASA Technical Reports Server (NTRS)

    Feyock, S.; Donegan, M. K.

    1977-01-01

    A wide variety of systems designed to assist the user in the task of writing compilers has been developed. A survey of these systems reveals that none is entirely appropriate to the purposes of the MUST project, which involves the compilation of one or at most a small set of higher-order languages to a wide variety of target machines offering little or no software support. This requirement dictates that any compiler writing system employed must provide maximal support in the areas of semantics specification and code generation, the areas in which existing compiler writing systems as well as theoretical underpinnings are weakest. This paper describes an ongoing research and development effort to create a compiler writing system which will overcome these difficulties, thus providing a software system which makes possible the fast, trouble-free creation of reliable compilers for a wide variety of target computers.

  2. POET: Parameterized Optimization for Empirical Tuning

    SciTech Connect

    Yi, Q; Seymour, K; You, H; Vuduc, R; Quinlan, D

    2007-01-29

    The excessive complexity of both machine architectures and applications has made it difficult for compilers to statically model and predict application behavior. This observation motivates the recent interest in performance tuning using empirical techniques. We present a new embedded scripting language, POET (Parameterized Optimization for Empirical Tuning), for parameterizing complex code transformations so that they can be empirically tuned. The POET language aims to significantly improve the generality, flexibility, and efficiency of existing empirical tuning systems. We have used the language to parameterize and empirically tune three loop optimizations (interchange, blocking, and unrolling) for two linear algebra kernels. We show experimentally that the time required to tune these optimizations using POET, which does not require any program analysis, is significantly shorter than when using a full compiler-based source-code optimizer that performs sophisticated program analysis and optimizations.
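
    As an illustration of what parameterizing a transformation for empirical tuning means, the sketch below exposes the blocking factor of a matrix multiply as a tunable parameter and picks the best value by timing candidates; it is a plain-Python stand-in, not POET syntax or the kernels used in the paper.

      import time, random

      def matmul_blocked(A, B, bs):
          """Matrix multiply with a parameterised blocking (tiling) factor `bs`,
          the kind of transformation an empirical tuner exposes as a knob."""
          n = len(A)
          C = [[0.0] * n for _ in range(n)]
          for ii in range(0, n, bs):
              for kk in range(0, n, bs):
                  for jj in range(0, n, bs):
                      for i in range(ii, min(ii + bs, n)):
                          for k in range(kk, min(kk + bs, n)):
                              a = A[i][k]
                              for j in range(jj, min(jj + bs, n)):
                                  C[i][j] += a * B[k][j]
          return C

      # Empirical tuning loop: time each candidate block size and keep the best.
      n = 96
      A = [[random.random() for _ in range(n)] for _ in range(n)]
      B = [[random.random() for _ in range(n)] for _ in range(n)]
      best_bs, best_t = None, float("inf")
      for bs in (8, 16, 32, 48):
          t0 = time.perf_counter()
          matmul_blocked(A, B, bs)
          dt = time.perf_counter() - t0
          if dt < best_t:
              best_bs, best_t = bs, dt
      print("best block size:", best_bs)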

  3. Compiler-Assisted Detection of Transient Memory Errors

    SciTech Connect

    Tavarageri, Sanket; Krishnamoorthy, Sriram; Sadayappan, Ponnuswamy

    2014-06-09

    The probability of bit flips in hardware memory systems is projected to increase significantly as memory systems continue to scale in size and complexity. Effective hardware-based error detection and correction requires that the complete data path, involving all parts of the memory system, be protected with sufficient redundancy. First, this may be costly to employ on commodity computing platforms and second, even on high-end systems, protection against multi-bit errors may be lacking. Therefore, augmenting hardware error detection schemes with software techniques is of considerable interest. In this paper, we consider software-level mechanisms to comprehensively detect transient memory faults. We develop novel compile-time algorithms to instrument application programs with checksum computation codes so as to detect memory errors. Unlike prior approaches that employ checksums on computational and architectural state, our scheme verifies every data access and works by tracking variables as they are produced and consumed. Experimental evaluation demonstrates that the proposed comprehensive error detection solution is viable as a completely software-only scheme. We also demonstrate that with limited hardware support, overheads of error detection can be further reduced.
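
    The invariant enforced by the proposed instrumentation can be pictured with the following runtime sketch: a checksum is attached when a value is produced and re-verified when it is consumed. The class and method names are hypothetical; the paper inserts equivalent checks at compile time rather than through a wrapper object.

      import zlib

      class CheckedVar:
          """Sketch of the checksum idea: a value carries a checksum computed when
          it is produced, and every consumer re-verifies it before use."""
          def __init__(self, value):
              self._value = value
              self._crc = zlib.crc32(repr(value).encode())   # checksum at "produce" time

          def get(self):
              if zlib.crc32(repr(self._value).encode()) != self._crc:
                  raise RuntimeError("transient memory error detected")
              return self._value                              # checksum verified at "consume" time

      x = CheckedVar([1, 2, 3])
      print(x.get())            # passes verification
      x._value[0] = 99          # simulate a bit flip between produce and consume
      try:
          x.get()
      except RuntimeError as e:
          print(e)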

  4. OMPC: an Open-Source MATLAB-to-Python Compiler.

    PubMed

    Jurica, Peter; van Leeuwen, Cees

    2009-01-01

    Free access to scientific information facilitates scientific progress. Open-access scientific journals are a first step in this direction; a further step is to make auxiliary and supplementary materials that accompany scientific publications, such as methodological procedures and data-analysis tools, open and accessible to the scientific community. To this purpose it is instrumental to establish a software base, which will grow toward a comprehensive free and open-source language of technical and scientific computing. Endeavors in this direction are met with an important obstacle. MATLAB, the predominant computation tool in many fields of research, is a closed-source commercial product. To facilitate the transition to an open computation platform, we propose the Open-source MATLAB-to-Python Compiler (OMPC), a platform that uses syntax adaptation and emulation to allow transparent import of existing MATLAB functions into Python programs. The imported MATLAB modules will run independently of MATLAB, relying on Python's numerical and scientific libraries. Python offers a stable and mature open-source platform that, in many respects, surpasses commonly used, expensive commercial closed-source packages. The proposed software will therefore facilitate the transparent transition towards a free and general open-source lingua franca for scientific computation, while enabling access to the existing methods and algorithms of technical computing already available in MATLAB. OMPC is available at http://ompc.juricap.com.

  5. Compilation of fission product yields Vallecitos Nuclear Center

    SciTech Connect

    Rider, B.F.

    1980-01-01

    This document is the ninth in a series of compilations of fission yield data made at Vallecitos Nuclear Center in which fission yield measurements reported in the open literature and calculated charge distributions have been utilized to produce a recommended set of yields for the known fission products. The original data with reference sources, as well as the recommended yields are presented in tabular form for the fissionable nuclides U-235, Pu-239, Pu-241, and U-233 at thermal neutron energies; for U-235, U-238, Pu-239, and Th-232 at fission spectrum energies; and U-235 and U-238 at 14 MeV. In addition, U-233, U-236, Pu-240, Pu-241, Pu-242, Np-237 at fission spectrum energies; U-233, Pu-239, Th-232 at 14 MeV and Cf-252 spontaneous fission are similarly treated. For 1979 U234F, U237F, Pu249H, U234He, U236He, Pu238F, Am241F, Am243F, Np238F, and Cm242F yields were evaluated. In 1980, Th227T, Th229T, Pa231F, Am241T, Am241H, Am242Mt, Cm245T, Cf249T, Cf251T, and Es254T are also evaluated.

  6. Archive Compiles New Resource for Global Tropical Cyclone Research

    NASA Astrophysics Data System (ADS)

    Knapp, Kenneth R.; Kruk, Michael C.; Levinson, David H.; Gibney, Ethan J.

    2009-02-01

    The International Best Track Archive for Climate Stewardship (IBTrACS) compiles tropical cyclone best track data from 11 tropical cyclone forecast centers around the globe, producing a unified global best track data set (M. C. Kruk et al., A technique for merging global tropical cyclone best track data, submitted to Journal of Atmospheric and Oceanic Technology, 2008). Best track data (so called because the data generally refer to the best estimate of a storm's characteristics) include the position, maximum sustained winds, and minimum central pressure of a tropical cyclone at 6-hour intervals. Despite the significant impact of tropical cyclones on society and natural systems, there had been no central repository maintained for global best track data prior to the development of IBTrACS in 2008. The data set, which builds upon the efforts of the international tropical forecasting community, has become the most comprehensive global best track data set publicly available. IBTrACS was created by the U.S. National Oceanic and Atmospheric Administration's National Climatic Data Center (NOAA NCDC) under the auspices of the World Data Center for Meteorology.

  7. Research Implementation.

    ERIC Educational Resources Information Center

    Trochim, William M. K.

    Investigated is the topic of research implementation and how it can affect evaluation results. Even when evaluations are well planned, the obtained results can be misleading if the conscientiously-constructed research plan is not correctly implemented in practice. In virtually every research arena, one finds major difficulties in implementing the…

  8. Analog neural nonderivative optimizers.

    PubMed

    Teixeira, M M; Zak, S H

    1998-01-01

    Continuous-time neural networks for solving convex nonlinear unconstrained programming problems without using gradient information of the objective function are proposed and analyzed. Thus, the proposed networks are nonderivative optimizers. First, networks for optimizing objective functions of one variable are discussed. Then, an existing one-dimensional optimizer is analyzed, and a new line search optimizer is proposed. It is shown that the proposed optimizer network is robust in the sense that it has disturbance rejection property. The network can be implemented easily in hardware using standard circuit elements. The one-dimensional net is used as a building block in multidimensional networks for optimizing objective functions of several variables. The multidimensional nets implement a continuous version of the coordinate descent method.
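
    A software analogue of the construction may help: the one-dimensional, derivative-free line search below plays the role of the building-block optimizer, and cycling it over coordinates gives the multidimensional scheme. This is an illustrative discrete-time sketch, not the continuous-time network dynamics analyzed in the paper.

      def line_search(f, x, i, step=1.0, shrink=0.5, tol=1e-6):
          """Derivative-free line search along coordinate i: probe both directions,
          shrinking the step until no further improvement is found."""
          while step > tol:
              for s in (+step, -step):
                  trial = list(x); trial[i] += s
                  if f(trial) < f(x):
                      x = trial
                      break
              else:
                  step *= shrink
          return x

      def coordinate_descent(f, x0, sweeps=50):
          """Cycle the 1-D optimizer over coordinates, mirroring how the
          multidimensional network is built from 1-D building blocks."""
          x = list(x0)
          for _ in range(sweeps):
              for i in range(len(x)):
                  x = line_search(f, x, i)
          return x

      # Convex test function; no gradient information is used anywhere.
      f = lambda v: (v[0] - 3) ** 2 + 2 * (v[1] + 1) ** 2
      print(coordinate_descent(f, [0.0, 0.0]))   # approximately [3.0, -1.0]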

  9. BioMercator V3: an upgrade of genetic map compilation and quantitative trait loci meta-analysis algorithms

    PubMed Central

    Sosnowski, Olivier; Charcosset, Alain; Joets, Johann

    2012-01-01

    Summary: Compilation of genetic maps combined to quantitative trait loci (QTL) meta-analysis has proven to be a powerful approach contributing to the identification of candidate genes underlying quantitative traits. BioMercator was the first software offering a complete set of algorithms and visualization tool covering all steps required to perform QTL meta-analysis. Despite several limitations, the software is still widely used. We developed a new version proposing additional up to date methods and improving graphical representation and exploration of large datasets. Availability and implementation: BioMercator V3 is implemented in JAVA and freely available (http://moulon.inra.fr/biomercator) Contact: joets@moulon.inra.fr PMID:22661647

  10. An Atmospheric General Circulation Model with Chemistry for the CRAY T3E: Design, Performance Optimization and Coupling to an Ocean Model

    NASA Technical Reports Server (NTRS)

    Farrara, John D.; Drummond, Leroy A.; Mechoso, Carlos R.; Spahr, Joseph A.

    1998-01-01

    The design, implementation and performance optimization on the CRAY T3E of an atmospheric general circulation model (AGCM) which includes the transport of, and chemical reactions among, an arbitrary number of constituents is reviewed. The parallel implementation is based on a two-dimensional (longitude and latitude) data domain decomposition. Initial optimization efforts centered on minimizing the impact of substantial static and weakly-dynamic load imbalances among processors through load redistribution schemes. Recent optimization efforts have centered on single-node optimization. Strategies employed include loop unrolling, both manually and through the compiler, the use of an optimized assembler-code library for special function calls, and restructuring of parts of the code to improve data locality. Data exchanges and synchronizations involved in coupling different data-distributed models can account for a significant fraction of the running time. Therefore, the required scattering and gathering of data must be optimized. In systems such as the T3E, there is much more aggregate bandwidth in the total system than in any particular processor. This suggests a distributed design. The design and implementation of such a distributed 'Data Broker' as a means to efficiently couple the components of our climate system model is described.
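
    For readers unfamiliar with the two-dimensional (longitude-latitude) data domain decomposition mentioned above, the following sketch shows one conventional way of assigning contiguous index ranges to a rectangular processor grid. The function names and the example grid sizes are illustrative assumptions, not values taken from the AGCM itself.

      # Sketch of a 2-D (longitude x latitude) block decomposition of a global
      # grid across a rectangular grid of processors. Names are illustrative.
      def block_range(n_points, n_procs, rank):
          """Return the [start, stop) index range owned by 'rank' when n_points
          are split as evenly as possible across n_procs processors."""
          base, extra = divmod(n_points, n_procs)
          start = rank * base + min(rank, extra)
          stop = start + base + (1 if rank < extra else 0)
          return start, stop

      def decompose(n_lon, n_lat, p_lon, p_lat):
          """Map each processor (i, j) in a p_lon x p_lat grid to its subdomain."""
          layout = {}
          for i in range(p_lon):
              for j in range(p_lat):
                  layout[(i, j)] = (block_range(n_lon, p_lon, i),
                                    block_range(n_lat, p_lat, j))
          return layout

      # Example: a 144 x 91 global grid on a 4 x 2 processor grid.
      for proc, (lon_rng, lat_rng) in decompose(144, 91, 4, 2).items():
          print(proc, "lon", lon_rng, "lat", lat_rng)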

  11. Conceptual design optimization study

    NASA Technical Reports Server (NTRS)

    Hollowell, S. J.; Beeman, E. R., II; Hiyama, R. M.

    1990-01-01

    The feasibility of applying multilevel functional decomposition and optimization techniques to conceptual design of advanced fighter aircraft was investigated. Applying the functional decomposition techniques to the conceptual design phase appears to be feasible. The initial implementation of the modified design process will optimize wing design variables. A hybrid approach, combining functional decomposition techniques for generation of aerodynamic and mass properties linear sensitivity derivatives with existing techniques for sizing mission performance and optimization, is proposed.

  12. Global compilation of coastline change at river mouths

    NASA Astrophysics Data System (ADS)

    Aadland, Tore; Helland-Hansen, William

    2016-04-01

    We are using Google Earth Engine to analyze Landsat images to create a global compilation of coastline change at river mouths in order to develop scaling relationships between catchment properties and shoreline behaviour. Our main motivation for doing this is to better understand the rates at which shallowing-upward deltaic successions are formed. We are also interested in gaining insight into the impact of climate change and human activity on modern shorelines. Google Earth Engine is a platform that offers simple selection of relevant data from an extensive catalog of geospatial data and the tools to analyse it efficiently. We have used Google Earth Engine to select and analyze temporally and geographically bounded sets of Landsat images covering modern deltas included in the Milliman and Farnsworth 2010 database. The part of the shoreline sampled for each delta has been manually defined. The areas depicted in these image sets have been classified as land or water by thresholding a calibrated Modified Normalized Water Index. By representing land and water as 1.0 and 0 respectively and averaging image sets of sufficient size we have generated rasters quantifying the probability of an area being classified as land. The calculated probabilities reflect variation in the shoreline position; in particular, they minimize the impact of short-term variations produced by tides. The net change in the land area of deltas can be estimated by comparing how the probability changes between image sets spanning different time periods. We have estimated the land area change that occurred from 2000 to 2014 at more than 130 deltas with catchment areas ranging from 470 to 6,300,000 sq km. Log-log plots of the land area change of these deltas against their respective catchment properties in the Milliman and Farnsworth 2010 database indicate that the rate of land area change correlates with catchment size and discharge. Useful interpretation of the data requires that we
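
    The classification and averaging steps described above can be pictured with the short sketch below, which uses plain NumPy arrays in place of Earth Engine image collections: compute a water index, threshold it, average a stack of binary land/water images into a per-pixel land probability, and difference two periods. The band arrays, threshold value and pixel area are assumptions for illustration and are not the values used in the study.

      # Sketch of land/water classification and land-probability averaging.
      import numpy as np

      def mndwi(green, swir):
          """Modified Normalized Difference Water Index for one image."""
          return (green - swir) / (green + swir + 1e-9)

      def land_probability(green_stack, swir_stack, threshold=0.0):
          """Classify each image as land (1.0) / water (0.0), then average."""
          water = mndwi(green_stack, swir_stack) > threshold
          land = (~water).astype(float)
          return land.mean(axis=0)          # per-pixel probability of land

      def land_area_change(prob_t0, prob_t1, pixel_area_km2=0.0009):
          """Net land-area change between two periods (30 m pixels assumed)."""
          return float((prob_t1 - prob_t0).sum()) * pixel_area_km2

      # Example with random stacks standing in for two sets of Landsat scenes.
      rng = np.random.default_rng(0)
      g0, s0 = rng.random((12, 50, 50)), rng.random((12, 50, 50))
      g1, s1 = rng.random((12, 50, 50)), rng.random((12, 50, 50))
      print(land_area_change(land_probability(g0, s0), land_probability(g1, s1)))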

  13. Evaluation and compilation of fission product yields 1993

    SciTech Connect

    England, T.R.; Rider, B.F.

    1995-12-31

    This document is the latest in a series of compilations of fission yield data. Fission yield measurements reported in the open literature and calculated charge distributions have been used to produce a recommended set of yields for the fission products. The original data with reference sources, and the recommended yields are presented in tabular form. These include many nuclides which fission by neutrons at several energies. These energies include thermal energies (T), fission spectrum energies (F), 14 MeV high energies (H or HE), and spontaneous fission (S), in six sets of ten each. Set A includes U235T, U235F, U235HE, U238F, U238HE, Pu239T, Pu239F, Pu241T, U233T, Th232F. Set B includes U233F, U233HE, U236F, Pu239H, Pu240F, Pu241F, Pu242F, Th232H, Np237F, Cf252S. Set C includes U234F, U237F, Pu240H, U234HE, U236HE, Pu238F, Am241F, Am243F, Np238F, Cm242F. Set D includes Th227T, Th229T, Pa231F, Am241T, Am241H, Am242MT, Cm245T, Cf249T, Cf251T, Es254T. Set E includes Cf250S, Cm244S, Cm248S, Es253S, Fm254S, Fm255T, Fm256S, Np237H, U232T, U238S. Set F includes Cm243T, Cm246S, Cm243F, Cm244F, Cm246F, Cm248F, Pu242H, Np237T, Pu240T, and Pu242T to complete fission product yield evaluations for 60 fissioning systems in all. This report also serves as the primary documentation for the second evaluation of yields in ENDF/B-VI released in 1993.

  14. Compiling Mercury relief map using several data sources

    NASA Astrophysics Data System (ADS)

    Zakharova, M.

    2015-12-01

    Several data sets of Mercury topography have been obtained by processing materials collected by two spacecraft, Mariner 10 and MESSENGER, during their Mercury flybys. The history of visual mapping of Mercury is a recent one: the first significant observations were made during the latter half of the 20th century, and today the only data with 100% coverage of the entire surface of Mercury is the global mosaic composed of images acquired by MESSENGER. The main objective of this work is to provide the first Mercury relief map using all the existing elevation data. The workflow included collecting, combining and processing the existing data and then merging them correctly into a single map. Preference was given to topography data, while the global mosaic was used to fill the gaps where there was insufficient topography. The Mercury relief map has been created with the help of four different types of data: the global mosaic with 100% coverage of Mercury's surface created from MESSENGER orbital images (36% of the final map); Digital Terrain Models obtained by processing stereo images made during Mariner 10's flybys (15% of the map) (Cook and Robinson, 2000); Digital Terrain Models obtained from images acquired during the MESSENGER flybys (24% of the map) (F. Preusker et al., 2011); and the data sets produced by the MESSENGER Mercury Laser Altimeter (MLA) (25% of the map). The final map is created in the Lambert azimuthal equal-area projection at a scale of 1:18,000,000. It represents two hemispheres, western and eastern, separated by the zero meridian. It mainly shows the hypsometric features of the planet and craters with diameters greater than 200 kilometers.
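
    The gap-filling strategy described above, giving preference to topography data and falling back on the mosaic, can be sketched as a simple priority merge of co-registered rasters. The source order, the use of NaN for missing data and the toy arrays are illustrative assumptions only.

      # Sketch of merging several partially overlapping elevation rasters into
      # one map, preferring higher-priority sources and filling gaps from the
      # next ones. NaN marks "no data" in this illustration.
      import numpy as np

      def merge_by_priority(*rasters):
          """rasters: arrays of identical shape, most trusted first."""
          merged = np.full(rasters[0].shape, np.nan)
          for layer in rasters:
              gaps = np.isnan(merged)
              merged[gaps] = layer[gaps]   # fill only where nothing better exists
          return merged

      # Example: altimetry-style data, a stereo DTM, and a mosaic-derived fallback.
      shape = (4, 4)
      mla    = np.where(np.eye(4, dtype=bool), 1.0, np.nan)
      stereo = np.where(np.arange(16).reshape(shape) % 2 == 0, 2.0, np.nan)
      mosaic = np.full(shape, 3.0)
      print(merge_by_priority(mla, stereo, mosaic))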

  15. Assessment of the current status of basic nuclear data compilations

    SciTech Connect

    Riemer, R.L.

    1992-12-31

    The Panel on Basic Nuclear Data Compilations believes that it is important to provide the user with an evaluated nuclear database of the highest quality, dependability, and currency. It is also important that the evaluated nuclear data are easily accessible to the user. In the past the panel concentrated its concern on the cycle time for the publication of A-chain evaluations. However, the panel now recognizes that publication cycle time is no longer the appropriate goal. Sometime in the future, publication of the evaluated A-chains will evolve from the present hard-copy Nuclear Data Sheets on library shelves to purely electronic publication, with the advent of universal access to terminals and the nuclear databases. Therefore, the literature cut-off date in the Evaluated Nuclear Structure Data File (ENSDF) is rapidly becoming the only important measure of the currency of an evaluated A-chain. Also, it has become exceedingly important to ensure that access to the databases is as user-friendly as possible and to enable electronic publication of the evaluated data files. Considerable progress has been made in these areas: use of the on-line systems has almost doubled in the past year, and there has been initial development of tools for electronic evaluation, publication, and dissemination. Currently, the nuclear data effort is in transition between the traditional and future methods of dissemination of the evaluated data. Also, many of the factors that adversely affect the publication cycle time simultaneously affect the currency of the evaluated nuclear database. Therefore, the panel continues to examine factors that can influence cycle time: the number of evaluators, the frequency with which an evaluation can be updated, the review of the evaluation, and the production of the evaluation, which currently exists as a hard-copy issue of Nuclear Data Sheets.

  16. Magnetic mapping of the western Alps: compilation and geological implications

    NASA Astrophysics Data System (ADS)

    Mouge, Pascal; Galdeano, Armand

    1991-05-01

    Extracted from large surveys of France, Italy and Switzerland, airborne magnetic data covering the western Alpine Arc have been compiled into a single homogeneous map of magnetic anomalies at the constant altitude of 3000 m. For this purpose, each data set has been revised thoroughly and accurately to give a single coherent large-scale pattern. The magnetic contour map reflects the anomaly pattern over the entire length of the Western Alpine collision suture. The distribution of polarities exhibits a large anomalous low located by reduction to the pole over the whole external part of the belt. The observed anomaly suggests a large gap of magnetization between the Adriatic microplate and the European crust. The analysis of the waveband shows that the broadest wavelengths are produced in the lower crust close to the transition zone, in the granulite facies. This highly magnetic layer is used as a marker to describe the geometry of the European and Adriatic deep-seated crust. The main results are presented on a composite synthetic profile showing the sloping side of the European slab and an important crustal thinning to the southeast of the Adriatic slab. This feature is emphasized on the magnetic contour map by a linear magnetic low attributed to a major transcurrent fault. This trend is called the Sestri-Voghera trend and extends from the Ligurian basin by the Sestri-Voltaggio Zone to the Judicarian system. Sinistral movements can be recognized along the whole axis as well as possible uplift of rift shoulders. The magnetic anomaly pattern over the complete length of the anomalous body of Ivrea as well as the Insubric-Canavese Line limit the extension of the Adriatic microplate by a well-defined linear trend. The symmetrical shears deduced from consecutive anomalies are used to propose a structural scheme.

  17. 12 CFR 203.4 - Compilation of loan data.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... updates). (12)(i) For originated loans subject to Regulation Z, 12 CFR part 226, the difference between... implemented in Regulation Z (12 CFR 226.32). (14) The lien status of the loan or application (first lien... as of the date the interest rate is set, if that difference is equal to or greater than...

  18. 12 CFR 203.4 - Compilation of loan data.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... updates). (12)(i) For originated loans subject to Regulation Z, 12 CFR part 226, the difference between... implemented in Regulation Z (12 CFR 226.32). (14) The lien status of the loan or application (first lien... as of the date the interest rate is set, if that difference is equal to or greater than...

  19. 12 CFR 203.4 - Compilation of loan data.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... ethnicity, race, and sex of the applicant or borrower, and the gross annual income relied on in processing... updates). (12)(i) For originated loans subject to Regulation Z, 12 CFR part 226, the difference between... implemented in Regulation Z (12 CFR 226.32). (14) The lien status of the loan or application (first...

  20. Compiling a Comprehensive EVA Training Dataset for NASA Astronauts

    NASA Technical Reports Server (NTRS)

    Laughlin, M. S.; Murray, J. D.; Lee, L. R.; Wear, M. L.; Van Baalen, M.

    2016-01-01

    Training for a spacewalk or extravehicular activity (EVA) is considered a hazardous duty for NASA astronauts. This places astronauts at risk for decompression sickness as well as various musculoskeletal disorders from working in the spacesuit. As a result, the operational and research communities over the years have requested access to EVA training data to supplement their studies. The purpose of this paper is to document the comprehensive EVA training data set that was compiled from multiple sources by the Lifetime Surveillance of Astronaut Health (LSAH) epidemiologists to investigate musculoskeletal injuries. The EVA training dataset does not contain any medical data; rather, it only documents when EVA training was performed, by whom, and other details about the session. The first activities practicing EVA maneuvers in water were performed at the Neutral Buoyancy Simulator (NBS) at the Marshall Spaceflight Center in Huntsville, Alabama. This facility opened in 1967 and was used for EVA training until the early Space Shuttle program days. Although several photographs show astronauts performing EVA training in the NBS, records detailing who performed the training and the frequency of training are unavailable. Paper training records were stored within the NBS after it was designated as a National Historic Landmark in 1985 and closed in 1997, but significant resources would be needed to identify and secure these records, and at this time LSAH has not pursued acquisition of these early training records. Training in the NBS decreased when the Johnson Space Center in Houston, Texas, opened the Weightless Environment Training Facility (WETF) in 1980. Early training records from the WETF consist of 11 hand-written dive logbooks compiled by individual workers; these logbooks were digitized at the request of LSAH. The WETF was integral in the training for Space Shuttle EVAs until its closure in 1998. The Neutral Buoyancy Laboratory (NBL) at the Sonny Carter Training Facility near JSC

  1. Regulatory and technical reports (abstract index journal). Compilation for third quarter 1997, July--September

    SciTech Connect

    Stevenson, L.L.

    1998-01-01

    This compilation consists of bibliographic data and abstracts for the formal regulatory and technical reports issued by the US Nuclear Regulatory Commission (NRC) Staff and its contractors. It is NRC's intention to publish this compilation quarterly and to cumulate it annually. This report contains the third quarter 1997 abstracts.

  2. 21 CFR 20.64 - Records or information compiled for law enforcement purposes.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Records or information compiled for law enforcement purposes. 20.64 Section 20.64 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PUBLIC INFORMATION Exemptions § 20.64 Records or information compiled for law enforcement purposes. (a) Records...

  3. 21 CFR 20.64 - Records or information compiled for law enforcement purposes.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Records or information compiled for law enforcement purposes. 20.64 Section 20.64 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH... furnished by a confidential source in the case of a record compiled by the Food and Drug Administration...

  4. Compilation of K-12 Action Research Papers in Language Arts Education.

    ERIC Educational Resources Information Center

    Sherman, Thomas F.; Lundquist, Margaret

    The papers in this compilation are the result of K-12 action research projects and were submitted in partial fulfillment for a variety of degrees from Winona State University (Minnesota). The compilation contains the following nine papers: "Will Playing Background Music in My Classroom Help Increase Student Spelling Scores?" (Jonathan L. Wright);…

  5. A Compilation of Information on Computer Applications in Nutrition and Food Service.

    ERIC Educational Resources Information Center

    Casbergue, John P.

    Compiled is information on the application of computer technology to nutrition and food service. It is designed to assist dieticians and nutritionists interested in applying electronic data processing to food service and related industries. The compilation is indexed by subject area. Included for each subject area are: (1) bibliographic references,…

  6. Regulatory and technical reports: (Abstract index journal). Compilation for first quarter 1997, January--March

    SciTech Connect

    Sheehan, M.A.

    1997-06-01

    This compilation consists of bibliographic data and abstracts for the formal regulatory and technical reports issued by the U.S. Nuclear Regulatory Commission (NRC) Staff and its contractors. This compilation is published quarterly and cumulated annually. Reports consist of staff-originated reports, NRC-sponsored conference reports, NRC contractor-prepared reports, and international agreement reports.

  7. 21 CFR 20.64 - Records or information compiled for law enforcement purposes.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Records or information compiled for law enforcement purposes. 20.64 Section 20.64 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH... furnished by a confidential source in the case of a record compiled by the Food and Drug Administration...

  8. 21 CFR 20.64 - Records or information compiled for law enforcement purposes.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Records or information compiled for law enforcement purposes. 20.64 Section 20.64 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH... furnished by a confidential source in the case of a record compiled by the Food and Drug Administration...

  9. 12 CFR 503.2 - Exemptions of records containing investigatory material compiled for law enforcement purposes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... with enforcing criminal or civil laws. (d) Documents exempted. Exemptions will be applied only when... material compiled for law enforcement purposes. 503.2 Section 503.2 Banks and Banking OFFICE OF THRIFT... material compiled for law enforcement purposes. (a) Scope. The Office has established a system of...

  10. On the performance of the HAL/S-FC compiler. [for space shuttles

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1975-01-01

    The HAL/S compilers which will be used in the space shuttles are described. Acceptance test objectives and procedures are described, the raw results are presented and analyzed, and conclusions and observations are drawn. An appendix is included containing an illustrative set of compiler listings and results for one of the test cases.

  11. A Compilation of Laws Pertaining to Indians. State of Maine, July 1976.

    ERIC Educational Resources Information Center

    Maine State Dept. of Indian Affairs, Augusta.

    Compiled from the Maine Revised Statutes of 1964, the Constitution of Maine, and the current Resolves and Private and Special Laws, this document constitutes an update to a previous publication (January 1974), correcting errors and adding amendments through 1976. This compilation of laws pertaining to American Indians includes statutes on the…

  12. 26 CFR 301.7515-1 - Special statistical studies and compilations on request.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 18 2010-04-01 2010-04-01 false Special statistical studies and compilations on... Actions by the United States § 301.7515-1 Special statistical studies and compilations on request. The... of the cost of the work to be performed, to make special statistical studies and...

  13. Methods for the Compilation of a Core List of Journals in Toxicology.

    ERIC Educational Resources Information Center

    Kuch, T. D. C.

    Previously reported methods for the compilation of core lists of journals in multidisciplinary areas are first examined, with toxicology used as an example of such an area. Three approaches to the compilation of a core list of journals in toxicology were undertaken and the results analyzed with the aid of models. Analysis of the results of the…

  14. 26 CFR 301.7515-1 - Special statistical studies and compilations on request.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 18 2014-04-01 2014-04-01 false Special statistical studies and compilations on... Actions by the United States § 301.7515-1 Special statistical studies and compilations on request. The... of the cost of the work to be performed, to make special statistical studies and...

  15. 26 CFR 301.7515-1 - Special statistical studies and compilations on request.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 18 2011-04-01 2011-04-01 false Special statistical studies and compilations on... Actions by the United States § 301.7515-1 Special statistical studies and compilations on request. The... of the cost of the work to be performed, to make special statistical studies and...

  16. 26 CFR 301.7515-1 - Special statistical studies and compilations on request.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 18 2013-04-01 2013-04-01 false Special statistical studies and compilations on... Actions by the United States § 301.7515-1 Special statistical studies and compilations on request. The... of the cost of the work to be performed, to make special statistical studies and...

  17. 26 CFR 301.7515-1 - Special statistical studies and compilations on request.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 18 2012-04-01 2012-04-01 false Special statistical studies and compilations on... Actions by the United States § 301.7515-1 Special statistical studies and compilations on request. The... of the cost of the work to be performed, to make special statistical studies and...

  18. Compiling knowledge-based systems specified in KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Feldman, Roy D.

    1991-01-01

    The first year of the PrKAda project is recounted. The primary goal was to develop a system for delivering Artificial Intelligence applications developed in the ProKappa system in a pure-Ada environment. The following areas are discussed: the ProKappa core and ProTalk programming language; the current status of the implementation; the limitations and restrictions of the current system; and the development of Ada-language message handlers in the ProKappa environment.

  19. An efficient data dependence analysis for parallelizing compilers

    NASA Technical Reports Server (NTRS)

    Li, Zhiyuan; Yew, Pen-Chung; Zhu, Chuan-Qi

    1990-01-01

    A novel algorithm, called the lambda test, is presented for an efficient and accurate data dependence analysis of multidimensional array references. It extends the numerical methods to allow all dimensions of array references to be tested simultaneously. Hence, it combines the efficiency and the accuracy of both approaches. This algorithm has been implemented in PARAFRASE, a FORTRAN program parallelization restructurer developed at the University of Illinois at Urbana-Champaign. Some experimental results are presented to show its effectiveness.
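
    The lambda test itself is not reproduced here, but for orientation the sketch below shows the much simpler classical GCD test for a single subscript dimension, the kind of per-dimension test that the lambda test improves on by considering all dimensions simultaneously. The coefficients in the example subscripts are hypothetical.

      # Classical single-dimension GCD dependence test (not the lambda test):
      # references A[a*i + b] and A[c*j + d] may touch the same element only if
      # gcd(a, c) divides (d - b). Loop bounds are ignored in this simple form.
      from math import gcd

      def gcd_test(a, b, c, d):
          """Return False when a dependence is impossible, True when it may exist."""
          g = gcd(a, c)
          if g == 0:                    # both coefficients zero: compare constants
              return b == d
          return (d - b) % g == 0

      # Hypothetical subscripts: A[2*i] written, A[2*j + 1] read -> no dependence.
      print(gcd_test(2, 0, 2, 1))   # False
      # A[3*i + 1] written, A[6*j + 4] read -> gcd 3 divides 3, may depend.
      print(gcd_test(3, 1, 6, 4))   # True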

  1. Implementation of quality by design principles in the development of microsponges as drug delivery carriers: Identification and optimization of critical factors using multivariate statistical analyses and design of experiments studies.

    PubMed

    Simonoska Crcarevska, Maja; Dimitrovska, Aneta; Sibinovska, Nadica; Mladenovska, Kristina; Slavevska Raicki, Renata; Glavas Dodov, Marija

    2015-07-15

    Microsponges drug delivery system (MDDC) was prepared by double emulsion-solvent-diffusion technique using rotor-stator homogenization. Quality by design (QbD) concept was implemented for the development of MDDC with potential to be incorporated into semisolid dosage form (gel). Quality target product profile (QTPP) and critical quality attributes (CQA) were defined and identified, accordingly. Critical material attributes (CMA) and critical process parameters (CPP) were identified using a quality risk management (QRM) tool, failure mode, effects and criticality analysis (FMECA). CMA and CPP were identified based on results obtained from principal component analysis (PCA-X&Y) and partial least squares (PLS) statistical analysis along with literature data, product and process knowledge and understanding. FMECA identified the amount of ethylcellulose, chitosan, acetone, dichloromethane, span 80, tween 80 and water ratio in primary/multiple emulsions as CMA, and rotation speed and stirrer type used for organic solvent removal as CPP. The relationship between identified CPP and particle size as CQA was described in the design space using design of experiments - one-factor response surface method. Obtained results from statistically designed experiments enabled establishment of mathematical models and equations that were used for detailed characterization of the influence of identified CPP upon MDDC particle size and particle size distribution and their subsequent optimization. PMID:25895722

  2. FTC - THE FAULT-TREE COMPILER (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS
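
    To make the gate arithmetic above concrete, the sketch below evaluates the top-event probability of a small fault tree built from the same five gate types, assuming statistically independent basic events. The nested-tuple tree encoding is an illustrative assumption and is not the FTC input language, nor does it reproduce FTC's error-bound machinery.

      # Top-event probability for a toy fault tree with AND, OR, XOR, INVERT
      # and M-of-N gates, assuming independent basic events.
      from itertools import combinations

      def prob(node, basic):
          kind = node[0]
          if kind == "EVENT":
              return basic[node[1]]
          if kind == "INVERT":
              return 1.0 - prob(node[1], basic)
          children = [prob(c, basic) for c in node[1]]
          if kind == "AND":
              p = 1.0
              for c in children:
                  p *= c
              return p
          if kind == "OR":
              q = 1.0
              for c in children:
                  q *= (1.0 - c)
              return 1.0 - q
          if kind == "XOR":            # exactly one of two inputs occurs
              a, b = children
              return a * (1.0 - b) + b * (1.0 - a)
          if kind == "MOFN":           # at least m of the n inputs occur
              m = node[2]
              total = 0.0
              for k in range(m, len(children) + 1):
                  for subset in combinations(range(len(children)), k):
                      term = 1.0
                      for i, c in enumerate(children):
                          term *= c if i in subset else (1.0 - c)
                      total += term
              return total
          raise ValueError(kind)

      # Example: TOP = OR(AND(A, B), 2-of-3(C, D, E)).
      basic = {"A": 1e-3, "B": 2e-3, "C": 1e-2, "D": 1e-2, "E": 5e-3}
      tree = ("OR", [("AND", [("EVENT", "A"), ("EVENT", "B")]),
                     ("MOFN", [("EVENT", "C"), ("EVENT", "D"), ("EVENT", "E")], 2)])
      print(prob(tree, basic))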

  3. FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS

  4. Proceedings of the workshop on Compilation of (Symbolic) Languages for Parallel Computers

    SciTech Connect

    Foster, I.; Tick, E.

    1991-11-01

    This report comprises the abstracts and papers for the talks presented at the Workshop on Compilation of (Symbolic) Languages for Parallel Computers, held October 31--November 1, 1991, in San Diego. These unrefereed contributions were provided by the participants for the purpose of this workshop; many of them will be published elsewhere in peer-reviewed conferences and publications. Our goal in planning this workshop was to bring together researchers from different disciplines with common problems in compilation. In particular, we wished to encourage interaction between researchers working in compilation of symbolic languages and those working on compilation of conventional, imperative languages. The fundamental problems facing researchers interested in compilation of logic, functional, and procedural programming languages for parallel computers are essentially the same. However, differences in the basic programming paradigms have led to different communities emphasizing different species of the parallel compilation problem. For example, parallel logic and functional languages provide dataflow-like formalisms in which control dependencies are unimportant. Hence, a major focus of research in compilation has been on techniques that try to infer when sequential control flow can safely be imposed. Granularity analysis for scheduling is a related problem. The single-assignment property leads to a need for analysis of memory use in order to detect opportunities for reuse. Much of the work in each of these areas relies on the use of abstract interpretation techniques.

  5. Implementation Science

    PubMed Central

    Demiris, George

    2014-01-01

    This article provides a general introduction to implementation science—the discipline that studies the implementation process of research evidence—in the context of hospice and palliative care. By discussing how implementation science principles and frameworks can inform the design and implementation of intervention research, we aim to highlight how this approach can maximize the likelihood for translation and long-term adoption in clinical practice settings. We present 2 ongoing clinical trials in hospice that incorporate considerations for translation in their design and implementation as case studies for the implications of implementation science. This domain helps us better understand why established programs may lose their effectiveness over time or when transferred to other settings, why well-tested programs may exhibit unintended effects when introduced in new settings, or how an intervention can maximize cost-effectiveness with strategies for effective adoption. All these challenges are of significance to hospice and palliative care, where we seek to provide effective and efficient tools to improve care services. The emergence of this discipline calls for researchers and practitioners to carefully examine how to refine current and design new and innovative strategies to improve quality of care. PMID:23558847

  6. Regulatory and technical reports (abstract index journal): Annual compilation for 1994. Volume 19, Number 4

    SciTech Connect

    1995-03-01

    This compilation consists of bibliographic data and abstracts for the formal regulatory and technical reports issued by the US Nuclear Regulatory Commission (NRC) Staff and its contractors. It is NRC's intention to publish this compilation quarterly and to cumulate it annually. The main citations and abstracts in this compilation are listed in NUREG number order. These precede the following indexes: secondary report number index, personal author index, subject index, NRC originating organization index (staff reports), NRC originating organization index (international agreements), NRC contract sponsor index (contractor reports), contractor index, international organization index, and licensed facility index. A detailed explanation of the entries precedes each index.

  7. Compilation of Earthquakes from 1850-2007 within 200 miles of the Idaho National Laboratory

    SciTech Connect

    N. Seth Carpenter

    2010-07-01

    An updated earthquake compilation was created for the years 1850 through 2007 within 200 miles of the Idaho National Laboratory. To generate this compilation, earthquake catalogs were collected from several contributing sources and searched for redundant events using the search criteria established for this effort. For all sets of duplicate events, a preferred event was selected, largely based on epicenter-network proximity. All unique magnitude information for each event was added to the preferred event records and these records were used to create the compilation referred to as “INL1850-2007”.
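
    The duplicate-event screening summarized above can be pictured with the sketch below, which treats records from different catalogs as the same earthquake when their origin times and epicenters fall within chosen windows, keeps one preferred record per group, and attaches the remaining magnitude information to it. The window sizes, the field names and the rule of preferring the record whose epicenter is closest to its reporting network are illustrative assumptions, not the criteria actually used for INL1850-2007.

      # Sketch of duplicate-event screening across several earthquake catalogs.
      from math import radians, sin, cos, asin, sqrt

      def distance_km(lat1, lon1, lat2, lon2):
          """Great-circle distance via the haversine formula."""
          dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
          a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
          return 2.0 * 6371.0 * asin(sqrt(a))

      def merge_catalogs(events, time_window_s=30.0, dist_window_km=50.0):
          """events: dicts with 'time' (s), 'lat', 'lon', 'magnitude', and
          'network_dist_km' (epicenter-to-network distance used for preference)."""
          merged = []
          for ev in sorted(events, key=lambda e: e["time"]):
              ev = dict(ev, other_magnitudes=[])
              for idx, kept in enumerate(merged):
                  same_time = abs(ev["time"] - kept["time"]) <= time_window_s
                  same_place = distance_km(ev["lat"], ev["lon"],
                                           kept["lat"], kept["lon"]) <= dist_window_km
                  if same_time and same_place:
                      # Prefer the record whose epicenter is closer to its network.
                      better, worse = ((ev, kept) if ev["network_dist_km"] < kept["network_dist_km"]
                                       else (kept, ev))
                      better["other_magnitudes"] = kept["other_magnitudes"] + [worse["magnitude"]]
                      merged[idx] = better
                      break
              else:
                  merged.append(ev)
          return merged

      # Example: the same event reported by two networks about 27 km and 10 s apart.
      catalog = [
          {"time": 1000.0, "lat": 43.5, "lon": -112.0, "magnitude": 4.1, "network_dist_km": 80.0},
          {"time": 1010.0, "lat": 43.6, "lon": -112.3, "magnitude": 4.3, "network_dist_km": 20.0},
      ]
      print(merge_catalogs(catalog))   # one preferred record, other magnitude attached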

  8. Ada compiler evaluation on the Space Station Freedom Software Support Environment project

    NASA Technical Reports Server (NTRS)

    Badal, D. L.

    1989-01-01

    This paper describes the work in progress to select the Ada compilers for the Space Station Freedom Program (SSFP) Software Support Environment (SSE) project. The purpose of the SSE Ada compiler evaluation team is to establish the criteria, test suites, and benchmarks to be used for evaluating Ada compilers for the mainframes, workstations, and the realtime target for flight- and ground-based computers. The combined efforts and cooperation of the customer, subcontractors, vendors, academia and SIGAda groups made it possible to acquire the necessary background information, benchmarks, test suites, and criteria used.

  9. Comparison of Classical and Lazy Approach in SCG Compiler

    NASA Astrophysics Data System (ADS)

    Jirák, Ota; Kolář, Dušan

    2011-09-01

    The existing parsing methods for scattered context grammars usually expand nonterminals deep in the pushdown. This expansion is implemented using either a linked list or some kind of auxiliary pushdown. This paper describes a parsing algorithm for an LL(1) scattered context grammar. The given algorithm merges two principles. The first is a table-driven parsing method commonly used for parsing context-free grammars. The second is the delayed execution used in functional programming. The main part of this paper is a proof of equivalence between the common principle (the whole rule is applied at once) and our approach (execution of the rules is delayed). As a result, this approach works only with the top of the pushdown. In most cases, the second approach is faster than the first. Finally, future work is discussed.
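
    For context, the sketch below shows the classical table-driven LL(1) expansion that the paper contrasts with its delayed variant: when a nonterminal is on top of the pushdown, the whole right-hand side of the predicted rule is pushed at once. The toy grammar and parse table are illustrative assumptions and are unrelated to the scattered context grammar treated in the paper.

      # Minimal table-driven LL(1) pushdown parser (eager expansion).
      # Grammar:  S -> a S b | c
      TABLE = {
          ("S", "a"): ["a", "S", "b"],   # predict S -> a S b on lookahead 'a'
          ("S", "c"): ["c"],             # predict S -> c on lookahead 'c'
      }

      def parse(tokens):
          stack = ["S"]
          pos = 0
          while stack:
              top = stack.pop()
              look = tokens[pos] if pos < len(tokens) else None
              if top in ("a", "b", "c"):        # terminal: must match the input
                  if top != look:
                      return False
                  pos += 1
              else:                             # nonterminal: expand eagerly
                  rule = TABLE.get((top, look))
                  if rule is None:
                      return False
                  stack.extend(reversed(rule))  # push the whole right-hand side
          return pos == len(tokens)

      print(parse(list("aacbb")))   # True
      print(parse(list("aacb")))    # False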

  10. ARDEN2BYTECODE: a one-pass Arden Syntax compiler for service-oriented decision support systems based on the OSGi platform.

    PubMed

    Gietzelt, Matthias; Goltz, Ursula; Grunwald, Daniel; Lochau, Malte; Marschollek, Michael; Song, Bianying; Wolf, Klaus-Hendrik

    2012-05-01

    Patient empowerment might be one key to reducing the pressure on health care systems challenged by the expected demographic changes. Knowledge-based systems can, in combination with automated sensor measurements, improve the patients' ability to review their state of health and make informed decisions. The Arden Syntax, as a standardized language to represent medical knowledge, can be used to express the corresponding decision rules. In this paper we introduce ARDEN2BYTECODE, a newly developed open source compiler for the Arden Syntax. ARDEN2BYTECODE runs on Java Virtual Machines (JVM) and translates Arden Syntax directly to Java Bytecode (JBC) executable on JVMs. ARDEN2BYTECODE easily integrates into service-oriented architectures, like the Open Services Gateway Initiative (OSGi) platform. Apart from an evaluation of compilation performance and execution times, ARDEN2BYTECODE was integrated into an existing knowledge-supported exercise training system, and recorded training sessions were used to check the implementation.

  11. Domain Compilation for Embedded Real-Time Planning

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2005-01-01

    A recently conceived approach to automated real-time control of the actions of a robotic system enables an embedded real-time planning algorithm to develop plans that are more robust than they would otherwise be, without imposing an excessive computational burden. This approach occupies a middle ground between two prior approaches known in the art as the universal-plan and hybrid approaches. Ever since discovering the performance limitations of taking a sense-plan-act approach to controlling robots, the robotics community has endeavored to follow a behavior-based approach in which a behavior includes a rapid feedback loop between state estimation and motor control. Heretofore, system architectures following this approach have been based, variously, on algorithms that implement universal plans or algorithms that function as hybrids of planners and executives. In a typical universal-plan case, a set of behaviors is merged into the plan, but the system must be restricted to relatively small problem domains to avoid having to reason about too many states and represent them in the plan. In the hybrid approach, one implements actions as small sets of behaviors, each applicable to a limited set of circumstances. Each action is intended to bring the system to a subgoal state. A planning algorithm is used to string these actions together into a sequence to traverse the state space from an initial or current state to a goal state. The hybrid approach works well in a static environment, but it is inherently brittle in a dynamic environment because a failure can occur when the environment strays beyond the region of applicability of the current activity. In the present approach, a system can vary from the hybrid approach to the universal-plan approach, depending on a single integer parameter, denoted n, which can range from 1 to a maximum domain-dependent value of M. As illustrated in the figure, n = 1 represents the hybrid approach, in which each linked action covers a small

  12. Optimizing qubit phase estimation

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François

    2016-08-01

    The theory of quantum state estimation is exploited here to investigate the most efficient strategies for this task, especially targeting a complete picture identifying optimal conditions in terms of Fisher information, quantum measurement, and associated estimator. The approach is specialized to estimation of the phase of a qubit in a rotation around an arbitrary given axis, equivalent to estimating the phase of an arbitrary single-qubit quantum gate, first in noise-free and then in noisy conditions. In noise-free conditions, we establish the possibility of defining an optimal quantum probe, optimal quantum measurement, and optimal estimator together capable of achieving the ultimate best performance uniformly for any unknown phase. With arbitrary quantum noise, we show that in general the optimal solutions are phase dependent and require adaptive techniques for practical implementation. However, for the important case of depolarizing noise, we again establish the possibility of a quantum probe, quantum measurement, and estimator uniformly optimal for any unknown phase. In this way, for qubit phase estimation, without and then with quantum noise, we characterize the phase-independent optimal solutions when they generally exist, and also identify the complementary conditions where the optimal solutions are phase dependent and only adaptively implementable.
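
    As a concrete noise-free illustration of the phase-independent optimality discussed above, the short calculation below evaluates the classical Fisher information for a |+> probe undergoing a rotation about the z axis and then measured in the x basis; the result is 1 for every phase. The probe, axis and measurement are standard textbook choices assumed for illustration, not necessarily those of the paper.

      # Classical Fisher information for estimating the phase phi of a z-axis
      # rotation applied to the probe |+>, measured in the x basis:
      # P(+|phi) = cos^2(phi/2), P(-|phi) = sin^2(phi/2), and
      # F(phi) = sum_k (dP_k/dphi)^2 / P_k, which equals 1 for every phi.
      import numpy as np

      def fisher_information(phi, dphi=1e-6):
          p = lambda x: np.array([np.cos(x / 2) ** 2, np.sin(x / 2) ** 2])
          dp = (p(phi + dphi) - p(phi - dphi)) / (2 * dphi)   # numerical derivative
          return float(np.sum(dp ** 2 / p(phi)))

      for phi in (0.3, 1.0, 2.0, 2.8):
          print(phi, round(fisher_information(phi), 6))   # ~1.0 at every phase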

  13. Water-quality standards criteria summaries: a compilation of State/Federal criteria: turbidity

    SciTech Connect

    Not Available

    1988-01-01

    This report contains excerpts from the individual State water-quality standards establishing pollutant-specific criteria for interstate surface waters. Turbidity in state water-quality standards is the subject of the compilation.

  14. Regulatory and technical reports (abstract index journal): Annual compilation for 1996, Volume 21, No. 4

    SciTech Connect

    Sheehan, M.A.

    1997-04-01

    This compilation is the annual cumulation of bibliographic data and abstracts for the formal regulatory and technical reports issued by the U.S. Nuclear Regulatory Commission (NRC) Staff and its contractors.

  15. 27 CFR 478.24 - Compilation of State laws and published ordinances.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...” which is furnished free of charge to licensees under this part. Where the compilation has previously... in this part. It is ATF Publication 5300.5, revised yearly. The current edition is available from...

  16. 27 CFR 478.24 - Compilation of State laws and published ordinances.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...” which is furnished free of charge to licensees under this part. Where the compilation has previously... in this part. It is ATF Publication 5300.5, revised yearly. The current edition is available from...

  17. 78 FR 63270 - Request for Public Comments To Compile the Report on Sanitary and Phytosanitary Measures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-23

    ... Phytosanitary Measures. With this notice, the Trade Policy Staff Committee (TPSC) is requesting interested... Chair, Trade Policy Staff Committee. BILLING CODE 3290-F3-P ... TRADE REPRESENTATIVE Request for Public Comments To Compile the Report on Sanitary and...

  18. The application of compiler-assisted multiple instruction retry to VLIW architectures

    NASA Technical Reports Server (NTRS)

    Chen, Shyh-Kwei; Fuchs, W. K.; Hwu, Wen-Mei W.

    1994-01-01

    Very Long Instruction Word (VLIW) architectures enhance performance by exploiting fine-grained instruction level parallelism. We describe the development of two compiler-assisted multiple instruction word retry schemes for VLIW architectures. The first scheme utilizes the compiler techniques previously developed for processors with single functional units. Compiler-generated hazard-free code with different degrees of rollback capability for uniprocessors is compacted by a modified VLIW trace scheduling algorithm. Nops are then inserted in the scheduled code words to resolve data hazards for VLIW architectures. Performance is compared under three parameters: the rollback distance for uniprocessors; the number of functional units; and the rollback distance for VLIW architectures. The second scheme employs a hardware read buffer to resolve frequently occurring data hazards, and utilizes the compiler to resolve the remaining hazards. Performance results are shown for six benchmark programs.
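
    As a simplified picture of the nop-insertion step mentioned above, the sketch below pads a sequence of instruction words so that every read of a register occurs at least a given number of words after the most recent write to that register; this stands in for the rollback-distance-dependent hazard constraints of the paper. The instruction-word format, field names and distance are illustrative assumptions.

      # Sketch of nop insertion enforcing a minimum separation (in instruction
      # words) between a register write and a later read of that register.
      # An instruction word is a list of operations (dest_register, [sources]).
      NOP_WORD = []   # an instruction word containing no operations

      def insert_nops(words, min_distance=2):
          """Pad the schedule so every read of a register occurs at least
          'min_distance' words after the most recent write to it."""
          scheduled = []
          last_write = {}                       # register -> index of writing word
          for word in words:
              earliest = 0                      # earliest safe slot for this word
              for _dest, sources in word:
                  for reg in sources:
                      if reg in last_write:
                          earliest = max(earliest, last_write[reg] + min_distance)
              while len(scheduled) < earliest:
                  scheduled.append(NOP_WORD)    # fill the gap with nop words
              index = len(scheduled)
              scheduled.append(word)
              for dest, _sources in word:
                  last_write[dest] = index
          return scheduled

      # Example: r3 is read one word after being written; one nop word is inserted.
      words = [[("r1", ["r0"]), ("r3", ["r2"])],
               [("r4", ["r3"])]]
      print(len(insert_nops(words)))   # 3 words after padding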

  19. In defense of compilation: A response to Davis' form and content in model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard

    1990-01-01

    In a recent paper entitled 'Form and Content in Model Based Reasoning', Randy Davis argues that model-based reasoning research aimed at compiling task-specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his basic conclusions about the value of compilation research to the model-based reasoning community are challenged. In particular, Davis' claim that model-based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions are clarified about the role of representational form in compilation. It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model-based reasoning.

  20. ROSETTA: the compile-time recognition of object-oriented library abstractions and their use within user applications

    SciTech Connect

    Quinlan, D; Philip, B

    2001-01-08

    Libraries arise naturally from the increasing complexity of developing scientific applications; the optimization of libraries is just one type of high-performance optimization. Many complex application areas can today be addressed by domain-specific object-oriented frameworks. Such object-oriented frameworks provide an effective complement to an object-oriented language and effectively permit the design of what amount to domain-specific languages. The optimization of such a domain-specific library/language combination, however, is particularly complicated due to the inability of the compiler to optimize the use of the library's abstractions. The recognition of the use of object-oriented abstractions within user applications is a particularly difficult but important step in the optimization of how objects are used within expressions and statements. Such recognition entails more than just complex pattern matching. The approach presented within this paper uses specially built grammars to parse the C++ representation. The C++ representation is itself obtained using a modified version of the SAGE II C/C++ source code restructuring tool, which is in turn based upon the Edison Design Group (EDG) C++ front-end. ROSETTA is a tool which automatically builds grammars and parsers from class definitions; the associated parsers parse abstract syntax trees (ASTs) of lower-level grammars into ASTs of higher-level grammars. The lowest-level grammar is that associated with the full C++ language itself; higher-level grammars are specialized to user-defined objects. The grammars form a hierarchy and permit a high degree of specialization in the recognition of complex uses of user-defined abstractions.