Science.gov

Sample records for implementation compilation optimization

  1. A systolic array optimizing compiler

    SciTech Connect

    Lam, M.S.

    1988-01-01

    This book documents the research and results of the compiler technology developed for the Warp machine. A major challenge in the development of Warp was to build an optimizing compiler for the machine. This book describes a compiler that shields most of the difficulty from the user and generates very efficient code. Several new optimizations are described and evaluated. The research described confirms that compilers play a valuable role in the development, usage and effectiveness of novel high-performance architectures.

  2. Design and Implementation of Code Optimizations for a Type-Directed Compiler for Standard ML.

    DTIC Science & Technology

    1996-12-01

    how programs are written, the reliability of programs, and the scale of programs that can be written. A poorly-designed programming language, such as...never quite work right. A well-designed programming language, such as SML, on the other hand, makes it a joy to program. For too long, most of the world...Orlando, FL, January 1991. ACM. [2] Proceedings of the ACM SIGPLAN '93 Conference on Programming Language Design and Implementation, Albuquerque, New

  3. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of over 100 ADS optimization choices, such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function, and variable metric methods. Default choices of the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by an LALR(1) grammar, and the SOL compiler's parser was generated automatically from that grammar with a parser generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis ensures that the compiler recognizes all legal SOL programs, can recover from and correct many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.
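    As a rough illustration of the kind of sizing problem a SOL program declares (an objective, bounded design variables, and inequality constraints), solved with a sequential-quadratic-programming method of the sort listed among the ADS options, here is a minimal Python sketch using SciPy's SLSQP. The objective, constraints, and numbers are invented for illustration; they are not taken from SOL or ADS.

      # Minimal sketch only: SOL/ADS source is not shown in this record.  The same
      # ingredients a SOL program declares (objective, design variables, constraints)
      # are posed here and solved with SLSQP, a sequential quadratic programming method.
      from scipy.optimize import minimize

      def weight(x):                       # objective: an illustrative structural weight
          area1, area2 = x
          return 10.0 * area1 + 6.0 * area2

      constraints = [                      # inequality constraints in SciPy's g(x) >= 0 form
          {"type": "ineq", "fun": lambda x: x[0] * x[1] - 2.0},
          {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.5},
      ]
      bounds = [(0.1, 10.0), (0.1, 10.0)]  # side constraints on the design variables

      result = minimize(weight, x0=[1.0, 1.0], method="SLSQP",
                        bounds=bounds, constraints=constraints)
      print(result.x, result.fun)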

  5. Design and implementation of a quantum compiler

    NASA Astrophysics Data System (ADS)

    Metodi, Tzvetan S.; Gasster, Samuel D.

    2010-04-01

    We present a compiler for programming quantum architectures based on the Quantum Random Access Machine (QRAM) model. The QRAM model consists of a classical subsystem responsible for generating the quantum operations that are executed on a quantum subsystem. The compiler can also be applied to trade studies for optimizing the reliability and latency of quantum programs and to determine the required error correction resources. We use the Bacon-Shor [[9,1,3]] quantum error correcting code as an example quantum program that can be processed and analyzed by the compiler.

  6. A Language for Specifying Compiler Optimizations for Generic Software

    SciTech Connect

    Willcock, Jeremiah J.

    2007-01-01

    Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization: they are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.

  7. Pattern-Based Languages for Prototyping of Compiler Optimizers

    DTIC Science & Technology

    1990-12-01

    the program and that produce code at least as efficient as the non-optimizing compiler. Optimizer design: choosing a good set of transformations to...Scheme," SIGPLAN '88 Conference on Programming Language Design and Implementation 23 (June 22-24, 1988), 164-174. [Ste76] Guy Lewis Steele Jr...Kuiper, "Higher Order Attribute Grammars," Proceedings of the ACM-SIGPLAN '89 Conference on Programming Language Design and Implementation 24 (June

  8. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
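    As a concrete, purely illustrative example of an architecture-independent, source-level optimization of the kind such a universal optimizer would perform, the sketch below folds constant subexpressions in a program's syntax tree; it uses Python's ast module and is not drawn from the survey itself.

      # Illustrative only: constant folding, an architecture-independent optimization
      # applied at source level, implemented over Python's own syntax trees.
      import ast

      class ConstantFolder(ast.NodeTransformer):
          def visit_BinOp(self, node):
              self.generic_visit(node)              # fold children first (bottom-up)
              if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
                  expr = ast.Expression(body=node)
                  ast.fix_missing_locations(expr)
                  try:
                      value = eval(compile(expr, "<fold>", "eval"))
                  except Exception:                 # e.g. division by zero: leave untouched
                      return node
                  return ast.copy_location(ast.Constant(value=value), node)
              return node

      tree = ast.parse("y = 2 * 3 + 4 * x")
      tree = ast.fix_missing_locations(ConstantFolder().visit(tree))
      print(ast.unparse(tree))                      # prints: y = 6 + 4 * x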

  9. Implementation of a Compiler for the Functional Programming Language PHI.

    DTIC Science & Technology

    1987-06-01

    authors think this should facilitate the understanding of both concept and implementation. The front-end of the compiler implements machine independent...FRONT-END OF THE COMPILER...PHI compiler is shown in Figure 1.1. The front-end, containing the scanner (lexical analyzer) and parser (syntactic analyzer), is essentially responsible

  10. Resource efficient gadgets for compiling adiabatic quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan; O'Gorman, Bryan; Aspuru-Guzik, Alán

    2013-11-01

    We develop a resource efficient method by which the ground-state of an arbitrary k-local, optimization Hamiltonian can be encoded as the ground-state of a (k-1)-local optimization Hamiltonian. This result is important because adiabatic quantum algorithms are often most easily formulated using many-body interactions but experimentally available interactions are generally 2-body. In this context, the efficiency of a reduction gadget is measured by the number of ancilla qubits required as well as the amount of control precision needed to implement the resulting Hamiltonian. First, we optimize methods of applying these gadgets to obtain 2-local Hamiltonians using the least possible number of ancilla qubits. Next, we show a novel reduction gadget which minimizes control precision and a heuristic which uses this gadget to compile 3-local problems with a significant reduction in control precision. Finally, we present numerics which indicate a substantial decrease in the resources required to implement randomly generated, 3-body optimization Hamiltonians when compared to other methods in the literature.
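    The gadget constructions themselves are not reproduced in this record. As background, the sketch below verifies the classic Rosenberg-style reduction of a 3-body Boolean term to 2-body terms plus one ancilla bit; it illustrates the general idea of a reduction gadget, but it is not the control-precision-optimized gadget introduced by the paper.

      # Standard 3-local -> 2-local reduction for Boolean optimization (Rosenberg-style
      # penalty), shown only to illustrate the kind of gadget the abstract refers to;
      # it is NOT the specific control-precision-minimizing gadget of the paper.
      # Identity:  x1*x2*x3 = min over ancilla b in {0,1} of
      #            b*x3 + M*(x1*x2 - 2*x1*b - 2*x2*b + 3*b)
      from itertools import product

      M = 2  # penalty weight; M >= 1 suffices for a single unit-coefficient cubic term

      def reduced(x1, x2, x3):
          # minimize the 2-local expression over the ancilla bit b
          return min(b * x3 + M * (x1 * x2 - 2 * x1 * b - 2 * x2 * b + 3 * b)
                     for b in (0, 1))

      # brute-force check of the identity on all 8 assignments
      for x1, x2, x3 in product((0, 1), repeat=3):
          assert reduced(x1, x2, x3) == x1 * x2 * x3
      print("3-local term reproduced by a 2-local expression plus one ancilla")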

  11. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...

    2013-01-01

    File layout of array data is a critical factor that affects the behavior of storage caches, and it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  12. Local Code Generation and Compaction in Optimizing Microcode Compilers

    DTIC Science & Technology

    1982-12-01

    [Table of contents fragment: 1. Introduction; 1.1. Horizontal Microcode] ...Research in compiler optimization suggests that a large number of register classes tends to make register allocation more difficult [Kim 79, Leverett...when allocating registers for a micromachine. The microcode register allocation schemes designed by Kim and Tan [Kim 79] and DeWitt [DeWitt 76] are

  13. Implementing optimal thinning strategies

    Treesearch

    Kurt H. Riitters; J. Douglas Brodie

    1984-01-01

    Optimal thinning regimes for achieving several management objectives were derived from two stand-growth simulators by dynamic programming. Residual mean tree volumes were then plotted against stand density management diagrams. The results supported the use of density management diagrams for comparing, checking, and implementing the results of optimization analyses....

  14. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project.

    PubMed

    Powell, Byron J; Waltz, Thomas J; Chinman, Matthew J; Damschroder, Laura J; Smith, Jeffrey L; Matthieu, Monica M; Proctor, Enola K; Kirchner, JoAnn E

    2015-02-12

    Identifying, developing, and testing implementation strategies are important goals of implementation science. However, these efforts have been complicated by the use of inconsistent language and inadequate descriptions of implementation strategies in the literature. The Expert Recommendations for Implementing Change (ERIC) study aimed to refine a published compilation of implementation strategy terms and definitions by systematically gathering input from a wide range of stakeholders with expertise in implementation science and clinical practice. Purposive sampling was used to recruit a panel of experts in implementation and clinical practice who engaged in three rounds of a modified Delphi process to generate consensus on implementation strategies and definitions. The first and second rounds involved Web-based surveys soliciting comments on implementation strategy terms and definitions. After each round, iterative refinements were made based upon participant feedback. The third round involved a live polling and consensus process via a Web-based platform and conference call. Participants identified substantial concerns with 31% of the terms and/or definitions and suggested five additional strategies. Seventy-five percent of definitions from the originally published compilation of strategies were retained after voting. Ultimately, the expert panel reached consensus on a final compilation of 73 implementation strategies. This research advances the field by improving the conceptual clarity, relevance, and comprehensiveness of implementation strategies that can be used in isolation or combination in implementation research and practice. Future phases of ERIC will focus on developing conceptually distinct categories of strategies as well as ratings for each strategy's importance and feasibility. Next, the expert panel will recommend multifaceted strategies for hypothetical yet real-world scenarios that vary by sites' endorsement of evidence-based programs and practices

  15. Final Project Report: A Polyhedral Transformation Framework for Compiler Optimization

    SciTech Connect

    Sadayappan, Ponnuswamy; Rountev, Atanas

    2015-06-15

    The project developed the polyhedral compiler transformation module PolyOpt/Fortran in the ROSE compiler framework. PolyOpt/Fortran performs automated transformation of affine loop nests within FORTRAN programs for enhanced data locality and parallel execution. A FORTRAN version of the Polybench library was also developed by the project. A third development was a dynamic analysis approach to gauge vectorization potential within loops of programs; software (DDVec) for automated instrumentation and dynamic analysis of programs was developed.

  16. Compiler Optimization Pass Visualization: The Procedural Abstraction Case

    ERIC Educational Resources Information Center

    Schaeckeler, Stefan; Shang, Weijia; Davis, Ruth

    2009-01-01

    There is an active research community concentrating on visualizations of algorithms taught in CS1 and CS2 courses. These visualizations can help students to create concrete visual images of the algorithms and their underlying concepts. Not only "fundamental algorithms" can be visualized, but also algorithms used in compilers. Visualizations that…

  17. Compiler-Driven Performance Optimization and Tuning for Multicore Architectures

    DTIC Science & Technology

    2015-04-10

    develop a powerful system for auto-tuning of library routines and compute-intensive kernels, driven by the Pluto system for multicores that we are developing. The work here is motivated by recent advances in two major areas of...automatic C-to-CUDA code generator using a polyhedral compiler transformation framework. We have used and adapted PLUTO (our state-of-the-art tool

  18. TIL: A Type-Directed Optimizing Compiler for ML.

    DTIC Science & Technology

    1996-02-29

    Conference on Programming Language Design and Implementation, Philadelphia, Pennsylvania, May 21-24, 1996. It is also published as Fox Memorandum...Conference on Programming Language Design and Implementation, pages 85-94, Atlanta, Georgia, June 1988. ACM. [14] A. Demers, M. Weiser, B. Hayes, H...on Programming Language Design and Implementation, pages 273-282, San Francisco, CA, June 1992. ACM. [16] Amer Diwan, David Tarditi, and Eliot Moss

  19. An Optimizing Compiler for Petascale I/O on Leadership-Class Architectures

    SciTech Connect

    Kandemir, Mahmut Taylan; Choudary, Alok; Thakur, Rajeev

    2014-03-01

    In high-performance computing (HPC), parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our DOE project explored automated instrumentation and compiler support for I/O intensive applications. Our project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology targeting I/O intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions. Two new sections in this report compared to the previous report are IOGenie and SSD/NVM-specific optimizations.

  20. Eliminating Scope and Selection Restrictions in Compiler Optimization

    DTIC Science & Technology

    2006-09-01

    [Table column headings: Code Samples, Opt level (R), Opt level (W), If-conv. (R), If-conv. (W), Ld-St (R), Ld-S...] ..."exploration performance" of each such subset is determined, as follows: Let R(s, c) be the runtime of a code sample s when optimized using an optimization configuration c. Then the exploration value of a set of configurations C on a set of code samples S is given by the

  1. Optimization guide for programs compiled under IBM FORTRAN H (OPT=2)

    NASA Technical Reports Server (NTRS)

    Smith, D. M.; Dobyns, A. H.; Marsh, H. M.

    1977-01-01

    Guidelines are given to provide the programmer with various techniques for optimizing programs when the FORTRAN IV H compiler is used with OPT=2. Subroutines and programs are described in the appendices along with a timing summary of all the examples given in the manual.

  2. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    Wavefront coding is a means of athermalization for infrared imaging systems, and the design of the phase plate is the key to system performance. This paper applies ZEMAX's externally compiled programs to the optimization of the phase mask within the normal optical design process: the evaluation function of the wavefront coding system is defined in terms of the consistency of the modulation transfer function (MTF), and the speed of optimization is improved by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the computing power of the mathematical software to find the optimal parameters of the phase mask, and accelerates convergence with a genetic algorithm (GA); a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software provides high-speed data exchange. The optimization of a rotationally symmetric phase mask and of a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask and up to 10 times with the cubic phase mask, the consistency of the MTF improves markedly, and the optimized systems operate over a temperature range of -40° to 60°. These results show that, owing to its externally compiled functions and DDE, this optimization method makes it more convenient to define unconventional optimization goals and to quickly optimize optical systems with special properties, which is of particular significance for the optimization of unconventional optical systems.
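    The workflow described above (an externally computed merit function based on MTF consistency, minimized by a genetic algorithm, with ZEMAX reached over DDE) can be sketched as follows. The ZEMAX/DDE call is replaced by a placeholder function, and every name and formula in the sketch is hypothetical rather than taken from the paper.

      # Sketch of the optimization loop described above.  evaluate_mtf_through_zemax()
      # is a hypothetical placeholder; in the real workflow it would push the
      # phase-mask coefficients to ZEMAX over DDE and read back MTF curves.
      import random

      def evaluate_mtf_through_zemax(coeffs):
          # PLACEHOLDER for the externally compiled ZEMAX/DDE evaluation: one MTF curve
          # per defocus position.  The curves converge as the coefficients grow, purely
          # so the sketch has something to optimize.
          k = sum(c * c for c in coeffs)
          return [[max(0.0, 1.0 - f * (0.01 + d * 0.002 / (1.0 + k))) for f in range(50)]
                  for d in range(5)]

      def merit(coeffs):
          # "Consistency of MTF": penalize the spread of the MTF across defocus positions.
          curves = evaluate_mtf_through_zemax(coeffs)
          return sum(max(samples) - min(samples) for samples in zip(*curves))

      def genetic_search(pop_size=20, generations=50, n_coeffs=3):
          pop = [[random.uniform(-1, 1) for _ in range(n_coeffs)] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=merit)                      # lower merit = more consistent MTF
              parents = pop[: pop_size // 2]
              children = []
              for _ in range(pop_size - len(parents)):
                  a, b = random.sample(parents, 2)
                  children.append([(x + y) / 2 + random.gauss(0, 0.05) for x, y in zip(a, b)])
              pop = parents + children
          return min(pop, key=merit)

      print(genetic_search())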

  3. Multiprocessors and runtime compilation

    NASA Technical Reports Server (NTRS)

    Saltz, Joel; Berryman, Harry; Wu, Janet

    1990-01-01

    Runtime preprocessing plays a major role in many efficient algorithms in computer science, as well as playing an important role in exploiting multiprocessor architectures. Examples are given that elucidate the importance of runtime preprocessing and show how these optimizations can be integrated into compilers. To support the arguments, transformations implemented in prototype multiprocessor compilers are described and benchmarks from the iPSC2/860, the CM-2, and the Encore Multimax/320 are presented.
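    The prototype transformations themselves are not shown in this record. The sketch below illustrates runtime preprocessing in the inspector/executor style associated with this line of work: an inspector analyzes an irregular access pattern once at run time, and an executor reuses the resulting schedule across iterations. The split into "owned" and "remote" values is a simplification for illustration, not the prototype compilers' actual output.

      # Hedged illustration of runtime preprocessing (inspector/executor style).
      import numpy as np

      def inspector(indices, num_owned):
          """Translate irregular accesses into local slots plus a list of remote values to gather."""
          indices = np.asarray(indices)
          local_mask = indices < num_owned
          remote_ids = np.unique(indices[~local_mask])          # gather each remote value once
          remote_slot = {g: num_owned + i for i, g in enumerate(remote_ids)}
          translated = np.where(local_mask, indices,
                                [remote_slot.get(i, 0) for i in indices])
          return translated, remote_ids

      def executor(owned, remote_values, translated):
          """Apply the precomputed schedule: build the gathered buffer, then run the loop body."""
          buffer = np.concatenate([owned, remote_values])
          return buffer[translated] * 2.0                        # the "real" loop body

      owned = np.arange(4, dtype=float)            # values this processor owns
      global_vals = np.arange(10, dtype=float)     # stand-in for values held elsewhere
      indices = [0, 7, 3, 9, 7, 1]                 # irregular accesses known only at run time

      translated, remote_ids = inspector(indices, num_owned=len(owned))   # done once
      for _ in range(3):                                                  # reused many times
          out = executor(owned, global_vals[remote_ids], translated)
      print(out)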

  4. Kokkos GPU Compiler

    SciTech Connect

    Moss, Nicholas

    2016-07-15

    The Kokkos Clang compiler is a version of the Clang C++ compiler that has been modified to perform targeted code generation for Kokkos constructs, with the goal of generating highly optimized code and providing semantic (domain) awareness of these constructs, such as parallel for and parallel reduce, throughout the compilation toolchain. This approach is taken to explore the possibilities of exposing the developer’s intentions to the underlying compiler infrastructure (e.g. optimization and analysis passes within the middle stages of the compiler) instead of relying solely on the restricted capabilities of C++ template metaprogramming. To date our activities have focused on correct GPU code generation, and we have not yet focused on improving overall performance. The compiler is implemented by recognizing specific (syntactic) Kokkos constructs in order to bypass normal template expansion mechanisms and instead use the semantic knowledge of Kokkos to directly generate code in the compiler’s intermediate representation (IR), which is then translated into an NVIDIA-centric GPU program and supporting runtime calls. In addition, capturing and maintaining the higher-level semantics of Kokkos directly within the lower levels of the compiler has the potential to significantly improve the compiler's ability to communicate with the developer in the terms of their original programming model/semantics.

  5. An Optimizing Compiler for Petascale I/O on Leadership Class Architectures

    SciTech Connect

    Choudhary, Alok; Kandemir, Mahmut

    2015-03-18

    In high-performance computing systems, parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our project explored automated instrumentation and compiler support for I/O intensive applications. Our project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology targeting I/O intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions.

  6. Compiler optimizations as a countermeasure against side-channel analysis in MSP430-based devices.

    PubMed

    Malagón, Pedro; de Goyeneche, Juan-Mariano; Zapater, Marina; Moya, José M; Banković, Zorana

    2012-01-01

    Ambient Intelligence (AmI) requires devices everywhere: dynamic and massively distributed networks of low-cost nodes that, among other data, manage private information or control restricted operations. MSP430, a 16-bit microcontroller, is used in WSN platforms such as the TelosB. Physical access to devices cannot be restricted, so attackers consider them a target of their malicious attacks in order to obtain access to the network. Side-channel analysis (SCA) easily exploits leakages from the execution of encryption algorithms that are dependent on critical data to guess the key value. In this paper we present an evaluation framework that facilitates the analysis of the effects of compiler and backend optimizations on the resistance against statistical SCA. We propose an optimization-based software countermeasure that can be used in current low-cost devices to radically increase resistance against statistical SCA, analyzed with the new framework.

  7. Compiler Optimizations as a Countermeasure against Side-Channel Analysis in MSP430-Based Devices

    PubMed Central

    Malagón, Pedro; de Goyeneche, Juan-Mariano; Zapater, Marina; Moya, José M.; Banković, Zorana

    2012-01-01

    Ambient Intelligence (AmI) requires devices everywhere: dynamic and massively distributed networks of low-cost nodes that, among other data, manage private information or control restricted operations. MSP430, a 16-bit microcontroller, is used in WSN platforms such as the TelosB. Physical access to devices cannot be restricted, so attackers consider them a target of their malicious attacks in order to obtain access to the network. Side-channel analysis (SCA) easily exploits leakages from the execution of encryption algorithms that are dependent on critical data to guess the key value. In this paper we present an evaluation framework that facilitates the analysis of the effects of compiler and backend optimizations on the resistance against statistical SCA. We propose an optimization-based software countermeasure that can be used in current low-cost devices to radically increase resistance against statistical SCA, analyzed with the new framework. PMID:22969383

  8. Schedule optimization study implementation plan

    SciTech Connect

    Not Available

    1993-11-01

    This Implementation Plan is intended to provide a basis for improvements in the conduct of the Environmental Restoration (ER) Program at Hanford. The Plan is based on the findings of the Schedule Optimization Study (SOS) team which was convened for two weeks in September 1992 at the request of the U.S. Department of Energy (DOE) Richland Operations Office (RL). The need for the study arose out of a schedule dispute regarding the submission of the 1100-EM-1 Operable Unit (OU) Remedial Investigation/Feasibility Study (RI/FS) Work Plan. The SOS team was comprised of independent professionals from other federal agencies and the private sector experienced in environmental restoration within the federal system. The objective of the team was to examine reasons for the lengthy RI/FS process and recommend ways to expedite it. The SOS team issued their Final Report in December 1992. The report found the most serious impediments to cleanup relate to a series of management and policy issues which are within the control of the three parties managing and monitoring Hanford -- the DOE, U.S. Environmental Protection Agency (EPA), and the State of Washington Department of Ecology (Ecology). The SOS Report identified the following eight cross-cutting issues as the root of major impediments to the Hanford Site cleanup. Each of these eight issues is quoted from the SOS Report followed by a brief, general description of the proposed approach being developed.

  9. Implementation of the Altair optimization processes

    NASA Astrophysics Data System (ADS)

    Smith, Malcolm J.; Véran, Jean-Pierre

    2003-02-01

    Altair is the adaptive optics system developed by NRC Canada for the Gemini North Telescope. Altair uses modal control and a quad-cell based Shack-Hartmann wavefront sensor. In order for Altair to adapt to changes in the observing conditions, two optimizers are activated when the AO loop is closed. These optimizers are the modal gain optimizer (MGO) and the centroid gain optimizer (CGO). This paper discusses the implementation and timing results of these optimizers.

  10. OptQC v1.3: An (updated) optimized parallel quantum compiler

    NASA Astrophysics Data System (ADS)

    Loke, T.; Wang, J. B.

    2016-10-01

    We present a revised version of the OptQC program of Loke et al. (2014) [1]. We have removed the simulated annealing process in favour of a descending random walk. We have also introduced a new method for iteratively generating permutation matrices during the random walk process, providing a reduced total cost for implementing the quantum circuit. Lastly, we have also added a synchronization mechanism between threads, giving quicker convergence to more optimal solutions.
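    The program itself is not shown here. As an illustration of the change described (a descending random walk over permutations in place of simulated annealing), the sketch below proposes a random neighbouring permutation and accepts it only when the cost decreases. The cost function is a toy stand-in, not OptQC's circuit-cost model.

      # Hedged sketch of a "descending random walk" over permutations: propose a random
      # neighbour (swap two elements) and accept it only if the cost drops.
      import random

      def toy_cost(perm):
          # stand-in cost: how far the permutation is from the identity
          return sum(abs(i - p) for i, p in enumerate(perm))

      def descending_random_walk(perm, cost, steps=10000):
          perm = list(perm)
          current = cost(perm)
          for _ in range(steps):
              i, j = random.sample(range(len(perm)), 2)
              perm[i], perm[j] = perm[j], perm[i]          # propose a neighbour
              candidate = cost(perm)
              if candidate < current:                      # strictly descending walk
                  current = candidate
              else:
                  perm[i], perm[j] = perm[j], perm[i]      # reject: undo the swap
          return perm, current

      start = list(range(8))
      random.shuffle(start)
      print(descending_random_walk(start, toy_cost))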

  11. Design and Implementation of a Basic Cross-Compiler and Virtual Memory Management System for the TI-59 Programmable Calculator.

    DTIC Science & Technology

    1983-06-01

    previously stated requirements to construct the framework for a software solution. It is during this phase of design that many of the most critical...the linker would have to be deferred until the compiler was formalized and in the implementation phase of design. The second problem involved...memory limit was encountered. At this point a segmentation occurred. The memory limits were reset and the combining process continued until another

  12. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package.

  13. A Data Model for Compiling Heterogeneous Bathymetric Soundings, and its Implementation in the North Atlantic

    NASA Astrophysics Data System (ADS)

    Hell, B.; Jakobsson, M.; Macnab, R.; Mayer, L. A.

    2006-12-01

    The North Atlantic is arguably the best mapped ocean in the world, with a huge quantity of inconsistent sounding data featuring a tremendous variability in accuracy, resolution and density. Therefore it is an ideal test area for data compilation techniques. For the compilation of a new Digital Bathymetric Model (DBM) of the North Atlantic, a combination of a GIS and a spatial database is used for data storage, verification and processing. A data model has been developed that can flexibly accommodate all kinds of raw and processed data, resulting in a data warehouse schema with metadata (describing data acquisition and quality) as separate data dimensions. Future work will involve data quality analysis based on metadata information and cross-survey checks, development of algorithms for merging and gridding heterogeneous sounding data, research on variable grids for bathymetric data, the treatment of error propagation through the gridding process and the production of a high-resolution (approx. 500 m) DBM accompanied by a confidence model. The proposed International Bathymetric Chart of the North Atlantic (IBCNA) is an undertaking to assemble and to rationalize all available bathymetric observations from the Atlantic Ocean and adjacent seas north of the Equator and south of 64°N into a consistent DBM. Neither of today's most commonly-used large scale models -- GEBCO (based upon digitized contours derived from single beam echo sounding measurements) and ETOPO2 (satellite altimetry combined with single beam echo soundings) -- incorporates the large amount of recent multibeam echo sounding data, and there is a need for a more up-to-date DBM. This could serve a broad variety of scientific and technical purposes such as geological investigations, future survey and field operation planning, oceanographic modeling, deep ocean tsunami propagation research, habitat mapping and biodiversity studies and evaluating the long-term effect of sea level change on coastal areas. In

  14. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    Corporation, Order No. GY28-6800-5 (December 1971). [IBM72] FORTRAN IV (H) Compiler Program Logic Manual, IBM Corporation, Order No. GH28-6642-5...RADC plans and executes research, development, test and selected acquisition programs in support of Command, Control, Communications and

  15. Quantum control implemented as combinatorial optimization.

    PubMed

    Strohecker, Traci; Rabitz, Herschel

    2010-01-15

    Optimal control theory provides a general means for designing controls to manipulate quantum phenomena. Traditional implementation requires solving coupled nonlinear equations to obtain the optimal control solution, whereas this work introduces a combinatorial quantum control (CQC) algorithm to avoid this complexity. The CQC technique uses a predetermined toolkit of small time step propagators in conjunction with combinatorial optimization to identify a proper sequence for the toolkit members. Results indicate that the CQC technique exhibits invariance of search effort to the number of system states and very favorable scaling upon comparison to a standard gradient algorithm, taking into consideration that CQC is easily parallelizable.
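    As an illustration of the idea described (selecting, from a predetermined toolkit of small time-step propagators, a sequence whose product approximates a target unitary), the sketch below does an exhaustive search over short sequences of single-qubit rotations. The toolkit, target, and search strategy are illustrative assumptions; the paper's actual combinatorial optimizer is not specified in this abstract.

      # Hedged sketch of the CQC idea: pick, from a fixed toolkit of small-time-step
      # propagators, the sequence whose product best matches a target unitary.
      # Exhaustive search stands in for whatever combinatorial optimizer CQC uses.
      import numpy as np
      from itertools import product

      def rx(theta):  # single-qubit rotations used as toolkit members (illustrative)
          return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                           [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

      def rz(theta):
          return np.array([[np.exp(-1j * theta / 2), 0],
                           [0, np.exp(1j * theta / 2)]])

      toolkit = [rx(0.1), rx(-0.1), rz(0.1), rz(-0.1)]      # small time-step propagators
      target = rz(0.3) @ rx(0.2)                            # unitary we want to reach

      def fidelity(u, v):
          d = u.shape[0]
          return abs(np.trace(u.conj().T @ v)) / d          # phase-insensitive overlap

      best = None
      for seq in product(range(len(toolkit)), repeat=5):    # all length-5 sequences
          u = np.eye(2, dtype=complex)
          for k in seq:
              u = toolkit[k] @ u
          f = fidelity(u, target)
          if best is None or f > best[0]:
              best = (f, seq)

      print("best fidelity %.4f with toolkit sequence %s" % best)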

  16. Read buffer optimizations to support compiler-assisted multiple instruction retry

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Fuchs, W. K.; Hwu, W. M.

    1993-01-01

    Multiple instruction retry is a recovery mechanism for transient processor faults. We previously developed a compiler-assisted approach to multiple instruction retry in which a read buffer of size 2N (where N represents the maximum instruction rollback distance) was used to resolve some data hazards while the compiler resolved the remaining hazards. The compiler-assisted scheme was shown to reduce the performance overhead and/or hardware complexity normally associated with hardware-only retry schemes. This paper examines the size and design of the read buffer. We establish a practical lower bound and average size requirement for the read buffer by modifying the scheme to save only the data required for rollback. The study measures the effect on the performance of a DECstation 3100 running ten application programs using six read buffer configurations with varying read buffer sizes. Two alternative configurations are shown to be the most efficient and differed depending on whether split-cycle-saves are assumed. Up to a 55 percent read buffer size reduction is achievable with an average reduction of 39 percent given the most efficient read buffer configuration and a variety of applications.

  18. Clinical implementation of stereotaxic brain implant optimization.

    PubMed

    Rosenow, U F; Wojcicka, J B

    1991-01-01

    This optimization method for stereotaxic brain implants is based on seed/strand configurations of the basic type developed for the National Cancer Institute (NCI) atlas of regular brain implants. Irregular target volume shapes are determined from delineation in a stack of contrast enhanced computed tomography scans. The neurosurgeon may then select up to ten directions, or entry points, of surgical approach of which the program finds the optimal one under the criterion of smallest target volume diameter. Target volume cross sections are then reconstructed in 5-mm-spaced planes perpendicular to the implantation direction defined by the entry point and the target volume center. This information is used to define a closed line in an implant cross section along which peripheral seed strands are positioned and which has now an irregular shape. Optimization points are defined opposite peripheral seeds on the target volume surface to which the treatment dose rate is prescribed. Three different optimization algorithms are available: linear least-squares programming, quadratic programming with constraints, and a simplex method. The optimization routine is implemented into a commercial treatment planning system. It generates coordinate and source strength information of the optimized seed configurations for further dose rate distribution calculation with the treatment planning system, and also the coordinate settings for the stereotaxic Brown-Roberts-Wells (BRW) implantation device.

  19. Clinical implementation of stereotaxic brain implant optimization

    SciTech Connect

    Rosenow, U.F.; Wojcicka, J.B.

    1991-03-01

    This optimization method for stereotaxic brain implants is based on seed/strand configurations of the basic type developed for the National Cancer Institute (NCI) atlas of regular brain implants. Irregular target volume shapes are determined from delineation in a stack of contrast enhanced computed tomography scans. The neurosurgeon may then select up to ten directions, or entry points, of surgical approach of which the program finds the optimal one under the criterion of smallest target volume diameter. Target volume cross sections are then reconstructed in 5-mm-spaced planes perpendicular to the implantation direction defined by the entry point and the target volume center. This information is used to define a closed line in an implant cross section along which peripheral seed strands are positioned and which has now an irregular shape. Optimization points are defined opposite peripheral seeds on the target volume surface to which the treatment dose rate is prescribed. Three different optimization algorithms are available: linear least-squares programming, quadratic programming with constraints, and a simplex method. The optimization routine is implemented into a commercial treatment planning system. It generates coordinate and source strength information of the optimized seed configurations for further dose rate distribution calculation with the treatment planning system, and also the coordinate settings for the stereotaxic Brown-Roberts-Wells (BRW) implantation device.
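    As an illustration of the linear least-squares formulation mentioned in both records above (choose non-negative seed strengths so the computed dose rate at the surface optimization points matches the prescription), the sketch below uses a simplified inverse-square kernel and SciPy's non-negative least squares. The geometry, kernel, and prescription are invented for illustration and are not the planning system's dose model.

      # Hedged sketch of the least-squares optimization of seed strengths.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      seeds = rng.uniform(-1.0, 1.0, size=(12, 3))          # seed positions (cm), illustrative
      points = rng.uniform(-2.0, 2.0, size=(30, 3))         # optimization points on target surface
      prescription = 50.0                                   # prescribed dose rate, illustrative

      # Dose-rate matrix: A[i, j] = contribution of unit-strength seed j at point i
      r = np.linalg.norm(points[:, None, :] - seeds[None, :, :], axis=2)
      A = 1.0 / np.maximum(r, 0.1) ** 2                     # clamp r to avoid singularities

      strengths, residual = nnls(A, np.full(len(points), prescription))
      print("seed strengths:", np.round(strengths, 2))
      print("rms dose error at optimization points:", residual / np.sqrt(len(points)))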

  20. Ada Compiler Validation Summary Report: Certificate Number: 910626S1. 11179 U.S. Navy Ada/M, Version 4.0 (OPTIMIZE) VAX 11/785 => AN/AYK-14 (Bare Board).

    DTIC Science & Technology

    1991-07-30

    Version 4.0 (/OPTIMIZE) VAX 11/785 => AN/AYK-14 (Bare Board) Prepared By: Software Standards Validation Group National Computer Systems Laboratory...Director, Computer & Software Dr. John Solomond Engineering Division Director Institute for Defense Analyses Department of Defense Alexandria VA 22311...Software Validation Group Building 225, Room A266 Gaithersburg, Maryland 20899 ACVC Version: 1.11 Ada Implementation: Compiler Name and Version: Ada/M

  1. Compiler blockability of dense matrix factorizations.

    SciTech Connect

    Carr, S.; Lehoucq, R. B.; Mathematics and Computer Science; Michigan Technological Univ.

    1997-09-01

    The goal of the LAPACK project is to provide efficient and portable software for dense numerical linear algebra computations. By recasting many of the fundamental dense matrix computations in terms of calls to an efficient implementation of the BLAS (Basic Linear Algebra Subprograms), the LAPACK project has, in large part, achieved its goal. Unfortunately, the efficient implementation of the BLAS often results in machine-specific code that is not portable across multiple architectures without a significant loss in performance or a significant effort to reoptimize them. This article examines whether most of the hand optimizations performed on matrix factorization codes are unnecessary because they can (and should) be performed by the compiler. We believe that it is better for the programmer to express algorithms in a machine-independent form and allow the compiler to handle the machine-dependent details. This gives the algorithms portability across architectures and removes the error-prone, expensive and tedious process of hand optimization. Although there currently exist no production compilers that can perform all the loop transformations discussed in this article, a description of current research in compiler technology is provided that will prove beneficial to the numerical linear algebra community. We show that the Cholesky and LU factorizations may be optimized automatically by a compiler to be as efficient as the same hand-optimized version found in LAPACK. We also show that the QR factorization may be optimized by the compiler to perform comparably with the hand-optimized LAPACK version on modest matrix sizes. Our approach allows us to conclude that with the advent of the compiler optimizations discussed in this article, matrix factorizations may be efficiently implemented in a BLAS-less form.
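    As an illustration of the kind of loop transformation at issue, the sketch below shows the same triple loop untransformed and cache-blocked. Matrix multiplication is used as a stand-in kernel purely for brevity; the article itself treats the Cholesky and QR factorizations.

      # Illustration only: the same computation written as a naive loop nest and as a
      # tiled ("blocked") loop nest, the transformation a blocking compiler would apply.
      import numpy as np

      def matmul_naive(A, B):
          n = A.shape[0]
          C = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  for k in range(n):
                      C[i, j] += A[i, k] * B[k, j]
          return C

      def matmul_blocked(A, B, bs=32):
          # i/j/k loops tiled so each (bs x bs) block of A, B, C is reused while it is
          # still in cache.
          n = A.shape[0]
          C = np.zeros((n, n))
          for ii in range(0, n, bs):
              for jj in range(0, n, bs):
                  for kk in range(0, n, bs):
                      for i in range(ii, min(ii + bs, n)):
                          for j in range(jj, min(jj + bs, n)):
                              s = C[i, j]
                              for k in range(kk, min(kk + bs, n)):
                                  s += A[i, k] * B[k, j]
                              C[i, j] = s
          return C

      A = np.random.rand(64, 64)
      B = np.random.rand(64, 64)
      assert np.allclose(matmul_naive(A, B), matmul_blocked(A, B))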

  2. Implementing the optimal provision of ecosystem services

    PubMed Central

    Polasky, Stephen; Lewis, David J.; Plantinga, Andrew J.; Nelson, Erik

    2014-01-01

    Many ecosystem services are public goods whose provision depends on the spatial pattern of land use. The pattern of land use is often determined by the decisions of multiple private landowners. Increasing the provision of ecosystem services, though beneficial for society as a whole, may be costly to private landowners. A regulator interested in providing incentives to landowners for increased provision of ecosystem services often lacks complete information on landowners’ costs. The combination of spatially dependent benefits and asymmetric cost information means that the optimal provision of ecosystem services cannot be achieved using standard regulatory or payment for ecosystem services approaches. Here we show that an auction that sets payments between landowners and the regulator for the increased value of ecosystem services with conservation provides incentives for landowners to truthfully reveal cost information, and allows the regulator to implement the optimal provision of ecosystem services, even in the case with spatially dependent benefits and asymmetric information. PMID:24722635

  3. Implementing the optimal provision of ecosystem services.

    PubMed

    Polasky, Stephen; Lewis, David J; Plantinga, Andrew J; Nelson, Erik

    2014-04-29

    Many ecosystem services are public goods whose provision depends on the spatial pattern of land use. The pattern of land use is often determined by the decisions of multiple private landowners. Increasing the provision of ecosystem services, though beneficial for society as a whole, may be costly to private landowners. A regulator interested in providing incentives to landowners for increased provision of ecosystem services often lacks complete information on landowners' costs. The combination of spatially dependent benefits and asymmetric cost information means that the optimal provision of ecosystem services cannot be achieved using standard regulatory or payment for ecosystem services approaches. Here we show that an auction that sets payments between landowners and the regulator for the increased value of ecosystem services with conservation provides incentives for landowners to truthfully reveal cost information, and allows the regulator to implement the optimal provision of ecosystem services, even in the case with spatially dependent benefits and asymmetric information.

  4. HOPE: Just-in-time Python compiler for astrophysical computations

    NASA Astrophysics Data System (ADS)

    Akeret, Joel; Gamper, Lukas; Amara, Adam; Refregier, Alexandre

    2014-11-01

    HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of a compiled implementation.
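    A minimal usage sketch of the decorator-based workflow described above follows. The decorator name hope.jit reflects the package's documented usage, but the exact supported subset of Python/NumPy is not spelled out in this record, so treat the details as assumptions.

      # Minimal sketch, assuming the decorator is exposed as hope.jit as documented.
      import numpy as np
      import hope

      @hope.jit
      def poly_kernel(x, y, a, r0):
          # ordinary numerical Python; HOPE translates it to C++ and compiles on first call
          return a * (x * x + y * y) + r0

      x = np.linspace(-1.0, 1.0, 1000)
      y = np.linspace(-1.0, 1.0, 1000)
      print(poly_kernel(x, y, 2.0, 0.5)[:5])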

  5. User involvement in the implementation of clinical guidelines for common mental health disorders: a review and compilation of strategies and resources.

    PubMed

    Moreno, Eliana M; Moriana, Juan Antonio

    2016-08-09

    There is now broad consensus regarding the importance of involving users in the process of implementing guidelines. Few studies, however, have addressed this issue, let alone the implementation of guidelines for common mental health disorders. The aim of this study is to compile and describe implementation strategies and resources related to common clinical mental health disorders targeted at service users. The literature was reviewed and resources for the implementation of clinical guidelines were compiled using the PRISMA model. A mixed qualitative and quantitative analysis was performed based on a series of categories developed ad hoc. A total of 263 items were included in the preliminary analysis and 64 implementation resources aimed at users were analysed in depth. A wide variety of types, sources and formats were identified, including guides (40%), websites (29%), videos and leaflets, as well as instruments for the implementation of strategies regarding information and education (64%), self-care, or users' assessment of service quality. The results reveal the need to establish clear criteria for assessing the quality of implementation materials in general and standardising systems to classify user-targeted strategies. The compilation and description of key elements of strategies and resources for users can be of interest in designing materials and specific actions for this target audience, as well as improving the implementation of clinical guidelines.

  6. A Mathematical Approach for Compiling and Optimizing Hardware Implementations of DSP Transforms

    DTIC Science & Technology

    2010-08-01

    across multiple transforms, datatypes, and design goals, and its results show that Spiral is able to automatically provide a wide tradeoff between cost (e.g...wide range of algorithmic and datapath options and frees the designer from the difficult process of manually performing algorithmic and datapath

  7. Implementation and Performance Issues in Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Braun, Robert; Gage, Peter; Kroo, Ilan; Sobieski, Ian

    1996-01-01

    Collaborative optimization is a multidisciplinary design architecture that is well-suited to large-scale multidisciplinary optimization problems. This paper compares this approach with other architectures, examines the details of the formulation, and some aspects of its performance. A particular version of the architecture is proposed to better accommodate the occurrence of multiple feasible regions. The use of system level inequality constraints is shown to increase the convergence rate. A series of simple test problems, demonstrated to challenge related optimization architectures, is successfully solved with collaborative optimization.

  8. An Advanced Compiler Designed for a VLIW DSP for Sensors-Based Systems

    PubMed Central

    Yang, Xu; He, Hu

    2012-01-01

    The VLIW architecture can be exploited to greatly enhance instruction level parallelism, thus it can provide computation power and energy efficiency advantages, which satisfies the requirements of future sensor-based systems. However, as VLIW codes are mainly compiled statically, the performance of a VLIW processor is dominated by the behavior of its compiler. In this paper, we present an advanced compiler designed for a VLIW DSP named Magnolia, which will be used in sensor-based systems. This compiler is based on the Open64 compiler. We have implemented several advanced optimization techniques in the compiler, and fulfilled the O3 level optimization. Benchmarks from the DSPstone test suite are used to verify the compiler. Results show that the code generated by our compiler can make the performance of Magnolia match that of the current state-of-the-art DSP processors. PMID:22666040

  9. An advanced compiler designed for a VLIW DSP for sensors-based systems.

    PubMed

    Yang, Xu; He, Hu

    2012-01-01

    The VLIW architecture can be exploited to greatly enhance instruction level parallelism, thus it can provide computation power and energy efficiency advantages, which satisfies the requirements of future sensor-based systems. However, as VLIW codes are mainly compiled statically, the performance of a VLIW processor is dominated by the behavior of its compiler. In this paper, we present an advanced compiler designed for a VLIW DSP named Magnolia, which will be used in sensor-based systems. This compiler is based on the Open64 compiler. We have implemented several advanced optimization techniques in the compiler, and fulfilled the O3 level optimization. Benchmarks from the DSPstone test suite are used to verify the compiler. Results show that the code generated by our compiler can make the performance of Magnolia match that of the current state-of-the-art DSP processors.

  10. Closed-Loop Optimal Control Implementations for Space Applications

    DTIC Science & Technology

    2016-12-01

    Closed-Loop Optimal Control Implementations for Space Applications, by Colin S. Monk, Lieutenant Commander, United States Navy (B.S., Tulane University, 2003). Master's thesis, December 2016. Thesis Advisor: Mark Karpenko; Second Reader: I. M...

  11. Feedback Implementation of Zermelo's Optimal Control by Sugeno Approximation

    NASA Technical Reports Server (NTRS)

    Clifton, C.; Homaifax, A.; Bikdash, M.

    1997-01-01

    This paper proposes an approach to implement optimal control laws of nonlinear systems in real time. Our methodology does not require solving two-point boundary value problems online and may not require it off-line either. The optimal control law is learned using the original Sugeno controller (OSC) from a family of optimal trajectories. We compare the trajectories generated by the OSC and the trajectories yielded by the optimal feedback control law when applied to Zermelo's ship steering problem.

  12. Optimizing Cancer Care Delivery through Implementation Science

    PubMed Central

    Adesoye, Taiwo; Greenberg, Caprice C.; Neuman, Heather B.

    2016-01-01

    The 2013 Institute of Medicine report investigating cancer care concluded that the cancer care delivery system is in crisis due to an increased demand for care, increasing complexity of treatment, decreasing work force, and rising costs. Engaging patients and incorporating evidence-based care into routine clinical practice are essential components of a high-quality cancer delivery system. However, a gap currently exists between the identification of beneficial research findings and the application in clinical practice. Implementation research strives to address this gap. In this review, we discuss key components of high-quality implementation research. We then apply these concepts to a current cancer care delivery challenge in women’s health, specifically the implementation of a surgery decision aid for women newly diagnosed with breast cancer. PMID:26858933

  13. Optimal Implementations for Reliable Circadian Clocks

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yoshihiko; Arita, Masanori

    2014-09-01

    Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.

  14. Optimal controllers for finite wordlength implementation

    NASA Technical Reports Server (NTRS)

    Liu, K.; Skelton, R.

    1991-01-01

    When a controller is implemented in a digital computer, with A/D and D/A conversion, the numerical errors of the computation can drastically affect the performance of the control system. There exist realizations of a given controller transfer function yielding arbitrarily large effects from computational errors. Since, in general, there is no upper bound, it is important to have a systematic way of reducing these effects. Optimum controller designs are developed which take account of the digital round-off errors in the controller implementation and in the A/D and D/A converters. These results provide a natural extension to the Linear Quadratic Gaussian (LQG) theory since they reduce to the standard LQG controller when infinite precision computation is used. But for finite precision the separation principle does not hold.
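
    The sensitivity of a realization to finite wordlength can be illustrated by quantizing the coefficients of a digital transfer function and comparing its poles before and after rounding. The polynomial and wordlengths below are arbitrary examples chosen for illustration, not the LQG designs of the paper.

      import numpy as np

      # Illustrative only: quantize the denominator coefficients of a digital
      # transfer function to a fixed number of fractional bits and compare pole
      # locations. High-order direct-form realizations with clustered poles are
      # typically far more sensitive than cascaded low-order sections.

      def quantize(coeffs, frac_bits):
          step = 2.0 ** (-frac_bits)
          return np.round(np.asarray(coeffs) / step) * step

      # Denominator of a narrow-band filter with poles clustered near z = 1.
      poles = 0.98 * np.exp(1j * np.pi * np.array([0.02, -0.02, 0.04, -0.04]))
      a = np.real(np.poly(poles))            # exact coefficients

      for bits in (16, 10, 6):
          a_q = quantize(a, bits)
          print(f"{bits:2d} fractional bits -> pole radii:",
                np.round(np.abs(np.roots(a_q)), 4))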

  15. Certifying Compilation for Standard ML in a Type Analysis Framework

    DTIC Science & Technology

    2005-05-01

    ... speaking, the certificate size seems to grow linearly with the program size. (Section 9.8.5, Run time) The LIL backend is not designed to be an optimizing backend. ... designed to produce very efficient code; in order to do this, it compiles only complete programs. The Standard ML of NJ compiler is designed to be ...

  16. Financing and funding health care: Optimal policy and political implementability.

    PubMed

    Nuscheler, Robert; Roeder, Kerstin

    2015-07-01

    Health care financing and funding are usually analyzed in isolation. This paper combines the corresponding strands of the literature and thereby advances our understanding of the important interaction between them. We investigate the impact of three modes of health care financing, namely, optimal income taxation, proportional income taxation, and insurance premiums, on optimal provider payment and on the political implementability of optimal policies under majority voting. Considering a standard multi-task agency framework we show that optimal health care policies will generally differ across financing regimes when the health authority has redistributive concerns. We show that health care financing also has a bearing on the political implementability of optimal health care policies. Our results demonstrate that an isolated analysis of (optimal) provider payment rests on very strong assumptions regarding both the financing of health care and the redistributive preferences of the health authority.

  17. Parallel optimization algorithms and their implementation in VLSI design

    NASA Technical Reports Server (NTRS)

    Lee, G.; Feeley, J. J.

    1991-01-01

    Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.

  18. A systolic array parallelizing compiler

    SciTech Connect

    Tseng, P.S. )

    1990-01-01

    This book presents a completely new approach to the problem of building a systolic array parallelizing compiler. It describes the AL parallelizing compiler for the Warp systolic array, the first working systolic array parallelizing compiler which can generate efficient parallel code for complete LINPACK routines. This book begins by analyzing the architectural strength of the Warp systolic array. It proposes a model for mapping programs onto the machine and introduces the notion of data relations for optimizing the program mapping. Also presented are successful applications of the AL compiler in matrix computation and image processing. A complete listing of the source program and compiler-generated parallel code is given to clarify the overall picture of the compiler. The book concludes that a systolic array parallelizing compiler can produce efficient parallel code, almost identical to what the user would have written by hand.

  19. Compiling with Types

    DTIC Science & Technology

    1995-12-01


  20. Algorithmic synthesis using Python compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Romaniuk, Ryszard; Pozniak, Krzysztof; Linczuk, Maciej

    2015-09-01

    This paper presents a Python-to-VHDL compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and translates it to VHDL. FPGA combines many benefits of both software and ASIC implementations. Like software, the programmed circuit is flexible, and can be reconfigured over the lifetime of the system. FPGAs have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. This can be achieved by using many computational resources at the same time. Creating parallel programs implemented in FPGAs in pure HDL is difficult and time consuming. Using a higher level of abstraction and a high-level synthesis compiler, implementation time can be reduced. The compiler has been implemented in the Python language. This article describes the design, implementation, and results of the created tools.
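
    To give a flavor of this kind of translation, the sketch below walks the Python AST of a restricted function (a single return expression over integer arguments, using +, - and *) and emits a VHDL-like entity. It is a toy chosen for illustration, not the compiler described above; the supported subset, the naming, and the generated VHDL are all simplifications.

      import ast

      # Toy Python-to-VHDL translator for a tiny combinational subset: a function
      # whose body is a single "return <expr>" over its arguments, using +, - and *
      # on integers. Illustration only, not the compiler described in the record.

      OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*"}

      def expr_to_vhdl(node):
          if isinstance(node, ast.Name):
              return node.id
          if isinstance(node, ast.Constant):
              return str(node.value)
          if isinstance(node, ast.BinOp) and type(node.op) in OPS:
              return (f"({expr_to_vhdl(node.left)} "
                      f"{OPS[type(node.op)]} {expr_to_vhdl(node.right)})")
          raise NotImplementedError(ast.dump(node))

      def compile_to_vhdl(source, name="toy"):
          fn = ast.parse(source).body[0]
          args = [a.arg for a in fn.args.args]
          ret = expr_to_vhdl(fn.body[0].value)
          ports = ";\n    ".join(f"{a} : in  integer" for a in args)
          return (f"entity {name} is\n  port (\n    {ports};\n"
                  f"    y : out integer);\nend {name};\n\n"
                  f"architecture rtl of {name} is\nbegin\n"
                  f"  y <= {ret};\nend rtl;")

      print(compile_to_vhdl("def mac(a, b, c):\n    return a * b + c"))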

  1. The Specification of Source-to-source Transformations for the Compile-time Optimization of Parallel Object-oriented Scientific Applications

    SciTech Connect

    Quinlan, D; Kowarschik, M

    2001-06-05

    The performance of object-oriented applications in scientific computing often suffers from the inefficient use of high-level abstractions provided by underlying libraries. Since these library abstractions are not part of the programming language itself, there is no compiler mechanism to respect their semantics and thus to perform appropriate optimizations, e.g., array semantics within object-oriented array class libraries which permit parallel optimizations inconceivable to the serial compiler. We have presented the ROSE infrastructure as a tool for automatically generating library-specific preprocessors. These preprocessors can perform semantics-based source-to-source transformations of the application in order to introduce high-level code optimizations. In this paper we outline the design of ROSE and focus on the discussion of various approaches for specifying and processing complex source code transformations. These techniques are intended to be as easy and intuitive as possible for ROSE users, i.e., the designers of the library-specific preprocessors.
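
    The flavor of a source-to-source transformation can be shown in miniature with Python's ast module: the toy pass below rewrites x ** 2 as x * x. It is only a stand-in for the much richer, semantics-aware rewrites a ROSE-generated preprocessor performs on C, C++, or Fortran array-class code; the example and the rewrite rule are chosen purely for illustration.

      import ast
      import copy

      # Toy source-to-source pass: rewrite x ** 2 as x * x.
      # Requires Python 3.9+ for ast.unparse. Not ROSE itself.

      class SquareToMultiply(ast.NodeTransformer):
          def visit_BinOp(self, node):
              self.generic_visit(node)  # rewrite subexpressions first
              if (isinstance(node.op, ast.Pow)
                      and isinstance(node.right, ast.Constant)
                      and node.right.value == 2):
                  return ast.copy_location(
                      ast.BinOp(left=node.left, op=ast.Mult(),
                                right=copy.deepcopy(node.left)),
                      node)
              return node

      source = "y = (a + b) ** 2 + c ** 2"
      tree = SquareToMultiply().visit(ast.parse(source))
      ast.fix_missing_locations(tree)
      print(ast.unparse(tree))   # y = (a + b) * (a + b) + c * c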

  2. HAL/S-FC compiler system specifications

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This document specifies the informational interfaces within the HAL/S-FC compiler, and between the compiler and the external environment. This Compiler System Specification is for the HAL/S-FC compiler and its associated run time facilities which implement the full HAL/S language. The HAL/S-FC compiler is designed to operate stand-alone on any compatible IBM 360/370 computer and within the Software Development Laboratory (SDL) at NASA/JSC, Houston, Texas.

  3. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.

    PubMed

    Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly

    2015-09-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.

  4. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research

    PubMed Central

    Duan, Naihua; Bhaumik, Dulal K.; Palinkas, Lawrence A.; Hoagwood, Kimberly

    2015-01-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research. PMID:25491200

  5. Optimization of an optically implemented on-board FDMA demultiplexer

    NASA Technical Reports Server (NTRS)

    Fargnoli, J.; Riddle, L.

    1991-01-01

    Performance of a 30 GHz frequency division multiple access (FDMA) uplink to a processing satellite is modelled for the case where the onboard demultiplexer is implemented optically. Included in the performance model are the effects of adjacent channel interference, intersymbol interference, and spurious signals associated with the optical implementation. Demultiplexer parameters are optimized to provide the minimum bit error probability at a given bandwidth efficiency when filtered QPSK modulation is employed.

  6. All-Optical Implementation of the Ant Colony Optimization Algorithm

    PubMed Central

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-01-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098
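
    For reference, the conventional software form of the algorithm implemented optically here looks roughly like the sketch below: ants repeatedly sample paths through a weighted graph, pheromone is reinforced on shorter tours and evaporates everywhere else. The graph, parameters, and update rule are textbook-style choices for illustration, not the optical network of the paper.

      import random

      # Minimal ant colony optimization for a shortest path from 'S' to 'F' on a
      # small weighted graph. Graph and parameters are illustrative only.

      GRAPH = {                      # node -> {neighbor: edge length}
          "S": {"A": 1.0, "B": 2.5},
          "A": {"F": 2.5},
          "B": {"F": 0.5},
      }
      pheromone = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}

      def walk():
          node, path, length = "S", [], 0.0
          while node != "F":
              nbrs = list(GRAPH[node])
              weights = [pheromone[(node, n)] / GRAPH[node][n] for n in nbrs]
              nxt = random.choices(nbrs, weights=weights)[0]
              path.append((node, nxt))
              length += GRAPH[node][nxt]
              node = nxt
          return path, length

      def run(iterations=200, ants=10, rho=0.1):
          for _ in range(iterations):
              tours = [walk() for _ in range(ants)]
              for edge in pheromone:                      # evaporation
                  pheromone[edge] *= (1.0 - rho)
              for path, length in tours:                  # reinforcement
                  for edge in path:
                      pheromone[edge] += 1.0 / length
          return min(tours, key=lambda t: t[1])

      print("best path found:", run())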

  7. All-Optical Implementation of the Ant Colony Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-05-01

    We report all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize pathway to food discovered by one of the ants through identifying the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems.

  8. Implementing size-optimal discrete neural networks require analog circuitry

    SciTech Connect

    Beiu, V.

    1998-12-01

    This paper starts by overviewing results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions the authors show that implementing Boolean functions can be done using neurons having an identity transfer function. Because in this case the size of the network is minimized, it follows that size-optimal solutions for implementing Boolean functions can be obtained using analog circuitry. Conclusions and several comments on the required precision end the paper.

  9. Implementation and Optimization of Image Processing Algorithms on Embedded GPU

    NASA Astrophysics Data System (ADS)

    Singhal, Nitin; Yoo, Jin Woo; Choi, Ho Yeol; Park, In Kyu

    In this paper, we analyze the key factors underlying the implementation, evaluation, and optimization of image processing and computer vision algorithms on embedded GPU using OpenGL ES 2.0 shader model. First, we present the characteristics of the embedded GPU and its inherent advantage when compared to embedded CPU. Additionally, we propose techniques to achieve increased performance with optimized shader design. To show the effectiveness of the proposed techniques, we employ cartoon-style non-photorealistic rendering (NPR), speeded-up robust feature (SURF) detection, and stereo matching as our example algorithms. Performance is evaluated in terms of the execution time and speed-up achieved in comparison with the implementation on embedded CPU.

  10. Implementation of generalized optimality criteria in a multidisciplinary environment

    NASA Technical Reports Server (NTRS)

    Canfield, R. A.; Venkayya, V. B.

    1989-01-01

    A generalized optimality criterion method consisting of a dual problem solver combined with a compound scaling algorithm was implemented in the multidisciplinary design tool, ASTROS. This method enables, for the first time in a production design tool, the determination of a minimum weight design using thousands of independent structural design variables while simultaneously considering constraints on response quantities in several disciplines. Even for moderately large examples, the computational efficiency is improved significantly relative to the conventional approach.

  11. Optimal control in NMR spectroscopy: numerical implementation in SIMPSON.

    PubMed

    Tosner, Zdenek; Vosegaard, Thomas; Kehlet, Cindie; Khaneja, Navin; Glaser, Steffen J; Nielsen, Niels Chr

    2009-04-01

    We present the implementation of optimal control into the open source simulation package SIMPSON for development and optimization of nuclear magnetic resonance experiments for a wide range of applications, including liquid- and solid-state NMR, magnetic resonance imaging, quantum computation, and combinations between NMR and other spectroscopies. Optimal control enables efficient optimization of NMR experiments in terms of amplitudes, phases, offsets etc. for hundreds-to-thousands of pulses to fully exploit the experimentally available high degree of freedom in pulse sequences to combat variations/limitations in experimental or spin system parameters or design experiments with specific properties typically not covered as easily by standard design procedures. This facilitates straightforward optimization of experiments under consideration of rf and static field inhomogeneities, limitations in available or desired rf field strengths (e.g., for reduction of sample heating), spread in resonance offsets or coupling parameters, variations in spin systems etc. to meet the actual experimental conditions as close as possible. The paper provides a brief account on the relevant theory and in particular the computational interface relevant for optimization of state-to-state transfer (on the density operator level) and the effective Hamiltonian on the level of propagators along with several representative examples within liquid- and solid-state NMR spectroscopy.

  12. Implementation of optimal phase-covariant cloning machines

    SciTech Connect

    Sciarrino, Fabio; De Martini, Francesco

    2007-07-15

    The optimal phase-covariant quantum cloning machine (PQCM) broadcasts the information associated to an input qubit into a multiqubit system, exploiting a partial a priori knowledge of the input state. This additional a priori information leads to a higher fidelity than for the universal cloning. The present article first analyzes different innovative schemes to implement the 1 → 3 PQCM. The method is then generalized to any 1 → M machine for an odd value of M by a theoretical approach based on the general angular momentum formalism. Finally different experimental schemes based either on linear or nonlinear methods and valid for single photon polarization encoded qubits are discussed.

  13. Barriers to Implementation of Optimal Laboratory Biosafety Practices in Pakistan

    PubMed Central

    Shafaq, Humaira; Hasan, Rumina; Qureshi, Shahida M.; Dojki, Maqboola; Hughes, Molly A.; Zaidi, Anita K. M.; Khan, Erum

    2016-01-01

    The primary goal of biosafety education is to ensure safe practices among workers in biomedical laboratories. Despite several educational workshops by the Pakistan Biological Safety Association (PBSA), compliance with safe practices among laboratory workers remains low. To determine barriers to implementation of recommended biosafety practices among biomedical laboratory workers in Pakistan, we conducted a questionnaire-based survey of participants attending 2 workshops focusing on biosafety practices in Karachi and Lahore in February 2015. Questionnaires were developed by modifying the BARRIERS scale in which respondents are required to rate barriers on a 1-4 scale. Nineteen of the original 29 barriers were included and subcategorized into 4 groups: awareness, material quality, presentation, and workplace barriers. Workshops were attended by 64 participants. Among barriers that were rated as moderate to great barriers by at least 50% of respondents were: lack of time to read biosafety guidelines (workplace subscale), lack of staff authorization to change/improve practice (workplace subscale), no career or self-improvement advantages to the staff for implementing optimal practices (workplace subscale), and unclear practice implications (presentation subscale). A lack of recognition for employees' rights and benefits in the workplace was found to be a predominant reason for a lack of compliance. Based on perceived barriers, substantial improvement in work environment, worker facilitation, and enabling are needed for achieving improved or optimal biosafety practices in Pakistan. PMID:27400192

  14. Barriers to Implementation of Optimal Laboratory Biosafety Practices in Pakistan.

    PubMed

    Shakoor, Sadia; Shafaq, Humaira; Hasan, Rumina; Qureshi, Shahida M; Dojki, Maqboola; Hughes, Molly A; Zaidi, Anita K M; Khan, Erum

    2016-01-01

    The primary goal of biosafety education is to ensure safe practices among workers in biomedical laboratories. Despite several educational workshops by the Pakistan Biological Safety Association (PBSA), compliance with safe practices among laboratory workers remains low. To determine barriers to implementation of recommended biosafety practices among biomedical laboratory workers in Pakistan, we conducted a questionnaire-based survey of participants attending 2 workshops focusing on biosafety practices in Karachi and Lahore in February 2015. Questionnaires were developed by modifying the BARRIERS scale in which respondents are required to rate barriers on a 1-4 scale. Nineteen of the original 29 barriers were included and subcategorized into 4 groups: awareness, material quality, presentation, and workplace barriers. Workshops were attended by 64 participants. Among barriers that were rated as moderate to great barriers by at least 50% of respondents were: lack of time to read biosafety guidelines (workplace subscale), lack of staff authorization to change/improve practice (workplace subscale), no career or self-improvement advantages to the staff for implementing optimal practices (workplace subscale), and unclear practice implications (presentation subscale). A lack of recognition for employees' rights and benefits in the workplace was found to be a predominant reason for a lack of compliance. Based on perceived barriers, substantial improvement in work environment, worker facilitation, and enabling are needed for achieving improved or optimal biosafety practices in Pakistan.

  15. Designing a stencil compiler for the Connection Machine model CM-5

    SciTech Connect

    Brickner, R.G.; Holian, K.; Thiagarajan, B.; Johnsson, S.L. |

    1994-12-31

    In this paper the authors present the design of a stencil compiler for the Connection Machine system CM-5. The stencil compiler will optimize the data motion between processing nodes, minimize the data motion within a node, and minimize the data motion between registers and local memory in a node. The compiler will natively support two-dimensional stencils, but stencils in three dimensions will be automatically decomposed. Lower dimensional stencils are treated as degenerate stencils. The compiler will be integrated as part of the CM Fortran programming system. Much of the compiler code will be adapted from the CM-2/200 stencil compiler, which is part of CMSSL (the Connection Machine Scientific Software Library) Release 3.1 for the CM-2/200, and the compiler will be available as part of the Connection Machine Scientific Software Library (CMSSL) for the CM-5. In addition to setting down design considerations, they report on the implementation status of the stencil compiler. In particular, they discuss optimization strategies and status of code conversion from CM-2/200 to CM-5 architecture, and report on the measured performance of prototype target code which the compiler will generate.
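
    As a reminder of what such a compiler targets, the sketch below applies a two-dimensional 5-point stencil with plain NumPy array slicing; a stencil compiler's job is to generate code for this access pattern that minimizes data motion between nodes, within a node, and between registers and memory. The kernel and coefficients are generic examples, not CM Fortran or CMSSL code.

      import numpy as np

      # A generic 5-point stencil sweep: each interior point is updated from its
      # four neighbors and itself. Stencil compilers generate communication- and
      # register-efficient code for exactly this access pattern; the coefficient
      # below is arbitrary.

      def five_point_step(u, c=0.25):
          new = u.copy()
          new[1:-1, 1:-1] = (u[1:-1, 1:-1]
                             + c * (u[:-2, 1:-1] + u[2:, 1:-1]
                                    + u[1:-1, :-2] + u[1:-1, 2:]
                                    - 4.0 * u[1:-1, 1:-1]))
          return new

      u = np.zeros((64, 64))
      u[32, 32] = 1.0                  # point source
      for _ in range(100):
          u = five_point_step(u)
      print("peak value after 100 sweeps:", round(float(u.max()), 6))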

  16. Optimal clinical implementation of the Siemens virtual wedge.

    PubMed

    Walker, C P; Richmond, N D; Lambert, G D

    2003-01-01

    Installation of a modern high-energy Siemens Primus linear accelerator at the Northern Centre for Cancer Treatment (NCCT) provided the opportunity to investigate the optimal clinical implementation of the Siemens virtual wedge filter. Previously published work has concentrated on the production of virtual wedge angles at 15 degrees, 30 degrees, 45 degrees, and 60 degrees as replacements for the Siemens hard wedges of the same nominal angles. However, treatment plan optimization of the dose distribution can be achieved with the Primus, as its control software permits the selection of any virtual wedge angle from 15 degrees to 60 degrees in increments of 1 degrees. The same result can also be produced from a combination of open and 60 degrees wedged fields. Helax-TMS models both of these modes of virtual wedge delivery by the wedge angle and the wedge fraction methods respectively. This paper describes results of timing studies in the planning of optimized patient dose distributions by both methods and in the subsequent treatment delivery procedures. Employment of the wedge fraction method results in the delivery of small numbers of monitor units to the beam's central axis; therefore, wedge profile stability and delivered dose with low numbers of monitor units were also investigated. The wedge fraction was proven to be the most efficient method when the time taken for both planning and treatment delivery were taken into consideration, and is now used exclusively for virtual wedge treatment delivery in Newcastle. It has also been shown that there are no unfavorable dosimetric consequences from its practical implementation.

  17. Optimization of Infobutton Design and Implementation: A Systematic Review.

    PubMed

    Teixeira, Miguel; Cook, David A; Heale, Bret S E; Del Fiol, Guilherme

    2017-08-21

    ... in clinical settings. Improved content indexing in one study led to improved content retrieval across three health care organizations. Best practice technical approaches to ensure optimal infobutton functionality, design and implementation remain understudied. The HL7 Infobutton standard has supported wide adoption of infobutton functionality among clinical information systems and knowledge resources. Limited evidence supports infobutton enhancements such as links to specific subtopics, configuration of optimal resources for specific tasks and users, and improved indexing and content coverage. Further research is needed to investigate user experience improvements to increase infobutton use and effectiveness. Copyright © 2017. Published by Elsevier Inc.

  18. Optimized evaporation technique for leachate treatment: Small scale implementation.

    PubMed

    Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz

    2016-04-01

    This paper introduces an optimized evaporation technique for leachate treatment. For this purpose and in order to study the feasibility and measure the effectiveness of the forced evaporation, three cuboidal steel tubs were designed and implemented. The first control-tub was installed at the ground level to monitor natural evaporation. Similarly, the second and the third tub, models under investigation, were installed respectively at the ground level (equipped-tub 1) and out of the ground level (equipped-tub 2), and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped-tubs was much accelerated with respect to the control-tub. It was accelerated five times in the winter period, where the evaporation rate was increased from a value of 0.37 mm/day to reach a value of 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times and it increased from a value of 3.06 mm/day to reach a value of 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively either under electric or solar energy supply, and will accelerate the evaporation rate from three to five times whatever the season temperature.

  19. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Compiler organization is discussed, including overall compiler structure, internal data transfer, compiler development, and code optimization. The user, system, and SDL interfaces are described, along with compiler system requirements. The run-time software support package, as well as restrictions and dependencies of the HAL/S-FC system, are also considered.

  20. Data Compilation: Its Design and Analysis

    DTIC Science & Technology

    1990-06-14

    ... design of such programs may be complicated by the additional information needed to specify how to optimally compile input data. We know that with the ... Scientific report, December 1988 to June 1990. Data Compilation: Its Design and Analysis; John Franco and Daniel P. Friedman, Principal Investigators (AFOSR-89-0186).

  1. Python based high-level synthesis compiler

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radosław; Pozniak, Krzysztof; Romaniuk, Ryszard

    2014-11-01

    This paper presents a Python-based High-Level Synthesis (HLS) compiler. The compiler interprets an algorithmic description of a desired behavior written in Python and maps it to VHDL. FPGA combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible, and can be reconfigured over the lifetime of the system. FPGAs therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. Creating parallel programs implemented in FPGAs is not trivial. This article describes the design, implementation, and first results of the created Python-based compiler.

  2. Ada Compiler Validation Summary Report: Certificate Number: 910121I1. 11124 TeleSoft, TeleGen2 Ada Cross Development System, Version 4.1, for VAX/VMS to 68k, MicroVAX 3800(Host) to Motorola MVME 133A-20 (MC68020) (Target).

    DTIC Science & Technology

    1991-02-11

    ... can be used with the compiler or the optimizer (OPTIMIZE). Using the /SQUEEZE qualifier during compilation causes the intermediate forms to be ... Implementation-dependent characteristics: Interface (assembly, Fortran, Pascal, and C); List and Page (in context of source/error compiler ...

  3. Compiler-assisted static checkpoint insertion

    NASA Technical Reports Server (NTRS)

    Long, Junsheng; Fuchs, W. K.; Abraham, Jacob A.

    1992-01-01

    This paper describes a compiler-assisted approach for static checkpoint insertion. Instead of fixing the checkpoint location before program execution, a compiler enhanced polling mechanism is utilized to maintain both the desired checkpoint intervals and reproducible checkpoint locations. The technique has been implemented in a GNU CC compiler for Sun 3 and Sun 4 (Sparc) processors. Experiments demonstrate that the approach provides for stable checkpoint intervals and reproducible checkpoint placements with performance overhead comparable to a previously presented compiler assisted dynamic scheme (CATCH) utilizing the system clock.

  4. Testing-Based Compiler Validation for Synchronous Languages

    NASA Technical Reports Server (NTRS)

    Garoche, Pierre-Loic; Howar, Falk; Kahsai, Temesghen; Thirioux, Xavier

    2014-01-01

    In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test-suite with high behavioral coverage and geared towards discovery of faults for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.

  5. NONMEM version III implementation on a VAX 9000: a DCL procedure for single-step execution and the unrealized advantage of a vectorizing FORTRAN compiler.

    PubMed

    Vielhaber, J P; Kuhlman, J V; Barrett, J S

    1993-06-01

    There is great interest within the FDA, academia, and the pharmaceutical industry to provide more detailed information about the time course of drug concentration and effect in subjects receiving a drug as part of their overall therapy. Advocates of this effort expect the eventual goal of these endeavors to provide labeling which reflects the experience of drug administration to the entire population of potential recipients. The set of techniques which have been thus far applied to this task has been defined as population approach methodologies. While a consensus view on the usefulness of these techniques is not likely to be formed in the near future, most pharmaceutical companies or individuals who provide kinetic/dynamic support for drug development programs are investigating population approach methods. A major setback in this investigation has been the shortage of computational tools to analyze population data. One such algorithm, NONMEM, supplied by the NONMEM Project Group of the University of California, San Francisco has been widely used and remains the most accessible computational tool to date. The program is distributed to users as FORTRAN 77 source code with instructions for platform customization. Given the memory and compiler requirements of this algorithm and the intensive matrix manipulation required for run convergence and parameter estimation, this program's performance is largely determined by the platform and the FORTRAN compiler used to create the NONMEM executable. Benchmark testing on a VAX 9000 with Digital's FORTRAN (v. 1.2) compiler suggests that this is an acceptable platform. Due to excessive branching within the loops of the NONMEM source code, the vector processing capabilities of the KV900-AA vector processor actually decrease performance. A DCL procedure is given to provide single step execution of this algorithm.

  6. Optimal control of ICU patient discharge: from theory to implementation.

    PubMed

    Mallor, Fermín; Azcárate, Cristina; Barado, Julio

    2015-09-01

    This paper deals with the management of scarce health care resources. We consider a control problem in which the objective is to minimize the rate of patient rejection due to service saturation. The scope of decisions is limited, in terms both of the amount of resources to be used, which are supposed to be fixed, and of the patient arrival pattern, which is assumed to be uncontrollable. This means that the only potential areas of control are speed or completeness of service. By means of queuing theory and optimization techniques, we provide a theoretical solution expressed in terms of service rates. In order to make this theoretical analysis useful for the effective control of the healthcare system, however, further steps in the analysis of the solution are required: physicians need flexible and medically-meaningful operative rules for shortening patient length of service to the degree needed to give the service rates dictated by the theoretical analysis. The main contribution of this paper is to discuss how the theoretical solutions can be transformed into effective management rules to guide doctors' decisions. The study examines three types of rules based on intuitive interpretations of the theoretical solution. Rules are evaluated through implementation in a simulation model. We compare the service rates provided by the different policies with those dictated by the theoretical solution. Probabilistic analysis is also included to support rule validity. An Intensive Care Unit is used to illustrate this control problem. The study focuses on the Markovian case before moving on to consider more realistic LoS distributions (Weibull, Lognormal and Phase-type distribution).
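
    In the Markovian setting discussed above, the rejection rate at a saturated unit can be illustrated with the Erlang loss formula for an M/M/c/c system: the probability that an arriving patient finds all c beds occupied. The sketch below computes this blocking probability for illustrative arrival and service parameters; the actual control problem in the paper adjusts service rates, which this toy calculation does not attempt.

      import math

      # Erlang-B blocking probability for an M/M/c/c loss system: the fraction of
      # arrivals rejected because all c servers (beds) are busy. Arrival rate,
      # mean length of stay, and bed counts below are illustrative only.

      def erlang_b(offered_load, c):
          """offered_load = arrival_rate * mean_service_time (in Erlangs)."""
          inv_b = 1.0
          for k in range(1, c + 1):
              inv_b = 1.0 + inv_b * k / offered_load
          return 1.0 / inv_b

      arrival_rate = 1.4        # patients per day
      mean_los = 6.0            # mean length of stay, in days
      for beds in (8, 10, 12):
          p_reject = erlang_b(arrival_rate * mean_los, beds)
          print(f"{beds} beds: rejection probability = {p_reject:.3f}")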

  7. Automatic OPC repair flow: optimized implementation of the repair recipe

    NASA Astrophysics Data System (ADS)

    Bahnas, Mohamed; Al-Imam, Mohamed; Word, James

    2007-10-01

    Virtual manufacturing, enabled by rapid, accurate, full-chip simulation, is a main pillar of successful mask tape-out in cutting-edge low-k1 lithography: it detects printing failures before a costly and time-consuming mask tape-out and wafer print occur. The OPC verification step is critical in the early production phases of a new process, since layout patterns suspected of failing or degrading performance must be accurately flagged and fed back to the OPC engineer for further learning and enhancement of the OPC recipe. In the advanced phases of process development the probability of detecting failures is much lower, but OPC verification still acts as the last line of defense for the whole implemented RET flow. A recent publication addressed the optimum approach to responding to these detected failures and proposed a solution that repairs the defects in an automated methodology fully integrated with, and compatible with, the main RET/OPC flow. In this paper the authors present further work and optimizations of this repair flow. An automated methodology for analyzing the root causes of defects and classifying them to cover all possible causes is discussed; it incorporates the learning from previously highlighted causes and accommodates new discoveries. Next, according to the automated pre-classification of the defects, the appropriate OPC repair approach (i.e., OPC knob) can be selected for each classified defect location, instead of applying all approaches at all locations. This cuts down the runtime of OPC repair processing and reduces the number of iterations needed to reach zero defects. An output report of the causes of defects and how the tool handled them is generated. The report will help further learning ...

  8. Implementation and Optimization of an Inverse Photoemission Spectroscopy Setup

    NASA Astrophysics Data System (ADS)

    Gina, Ervin

    Inverse photoemission spectroscopy (IPES) is utilized for determining the unoccupied electron states of materials. It is a complementary technique to the widely used photoemission spectroscopy (PES) as it analyzes what PES cannot, the states above the Fermi energy. This method is essential to investigating the structure of a solid and its states. IPES has a broad range of uses and is only recently being utilized. This thesis describes the setup, calibration and operation of an IPES experiment. The IPES setup consists of an electron gun which emits electrons towards a sample, where photons are released, which are measured in isochromat mode via a photon detector of a set energy bandwidth. By varying the electron energy at the source, a spectrum of the unoccupied density of states can be obtained. Since IPES is not commonly commercially available the design consists of many custom made components. The photon detector operates as a bandpass filter with a mixture of acetone/argon and a CaF2 window setting the cutoff energies. The counter electronics consist of a pre-amplifier, amplifier and analyzer to detect the count rate at each energy level above the Fermi energy. Along with designing the hardware components, a Labview program was written to capture and log the data for further analysis. The software features several operating modes including automated scanning which allows the user to enter the desired scan parameters and the program will scan the sample accordingly. Also implemented in the program is the control of various external components such as the electron gun and high voltage power supply. The new setup was tested for different gas mixtures and an optimum ratio was determined. Subsequently, IPES scans of several sample materials were performed for testing and optimization. A scan of Au was utilized for the determination of the Fermi edge energy and for comparison to literature spectra. The Fermi edge energy was then used in a measurement of indium tin ...

  9. Large-Scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation

    DTIC Science & Technology

    2016-08-10

    Large-scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation (AFRL-AFOSR-JP-TR-2016-0073, 2016). ... performances on various machine learning tasks, and it naturally lends itself to fast parallel implementations. Despite this, very little work has been ...

  10. An implementable algorithm for the optimal design centering, tolerancing, and tuning problem

    SciTech Connect

    Polak, E.

    1982-05-01

    An implementable master algorithm for solving optimal design centering, tolerancing, and tuning problems is presented. This master algorithm decomposes the original nondifferentiable optimization problem into a sequence of ordinary nonlinear programming problems. The master algorithm generates sequences with accumulation points that are feasible and satisfy a new optimality condition, which is shown to be stronger than the one previously used for these problems.

  11. Livermore Compiler Analysis Loop Suite

    SciTech Connect

    Hornung, R. D.

    2013-03-01

    LCALS is designed to evaluate compiler optimizations and performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate floating-point performance of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assessing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of 24 loop kernels from the older Livermore Loop suites, plus various others representative of loops found in current production application codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, which loops are run, and their lengths. It generates timing statistics for analyzing and comparing variants of individual loops. Also, it is easy to add loops to the suite as desired.

  12. Uranium Location Database Compilation

    EPA Pesticide Factsheets

    EPA has compiled mine location information from federal, state, and Tribal agencies into a single database as part of its investigation into the potential environmental hazards of wastes from abandoned uranium mines in the western United States.

  13. Analytical techniques: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.

  14. An Extensible Open-Source Compiler Infrastructure for Testing

    SciTech Connect

    Quinlan, D; Ur, S; Vuduc, R

    2005-12-09

    Testing forms a critical part of the development process for large-scale software, and there is growing need for automated tools that can read, represent, analyze, and transform the application's source code to help carry out testing tasks. However, the support required to compile applications written in common general purpose languages is generally inaccessible to the testing research community. In this paper, we report on an extensible, open-source compiler infrastructure called ROSE, which is currently in development at Lawrence Livermore National Laboratory. ROSE specifically targets developers who wish to build source-based tools that implement customized analyses and optimizations for large-scale C, C++, and Fortran90 scientific computing applications (on the order of a million lines of code or more). However, much of this infrastructure can also be used to address problems in testing, and ROSE is by design broadly accessible to those without a formal compiler background. This paper details the interactions between testing of applications and the ways in which compiler technology can aid in the understanding of those applications. We emphasize the particular aspects of ROSE, such as support for the general analysis of whole programs, that are particularly well-suited to the testing research community and the scale of the problems that community solves.

  15. Array-Pattern-Match Compiler for Opportunistic Data Analysis

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A computer program has been written to facilitate real-time sifting of scientific data as they are acquired to find data patterns deemed to warrant further analysis. The patterns in question are of a type denoted array patterns, which are specified by nested parenthetical expressions. [One example of an array pattern is ((>3) 0 (not=1)): this pattern matches a vector of at least three elements, the first of which exceeds 3, the second of which is 0, and the third of which does not equal 1.] This program accepts a high-level description of a static array pattern and compiles another highly optimized and compact program to determine whether any given instance of any data array matches that pattern. The compiler implemented by this program is independent of the target language, so that as new languages are used to write code that processes scientific data, they can easily be adapted to this compiler. This program runs on a variety of different computing platforms. It must be run in conjunction with any one of a number of Lisp compilers that are available commercially or as shareware.
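
    A direct interpretive (rather than compiled) reading of such an array pattern can be sketched as below, where the example pattern ((>3) 0 (not=1)) is written with small predicate helpers. The representation and helper names are hypothetical, chosen only to illustrate the matching semantics described above; the actual tool compiles the pattern into an optimized matcher instead of interpreting it.

      # Interpretive sketch of array-pattern matching. Each pattern element is a
      # predicate on one array element; a vector matches if it has at least as many
      # elements as the pattern and every leading element satisfies its predicate.
      # The helpers and representation are hypothetical.

      def gt(x):    return lambda v: v > x
      def eq(x):    return lambda v: v == x
      def ne(x):    return lambda v: v != x

      def matches(pattern, vector):
          if len(vector) < len(pattern):
              return False
          return all(pred(v) for pred, v in zip(pattern, vector))

      # The example pattern from the record above: ((>3) 0 (not=1))
      pattern = (gt(3), eq(0), ne(1))

      print(matches(pattern, [5, 0, 2]))      # True
      print(matches(pattern, [5, 0, 1]))      # False (third element equals 1)
      print(matches(pattern, [2, 0, 7]))      # False (first element not > 3)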

  16. Implementation and optimization of automated dispensing cabinet technology.

    PubMed

    McCarthy, Bryan C; Ferker, Michael

    2016-10-01

    A multifaceted automated dispensing cabinet (ADC) optimization initiative at a large hospital is described. The ADC optimization project, which was launched approximately six weeks after activation of ADCs in 30 patient care unit medication rooms of a newly established adult hospital, included (1) adjustment of par inventory levels (desired on-hand quantities of medications) and par reorder quantities to reduce the risk of ADC supply exhaustion and improve restocking efficiency, (2) expansion of ADC "common stock" (medications assigned to ADC inventories) to increase medication availability at the point of care, and (3) removal of some infrequently prescribed medications from ADCs to reduce the likelihood of product expiration. The purpose of the project was to address organizational concerns regarding widespread ADC medication stockouts, growing reliance on cart-fill medication delivery systems, and suboptimal medication order turnaround times. Leveraging of the ADC technology platform's reporting functionalities for enhanced inventory control yielded a number of benefits, including cost savings resulting from reduced pharmacy technician labor requirements (estimated at $2,728 annually), a substantial reduction in the overall weekly stockout percentage (from 3.2% before optimization to 0.5% eight months after optimization), an improvement in the average medication turnaround time, and estimated cost avoidance of $19,660 attributed to the reduced potential for product expiration. Efforts to optimize ADCs through par level optimization, expansion of common stock, and removal of infrequently used medications reduced pharmacy technician labor, decreased stockout percentages, generated opportunities for cost avoidance, and improved medication turnaround times. Copyright © 2016 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  17. Spacelab user implementation assessment study. Volume 2: Concept optimization

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The integration and checkout activities of Spacelab payloads consist of two major sets of tasks: support functions, and test and operations. The support functions are definitized, and the optimized approach for the accomplishment of these functions is delineated. Comparable data are presented for test and operations activities.

  18. Design and Experimental Implementation of Optimal Spacecraft Antenna Slews

    DTIC Science & Technology

    2013-12-01

    ... any spacecraft antenna configuration. Various software suites were used to perform thorough validation and verification of the Newton-Euler formulation developed herein. The antenna model was then utilized to solve an optimal control problem for a geostationary ...

  19. Endgame implementations for the Efficient Global Optimization (EGO) algorithm

    NASA Astrophysics Data System (ADS)

    Southall, Hugh L.; O'Donnell, Teresa H.; Kaanta, Bryan

    2009-05-01

    Efficient Global Optimization (EGO) is a competent evolutionary algorithm which can be useful for problems with expensive cost functions [1,2,3,4,5]. The goal is to find the global minimum using as few function evaluations as possible. Our research indicates that EGO requires far fewer evaluations than genetic algorithms (GAs). However, neither algorithm always drills down to the absolute minimum; therefore, the addition of a final local search technique is indicated. In this paper, we introduce three "endgame" techniques. The techniques can improve optimization efficiency (fewer cost function evaluations) and, if required, they can provide very accurate estimates of the global minimum. We also report results using a different cost function than the one previously used [2,3].
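
    EGO selects its next sample by maximizing expected improvement under a Gaussian-process (kriging) surrogate. The sketch below evaluates the standard expected-improvement expression at a few candidate points given surrogate means and standard deviations; the surrogate values are invented for illustration, and no actual kriging model or endgame technique from the paper is implemented.

      import math

      # Expected improvement (EI) for minimization: given a surrogate prediction
      # (mu, sigma) at a candidate point and the best observed value f_best,
      # EI = (f_best - mu) * Phi(z) + sigma * phi(z), with z = (f_best - mu) / sigma.
      # The candidate predictions below are invented for illustration.

      def norm_pdf(z):
          return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

      def norm_cdf(z):
          return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

      def expected_improvement(mu, sigma, f_best):
          if sigma <= 0.0:
              return max(f_best - mu, 0.0)
          z = (f_best - mu) / sigma
          return (f_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

      f_best = 1.0
      candidates = [(0.9, 0.05), (1.2, 0.60), (1.05, 0.30)]   # (mu, sigma) pairs
      for mu, sigma in candidates:
          print(f"mu={mu:4.2f} sigma={sigma:4.2f}  "
                f"EI={expected_improvement(mu, sigma, f_best):.4f}")
      best = max(candidates, key=lambda c: expected_improvement(*c, f_best))
      print("next sample at candidate with (mu, sigma) =", best)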

  20. Implementation and Optimization of a Plasma Beam Combiner at NIF

    NASA Astrophysics Data System (ADS)

    Kirkwood, R. K.; Turnbull, D. P.; London, R. A.; Wilks, S. C.; Michel, P. A.; Dunlop, W. H.; Moody, J. D.; MacGowan, B. J.; Fournier, K. B.

    2015-11-01

    The seeded SBS process that is known to effectively amplify beams in ignition targets has recently been used to design a target to combine the power and energy of many beams of the NIF facility into a single beam by intersecting them in a gas. The demand for high-power beams for a variety of applications at NIF makes a demonstration of this process attractive. We will describe the plan for empirically optimizing a combiner that uses a gas-filled balloon heated by 10 quads of beams, and pumped by 5 additional frequency-tuned quads to amplify a single beam or quad. The final empirical optimization of beam wavelengths will be determined by using up to three colors in each shot. Performance and platform compatibility will also be optimized by considering designs with a CH gas fill that can be fielded at room temperature as well as a He gas fill to minimize absorption in the combiner. The logic, diagnostic configuration, and backscatter risk mitigation from two shots presently planned for NIF will also be described. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  1. Selected photographic techniques, a compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A selection has been made of methods, devices, and techniques developed in the field of photography during implementation of space and nuclear research projects. These items include many adaptations, variations, and modifications to standard hardware and practice, and should prove interesting to both amateur and professional photographers and photographic technicians. This compilation is divided into two sections. The first section presents techniques and devices that have been found useful in making photolab work simpler, more productive, and higher in quality. Section two deals with modifications to and special applications for existing photographic equipment.

  2. Fault-Tree Compiler Program

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1992-01-01

    FTC, Fault-Tree Compiler program, is reliability-analysis software tool used to calculate probability of top event of fault tree. Five different types of gates allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language of FTC easy to understand and use. Program supports hierarchical fault-tree-definition feature simplifying process of description of tree and reduces execution time. Solution technique implemented in FORTRAN, and user interface in Pascal. Written to run on DEC VAX computer operating under VMS operating system.
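
    For a sense of the computation such a tool performs, the sketch below evaluates the top-event probability of a small fault tree with AND and OR gates over independent basic events. It is a toy evaluator with a made-up tree and probabilities, not the FTC solution technique, and it omits the EXCLUSIVE OR, INVERT, and M OF N gates that FTC supports.

      # Toy fault-tree evaluation assuming independent basic events.
      # AND gate: product of input probabilities.
      # OR gate:  1 - product of (1 - p) over inputs.
      # The tree and probabilities below are invented for illustration.

      basic = {"pump_fails": 1e-3, "valve_sticks": 5e-4,
               "sensor_fails": 2e-3, "operator_error": 1e-2}

      tree = ("OR",
              ("AND", "pump_fails", "valve_sticks"),
              ("AND", "sensor_fails", "operator_error"))

      def prob(node):
          if isinstance(node, str):
              return basic[node]
          gate, *children = node
          ps = [prob(c) for c in children]
          if gate == "AND":
              out = 1.0
              for p in ps:
                  out *= p
              return out
          if gate == "OR":
              out = 1.0
              for p in ps:
                  out *= (1.0 - p)
              return 1.0 - out
          raise ValueError(f"unsupported gate: {gate}")

      print(f"top event probability: {prob(tree):.3e}")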

  3. HAL/S-FC and HAL/S-360 compiler system program description

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The compiler is a large multi-phase design and can be broken into four phases: Phase 1 inputs the source language and does a syntactic and semantic analysis generating the source listing, a file of instructions in an internal format (HALMAT) and a collection of tables to be used in subsequent phases. Phase 1.5 massages the code produced by Phase 1, performing machine independent optimization. Phase 2 inputs the HALMAT produced by Phase 1 and outputs machine language object modules in a form suitable for the OS-360 or FCOS linkage editor. Phase 3 produces the SDF tables. The four phases described are written in XPL, a language specifically designed for compiler implementation. In addition to the compiler, there is a large library containing all the routines that can be explicitly called by the source language programmer plus a large collection of routines for implementing various facilities of the language.

  4. Configuring artificial neural networks to implement function optimization

    NASA Astrophysics Data System (ADS)

    Sundaram, Ramakrishnan

    2002-04-01

    Threshold binary networks of the discrete Hopfield type lead to the efficient retrieval of the regularized least-squares (LS) solution in certain inverse problem formulations. Partitions of these networks are identified based on forms of representation of the data. The objective criterion is optimized using sequential and parallel updates on these partitions. The algorithms consist of minimizing a suboptimal objective criterion in the currently active partition. Once a local minimum is attained, an inactive partition is chosen to continue the minimization. This strategy is especially effective when substantial data must be processed by resources which are constrained either in space or available bandwidth.

  5. Implementing size-optimal discrete neural networks requires analog circuitry

    SciTech Connect

    Beiu, V.

    1998-03-01

    Neural networks (NNs) have been experimentally shown to be quite effective in many applications. This success has led researchers to undertake a rigorous analysis of the mathematical properties that enable them to perform so well. It has generated two directions of research: (i) to find existence/constructive proofs for what is now known as the universal approximation problem; (ii) to find tight bounds on the size needed by the approximation problem (or some particular cases). The paper will focus on both aspects, for the particular case when the functions to be implemented are Boolean.

  6. A controller based on Optimal Type-2 Fuzzy Logic: systematic design, optimization and real-time implementation.

    PubMed

    Fayek, H M; Elamvazuthi, I; Perumal, N; Venkatesh, B

    2014-09-01

    A computationally-efficient systematic procedure to design an Optimal Type-2 Fuzzy Logic Controller (OT2FLC) is proposed. The main scheme is to optimize the gains of the controller using Particle Swarm Optimization (PSO), then optimize only two parameters per type-2 membership function using Genetic Algorithm (GA). The proposed OT2FLC was implemented in real-time to control the position of a DC servomotor, which is part of a robotic arm. The performance judgments were carried out based on the Integral Absolute Error (IAE), as well as the computational cost. Various type-2 defuzzification methods were investigated in real-time. A comparative analysis with an Optimal Type-1 Fuzzy Logic Controller (OT1FLC) and a PI controller demonstrated OT2FLC's superiority, which is evident in handling uncertainty and imprecision induced in the system by means of noise and disturbances. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Experimental implementation of an adiabatic quantum optimization algorithm

    NASA Astrophysics Data System (ADS)

    Steffen, Matthias; van Dam, Wim; Hogg, Tad; Breyta, Greg; Chuang, Isaac

    2003-03-01

    A novel quantum algorithm using adiabatic evolution was recently presented by Ed Farhi [1] and Tad Hogg [2]. This algorithm represents a remarkable discovery because it offers new insights into the usefulness of quantum resources. An experimental demonstration of an adiabatic algorithm has remained beyond reach because it requires an experimentally accessible Hamiltonian which encodes the problem and which must also be smoothly varied over time. We present tools to overcome these difficulties by discretizing the algorithm and extending average Hamiltonian techniques [3]. We used these techniques in the first experimental demonstration of an adiabatic optimization algorithm: solving an instance of the MAXCUT problem using three qubits and nuclear magnetic resonance techniques. We show that there exists an optimal run-time of the algorithm which can be predicted using a previously developed decoherence model. [1] E. Farhi et al., quant-ph/0001106 (2000) [2] T. Hogg, PRA, 61, 052311 (2000) [3] W. Rhim, A. Pines, J. Waugh, PRL, 24,218 (1970)

  8. Compiler Acceptance Criteria Guidebook

    DTIC Science & Technology

    1977-05-01

    Programs generated by a compiler, coupled with the expected life of the compiled program (the number of times it will be used), can often make this aspect a concern. Cost items of concern include CPU time per statement or program, core usage, I/O access time, wait or dead time, disk storage, and tape drive mounts. In large part these are left to the discretion of the specification agency. Another often neglected cost item is the level of expertise required.

  9. Local structural modeling for implementation of optimal active damping

    NASA Astrophysics Data System (ADS)

    Blaurock, Carl A.; Miller, David W.

    1993-09-01

    Local controllers are good candidates for active control of flexible structures. Local control generally consists of low order, frequency benign compensators using collocated hardware. Positive real compensators and plant transfer functions ensure that stability margins and performance robustness are high. The typical design consists of an experimentally chosen gain on a fixed form controller such as rate feedback. The resulting compensator performs some combination of damping (dissipating energy) and structural modification (changing the energy flow paths). Recent research into structural impedance matching has shown how to optimize dissipation based on the local behavior of the structure. This paper investigates the possibility of improving performance by influencing global energy flow, using local controllers designed using a global performance metric.

  10. Economic Implementation and Optimization of Secondary Oil Recovery

    SciTech Connect

    Cary D. Brock

    2006-01-09

    The St Mary West Barker Sand Unit (SMWBSU or Unit) located in Lafayette County, Arkansas was unitized for secondary recovery operations in 2002 followed by installation of a pilot injection system in the fall of 2003. A second downdip water injection well was added to the pilot project in 2005 and 450,000 barrels of saltwater has been injected into the reservoir sand to date. Daily injection rates have been improved over initial volumes by hydraulic fracture stimulation of the reservoir sand in the injection wells. Modifications to the injection facilities are currently being designed to increase water injection rates for the pilot flood. A fracture treatment on one of the production wells resulted in a seven-fold increase of oil production. Recent water production and increased oil production in a producer closest to the pilot project indicates possible response to the water injection. The reservoir and wellbore injection performance data obtained during the pilot project will be important to the secondary recovery optimization study for which the DOE grant was awarded. The reservoir characterization portion of the modeling and simulation study is in progress by Strand Energy project staff under the guidance of University of Houston Department of Geosciences professor Dr. Janok Bhattacharya and University of Texas at Austin Department of Petroleum and Geosystems Engineering professor Dr. Larry W. Lake. A geologic and petrophysical model of the reservoir is being constructed from geophysical data acquired from core, well log and production performance histories. Possible use of an outcrop analog to aid in three dimensional, geostatistical distribution of the flow unit model developed from the wellbore data will be investigated. The reservoir model will be used for full-field history matching and subsequent fluid flow simulation based on various injection schemes including patterned water flooding, addition of alkaline surfactant-polymer (ASP) to the injected water

  11. Power-Aware Compiler Controllable Chip Multiprocessor

    NASA Astrophysics Data System (ADS)

    Shikano, Hiroaki; Shirako, Jun; Wada, Yasutaka; Kimura, Keiji; Kasahara, Hironori

    A power-aware compiler controllable chip multiprocessor (CMP) is presented and its performance and power consumption are evaluated with the optimally scheduled advanced multiprocessor (OSCAR) parallelizing compiler. The CMP is equipped with power control registers that change clock frequency and power supply voltage to functional units including processor cores, memories, and an interconnection network. The OSCAR compiler carries out coarse-grain task parallelization of programs and reduces power consumption using architectural power control support and the compiler's power saving scheme. The performance evaluation shows that MPEG-2 encoding on the proposed CMP with four CPUs results in 82.6% power reduction in real-time execution mode with a deadline constraint on its sequential execution time. Furthermore, MP3 encoding on a heterogeneous CMP with four CPUs and four accelerators results in 53.9% power reduction at 21.1-fold speed-up in performance against its sequential execution in the fastest execution mode.

  12. Metallurgical processing: A compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The items in this compilation, all relating to metallurgical processing, are presented in two sections. The first section includes processes which are general in scope and applicable to a variety of metals or alloys. The second describes the processes that concern specific metals and their alloys.

  13. Realistic nurse-led policy implementation, optimization and evaluation: novel methodological exemplar.

    PubMed

    Noyes, Jane; Lewis, Mary; Bennett, Virginia; Widdas, David; Brombley, Karen

    2014-01-01

    To report the first large-scale realistic nurse-led implementation, optimization and evaluation of a complex children's continuing-care policy. Health policies are increasingly complex, involve multiple Government departments and frequently fail to translate into better patient outcomes. Realist methods have not yet been adapted for policy implementation. Research methodology - Evaluation using theory-based realist methods for policy implementation. An expert group developed the policy and supporting tools. Implementation and evaluation design integrated diffusion of innovation theory with multiple case study and adapted realist principles. Practitioners in 12 English sites worked with Consultant Nurse implementers to manipulate the programme theory and logic of new decision-support tools and care pathway to optimize local implementation. Methods included key-stakeholder interviews, developing practical diffusion of innovation processes using key-opinion leaders and active facilitation strategies and a mini-community of practice. New and existing processes and outcomes were compared for 137 children during 2007-2008. Realist principles were successfully adapted to a shorter policy implementation and evaluation time frame. Important new implementation success factors included facilitated implementation that enabled 'real-time' manipulation of programme logic and local context to best-fit evolving theories of what worked; using local experiential opinion to change supporting tools to more realistically align with local context and what worked; and having sufficient existing local infrastructure to support implementation. Ten mechanisms explained implementation success and differences in outcomes between new and existing processes. Realistic policy implementation methods have advantages over top-down approaches, especially where clinical expertise is low and unlikely to diffuse innovations 'naturally' without facilitated implementation and local optimization. © 2013

  14. The fault-tree compiler

    NASA Technical Reports Server (NTRS)

    Martensen, Anna L.; Butler, Ricky W.

    1987-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer that is precise to five digits (within the limits of double-precision floating-point arithmetic). The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
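
    As a rough illustration of how such gate types combine probabilities, the following Python sketch evaluates a small fault tree under the simplifying assumption of independent, non-repeated basic events with fixed failure probabilities; the actual FTC tool additionally handles failure rates, hierarchical definitions and sensitivity sweeps, and the tree and numbers below are made up.

        from math import prod

        def prob(node, basic):
            """Top-event probability for a fault tree whose leaves are independent
            basic events (no repeated events across branches)."""
            if isinstance(node, str):                     # leaf: basic-event name
                return basic[node]
            gate, *args = node
            if gate == "AND":
                return prod(prob(a, basic) for a in args)
            if gate == "OR":
                return 1.0 - prod(1.0 - prob(a, basic) for a in args)
            if gate == "XOR":                             # exactly one of two inputs
                p, q = (prob(a, basic) for a in args)
                return p * (1 - q) + q * (1 - p)
            if gate == "INVERT":
                return 1.0 - prob(args[0], basic)
            if gate == "MOFN":                            # ("MOFN", m, child1, child2, ...)
                m, children = args[0], args[1:]
                counts = [1.0]                            # counts[k] = P(k children failed)
                for c in children:
                    p = prob(c, basic)
                    counts = [
                        (counts[k] * (1 - p) if k < len(counts) else 0.0)
                        + (counts[k - 1] * p if k > 0 else 0.0)
                        for k in range(len(counts) + 1)
                    ]
                return sum(counts[m:])
            raise ValueError(gate)

        basic = {"A": 1e-3, "B": 2e-3, "C": 5e-4, "D": 1e-3, "E": 2e-3}
        tree = ("OR", ("AND", "A", "B"), ("MOFN", 2, "C", "D", "E"))
        print(prob(tree, basic))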

  15. Evaluation of a multicore-optimized implementation for tomographic reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far.

  16. Development and implementation of a rail current optimization program

    SciTech Connect

    King, T.L.; Dharamshi, R.; Kim, K.; Zhang, J.; Tompkins, M.W.; Anderson, M.A.; Feng, Q.

    1997-01-01

    Efforts are underway to automate the operation of a railgun hydrogen pellet injector for fusion reactor refueling. A plasma armature is employed to avoid the friction produced by a sliding metal armature and, in particular, to prevent high-Z impurities from entering the tokamak. High currents are used to achieve high accelerations, resulting in high plasma temperatures. Consequently, the plasma armature ablates and accumulates material from the pellet and gun barrel. This increases inertial and viscous drag, lowering acceleration. A railgun model has been developed to compute the acceleration in the presence of these losses. In order to quantify these losses, the ablation coefficient, {alpha}, and drag coefficient, C{sub d}, must be determined. These coefficients are estimated based on the pellet acceleration. The sensitivity of acceleration to {alpha} and C{sub d} has been calculated using the model. Once {alpha} and C{sub d} have been determined, their values are applied to the model to compute the appropriate current pulse width. An optimization program was written in LabVIEW software to carry out this procedure. This program was then integrated into the existing code used to operate the railgun system. Preliminary results obtained after test firing the gun indicate that the program computes reasonable values for {alpha} and C{sub d} and calculates realistic pulse widths.

  17. Evaluation of a Multicore-Optimized Implementation for Tomographic Reconstruction

    PubMed Central

    Agulleiro, Jose-Ignacio; Fernández, José Jesús

    2012-01-01

    Tomography allows elucidation of the three-dimensional structure of an object from a set of projection images. In life sciences, electron microscope tomography is providing invaluable information about the cell structure at a resolution of a few nanometres. Here, large images are required to combine wide fields of view with high resolution requirements. The computational complexity of the algorithms along with the large image size then turns tomographic reconstruction into a computationally demanding problem. Traditionally, high-performance computing techniques have been applied to cope with such demands on supercomputers, distributed systems and computer clusters. In the last few years, the trend has turned towards graphics processing units (GPUs). Here we present a detailed description and a thorough evaluation of an alternative approach that relies on exploitation of the power available in modern multicore computers. The combination of single-core code optimization, vector processing, multithreading and efficient disk I/O operations succeeds in providing fast tomographic reconstructions on standard computers. The approach turns out to be competitive with the fastest GPU-based solutions thus far. PMID:23139768

  18. Implementation of Emission Trading in Carbon Dioxide Sequestration Optimization Management

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Duncan, I.

    2013-12-01

    As an effective mid- and long-term solution for large-scale mitigation of industrial CO2 emissions, CO2 capture and sequestration (CCS) has received more and more attention in the past decades. A general CCS management system has the complex characteristics of multiple emission sources, multiple mitigation technologies, multiple sequestration sites, and multiple project periods. Trade-offs exist among numerous environmental, economic, political, and technical factors, leading to varied system features. Sound decision alternatives are thus desired to provide decision support for decision makers or managers in managing such a CCS system from capture through final geologic storage. Carbon emission trading has been developed as a cost-effective tool for reducing global greenhouse gas emissions. In this study, a carbon capture and sequestration optimization management model is proposed to address the above issues. Carbon emission trading is integrated into the model, and its impacts on the resulting management decisions are analyzed. A multi-source multi-period case study is provided to justify the applicability of the modeling approach, where uncertainties in modeling parameters are also dealt with.

  19. An Adaptation of the ADA Language for Machine Generated Compilers.

    DTIC Science & Technology

    1980-12-01

    Machine-generated compiler tools such as LEX and YACC, available under the UNIX operating system, are then used to implement the scanner and the parser.

  20. An Approach for Dynamic Optimization of Prevention Program Implementation in Stochastic Environments

    NASA Astrophysics Data System (ADS)

    Kang, Yuncheol; Prabhu, Vittal

    The science of preventing youth problems has significantly advanced in developing evidence-based prevention program (EBP) by using randomized clinical trials. Effective EBP can reduce delinquency, aggression, violence, bullying and substance abuse among youth. Unfortunately the outcomes of EBP implemented in natural settings usually tend to be lower than in clinical trials, which has motivated the need to study EBP implementations. In this paper we propose to model EBP implementations in natural settings as stochastic dynamic processes. Specifically, we propose Markov Decision Process (MDP) for modeling and dynamic optimization of such EBP implementations. We illustrate these concepts using simple numerical examples and discuss potential challenges in using such approaches in practice.
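
    The paper's actual EBP model is not specified in the abstract; the sketch below only illustrates the MDP machinery it proposes, using a toy implementation-fidelity example with made-up states, transition probabilities and rewards, solved by standard value iteration.

        import numpy as np

        # Toy MDP for an EBP implementation: states are fidelity levels of programme
        # delivery, actions are "monitor only" vs "coach staff" (all numbers hypothetical).
        states = ["low", "medium", "high"]
        actions = ["monitor", "coach"]
        P = {  # P[a][s, s'] : transition probabilities
            "monitor": np.array([[0.8, 0.2, 0.0],
                                 [0.3, 0.6, 0.1],
                                 [0.1, 0.3, 0.6]]),
            "coach":   np.array([[0.4, 0.5, 0.1],
                                 [0.1, 0.6, 0.3],
                                 [0.0, 0.2, 0.8]]),
        }
        R = {"monitor": np.array([0.0, 1.0, 2.0]),       # outcome utility minus cost
             "coach":   np.array([-0.5, 0.5, 1.5])}      # coaching costs resources
        gamma = 0.95

        V = np.zeros(len(states))
        for _ in range(500):                             # value iteration
            Q = np.array([R[a] + gamma * P[a] @ V for a in actions])
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < 1e-8:
                V = V_new
                break
            V = V_new
        policy = [actions[i] for i in Q.argmax(axis=0)]
        print(dict(zip(states, zip(policy, V.round(2)))))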

  1. Metallurgy: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A compilation on the technical uses of various metallurgical processes is presented. Descriptions are given of the mechanical properties of various alloys, ranging from TAZ-813 at 2200 F to investment cast alloy 718 at -320 F. Methods are also described for analyzing some of the constituents of various alloys from optical properties of carbide precipitates in Rene 41 to X-ray spectrographic analysis of the manganese content of high chromium steels.

  2. Valve technology: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A technical compilation on the types, applications and modifications to certain valves is presented. Data cover the following: (1) valves that feature automatic response to stimuli (thermal, electrical, fluid pressure, etc.), (2) modified valves changed by redesign of components to increase initial design effectiveness or give the item versatility beyond its basic design capability, and (3) special purpose valves with limited application as presented, but lending themselves to other uses with minor changes.

  3. Fabrication technology: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A compilation is presented which supplies technical information on the assembly of diverse components into functional assemblies and subassemblies, as well as information on several fasteners and fastening techniques that join components, subassemblies, and complete assemblies to achieve a functional unit. Quick-disconnect fasteners are described, along with several devices and methods for attaching thermal insulators, and for joining and separating objects in the absence of gravity.

  4. FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar

    NASA Astrophysics Data System (ADS)

    Azim, Noor ul; Jun, Wang

    2016-11-01

    Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about target parameters such as range, speed and direction. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms that are used to improve target detection and range resolution and to estimate the speed of a target. The algorithms are first simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into a hardware implementation on a Xilinx FPGA; the chosen FPGA is the Xilinx Virtex-6 (XC6LVX75T). For the hardware implementation, pipelining is adopted and other factors are considered to optimize resource usage. The algorithms on which this work focuses for improving target detection, range resolution and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse Doppler processing.
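
    The FPGA pipeline itself is not reproduced here, but the two processing stages named above can be sketched in a few lines of NumPy with made-up radar parameters: fast-convolution pulse compression along fast time, followed by a Doppler FFT across pulses.

        import numpy as np

        fs, B, T = 20e6, 5e6, 20e-6          # sample rate, sweep bandwidth, pulse width (assumed)
        t = np.arange(int(T * fs)) / fs
        chirp = np.exp(1j * np.pi * (B / T) * t**2)       # LFM reference pulse

        n_pulses, prf, fd = 64, 5e3, 1.2e3                # pulses per CPI, PRF, target Doppler
        delay, n_fast = 80, 512                           # target range bin, fast-time samples
        rng = np.random.default_rng(0)
        data = np.zeros((n_pulses, n_fast), dtype=complex)
        for m in range(n_pulses):
            echo = np.zeros(n_fast, dtype=complex)
            echo[delay:delay + len(chirp)] = chirp * np.exp(2j * np.pi * fd * m / prf)
            data[m] = echo + 0.1 * (rng.normal(size=n_fast) + 1j * rng.normal(size=n_fast))

        # Pulse compression: fast-convolution matched filter along range (fast time)
        H = np.conj(np.fft.fft(chirp, n_fast))
        compressed = np.fft.ifft(np.fft.fft(data, axis=1) * H, axis=1)

        # Pulse-Doppler processing: FFT across pulses (slow time) in each range bin
        rd_map = np.fft.fftshift(np.fft.fft(compressed, axis=0), axes=0)
        peak = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
        doppler_axis = np.fft.fftshift(np.fft.fftfreq(n_pulses, d=1 / prf))
        print("detected range bin:", peak[1], " Doppler (Hz): %.0f" % doppler_axis[peak[0]])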

  5. Criteria for Evaluating the Performance of Compilers

    DTIC Science & Technology

    1974-10-01

    ...programs, except remove all statement labels. Subtract the base values obtained by compiling and running a program containing the... Thesis on Optimal Evaluation Order for Expressions with Redundant Subexpressions, Computer Science Department, Carnegie-Mellon University, Pittsburgh.

  6. Implementation of pattern-specific illumination pupil optimization on Step & Scan systems

    NASA Astrophysics Data System (ADS)

    Engelen, Andre; Socha, Robert J.; Hendrickx, Eric; Scheepers, Wieger; Nowak, Frank; Van Dam, Marco; Liebchen, Armin; Faas, Denis A.

    2004-05-01

    Step & Scan systems are pushed towards low-k1 applications. Contrast enhancement techniques are crucial for successful implementation of these applications in a production environment. An NA/sigma illumination mode optimizer and a contrast-based optimization algorithm are implemented in LithoCruiser in order to optimize the illumination setting and illumination pupil for a specific repetitive pattern. Calculated illumination pupils have been realized using Diffractive Optical Elements (DOE), which are supported by ASML's AERIAL II illuminator. The qualification of the illumination pupil is done using inline metrology on the ASML Step & Scan system. This paper describes the process of pattern-specific illumination optimization for a given mask. Multiple examples will be used to demonstrate the advantage of using non-standard illumination pupils.

  7. A rapid prototyping methodology to implement and optimize image processing algorithms for FPGAs

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed; Niang, Pierre; Grandpierre, Thierry

    2006-02-01

    In this article we present local operations in image processing based upon spatial 2D discrete convolution. We study different implementations of such local operations. We also present the principles and the design flow of the AAA methodology and its associated CAD software tool for integrated circuits (SynDEx-IC). In this methodology, the algorithm is modeled by a Conditioned (if-then-else) and Factorized (loop) Data Dependence Graph, and the optimized implementation is obtained by graph transformations. AAA/SynDEx-IC is used to specify and optimize some digital image filters on an FPGA XC2100 board.

  8. Optical implementations of the optimal phase-covariant quantum cloning machine

    SciTech Connect

    Fiurasek, Jaromir

    2003-05-01

    We propose two simple implementations of the optimal symmetric 1{yields}2 phase-covariant cloning machine for qubits. The first scheme is designed for qubits encoded into polarization states of photons and it involves a mixing of two photons on an unbalanced beam splitter. This scheme is probabilistic and the cloning succeeds with the probability 1/3. In the second setup, the qubits are represented by the states of Rydberg atoms and the cloning is accomplished by the resonant interaction of the atoms with a microwave field confined in a high-Q cavity. This latter approach allows for deterministic implementation of the optimal cloning transformation.

  9. Teleportation scheme implementing the universal optimal quantum cloning machine and the universal NOT gate.

    PubMed

    Ricci, M; Sciarrino, F; Sias, C; De Martini, F

    2004-01-30

    By a significant modification of the standard protocol of quantum state teleportation, two processes "forbidden" by quantum mechanics in their exact form, the universal NOT gate and the universal optimal quantum cloning machine, have been implemented contextually and optimally by a fully linear method. In particular, the first experimental demonstration of the tele-UNOT gate, a novel quantum information protocol, has been reported. The experimental results are found in full agreement with theory.

  10. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm's data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block of data, to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and the basic linear algebra subroutines library. Through experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications for evaluation: spectral unmixing and classification. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction requirement.
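
    As a CPU-side illustration of the underlying transform (the GPU kernels and the paper's exact noise estimator are not reproduced), the sketch below estimates the noise covariance from horizontal neighbour differences and orders components by signal-to-noise ratio via a generalized eigenproblem, applied to a small synthetic cube; all data and the shift-difference noise model are assumptions.

        import numpy as np
        from scipy.linalg import eigh

        def mnf(cube, n_components=10):
            """Maximum noise fraction transform of a hyperspectral cube (rows, cols, bands).
            Noise covariance is estimated from horizontal neighbour differences, then a
            generalized eigenproblem orders components by signal-to-noise ratio."""
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands).astype(float)
            X = X - X.mean(axis=0)
            # noise estimate: difference of horizontally adjacent pixels (shift-difference method)
            diff = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2.0)
            cov_noise = np.cov(diff, rowvar=False)
            cov_signal = np.cov(X, rowvar=False)
            # solve cov_signal v = lambda cov_noise v ; large lambda = high SNR
            eigvals, eigvecs = eigh(cov_signal, cov_noise)
            order = np.argsort(eigvals)[::-1][:n_components]
            return X @ eigvecs[:, order], eigvals[order]

        # tiny synthetic cube: 2 spectral sources plus noise
        rng = np.random.default_rng(0)
        abund = rng.random((64, 64, 2))
        endmembers = rng.random((2, 30))
        cube = abund @ endmembers + 0.05 * rng.normal(size=(64, 64, 30))
        features, snr = mnf(cube, n_components=5)
        print(features.shape, snr.round(1))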

  11. An implementation of particle swarm optimization to evaluate optimal under-voltage load shedding in competitive electricity markets

    NASA Astrophysics Data System (ADS)

    Hosseini-Bioki, M. M.; Rashidinejad, M.; Abdollahi, A.

    2013-11-01

    Load shedding is a crucial issue in power systems especially under restructured electricity environment. Market-driven load shedding in reregulated power systems associated with security as well as reliability is investigated in this paper. A technoeconomic multi-objective function is introduced to reveal an optimal load shedding scheme considering maximum social welfare. The proposed optimization problem includes maximum GENCOs and loads' profits as well as maximum loadability limit under normal and contingency conditions. Particle swarm optimization (PSO) as a heuristic optimization technique, is utilized to find an optimal load shedding scheme. In a market-driven structure, generators offer their bidding blocks while the dispatchable loads will bid their price-responsive demands. An independent system operator (ISO) derives a market clearing price (MCP) while rescheduling the amount of generating power in both pre-contingency and post-contingency conditions. The proposed methodology is developed on a 3-bus system and then is applied to a modified IEEE 30-bus test system. The obtained results show the effectiveness of the proposed methodology in implementing the optimal load shedding satisfying social welfare by maintaining voltage stability margin (VSM) through technoeconomic analyses.

  12. Optimization of Optical Systems Using Genetic Algorithms: a Comparison Among Different Implementations of The Algorithm

    NASA Astrophysics Data System (ADS)

    López-Medina, Mario E.; Vázquez-Montiel, Sergio; Herrera-Vázquez, Joel

    2008-04-01

    Genetic Algorithms (GAs) are a method of global optimization that we use in the optimization stage of optical system design. In the case of optical design and optimization, the efficiency and convergence speed of GAs are related to the merit function, the crossover operator, and the mutation operator. In this study we present a comparison between several genetic algorithm implementations using different optical systems, such as an achromatic cemented doublet, an air-spaced doublet and telescopes. We make the comparison varying the type of design parameters and the number of parameters to be optimized. We also implement the GAs using discrete parameters with binary chains and continuous parameters using real numbers in the chromosome, analyzing the differences in the time taken to find the solution and in the precision of the results between discrete and continuous parameters. Additionally, we use different merit functions to optimize the same optical system. We present the obtained results in tables, graphics and a detailed example, and from the comparison we conclude which is the best way to implement GAs for the design and optimization of optical systems. The programs developed for this work were written in the C programming language, with OSLO used for the simulation of the optical systems.
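
    A minimal sketch of the continuous-parameter variant follows, with a hypothetical quadratic merit function standing in for the ray-traced merit function of a real optical design; it shows real-valued chromosomes, tournament selection, blend crossover, Gaussian mutation and elitism, all with assumed settings.

        import numpy as np

        def merit(params):
            """Hypothetical merit function standing in for ray-traced aberrations:
            a sum of squared residuals with a known minimum at params = target."""
            target = np.array([1.2, -0.4, 3.0, 0.7])
            return float(np.sum((params - target) ** 2))

        def ga_real(cost, bounds, pop_size=40, gens=100, pc=0.8, pm=0.1, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
            for _ in range(gens):
                fit = np.array([cost(ind) for ind in pop])
                # tournament selection between random pairs
                idx = rng.integers(0, pop_size, size=(pop_size, 2))
                parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
                # arithmetic (blend) crossover on real-valued chromosomes
                children = parents.copy()
                for i in range(0, pop_size - 1, 2):
                    if rng.random() < pc:
                        a = rng.random()
                        children[i]     = a * parents[i] + (1 - a) * parents[i + 1]
                        children[i + 1] = a * parents[i + 1] + (1 - a) * parents[i]
                # Gaussian mutation, clipped to the variable bounds
                mask = rng.random(children.shape) < pm
                children = np.clip(children + mask * rng.normal(0, 0.1, children.shape), lo, hi)
                children[0] = pop[fit.argmin()]        # elitism: keep the previous best
                pop = children
            fit = np.array([cost(ind) for ind in pop])
            return pop[fit.argmin()], fit.min()

        best, best_fit = ga_real(merit, bounds=[(-5, 5)] * 4)
        print("best chromosome:", best.round(3), " merit:", round(best_fit, 6))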

  13. Ada Compiler Validation Implementers’ Guide,

    DTIC Science & Technology

    1980-10-01

    ...number is only prime if MANTISSA is prime, in which case 'LARGE is then a Mersenne prime. The following values of MANTISSA give rise to Mersenne primes: 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607 (see Knuth Vol. 2, p. 356). The factorizations used in the test programs are as follows: N = 27: 134 217 727 = 262 657 * 511; N = 28: 268 435 455 = 16 385 * 16 383; N = 29: 536 870 911 = 2 304 167 * 233; N = 30: 1 073 741 823 = 32 769 * 32 767; N = 31: 2 147 483 647 is prime.

  14. Fault-Tree Compiler

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Boerschlein, David P.

    1993-01-01

    Fault-Tree Compiler (FTC) program is software tool used to calculate probability of top event in fault tree. Gates of five different types allowed in fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. High-level input language easy to understand and use. In addition, program supports hierarchical fault-tree definition feature, which simplifies tree-description process and reduces execution time. Set of programs created forming basis for reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C language, and FORTRAN 77. Other versions available upon request.

  15. An optimized implementation of a fault-tolerant clock synchronization circuit

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    1995-01-01

    A fault-tolerant clock synchronization circuit was designed and tested. A comparison to a previous design and the procedure followed to achieve the current optimization are included. The report also includes a description of the system and the results of tests performed to study the synchronization and fault-tolerant characteristics of the implementation.

  16. Fuzzy logic analysis optimizations for pattern recognition - Implementation and experimental results

    NASA Astrophysics Data System (ADS)

    Hires, Matej; Habiballa, Hashim

    2017-07-01

    The article presents practical results of optimizing the fuzzy logic analysis method for pattern recognition. The theoretical background of the proposed approach, which extends the original fuzzy logic analysis method, was given in a former article; this article shows the implementation and experimental verification of the approach.

  17. HAL/S-360 compiler test activity report

    NASA Technical Reports Server (NTRS)

    Helmers, C. T.

    1974-01-01

    The levels of testing employed in verifying the HAL/S-360 compiler were as follows: (1) typical applications program case testing; (2) functional testing of the compiler system and its generated code; and (3) machine oriented testing of compiler implementation on operational computers. Details of the initial test plan and subsequent adaptation are reported, along with complete test results for each phase which examined the production of object codes for every possible source statement.

  18. Atomic mass compilation 2012

    SciTech Connect

    Pfeiffer, B.; Venkataramaniah, K.; Czok, U.; Scheidenberger, C.

    2014-03-15

    Atomic mass reflects the total binding energy of all nucleons in an atomic nucleus. Compilations and evaluations of atomic masses and derived quantities, such as neutron or proton separation energies, are indispensable tools for research and applications. In the last decade, the field has evolved rapidly after the advent of new production and measuring techniques for stable and unstable nuclei, resulting in substantial improvements in the body of data and its precision. Here, we present a compilation of atomic masses comprising the data from the evaluation of 2003 as well as the results of new measurements performed since then. The relevant literature in refereed journals and reports, as far as available, was scanned for the period from the beginning of 2003 up to and including April 2012. Overall, 5750 new data points have been collected. Recommended values for the relative atomic masses have been derived and a comparison with the 2003 Atomic Mass Evaluation has been performed. This work has been carried out in collaboration with and as a contribution to the European Nuclear Structure and Decay Data Network of Evaluations.

  19. An optimized ultrasound digital beamformer with dynamic focusing implemented on FPGA.

    PubMed

    Almekkawy, Mohamed; Xu, Jingwei; Chirala, Mohan

    2014-01-01

    We present a resource-optimized dynamic digital beamformer for an ultrasound system based on a field-programmable gate array (FPGA). A comprehensive 64-channel receive beamformer with full dynamic focusing is embedded in the Altera Arria V FPGA chip. To improve spatial and contrast resolution, full dynamic beamforming is implemented by a novel method with resource optimization. This was achieved by implementing the delay summation through a bulk (coarse) delay and a fractional (fine) delay. The sampling frequency is 40 MHz and the beamformer includes a 240 MHz polyphase filter that enhances the temporal resolution of the system while relaxing the analog-to-digital converter (ADC) bandwidth requirement. The results indicate that our 64-channel dynamic beamformer architecture is amenable to a low-power FPGA-based implementation in a portable ultrasound system.
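
    The FPGA datapath and polyphase filter are not reproduced here; the sketch below only illustrates the coarse/fine delay idea for one focal point, splitting each per-channel delay into an integer sample shift and a fractional part applied by linear interpolation. The array geometry, pulse and delay values are all assumptions for the demo.

        import numpy as np

        def delay_and_sum(rf, delays_samples):
            """Delay-and-sum for one focal point. rf: (channels, samples).
            Each per-channel delay is split into an integer (coarse) sample shift and a
            fractional (fine) part applied by linear interpolation between samples."""
            out = 0.0
            n_s = rf.shape[1]
            for ch, d in enumerate(delays_samples):
                coarse = int(np.floor(d))
                fine = d - coarse
                if 0 <= coarse < n_s - 1:
                    out += (1 - fine) * rf[ch, coarse] + fine * rf[ch, coarse + 1]
            return out

        fs, c, f0 = 40e6, 1540.0, 5e6                 # sample rate, sound speed, pulse centre freq.
        pitch = 0.3e-3
        elements = (np.arange(32) - 15.5) * pitch     # element x-positions (m)
        focus_z = 5e-3
        dists = np.sqrt(elements**2 + focus_z**2)
        delays = (dists - dists.min()) / c * fs       # relative receive delays (fractional samples)

        n = np.arange(2048)
        rf = np.zeros((32, 2048))
        for ch in range(32):                          # synthesize a delayed Gaussian-windowed echo
            tau = n - 500 - delays[ch]
            rf[ch] = np.cos(2 * np.pi * f0 / fs * tau) * np.exp(-(tau / 20.0) ** 2)

        focused = delay_and_sum(rf, delays + 500)     # correct dynamic-focus delays
        unfocused = delay_and_sum(rf, np.full(32, 500.0))
        print("focused: %.2f   unfocused: %.2f" % (focused, unfocused))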

  20. Proof-Carrying Code with Correct Compilers

    NASA Technical Reports Server (NTRS)

    Appel, Andrew W.

    2009-01-01

    In the late 1990s, proof-carrying code was able to produce machine-checkable safety proofs for machine-language programs even though (1) it was impractical to prove correctness properties of source programs and (2) it was impractical to prove correctness of compilers. But now it is practical to prove some correctness properties of source programs, and it is practical to prove correctness of optimizing compilers. We can produce more expressive proof-carrying code, that can guarantee correctness properties for machine code and not just safety. We will construct program logics for source languages, prove them sound w.r.t. the operational semantics of the input language for a proved-correct compiler, and then use these logics as a basis for proving the soundness of static analyses.

  1. Galileo Outreach Compilation

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This NASA JPL (Jet Propulsion Laboratory) video production is a compilation of the best short movies and computer simulation/animations of the Galileo spacecraft's journey to Jupiter. A limited number of actual shots are presented of Jupiter and its natural satellites. Most of the video is comprised of computer animations of the spacecraft's trajectory, encounters with the Galilean satellites Io, Europa and Ganymede, as well as their atmospheric and surface structures. Computer animations of plasma wave observations of Ganymede's magnetosphere, a surface gravity map of Io, the Galileo/Io flyby, the Galileo space probe orbit insertion around Jupiter, and actual shots of Jupiter's Great Red Spot are presented. Panoramic views of our Earth (from orbit) and moon (from orbit) as seen from Galileo as well as actual footage of the Space Shuttle/Galileo liftoff and Galileo's space probe separation are also included.

  2. The Study of Cross-layer Optimization for Wireless Rechargeable Sensor Networks Implemented in Coal Mines.

    PubMed

    Ding, Xu; Shi, Lei; Han, Jianghong; Lu, Jingting

    2016-01-28

    Wireless sensor networks deployed in coal mines could help companies provide workers working in coal mines with more qualified working conditions. With the underground information collected by sensor nodes at hand, the underground working conditions could be evaluated more precisely. However, sensor nodes may tend to malfunction due to their limited energy supply. In this paper, we study the cross-layer optimization problem for wireless rechargeable sensor networks implemented in coal mines, whose energy can be replenished through the newly developed wireless energy transfer technique. The main results of this article are two-fold: firstly, we obtain the optimal relay nodes' placement according to the minimum overall energy consumption criterion through the Lagrange dual problem and KKT conditions; secondly, the optimal strategies for recharging locomotives and wireless sensor networks are acquired by solving a cross-layer optimization problem. The cyclic nature of these strategies is also manifested through simulations in this paper.

  3. Structural Performance’s Optimally Analysing and Implementing Based on ANSYS Technology

    NASA Astrophysics Data System (ADS)

    Han, Na; Wang, Xuquan; Yue, Haifang; Sun, Jiandong; Wu, Yongchun

    2017-06-01

    Computer-Aided Engineering (CAE) is a hotspot both in academia and in modern engineering practice. The ANSYS simulation software, thanks to its excellent performance, has become an outstanding member of the CAE family; it is committed to innovation in engineering simulation, helping users shorten the design process and improve product innovation and performance. Aiming to explore a structural-performance optimal-analysis model for engineering enterprises, this paper introduces CAE and its development, analyzes the necessity for structural optimal analysis as well as the framework of structural optimal analysis based on ANSYS technology, and uses ANSYS to implement an optimal analysis of the structural performance of a reinforced concrete slab, displaying the displacement vector chart and the stress intensity chart. Finally, the paper compares the ANSYS simulation results with measured results and shows that ANSYS is an indispensable engineering calculation tool.

  4. The Study of Cross-layer Optimization for Wireless Rechargeable Sensor Networks Implemented in Coal Mines

    PubMed Central

    Ding, Xu; Shi, Lei; Han, Jianghong; Lu, Jingting

    2016-01-01

    Wireless sensor networks deployed in coal mines could help companies provide workers working in coal mines with more qualified working conditions. With the underground information collected by sensor nodes at hand, the underground working conditions could be evaluated more precisely. However, sensor nodes may tend to malfunction due to their limited energy supply. In this paper, we study the cross-layer optimization problem for wireless rechargeable sensor networks implemented in coal mines, whose energy can be replenished through the newly developed wireless energy transfer technique. The main results of this article are two-fold: firstly, we obtain the optimal relay nodes' placement according to the minimum overall energy consumption criterion through the Lagrange dual problem and KKT conditions; secondly, the optimal strategies for recharging locomotives and wireless sensor networks are acquired by solving a cross-layer optimization problem. The cyclic nature of these strategies is also manifested through simulations in this paper. PMID:26828500

  5. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
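
    A minimal software-only sketch of the idea follows: two interchangeable implementations of the same function are profiled across one input dimension, and a selector dispatches to whichever was fastest for the nearest profiled size. The patent covers a far more general multi-dimensional collection and analysis scheme; the functions and sizes below are made up.

        import time
        import numpy as np

        def conv_direct(x, h):
            return np.convolve(x, h)

        def conv_fft(x, h):
            n = len(x) + len(h) - 1
            return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

        IMPLEMENTATIONS = [conv_direct, conv_fft]

        def profile(sizes, trials=3):
            """Collect timing data for each implementation across one input dimension
            (signal length) and build a lookup of the fastest choice per size."""
            table = {}
            for n in sizes:
                x, h = np.random.rand(n), np.random.rand(128)
                best, best_t = None, float("inf")
                for impl in IMPLEMENTATIONS:
                    t0 = time.perf_counter()
                    for _ in range(trials):
                        impl(x, h)
                    t = time.perf_counter() - t0
                    if t < best_t:
                        best, best_t = impl, t
                table[n] = best
            return table

        def make_selector(table):
            sizes = sorted(table)
            def convolve(x, h):
                # dispatch to the implementation that profiled fastest for the nearest size
                n = min(sizes, key=lambda s: abs(s - len(x)))
                return table[n](x, h)
            return convolve

        table = profile([256, 4096, 65536])
        convolve = make_selector(table)
        print({n: impl.__name__ for n, impl in table.items()})
        print(convolve(np.random.rand(10000), np.random.rand(128))[:3])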

  6. Evaluation of HAL/S language compilability using SAMSO's Compiler Writing System (CWS)

    NASA Technical Reports Server (NTRS)

    Feliciano, M.; Anderson, H. D.; Bond, J. W., III

    1976-01-01

    NASA/Langley is engaged in a program to develop an adaptable guidance and control software concept for spacecraft such as shuttle-launched payloads. It is envisioned that this flight software be written in a higher-order language, such as HAL/S, to facilitate changes or additions. To make this adaptable software transferable to various onboard computers, a compiler writing system capability is necessary. A joint program with the Air Force Space and Missile Systems Organization was initiated to determine if the Compiler Writing System (CWS) owned by the Air Force could be utilized for this purpose. The present study explores the feasibility of including the HAL/S language constructs in CWS and the effort required to implement these constructs. This will determine the compilability of HAL/S using CWS and permit NASA/Langley to identify the HAL/S constructs desired for their applications. The study consisted of comparing the implementation of the Space Programming Language using CWS with the requirements for the implementation of HAL/S. It is the conclusion of the study that CWS already contains many of the language features of HAL/S and that it can be expanded for compiling part or all of HAL/S. It is assumed that persons reading and evaluating this report have a basic familiarity with (1) the principles of compiler construction and operation, and (2) the logical structure and applications characteristics of HAL/S and SPL.

  7. Implementation and optimization of smart infusion systems: are we reaping the safety benefits?

    PubMed

    Trbovich, Patricia L; Cafazzo, Joseph A; Easty, Anthony C

    2013-01-01

    To address the high incidence of infusion errors, manufacturers have replaced the development of standard infusion pumps with smart pump systems. The implementation and ongoing optimization processes for smart pumps are more complex, as they require larger coordinated efforts with stakeholders throughout the medication process. If improper implementation/optimization processes are followed, hospitals invest in this technology while extracting minimal benefit. We assessed the processes hospitals employed when migrating from standard to smart infusion systems, and the extent to which they leveraged their investments from both a systems and resource perspective. Twenty-nine hospitals in Ontario, Canada, were surveyed that had either implemented smart pump systems or were in the process of implementing, representing a response rate of 69%. Results demonstrated that hospitals purchased smart pumps for reasons other than safety, did not involve a multidisciplinary team during implementation, made little effort to standardize drug concentrations or develop drug libraries and dosing limits, seldom monitored how nurses use the pumps, and failed to ensure wireless connectivity to upgrade protocols and download use data. Consequently, they are failing to realize the safety benefits these systems can provide. © 2011 National Association for Healthcare Quality.

  8. Optimizing local protocols for implementing bipartite nonlocal unitary gates using prior entanglement and classical communication

    SciTech Connect

    Cohen, Scott M.

    2010-06-15

    We present a method of optimizing recently designed protocols for implementing an arbitrary nonlocal unitary gate acting on a bipartite system. These protocols use only local operations and classical communication with the assistance of entanglement, and they are deterministic while also being 'one-shot', in that they use only one copy of an entangled resource state. The optimization minimizes the amount of entanglement needed, and also the amount of classical communication, and it is often the case that less of each of these resources is needed than with an alternative protocol using two-way teleportation.

  9. Implementation of a Point Algorithm for Real-Time Convex Optimization

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Motaghedi, Shui; Carson, John

    2007-01-01

    The primal-dual interior-point algorithm implemented in G-OPT is a relatively new and efficient way of solving convex optimization problems. Given a prescribed level of accuracy, the convergence to the optimal solution is guaranteed in a predetermined, finite number of iterations. G-OPT Version 1.0 is a flight software implementation written in C. Onboard application of the software enables autonomous, real-time guidance and control that explicitly incorporates mission constraints such as control authority (e.g. maximum thrust limits), hazard avoidance, and fuel limitations. This software can be used in planetary landing missions (Mars pinpoint landing and lunar landing), as well as in proximity operations around small celestial bodies (moons, asteroids, and comets). It also can be used in any spacecraft mission for thrust allocation in six-degrees-of-freedom control.

  10. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression.

    PubMed

    Jacob, J Augustin; Kumar, N Senthil

    2015-01-01

    A novel optimal structure for implementing the 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer sets with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method shows that the fewest resources are utilized for the integer set that has shorter bit values. The optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of the different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than the other integer sets in terms of resource utilization and power dissipation.

  11. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression

    PubMed Central

    Jacob, J. Augustin; Kumar, N. Senthil

    2015-01-01

    A novel optimal structure for implementing the 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer sets with reduced mean squared error (MSE) and high coding efficiency are considered for implementation in FPGA. The proposed method shows that the fewest resources are utilized for the integer set that has shorter bit values. The optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of the different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than the other integer sets in terms of resource utilization and power dissipation. PMID:26601120

  12. Implementation of a multiblock sensitivity analysis method in numerical aerodynamic shape optimization

    NASA Technical Reports Server (NTRS)

    Lacasse, James M.

    1995-01-01

    A multiblock sensitivity analysis method is applied in a numerical aerodynamic shape optimization technique. The Sensitivity Analysis Domain Decomposition (SADD) scheme which is implemented in this study was developed to reduce the computer memory requirements resulting from the aerodynamic sensitivity analysis equations. Discrete sensitivity analysis offers the ability to compute quasi-analytical derivatives in a more efficient manner than traditional finite-difference methods, which tend to be computationally expensive and prone to inaccuracies. The direct optimization procedure couples CFD analysis based on the two-dimensional thin-layer Navier-Stokes equations with a gradient-based numerical optimization technique. The linking mechanism is the sensitivity equation derived from the CFD discretized flow equations, recast in adjoint form, and solved using direct matrix inversion techniques. This investigation is performed to demonstrate an aerodynamic shape optimization technique on a multiblock domain and its applicability to complex geometries. The objectives are accomplished by shape optimizing two aerodynamic configurations. First, the shape optimization of a transonic airfoil is performed to investigate the behavior of the method in highly nonlinear flows and the effect of different grid blocking strategies on the procedure. Secondly, shape optimization of a two-element configuration in subsonic flow is completed. Cases are presented for this configuration to demonstrate the effect of simultaneously reshaping interfering elements. The aerodynamic shape optimization is shown to produce supercritical type airfoils in the transonic flow from an initially symmetric airfoil. Multiblocking affects the path of optimization while providing similar results at the conclusion. Simultaneous reshaping of elements is shown to be more effective than individual element reshaping due to the inclusion of mutual interference effects.

  13. Voyager Outreach Compilation

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This NASA JPL (Jet Propulsion Laboratory) video presents a collection of the best videos that have been published of the Voyager mission. Computer animation/simulations comprise the largest portion of the video and include outer planetary magnetic fields, outer planetary lunar surfaces, and the Voyager spacecraft trajectory. Voyager visited the four outer planets: Jupiter, Saturn, Uranus, and Neptune. The video contains some live shots of Jupiter (actual), the Earth's moon (from orbit), Saturn (actual), Neptune (actual) and Uranus (actual), but is mainly comprised of computer animations of these planets and their moons. Some of the individual short videos that are compiled are entitled: The Solar System; Voyage to the Outer Planets; A Tour of the Solar System; and the Neptune Encounter. Computerized simulations of Viewing Neptune from Triton, Diving over Neptune to Meet Triton, and Catching Triton in its Retrograde Orbit are included. Several animations of Neptune's atmosphere, rotation and weather features as well as significant discussion of the planet's natural satellites are also presented.

  14. Voyager Outreach Compilation

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This NASA JPL (Jet Propulsion Laboratory) video presents a collection of the best videos that have been published of the Voyager mission. Computer animation/simulations comprise the largest portion of the video and include outer planetary magnetic fields, outer planetary lunar surfaces, and the Voyager spacecraft trajectory. Voyager visited the four outer planets: Jupiter, Saturn, Uranus, and Neptune. The video contains some live shots of Jupiter (actual), the Earth's moon (from orbit), Saturn (actual), Neptune (actual) and Uranus (actual), but is mainly comprised of computer animations of these planets and their moons. Some of the individual short videos that are compiled are entitled: The Solar System; Voyage to the Outer Planets; A Tour of the Solar System; and the Neptune Encounter. Computerized simulations of Viewing Neptune from Triton, Diving over Neptune to Meet Triton, and Catching Triton in its Retrograde Orbit are included. Several animations of Neptune's atmosphere, rotation and weather features as well as significant discussion of the planet's natural satellites are also presented.

  15. Voyager Outreach Compilation

    NASA Astrophysics Data System (ADS)

    1998-09-01

    This NASA JPL (Jet Propulsion Laboratory) video presents a collection of the best videos that have been published of the Voyager mission. Computer animation/simulations comprise the largest portion of the video and include outer planetary magnetic fields, outer planetary lunar surfaces, and the Voyager spacecraft trajectory. Voyager visited the four outer planets: Jupiter, Saturn, Uranus, and Neptune. The video contains some live shots of Jupiter (actual), the Earth's moon (from orbit), Saturn (actual), Neptune (actual) and Uranus (actual), but is mainly comprised of computer animations of these planets and their moons. Some of the individual short videos that are compiled are entitled: The Solar System; Voyage to the Outer Planets; A Tour of the Solar System; and the Neptune Encounter. Computerized simulations of Viewing Neptune from Triton, Diving over Neptune to Meet Triton, and Catching Triton in its Retrograde Orbit are included. Several animations of Neptune's atmosphere, rotation and weather features as well as significant discussion of the planet's natural satellites are also presented.

  16. Trident: An FPGA Compiler Framework for Floating-Point Algorithms.

    SciTech Connect

    Tripp J. L.; Peterson, K. D.; Poznanovic, J. D.; Ahrens, C. M.; Gokhale, M.

    2005-01-01

    Trident is a compiler for floating point algorithms written in C, producing circuits in reconfigurable logic that exploit the parallelism available in the input description. Trident automatically extracts parallelism and pipelines loop bodies using conventional compiler optimizations and scheduling techniques. Trident also provides an open framework for experimentation, analysis, and optimization of floating point algorithms on FPGAs and the flexibility to easily integrate custom floating point libraries.

  17. Minimum Flying Qualities. Volume 3. Program CC’s Implementation of the Human Optimal Control Model

    DTIC Science & Technology

    1990-01-01

    WRDC-TR-89-3125, Volume III: Minimum Flying Qualities - Program CC's Implementation of the Human Optimal Control Model.

  18. The Optimization of Automatically Generated Compilers.

    DTIC Science & Technology

    1987-01-01

    ...record, having one named field for each attribute. During the parse, the structure tree nodes are dynamically allocated and strung together according to... computation (at TWA runtime) of context information to determine that this visit sequence can actually be used. Moreover, the dynamic nature of this decision...

  19. Iterative dataset optimization in automated planning: Implementation for breast and rectal cancer radiotherapy.

    PubMed

    Fan, Jiawei; Wang, Jiazhou; Zhang, Zhen; Hu, Weigang

    2017-06-01

    To develop a new automated treatment planning solution for breast and rectal cancer radiotherapy. The automated treatment planning solution developed in this study includes selection of the iteratively optimized training dataset, dose volume histogram (DVH) prediction for the organs at risk (OARs), and automatic generation of clinically acceptable treatment plans. The iteratively optimized training dataset is selected from 40 treatment plans for left-breast and rectal cancer patients who received radiation therapy. A two-dimensional kernel density estimation algorithm (denoted two-parameter KDE), which incorporates two predictive features, was implemented to produce the predicted DVHs. Finally, 10 additional new left-breast treatment plans were re-planned using the Pinnacle(3) Auto-Planning (AP) module (version 9.10, Philips Medical Systems) with the objective functions derived from the predicted DVH curves. The automatically generated re-optimized treatment plans are compared with the original manually optimized plans. By combining the iteratively optimized training dataset methodology and the two-parameter KDE prediction algorithm, our proposed automated planning strategy improves the accuracy of the DVH prediction. The automatically generated treatment plans using the dose objectives derived from the predicted DVHs can achieve better dose sparing for some OARs without compromising other metrics of plan quality. The proposed new automated treatment planning solution can be used to efficiently evaluate and improve the quality and consistency of treatment plans for intensity-modulated breast and rectal cancer radiation therapy. © 2017 American Association of Physicists in Medicine.
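
    The planning system itself cannot be reproduced here; the sketch below only conveys the flavour of a two-feature, kernel-weighted DVH prediction (Nadaraya-Watson style) on a made-up training set, not the paper's actual estimator, features or data.

        import numpy as np

        def kde_predict(train_feats, train_doses, query_feat, bandwidth=0.3):
            """Kernel-weighted prediction of one DVH point from two predictive features
            (e.g. a distance-to-target measure and an overlap measure): the predicted dose
            is the Gaussian-kernel-weighted average of the training doses."""
            d2 = np.sum((train_feats - query_feat) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))
            return float(w @ train_doses / w.sum())

        # hypothetical training set: 40 prior plans, 2 features -> dose at a fixed volume level
        rng = np.random.default_rng(0)
        feats = rng.random((40, 2))
        doses = 30 * feats[:, 0] - 10 * feats[:, 1] + 20 + rng.normal(0, 1, 40)   # toy relation (Gy)

        query = np.array([0.6, 0.3])
        print("predicted dose (Gy): %.1f" % kde_predict(feats, doses, query))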

  20. Implementation of utility-based resource optimization protocols on ITA Sensor Fabric

    NASA Astrophysics Data System (ADS)

    Eswaran, Sharanya; Misra, Archan; Bergamaschi, Flavio; La Porta, Thomas

    2010-04-01

    Utility-based cross-layer optimization is a valuable tool for resource management in mission-oriented wireless sensor networks (WSN). The benefits of this technique include the ability to take application- or mission-level utilities into account and to dynamically adapt to the highly variable environment of tactical WSNs. Recently, we developed a family of distributed protocols which adapts the bandwidth and energy usage in mission-oriented WSN in order to optimally allocate resources among multiple missions, which may have specific demands depending on their priority, and also variable schedules, entering and leaving the network at different times [9-12]. In this paper, we illustrate the practical applicability of this family of protocols in tactical networks by implementing one of the protocols, which ensures optimal rate adaptation for congestion control in mission-oriented networks [9], on a real-time 802.11b network using the ITA Sensor Fabric [13]. The ITA Sensor Fabric is a middleware infrastructure, developed as part of the International Technology Alliance (ITA) in Network and Information Science [14], to address the challenges in the areas of sensor identification, classification, interoperability and sensor data sharing, dissemination and consumability, commonly present in tactical WSNs [15]. Through this implementation, we (i) study the practical challenges arising from the implementation and (ii) provide a proof of concept regarding the applicability of this family of protocols for efficient resource management in tactical WSNs amidst heterogeneous and dynamic sets of sensors, missions and middleware.
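
    The protocol suite and the ITA Sensor Fabric middleware are not reproduced here; the sketch below illustrates the underlying utility-based rate adaptation idea on a made-up three-flow, two-link instance, using the classic dual-decomposition update in which each link adjusts a congestion price from its own load only.

        import numpy as np

        # Toy network-utility-maximization instance: 3 flows share 2 links.
        # The routing matrix and link capacities are assumptions for the demo.
        R = np.array([[1, 1, 0],      # link 0 carries flows 0 and 1
                      [0, 1, 1]])     # link 1 carries flows 1 and 2
        cap = np.array([1.0, 2.0])

        prices = np.zeros(2)          # per-link congestion prices (dual variables)
        step = 0.05
        for _ in range(2000):
            path_price = R.T @ prices                     # price seen along each flow's route
            rates = 1.0 / np.maximum(path_price, 1e-6)    # log utility => x_s = 1 / path price
            rates = np.minimum(rates, 10.0)               # cap rates to keep the start stable
            # each link updates its price from its own load only (distributed update)
            prices = np.maximum(prices + step * (R @ rates - cap), 0.0)

        print("rates:", rates.round(3), " link loads:", (R @ rates).round(3))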

  1. A Portable Compiler for the Language C

    DTIC Science & Technology

    1975-05-01

    optimize as desired; this solution is more likely to be acceptable as a compilation technique. A third solution will be advocated in this paper.

  2. The Platform-Aware Compilation Environment (PACE)

    DTIC Science & Technology

    2012-09-01

    O’Driscoll (Rice) 18. Jeffrey Sandoval (Rice) 19. Kamal Sharma (Rice) 20. Sanket Tavarageri (OSU) 21. Anna Youssefi (Rice) 22. Lily Zhang...Optimal Tile Size Selection," in 21st International Conference on Compiler Construction (CC 2012), Tallinn, Estonia, March 24 - April 1, 2012. [5] Keith...pp. 168-177. [18] Keith Cooper and Jeffrey Sandoval, "Portable Techniques to Find Effective Memory Hierarchy Parameters," Computer Science

  3. MPEG-2/4 Low-Complexity Advanced Audio Coding Optimization and Implementation on DSP

    NASA Astrophysics Data System (ADS)

    Wu, Bing-Fei; Huang, Hao-Yu; Chen, Yen-Lin; Peng, Hsin-Yuan; Huang, Jia-Hsiung

    This study presents several optimization approaches for the MPEG-2/4 Advanced Audio Coding (AAC) Low Complexity (LC) encoding and decoding processes. Considering the power consumption and the peripherals required for consumer electronics, this study adopts the TI OMAP5912 platform for portable devices. An important optimization issue for implementing an AAC codec on embedded and mobile devices is to reduce computational complexity and memory consumption. Due to power saving issues, most embedded and mobile systems can only provide very limited computational power and memory resources for the coding process. As a result, modifying and simplifying only one or two blocks is insufficient for optimizing the AAC encoder and enabling it to work well on embedded systems. It is therefore necessary to enhance the computational efficiency of other important modules in the encoding algorithm. This study focuses on optimizing the Temporal Noise Shaping (TNS), Mid/Side (M/S) Stereo, Modified Discrete Cosine Transform (MDCT) and Inverse Quantization (IQ) modules in the encoder and decoder. Furthermore, we also propose an efficient memory reduction approach that provides a satisfactory balance between the reduction of memory usage and the expansion of the encoded files. In the proposed design, both the AAC encoder and decoder are built with fixed-point arithmetic operations and implemented on a DSP processor combined with an ARM-core for peripheral controlling. Experimental results demonstrate that the proposed AAC codec is computationally effective, has low memory consumption, and is suitable for low-cost embedded and mobile applications.

  4. Optimization and implementation of the integer wavelet transform for image coding.

    PubMed

    Grangetto, Marco; Magli, Enrico; Martina, Maurizio; Olmo, Gabriella

    2002-01-01

    This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First of all, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The obtained results lead to the IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite precision representation of the lifting coefficients on the compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity.
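
    As a concrete illustration of an integer wavelet transform built from a lifting factorization, the sketch below implements the reversible LeGall 5/3 lifting steps with integer rounding. The paper's contribution concerns choosing among such factorizations and their finite-precision behavior, which this toy example does not address; boundary handling here is simple symmetric extension and the signal length is assumed even.

```python
# One-level integer 5/3 lifting transform (illustrative example of an IWT)
import numpy as np

def iwt53_forward(x):
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    even_next = np.append(even[1:], even[-1])        # symmetric extension at the right edge
    d = odd - ((even + even_next) >> 1)              # predict step: integer detail coefficients
    d_prev = np.insert(d[:-1], 0, d[0])
    s = even + ((d_prev + d + 2) >> 2)               # update step: integer approximation coefficients
    return s, d

def iwt53_inverse(s, d):
    d_prev = np.insert(d[:-1], 0, d[0])
    even = s - ((d_prev + d + 2) >> 2)               # undo update
    even_next = np.append(even[1:], even[-1])
    odd = d + ((even + even_next) >> 1)              # undo predict
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.randint(0, 256, 16)
s, d = iwt53_forward(x)
assert np.array_equal(iwt53_inverse(s, d), x)        # perfectly reversible (lossless)
```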

  5. Computational Implementation of Nudged Elastic Band, Rigid Rotation, and Corresponding Force Optimization.

    PubMed

    Herbol, Henry C; Stevenson, James; Clancy, Paulette

    2017-07-11

    The nudged elastic band (NEB) algorithm is the leading method of calculating transition states in chemical systems. However, the current literature lacks adequate guidance for users wishing to implement a key part of NEB, namely, the optimization method. Here, we provide details of this implementation for the following six gradient descent algorithms: steepest descent, quick-min Verlet, FIRE, conjugate gradient, Broyden-Fletcher-Goldfarb-Shanno (BFGS), and limited-memory BFGS (LBFGS). We also construct and implement a new, accelerated backtracking line search method in concert with a partial Procrustes superimposition to improve upon existing methods. Validation is achieved through benchmark calculations of two test cases, the isomerization of CNX and BOX (where X ∈ {H, Li, Na}) and the study of a conformational change within an alanine dipeptide. We also make direct comparisons to the well-established codebase known as the atomic simulation environment.
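
    A minimal sketch of one NEB iteration with the simplest of the optimizers listed above (steepest descent): the spring force acts along the band tangent and the true force is projected perpendicular to it. The toy potential, spring constant, and step size are illustrative assumptions, and the improved-tangent and climbing-image variants are omitted.

```python
# One steepest-descent NEB step on a toy 2-D surface - illustrative only
import numpy as np

def neb_step(images, grad, k_spring=1.0, step=0.01):
    """images: list of coordinate arrays (endpoints fixed); grad: callable energy gradient."""
    new = [img.copy() for img in images]
    for i in range(1, len(images) - 1):
        tau = images[i + 1] - images[i - 1]
        tau /= np.linalg.norm(tau)                       # unit tangent along the band
        g = grad(images[i])
        g_perp = g - np.dot(g, tau) * tau                # perpendicular part of the true gradient
        f_spring = k_spring * (np.linalg.norm(images[i + 1] - images[i])
                               - np.linalg.norm(images[i] - images[i - 1])) * tau
        f = -g_perp + f_spring                           # nudged force on image i
        new[i] = images[i] + step * f                    # steepest-descent move
    return new

# Toy usage: band of 7 images across a double-well-like surface (hypothetical example)
grad = lambda r: np.array([4 * r[0] * (r[0] ** 2 - 1), 2 * r[1]])
band = [np.array([-1.0, 0.0]) + t * np.array([2.0, 0.0]) for t in np.linspace(0, 1, 7)]
for _ in range(500):
    band = neb_step(band, grad)
```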

  6. Multi-GPU implementation of a VMAT treatment plan optimization algorithm

    SciTech Connect

    Tian, Zhen E-mail: Xun.Jia@UTSouthwestern.edu Folkerts, Michael; Tan, Jun; Jia, Xun E-mail: Xun.Jia@UTSouthwestern.edu Jiang, Steve B. E-mail: Xun.Jia@UTSouthwestern.edu; Peng, Fei

    2015-06-15

    Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, the GPU’s relatively small memory size cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been previously developed, the GPU implementation details have not been reported. Hence, another purpose is to present detailed techniques employed for GPU implementation. The authors also would like to utilize this particular problem as an example problem to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics. Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on a CPU in coordinate list format (COO). On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row format. Computation of beamlet price, the first step in PP, is accomplished using multi-GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP problems are implemented on a CPU or a single GPU due to their modest problem scale and computational loads. The Barzilai-Borwein algorithm with a subspace step scheme is adopted here to solve the MP problem. A head and neck (H and N) cancer case is
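
    A minimal sketch of the data-partitioning step described in the Methods: the sparse DDC matrix is held in COO format on the host, split by beam angle into column blocks, and each block converted to CSR, after which each "device" computes its share of the beamlet prices. Sizes, the angle-to-column mapping, and the price computation are illustrative stand-ins; actual GPU kernels and peer-to-peer transfers are not shown.

```python
# Partitioning a sparse DDC matrix into per-device CSR slices - illustrative only
import numpy as np
from scipy import sparse

n_voxels, n_beamlets, n_gpus = 10000, 4000, 4
rng = np.random.default_rng(0)
nnz = 200000
ddc = sparse.coo_matrix(
    (rng.random(nnz), (rng.integers(0, n_voxels, nnz), rng.integers(0, n_beamlets, nnz))),
    shape=(n_voxels, n_beamlets))                      # sparse DDC matrix, COO on the host

# Beamlets are grouped by beam angle; equal column blocks stand in for the four angle groups
bounds = np.linspace(0, n_beamlets, n_gpus + 1).astype(int)
ddc_csc = ddc.tocsc()
per_gpu = [ddc_csc[:, bounds[g]:bounds[g + 1]].tocsr() for g in range(n_gpus)]

# Beamlet-price step of the pricing problem: each device computes A_g^T @ r on its slice
r = rng.random(n_voxels)                               # residual-like vector from the master problem
prices = np.concatenate([A_g.T @ r for A_g in per_gpu])
assert prices.shape == (n_beamlets,)
```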

  7. Compilation of Theses Abstracts

    DTIC Science & Technology

    2005-03-01

    Radiological, and Nuclear Terrorist Attacks. NATIONAL SECURITY AFFAIRS. Races at War: Nationalism...simulations using the mesh toolkit. KEYWORDS: Ad Hoc Networking, MANET, Wireless Mesh Networks, OLSR, DSR, AODV, Tactical Network Topology, OPNET...packets, and is more effective in preventing insider attacks. However, current implementation of EFW has some weaknesses, such as not allowing

  8. Compilation of Theses Abstracts

    DTIC Science & Technology

    2004-12-01

    DSR), destination-sequenced distance vector routing (DSDV) and optimized link state routing (OLSR). NS-2 is developed and maintained by the...individual findings to a central server for aggregated analysis. Different scenarios of network attacks and intrusions are planned to investigate the...effectiveness of the distributed system. The network attacks are taken from the M.I.T. Lincoln Lab 1999 data sets. The distributed system is subjected to

  9. Implementation of an ANCF beam finite element for dynamic response optimization of elastic manipulators

    NASA Astrophysics Data System (ADS)

    Vohar, B.; Kegl, M.; Ren, Z.

    2008-12-01

    Theoretical and practical aspects of an absolute nodal coordinate formulation (ANCF) beam finite element implementation are considered in the context of dynamic transient response optimization of elastic manipulators. The proposed implementation is based on the introduction of new nodal degrees of freedom, which is achieved by an adequate nonlinear mapping between the original and new degrees of freedom. This approach preserves the mechanical properties of the ANCF beam, but converts it into a conventional finite element so that its nodal degrees of freedom are initially always equal to zero and never depend explicitly on the design variables. Consequently, the sensitivity analysis formulas can be derived in the usual manner, except that the introduced nonlinear mapping has to be taken into account. Moreover, the adjusted element can also be incorporated into general finite element analysis and optimization software in the conventional way. The introduced design variables are related to the cross-section of the beam, to the shape of the (possibly) skeletal structure of the manipulator and to the drive functions. The layered cross-section approach and the design element technique are utilized to parameterize the shape of individual elements and the whole structure. A family of implicit time integration methods is adopted for the response and sensitivity analysis. Based on this assumption, the corresponding sensitivity formulas are derived. Two numerical examples illustrate the performance of the proposed element implementation.

  10. Optimal Pain Assessment in Pediatric Rehabilitation: Implementation of a Nursing Guideline.

    PubMed

    Kingsnorth, Shauna; Joachimides, Nick; Krog, Kim; Davies, Barbara; Higuchi, Kathryn Smith

    2015-12-01

    In Ontario, Canada, the Registered Nurses' Association promotes a Best Practice Spotlight Organization initiative to enhance evidence-based practice. Qualifying organizations are required to implement strategies, evaluate outcomes, and sustain practices aligned with nursing clinical practice guidelines. This study reports on the development and evaluation of a multifaceted implementation strategy to support adoption of a nursing clinical practice guideline on the assessment and management of acute pain in a pediatric rehabilitation and complex continuing care hospital. Multiple approaches were employed to influence behavior, attitudes, and awareness around optimal pain practice (e.g., instructional resources, electronic reminders, audits, and feedback). Four measures were introduced to assess pain in communicating and noncommunicating children as part of a campaign to treat pain as the fifth vital sign. A prospective repeated measures design examined survey and audit data to assess practice aligned with the guideline. The Knowledge and Attitudes Survey (KNAS) was adapted to ensure relevance to the local practice setting and was assessed before and after nurses' participation in three education modules. Audit data included client demographics and pain scores assessed annually over a 3-year window. A final sample of 69 nurses (78% response rate) provided pre-/post-survey data. A total of 108 pediatric surgical clients (younger than 19 years) contributed audit data across the three collection cycles. Significant improvements in nurses' knowledge, attitudes, and behaviors related to optimal pain care for children with disabilities were noted following adoption of the pain clinical practice guideline. Targeted guideline implementation strategies are central to supporting optimal pain practice.

  11. Implementation of natural frequency analysis and optimality criterion design. [computer technique for structural analysis

    NASA Technical Reports Server (NTRS)

    Levy, R.; Chai, K.

    1978-01-01

    A description is presented of an effective optimality criterion computer design approach for member size selection to improve frequency characteristics for moderately large structure models. It is shown that the implementation of the simultaneous iteration method within a natural frequency structural design optimization provides a method which is more efficient in isolating the lowest natural frequency modes than the frequently applied Stodola method. Additional computational advantages are derived by using previously converged eigenvectors at the start of the iterations during the second and the following design cycles. Vectors with random components can be used at the first design cycle, which, in relation to the entire computer time for the design program, results in only a moderate computational penalty.
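
    A minimal sketch of simultaneous (subspace) iteration for the lowest modes of the generalized eigenproblem K*phi = lambda*M*phi, the solver strategy referred to above. Random start vectors play the role of the first design cycle, and converged vectors could be passed back in as X0 on later cycles; the matrices are small random stand-ins, not a structural model.

```python
# Simultaneous (subspace) iteration for the lowest eigenpairs - illustrative only
import numpy as np
from scipy.linalg import eigh, cho_factor, cho_solve

def subspace_iteration(K, M, n_modes, X0=None, tol=1e-8, max_iter=200):
    n = K.shape[0]
    X = X0 if X0 is not None else np.random.rand(n, n_modes + 2)   # small guard block
    K_fac = cho_factor(K)
    lam_old = np.zeros(X.shape[1])
    for _ in range(max_iter):
        Y = cho_solve(K_fac, M @ X)                  # inverse iteration on the whole block
        K_red, M_red = Y.T @ K @ Y, Y.T @ M @ Y      # Rayleigh-Ritz projection
        lam, Q = eigh(K_red, M_red)                  # small generalized eigenproblem
        X = Y @ Q
        if np.max(np.abs(lam - lam_old) / lam) < tol:
            break
        lam_old = lam
    return lam[:n_modes], X[:, :n_modes]

A = np.random.rand(50, 50)
K = A @ A.T + 50 * np.eye(50)                        # symmetric positive definite stiffness stand-in
M = np.eye(50)
freqs_sq, modes = subspace_iteration(K, M, n_modes=4)
```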

  12. Optimization of coal product structure in coal plant design expert system and its computer programming implementation

    SciTech Connect

    Yaqun, H.; Shan, L.; Yali, K.; Maixi, L.

    1999-07-01

    The optimization of coal product structure is a main task in coal preparation flowsheet design. The paper thoroughly studies the scheme of coal product structure optimization in a coal plant design expert system. By comparing three fitted mathematical models of the raw coal washability curve and six models of the distribution curve, which simulate gravity coal separation, the optimum ones are obtained. Based on these models, applying coal product profit as the objective function and utilizing the method of generalized Lagrange operators to constrain the yield and ash content of the coal product, the optimum flowsheet for coal preparation is finally obtained by the Zangwill method, which optimizes the coal product structure. This provides an efficient theoretical basis for defining the technical plan in coal preparation plant design. The paper also studies the computer programming development and implementation of coal product structure optimization, using an object-oriented programming method, in the coal plant design expert system. The overall structure of the coal plant design expert system, the knowledge representation mechanism, the explanation and reasoning mechanism, and the knowledge learning mechanism are also described in this paper.

  13. Automated Spectroscopic Analysis Using the Particle Swarm Optimization Algorithm: Implementing a Guided Search Algorithm to Autofit

    NASA Astrophysics Data System (ADS)

    Ervin, Katherine; Shipman, Steven

    2017-06-01

    While rotational spectra can be rapidly collected, their analysis (especially for complex systems) is seldom straightforward, leading to a bottleneck. The AUTOFIT program was designed to serve that need by quickly matching rotational constants to spectra with little user input and supervision. This program can potentially be improved by incorporating an optimization algorithm in the search for a solution. The Particle Swarm Optimization Algorithm (PSO) was chosen for implementation. PSO is part of a family of optimization algorithms called heuristic algorithms, which seek approximate best answers. This is ideal for rotational spectra, where an exact match will not be found without incorporating distortion constants, etc., which would otherwise greatly increase the size of the search space. PSO was tested for robustness against five standard fitness functions and then applied to a custom fitness function created for rotational spectra. This talk will explain the Particle Swarm Optimization algorithm and how it works, describe how Autofit was modified to use PSO, discuss the fitness function developed to work with spectroscopic data, and show our current results. Seifert, N.A., Finneran, I.A., Perez, C., Zaleski, D.P., Neill, J.L., Steber, A.L., Suenram, R.D., Lesarri, A., Shipman, S.T., Pate, B.H., J. Mol. Spec. 312, 13-21 (2015)
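
    A minimal sketch of the particle swarm optimization loop, with a placeholder fitness standing in for the spectroscopic scoring of trial rotational constants against an observed spectrum. Swarm size, inertia, and acceleration coefficients are common textbook values, not those used in the modified AUTOFIT.

```python
# Generic particle swarm optimization loop - illustrative only
import numpy as np

def pso(fitness, bounds, n_particles=30, n_iter=200, w=0.72, c1=1.49, c2=1.49):
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    rng = np.random.default_rng()
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# Placeholder fitness: distance of trial rotational constants (A, B, C) from a target set (MHz)
target = np.array([3000.0, 1500.0, 1000.0])
best, best_val = pso(lambda p: np.sum((p - target) ** 2),
                     bounds=[(2000, 4000), (1000, 2000), (500, 1500)])
```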

  14. Implementation of transmission functions for an optimized three-terminal quantum dot heat engine.

    PubMed

    Schiegg, Christian H; Dzierzawa, Michael; Eckern, Ulrich

    2017-03-01

    We consider two modifications of a recently proposed three-terminal quantum dot heat engine. First, we investigate the necessity of the thermalization assumption, namely that electrons are always thermalized by inelastic processes when traveling across the cavity where the heat is supplied. Second, we analyze various arrangements of tunneling-coupled quantum dots in order to implement a transmission function that is superior to the Lorentzian transmission function of a single quantum dot. We show that the maximum power of the heat engine can be improved by about a factor of two, even for a small number of dots, by choosing an optimal structure.

  15. Implementation of transmission functions for an optimized three-terminal quantum dot heat engine

    NASA Astrophysics Data System (ADS)

    Schiegg, Christian H.; Dzierzawa, Michael; Eckern, Ulrich

    2017-03-01

    We consider two modifications of a recently proposed three-terminal quantum dot heat engine. First, we investigate the necessity of the thermalization assumption, namely that electrons are always thermalized by inelastic processes when traveling across the cavity where the heat is supplied. Second, we analyze various arrangements of tunneling-coupled quantum dots in order to implement a transmission function that is superior to the Lorentzian transmission function of a single quantum dot. We show that the maximum power of the heat engine can be improved by about a factor of two, even for a small number of dots, by choosing an optimal structure.

  16. The Optimize Heart Failure Care Program: Initial lessons from global implementation.

    PubMed

    Cowie, Martin R; Lopatin, Yuri M; Saldarriaga, Clara; Fonseca, Cândida; Sim, David; Magaña, Jose Antonio; Albuquerque, Denilson; Trivi, Marcelo; Moncada, Gustavo; González Castillo, Baldomero A; Sánchez, Mario Osvaldo Speranza; Chung, Edward

    2017-02-12

    Hospitalization for heart failure (HF) places a major burden on healthcare services worldwide, and is a strong predictor of increased mortality especially in the first three months after discharge. Though undesirable, hospitalization is an opportunity to optimize HF therapy and advise clinicians and patients about the importance of continued adherence to HF medication and regular monitoring. The Optimize Heart Failure Care Program (www.optimize-hf.com), which has been implemented in 45 countries, is designed to improve outcomes following HF hospitalization through inexpensive initiatives to improve prescription of appropriate drug therapies, patient education and engagement, and post-discharge planning. It includes best practice clinical protocols for local adaptation, pre- and post-discharge checklists, and 'My HF Passport', a printed and smart phone application to improve patient understanding of HF and encourage involvement in care and treatment adherence. Early experience of the Program suggests that factors leading to successful implementation include support from HF specialists or 'local leaders', regular educational meetings for participating healthcare professionals, multidisciplinary collaboration, and full integration of pre- and post-hospital discharge checklists across care services. The Program is helping to raise awareness of HF and generate useful data on current practice. It is showing how good evidence-based care can be achieved through the use of simple clinician and patient-focused tools. Preliminary results suggest that optimization of HF pharmacological therapy is achievable through the Program, with little new investment. Further data collection will lead to a greater understanding of the impact of the Program on HF care and key indicators of success.

  17. Sequential Principal Component Analysis -An Optimal and Hardware-Implementable Transform for Image Compression

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.

    2009-01-01

    This paper presents the JPL-developed Sequential Principal Component Analysis (SPCA) algorithm for feature extraction / image compression, based on a "dominant-term selection" unsupervised learning technique that requires an order of magnitude less computation and has a simpler architecture compared to state-of-the-art gradient-descent techniques. This algorithm is inherently amenable to a compact, low-power and high-speed VLSI hardware embodiment. The paper compares the lossless image compression performance of JPL's SPCA algorithm with the state-of-the-art JPEG2000, widely used due to its simplified hardware implementability. JPEG2000 is not an optimal data compression technique because of its fixed transform characteristics, regardless of its data structure. On the other hand, the conventional Principal Component Analysis based transform (PCA-transform) is a data-dependent-structure transform. However, it is not easy to implement the PCA in compact VLSI hardware, due to its high computational and architectural complexity. In contrast, JPL's "dominant-term selection" SPCA algorithm allows, for the first time, a compact, low-power hardware implementation of the powerful PCA algorithm. This paper presents a direct comparison of JPL's SPCA versus JPEG2000, incorporating the Huffman and arithmetic coding for completeness of the data compression operation. The simulation results show that JPL's SPCA algorithm is superior as an optimal data-dependent transform over the state-of-the-art JPEG2000. When implemented in hardware, this technique is projected to be ideally suited to future NASA missions for autonomous on-board image data processing to improve the bandwidth of communication.

  19. Development and implementation of rotorcraft preliminary design methodology using multidisciplinary design optimization

    NASA Astrophysics Data System (ADS)

    Khalid, Adeel Syed

    Rotorcraft's evolution has lagged behind that of fixed-wing aircraft. One of the reasons for this gap is the absence of a formal methodology to accomplish a complete conceptual and preliminary design. Traditional rotorcraft methodologies are not only time consuming and expensive but also yield sub-optimal designs. Rotorcraft design is an excellent example of a multidisciplinary complex environment where several interdependent disciplines are involved. A formal framework is developed and implemented in this research for preliminary rotorcraft design using IPPD methodology. The design methodology consists of the product and process development cycles. In the product development loop, all the technical aspects of design are considered including the vehicle engineering, dynamic analysis, stability and control, aerodynamic performance, propulsion, transmission design, weight and balance, noise analysis and economic analysis. The design loop starts with a detailed analysis of requirements. A baseline is selected and upgrade targets are identified depending on the mission requirements. An Overall Evaluation Criterion (OEC) is developed that is used to measure the goodness of the design or to compare the design with competitors. The requirements analysis and baseline upgrade targets lead to the initial sizing and performance estimation of the new design. The digital information is then passed to disciplinary experts. This is where the detailed disciplinary analyses are performed. Information is transferred from one discipline to another as the design loop is iterated. To coordinate all the disciplines in the product development cycle, Multidisciplinary Design Optimization (MDO) techniques e.g. All At Once (AAO) and Collaborative Optimization (CO) are suggested. The methodology is implemented on a Light Turbine Training Helicopter (LTTH) design. Detailed disciplinary analyses are integrated through a common platform for efficient and centralized transfer of design

  20. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  1. GALEX 1st Light Compilation

    NASA Image and Video Library

    2003-05-28

    This compilation shows the constellation Hercules, as imaged on May 21 and 22, 2003, by NASA's Galaxy Evolution Explorer. The images were captured by the two channels of the spacecraft's camera during the mission's first light milestone.

  2. MOBILE. A MOBIDIC COBOL COMPILER

    DTIC Science & Technology

    formats, (3) Data design table (DDT), (4) Run 8 table formats, (5) Macro instructions and related table formats, (6) COBOL compiler output listings, (7) Qualification task in Run 1.3, and a description of the Data Name List (DNLA).

  3. Welding and joining: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation is presented of NASA-developed technology in welding and joining. Topics discussed include welding equipment, techniques in welding, general bonding, joining techniques, and clamps and holding fixtures.

  4. Quantum compiling with low overhead

    NASA Astrophysics Data System (ADS)

    Duclos-Cianci, Guillaume; Poulin, David

    2014-03-01

    I will present a scheme to compile complex quantum gates that uses significantly fewer resources than existing schemes. In standard fault-tolerant protocols, a magic state is distilled from noisy resources, and copies of this magic state are then assembled to produce complex gates using the Solovay-Kitaev theorem or variants thereof. In our approach, we instead directly distill magic states associated to complex gates from noisy resources, leading to a reduction of the compiling overhead of several orders of magnitude.

  5. Artificial immune algorithm implementation for optimized multi-axis sculptured surface CNC machining

    NASA Astrophysics Data System (ADS)

    Fountas, N. A.; Kechagias, J. D.; Vaxevanidis, N. M.

    2016-11-01

    This paper presents the results obtained by the implementation of an artificial immune algorithm to optimize standard multi-axis tool-paths applied to the machining of free-form surfaces. The investigation of its applicability was based on a full factorial experimental design addressing the two additional axes for tool inclination as independent variables, whilst a multi-objective response was formulated by taking into consideration surface deviation and tool-path time, objectives assessed directly from the computer-aided manufacturing environment. A standard sculptured part was developed from scratch considering its benchmark specifications, and a cutting-edge surface machining tool-path was applied to study the effects of the pattern formulated when dynamically inclining a toroidal end-mill and guiding it towards the feed direction under fixed lead and tilt inclination angles. The results obtained from the series of experiments were used to create the fitness function that the algorithm sequentially evaluates. It was found that the artificial immune algorithm employed is able to attain optimal values for the inclination angles, thus easing the complexity of this manufacturing process and ensuring full potential in multi-axis machining modelling operations for producing enhanced CNC manufacturing programs. Results suggested that the proposed algorithm implementation may reduce the mean experimental objective value to 51.5%.

  6. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge based system. With an implemented example, how this approach can be used to derive models of different precision and abstraction is illustrated, and models are tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  7. Final report: Compiled MPI. Cost-Effective Exascale Application Development

    SciTech Connect

    Gropp, William Douglas

    2015-12-21

    This is the final report on Compiled MPI: Cost-Effective Exascale Application Development, and summarizes the results under this project. The project investigated runtime environments that improve the performance of MPI (Message-Passing Interface) programs; work at Illinois in the last period of this project looked at optimizing data accesses expressed with MPI datatypes.

  8. Optimization models and techniques for implementation and pricing of electricity markets

    NASA Astrophysics Data System (ADS)

    Madrigal Martinez, Marcelino

    Vertically integrated electric power systems extensively use optimization models and solution techniques to guide their optimal operation and planning. The advent of electric power systems re-structuring has created needs for new optimization tools and the revision of the inherited ones from the vertical integration era into the market environment. This thesis presents further developments on the use of optimization models and techniques for the implementation and pricing of primary electricity markets. New models, solution approaches, and price setting alternatives are proposed. Three different modeling groups are studied. The first modeling group considers simplified continuous and discrete models for power pool auctions driven by central-cost minimization. The direct solution of the dual problems, and the use of a Branch-and-Bound algorithm to solve the primal, make it possible to identify the effects of disequilibrium and of different price setting alternatives when multiple solutions exist. It is shown that particular pricing rules worsen the conflict of interest that arises when multiple solutions exist under disequilibrium. A price-setting alternative based on dual variables is shown to diminish such conflict. The second modeling group considers the unit commitment problem. An interior-point/cutting-plane method is proposed for the solution of the dual problem. The new method has better convergence characteristics and does not suffer from the parameter tuning drawback of previous methods. The robustness characteristics of the interior-point/cutting-plane method, combined with a non-uniform price setting alternative, show that the conflict of interest is diminished when multiple near-optimal solutions exist. The non-uniform price setting alternative is compared to a classic average pricing rule. The last modeling group concerns a new type of linear network-constrained clearing system model for daily markets for power and spinning reserve. A new model and
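
    A minimal sketch of a central-cost-minimization pool auction cleared as a linear program, with the uniform clearing price read from the dual variable of the demand-balance constraint, illustrating the dual-based price setting discussed above. The offer data are invented, and the thesis's discrete (unit-commitment) and network-constrained models are not represented.

```python
# Pool auction as an LP; clearing price from the dual of the balance constraint - illustrative only
import numpy as np
from scipy.optimize import linprog

offers = np.array([18.0, 25.0, 40.0])     # $/MWh offer prices of three generators
gmax = np.array([100.0, 80.0, 120.0])     # MW offered quantities
demand = 150.0

res = linprog(c=offers,                   # minimize total as-bid cost
              A_eq=np.ones((1, 3)), b_eq=[demand],
              bounds=list(zip(np.zeros(3), gmax)),
              method="highs")

dispatch = res.x                          # cheapest offers fill the demand: [100, 50, 0]
# With the HiGHS backend, eqlin.marginals gives the sensitivity of cost to demand,
# i.e. the marginal (uniform clearing) price; here it equals the marginal offer of ~25 $/MWh.
clearing_price = res.eqlin.marginals[0]
```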

  9. Optimizing the implementation of the target motion sampling temperature treatment technique - How fast can it get?

    SciTech Connect

    Tuomas, V.; Jaakko, L.

    2013-07-01

    This article discusses the optimization of the target motion sampling (TMS) temperature treatment method, previously implemented in the Monte Carlo reactor physics code Serpent 2. The TMS method was introduced in [1] and first practical results were presented at the PHYSOR 2012 conference [2]. The method is a stochastic method for taking the effect of thermal motion into account on-the-fly in a Monte Carlo neutron transport calculation. It is based on sampling the target velocities at collision sites and then utilizing the 0 K cross sections in the target-at-rest frame for reaction sampling. The fact that the total cross section becomes a distributed quantity is handled using rejection sampling techniques. The original implementation of the TMS requires 2.0 times more CPU time in a PWR pin-cell case than a conventional Monte Carlo calculation relying on pre-broadened effective cross sections. In an HTGR case examined in this paper the overhead factor is as high as 3.6. By first changing from a multi-group to a continuous-energy implementation and then fine-tuning a parameter affecting the conservativity of the majorant cross section, it is possible to decrease the overhead factors to 1.4 and 2.3, respectively. Preliminary calculations are also made using a new and yet incomplete optimization method in which the temperature of the basis cross section is increased above 0 K. It seems that with the new approach it may be possible to decrease the factors even as low as 1.06 and 1.33, respectively, but its functionality has not yet been proven. Therefore, these performance measures should be considered preliminary. (authors)
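
    A minimal sketch of the rejection-sampling idea behind TMS: tentative collisions are sampled from a temperature-independent majorant, a target velocity is drawn at the collision site, and the collision is accepted with probability sigma_0K(E_rel)/sigma_majorant. The cross-section model, units, and constants are toy placeholders rather than Serpent's data; the majorant's conservativity is precisely the tuning knob discussed above.

```python
# Rejection sampling of real vs. virtual collisions with a majorant - illustrative only
import numpy as np

rng = np.random.default_rng(1)
kT_over_A = 2.5e-8                        # toy thermal-motion scale (arbitrary consistent units)

def sigma_0K(E):
    return 5.0 + 0.3 / np.sqrt(E)         # toy 1/v-like 0 K total cross section

def is_real_collision(E_neutron, sigma_maj):
    """One TMS rejection test at a tentative collision sampled from the majorant."""
    v_n = np.array([np.sqrt(E_neutron), 0.0, 0.0])    # neutron velocity (toy units)
    v_t = rng.normal(0.0, np.sqrt(kT_over_A), 3)      # Maxwellian target velocity sample
    E_rel = np.sum((v_n - v_t) ** 2)                  # relative energy in target-at-rest frame
    # Accept with probability sigma_0K(E_rel)/sigma_maj; a rejection is a virtual collision
    return rng.random() < sigma_0K(E_rel) / sigma_maj

# The majorant must bound the thermally broadened cross section; how conservatively it is
# chosen sets the acceptance rate and hence the overhead factor.
accept_rate = np.mean([is_real_collision(1e-6, sigma_maj=1500.0) for _ in range(10000)])
print(round(accept_rate, 3))
```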

  10. Computer implementation of analysis and optimization procedures for control-structure interaction problems

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith; Park, K. C.

    1990-01-01

    Implementation aspects of control-structure interaction analysis and optimization by the staggered use of single-discipline analysis modules are discussed. The single-discipline modules include structural analysis, controller synthesis and optimization. The software modularity is maintained by employing a partitioned control-structure interaction analysis procedure, thus avoiding the need for embedding the single-discipline modules into a monolithic program. A software testbed has been constructed as a stand-alone analysis and optimization program and tested for its versatility and software modularity by applying it to the dynamic analysis and preliminary design of a prototype Earth Pointing Satellite. Experience with the in-core testbed program so far demonstrates that the testbed is efficient, preserves software modularity, and enables the analyst to choose a different set of algorithms, control strategies and design parameters via user software interfaces. Thus, the present software architecture is recommended for adoption by control-structure interaction analysts as a preliminary analysis and design tool.

  11. Experimentally-implemented genetic algorithm (Exp-GA): toward fully optimal photovoltaics.

    PubMed

    Zhong, Yan Kai; Fu, Sze Ming; Ju, Nyan Ping; Chen, Po Yu; Lin, Albert

    2015-09-21

    The geometry and dimension design is the most critical part for success in nano-photonic devices. The choices of the geometrical parameters dramatically affect the device performance. Most of the time, simulation is conducted to locate the suitable geometry, but in many cases simulation can be ineffective. The most pronounced examples are large-area randomized patterns for solar cells, light emitting diodes (LED), and thermophotovoltaics (TPV). The large random pattern is nearly impossible to calculate and optimize due to the extended CPU runtime and the memory limitation. Other scenarios where numerical simulations become ineffective include three-dimensional complex structures with anisotropic dielectric response. This leads to extended simulation time, especially for the repeated runs during geometry optimization. In this paper, we show that by incorporating a genetic algorithm (GA) into real-world experiments, shortened trial-and-error time can be achieved. More importantly, this scheme can be used for many photonic design problems that are unsuitable for simulation-based optimizations. Moreover, the experimentally implemented genetic algorithm (Exp-GA) has the additional advantage that the resultant objective value is a real one rather than a theoretical one. This prevents gaps between modeling and fabrication due to process variation or inaccurate numerical models. Using TPV emitters as an example, a 22% enhancement in the mean objective value is achieved.
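
    A minimal sketch of a genetic algorithm whose fitness values would come from measurements on fabricated samples rather than from simulation, which is the essence of the Exp-GA scheme. The measure_emittance function is a hypothetical placeholder for the fabrication-and-measurement step, and the encoding, rates, and population size are illustrative.

```python
# Genetic algorithm with an experimental (here mocked) fitness evaluation - illustrative only
import numpy as np

rng = np.random.default_rng(7)

def measure_emittance(params):
    # Placeholder: in Exp-GA this would be a real measurement on a sample built with `params`
    # (e.g., pattern pitch, depth, fill factor); a synthetic response is used here.
    return -np.sum((params - np.array([0.6, 0.3, 0.8])) ** 2)

def exp_ga(n_params=3, pop_size=12, n_gen=15, mut_sigma=0.05, elite=2):
    pop = rng.random((pop_size, n_params))                        # normalized design parameters
    for _ in range(n_gen):
        fitness = np.array([measure_emittance(p) for p in pop])   # one batch of "experiments"
        order = np.argsort(fitness)[::-1]
        parents = pop[order[:pop_size // 2]]
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_params)                       # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child += rng.normal(0, mut_sigma, n_params)           # mutation
            children.append(np.clip(child, 0, 1))
        pop = np.vstack([pop[order[:elite]], children])           # elitism + offspring
    return pop[np.argmax([measure_emittance(p) for p in pop])]

best_design = exp_ga()
```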

  12. Formulation for a practical implementation of electromagnetic induction coils optimized using stream functions

    NASA Astrophysics Data System (ADS)

    Reed, Mark A.; Scott, Waymond R.

    2016-05-01

    Continuous-wave (CW) electromagnetic induction (EMI) systems used for subsurface sensing typically employ separate transmit and receive coils placed in close proximity. The closeness of the coils is desirable for both packaging and object pinpointing; however, the coils must have as little mutual coupling as possible. Otherwise, the signal from the transmit coil will couple into the receive coil, making target detection difficult or impossible. Additionally, mineralized soil can be a significant problem when attempting to detect small amounts of metal because the soil effectively couples the transmit and receive coils. Optimization of wire coils to improve their performance is difficult but can be made possible through a stream-function representation and the use of partially convex forms. Examples of such methods have been presented previously, but these methods did not account for certain practical issues with coil implementation. In this paper, the power constraint introduced into the optimization routine is modified so that it does not penalize areas of high current. It does this by representing the coils as plates carrying surface currents and adjusting the sheet resistance to be inversely proportional to the current, which is a good approximation for a wire-wound coil. Example coils are then optimized for minimum mutual coupling, maximum sensitivity, and minimum soil response at a given height with both the earlier, constant sheet resistance and the new representation. The two sets of coils are compared both to each other and other common coil types to show the method's viability.

  13. TUNE: Compiler-Directed Automatic Performance Tuning

    SciTech Connect

    Hall, Mary

    2014-09-18

    This project has developed compiler-directed performance tuning technology targeting the Cray XT4 Jaguar system at Oak Ridge, which has multi-core Opteron nodes with SSE-3 SIMD extensions, and the Cray XE6 Hopper system at NERSC. To achieve this goal, we combined compiler technology for model-guided empirical optimization for memory hierarchies with SIMD code generation, which have been developed by the PIs over the past several years. We examined DOE Office of Science applications to identify performance bottlenecks and apply our system to computational kernels that operate on dense arrays. Our goal for this performance-tuning technology has been to yield hand-tuned levels of performance on DOE Office of Science computational kernels, while allowing application programmers to specify their computations at a high level without requiring manual optimization. Overall, we aim to make our technology for SIMD code generation and memory hierarchy optimization a crucial component of high-productivity Petaflops computing through a close collaboration with the scientists in national laboratories.

  14. Ada compiler validation summary report: Cray Research, Inc. , Cray Ada Compiler, Version 1. 1 Cray X-MP (Host Target), 890523W1. 10080

    SciTech Connect

    Not Available

    1989-05-23

    This Validation Summary Report describes the extent to which a specific Ada compiler conforms to the Ada Standard, ANSI/MIL-STD-1815A. The report explains all technical terms used within it and thoroughly reports the results of testing this compiler using the Ada Compiler Validation Capability. An Ada compiler must be implemented according to the Ada Standard, and any implementation-dependent features must conform to the requirements of the Ada Standard. The Ada Standard must be implemented in its entirety, and nothing can be implemented that is not in the Standard. Even though all validated Ada compilers conform to the Ada Standard, it must be understood that some differences do exist between implementations. The Ada Standard permits some implementation dependencies - for example, the maximum length of identifiers or the maximum values of integer types. Other differences between compilers result from the characteristics of particular operating systems, hardware, or implementation strategies. All the dependencies observed during the process of testing this compiler are given in this report. The information in this report is derived from the test results produced during validation testing. The validation process includes submitting a suite of standardized tests, the ACVC, as inputs to an Ada compiler and evaluating the results.

  15. Ada compiler validation summary report. Cray Research, Inc. , Cray Ada Compiler, Version 1. 1, Cray-2, (Host Target), 890523W1. 10081

    SciTech Connect

    Not Available

    1989-05-23

    This Validation Summary Report describes the extent to which a specific Ada compiler conforms to the Ada Standard, ANSI/MIL-STD-1815A. The report explains all technical terms used within it and thoroughly reports the results of testing this compiler using the Ada Compiler Validation Capability. An Ada compiler must be implemented according to the Ada Standard, and any implementation-dependent features must conform to the requirements of the Ada Standard. The Ada Standard must be implemented in its entirety, and nothing can be implemented that is not in the Standard. Even though all validated Ada compilers conform to the Ada Standard, it must be understood that some differences do exist between implementations. The Ada Standard permits some implementation dependencies - for example, the maximum length of identifiers or the maximum values of integer types. Other differences between compilers result from the characteristics of particular operating systems, hardware, or implementation strategies. All the dependencies observed during the process of testing this compiler are given in this report. The information in this report is derived from the test results produced during validation testing. The validation process includes submitting a suite of standardized tests, the ACVC, as inputs to an Ada compiler and evaluating the results.

  16. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Zheng, Jason Xin; Nguyen, Kayla; He, Yutao

    2010-01-01

    Multirate (decimation/interpolation) filters are among the essential signal processing components in spaceborne instruments where Finite Impulse Response (FIR) filters are often used to minimize nonlinear group delay and finite-precision effects. Cascaded (multi-stage) designs of Multi-Rate FIR (MRFIR) filters are further used for large rate change ratio, in order to lower the required throughput while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this paper, an alternative representation and implementation technique, called TD-MRFIR (Thread Decomposition MRFIR), is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. Each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. The technical details of TD-MRFIR will be explained, first showing its applicability to the implementation of downsampling, upsampling, and resampling FIR filters, and then describing a general strategy to optimally allocate the number of filter taps. A particular FPGA design of multi-stage TD-MRFIR for the L-band radar of NASA's SMAP (Soil Moisture Active Passive) instrument is demonstrated; and its implementation results in several targeted FPGA devices are summarized in terms of the functional (bit width, fixed-point error) and performance (time closure, resource usage, and power estimation) parameters.
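
    A minimal sketch contrasting the thread-decomposition view with filter-everything-then-discard: each decimated output is an independent finite-convolution "thread", so only the needed outputs are ever computed. The taps and decimation factor are illustrative, and the FPGA-specific tap-allocation strategy is not modeled.

```python
# Thread-decomposed decimating FIR vs. full convolution plus downsampling - illustrative only
import numpy as np

def decimating_fir_threads(x, h, D):
    """Each decimated output y[m] is one independent finite-convolution thread."""
    L = len(h)
    n_out = (len(x) - L) // D + 1
    y = np.empty(n_out)
    for m in range(n_out):                               # threads could run concurrently in hardware
        y[m] = np.dot(h, x[m * D : m * D + L][::-1])     # finite convolution for this output only
    return y

x = np.random.randn(1024)
h = np.ones(16) / 16.0                                   # toy low-pass taps
D = 4                                                    # decimation factor
y_threads = decimating_fir_threads(x, h, D)
y_reference = np.convolve(x, h, mode="valid")[::D]       # filter everything, then discard 3 of 4
assert np.allclose(y_threads, y_reference)
```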

  18. Supporting Binary Compatibility with Static Compilation

    DTIC Science & Technology

    2005-01-01

    compiling Java programs are Just-In-Time (JIT) compilation (e.g. Sun Hotspot [29], Cacao [17], OpenJIT [24], shuJIT [28], vanilla Jalapeno [1]) and...plished using run-time compilation techniques: Just-in-time compilers generate code for classes at run-time. During the run-time compilation, a...2000. [15] K. Ishizaki, M. Kawahito, T. Yasue, H. Komatsu, and T. Nakatani. A study of devirtualization techniques for Java Just-In-Time compiler. In

  19. Onboard optimized hardware implementation of JPEG-LS encoder based on FPGA

    NASA Astrophysics Data System (ADS)

    Wei, Wen; Lei, Jie; Li, Yunsong

    2012-10-01

    A novel hardware implementation of a JPEG-LS encoder based on FPGA is introduced in this paper. Using a look-ahead technique, the critical delay paths of the LOCO-I algorithm, such as the feedback-loop circuit for parameter updating, are improved. Then an optimized architecture of the JPEG-LS encoder is proposed. In particular, the run-mode encoding process of JPEG-LS is covered by the architecture as well. Experimental results show that the circuit complexity and memory consumption of the proposed structure are much lower, and the data processing speed much higher, than those of some other available structures. It is therefore well suited for onboard high-speed lossless compression of satellite sensing images.

  20. Learning from colleagues about healthcare IT implementation and optimization: lessons from a medical informatics listserv.

    PubMed

    Adams, Martha B; Kaplan, Bonnie; Sobko, Heather J; Kuziemsky, Craig; Ravvaz, Kourosh; Koppel, Ross

    2015-01-01

    Communication among medical informatics communities can suffer from fragmentation across multiple forums, disciplines, and subdisciplines; variation among journals, vocabularies and ontologies; cost and distance. Online communities help overcome these obstacles, but may become onerous when listservs are flooded with cross-postings. Rich and relevant content may be ignored. The American Medical Informatics Association successfully addressed these problems when it created a virtual meeting place by merging the membership of four working groups into a single listserv known as the "Implementation and Optimization Forum." A communication explosion ensued, with thousands of interchanges, hundreds of topics, commentaries from "notables," neophytes, and students--many from different disciplines, countries, traditions. We discuss the listserv's creation, illustrate its benefits, and examine its lessons for others. We use examples from the lively, creative, deep, and occasionally conflicting discussions of user experiences--interchanges about medication reconciliation, open source strategies, nursing, ethics, system integration, and patient photos in the EMR--all enhancing knowledge, collegiality, and collaboration.

  1. Extending R packages to support 64-bit compiled code: An illustration with spam64 and GIMMS NDVI3g data

    NASA Astrophysics Data System (ADS)

    Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard

    2017-07-01

    Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user-interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. On the R side, users do not need to change existing code and may not even notice the extension. On the other hand, interfacing 64-bit compiled code efficiently is challenging. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.

  2. Compilation of small RNA sequences.

    PubMed

    Shumyatsky, G; Reddy, R

    1992-05-11

    This is an update containing small RNA sequences published during 1991. Approximately two hundred small RNA sequences are available in this and earlier compilations. The hard-copy printout of this set will be available directly from us (inquiries should be addressed to R. Reddy). These files are also available on the GenBank computer. Sequences from various sources covered in earlier compilations (see Reddy, R. Nucl. Acids Res. 16:r71; Reddy, R. and Gupta, S. Nucl. Acids Res. 1990 Supplement, 18:2231 and 1991 Supplement, 19:2073) are not included in this update but are listed below.

  3. On the implementation of an automated acoustic output optimization algorithm for subharmonic aided pressure estimation

    PubMed Central

    Dave, J. K.; Halldorsdottir, V. G.; Eisenbrey, J. R.; Merton, D. A.; Liu, J. B.; Machado, P.; Zhao, H.; Park, S.; Dianis, S.; Chalek, C. L.; Thomenius, K. E.; Brown, D. B.; Forsberg, F.

    2013-01-01

    Incident acoustic output (IAO) dependent subharmonic signal amplitudes from ultrasound contrast agents can be categorized into occurrence, growth or saturation stages. Subharmonic aided pressure estimation (SHAPE) is a technique that utilizes growth stage subharmonic signal amplitudes for hydrostatic pressure estimation. In this study, we developed an automated IAO optimization algorithm to identify the IAO level eliciting growth stage subharmonic signals and also studied the effect of pulse length on SHAPE. This approach may help eliminate the problems of acquiring and analyzing the data offline at all IAO levels as was done in previous studies and thus, pave the way for real-time clinical pressure monitoring applications. The IAO optimization algorithm was implemented on a Logiq 9 (GE Healthcare, Milwaukee, WI) scanner interfaced with a computer. The optimization algorithm stepped the ultrasound scanner from 0 to 100 % IAO. A logistic equation fitting function was applied with the criterion of minimum least squared error between the fitted subharmonic amplitudes and the measured subharmonic amplitudes as a function of the IAO levels and the optimum IAO level was chosen corresponding to the inflection point calculated from the fitted data. The efficacy of the optimum IAO level was investigated for in vivo SHAPE to monitor portal vein (PV) pressures in 5 canines and was compared with the performance of IAO levels, below and above the optimum IAO level, for 4, 8 and 16 transmit cycles. The canines received a continuous infusion of Sonazoid microbubbles (1.5 μl/kg/min; GE Healthcare, Oslo, Norway). PV pressures were obtained using a surgically introduced pressure catheter (Millar Instruments, Inc., Houston, TX) and were recorded before and after increasing PV pressures. The experiments showed that optimum IAO levels for SHAPE in the canines ranged from 6 to 40 %. The best correlation between changes in PV pressures and in subharmonic amplitudes (r = -0.76; p = 0
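
    A minimal sketch of the automated IAO optimization: subharmonic amplitudes are fitted with a logistic curve as a function of incident acoustic output, and the optimum IAO is taken as the inflection point of the fit (the middle of the growth stage). The synthetic data stand in for scanner measurements; the logistic parameterization is an assumption consistent with the least-squared-error fitting criterion described above.

```python
# Logistic fit of subharmonic amplitude vs. IAO and inflection-point selection - illustrative only
import numpy as np
from scipy.optimize import curve_fit

def logistic(iao, floor, ceiling, x0, slope):
    return floor + (ceiling - floor) / (1.0 + np.exp(-(iao - x0) / slope))

iao_levels = np.arange(0, 101, 5, dtype=float)                   # % acoustic output
measured = logistic(iao_levels, 2.0, 30.0, 35.0, 6.0)            # occurrence -> growth -> saturation
measured += np.random.default_rng(3).normal(0, 0.5, iao_levels.size)   # measurement noise

popt, _ = curve_fit(logistic, iao_levels, measured,
                    p0=[measured.min(), measured.max(), 50.0, 10.0])
optimum_iao = popt[2]      # inflection point: steepest subharmonic growth, used for SHAPE
```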

  4. Advanced compilation techniques in the PARADIGM compiler for distributed-memory multicomputers

    NASA Technical Reports Server (NTRS)

    Su, Ernesto; Lain, Antonio; Ramaswamy, Shankar; Palermo, Daniel J.; Hodges, Eugene W., IV; Banerjee, Prithviraj

    1995-01-01

    The PARADIGM compiler project provides an automated means to parallelize programs, written in a serial programming model, for efficient execution on distributed-memory multicomputers. A previous implementation of the compiler based on the PTD representation allowed symbolic array sizes, affine loop bounds and array subscripts, and a variable number of processors, provided that arrays were single- or multi-dimensionally block distributed. The techniques presented here extend the compiler to also accept multidimensional cyclic and block-cyclic distributions within a uniform symbolic framework. These extensions demand more sophisticated symbolic manipulation capabilities. A novel aspect of our approach is to meet this demand by interfacing PARADIGM with a powerful off-the-shelf symbolic package, Mathematica. This paper describes some of the Mathematica routines that perform various transformations, shows how they are invoked and used by the compiler to overcome the new challenges, and presents experimental results for code involving cyclic and block-cyclic arrays as evidence of the feasibility of the approach.
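
    The block-cyclic distributions handled by the extended compiler follow a standard index mapping; the sketch below (illustrative Python, not PARADIGM code) shows the usual 1-D global-to-(processor, local) translation that such compilers must reason about symbolically.

        # Standard 1-D block-cyclic index mapping (illustrative, not PARADIGM code):
        # global index g, block size b, p processors.
        def owner_and_local(g: int, b: int, p: int) -> tuple:
            block = g // b                       # which block the element falls in
            proc = block % p                     # blocks are dealt out round-robin
            local = (block // p) * b + g % b     # position within that processor's storage
            return proc, local

        # Example: 12 elements, block size 2, distributed over 3 processors
        for g in range(12):
            print(g, owner_and_local(g, b=2, p=3))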

  5. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  6. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    NASA Technical Reports Server (NTRS)

    Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.

    2000-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  7. A Concept and Implementation of Optimized Operations of Airport Surface Traffic

    NASA Technical Reports Server (NTRS)

    Jung, Yoon C.; Hoang, Ty; Montoya, Justin; Gupta, Gautam; Malik, Waqar; Tobias, Leonard

    2010-01-01

    This paper presents a new concept of optimized surface operations at busy airports to improve the efficiency of taxi operations, as well as reduce environmental impacts. The suggested system architecture consists of the integration of two decoupled optimization algorithms. The Spot Release Planner provides sequence and timing advisories to tower controllers for releasing departure aircraft into the movement area to reduce taxi delay while achieving maximum throughput. The Runway Scheduler provides take-off sequence and arrival runway crossing sequence to the controllers to maximize the runway usage. The description of a prototype implementation of this integrated decision support tool for the airport control tower controllers is also provided. The prototype decision support tool was evaluated through a human-in-the-loop experiment, where both the Spot Release Planner and Runway Scheduler provided advisories to the Ground and Local Controllers. Initial results indicate the average number of stops made by each departure aircraft in the departure runway queue was reduced by more than half when the controllers were using the advisories, which resulted in reduced taxi times in the departure queue.
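
    The role of the Spot Release Planner can be illustrated with a toy calculation (not the actual algorithm): given hypothetical ready times, an assumed taxi time, and a required departure separation, releases are held at the spot just long enough that aircraft reach the runway without queuing.

        # Toy spot-release calculation (not the actual Spot Release Planner): delay each
        # release just enough that aircraft meet the runway with the required departure
        # separation, so waiting happens at the spot rather than in the runway queue.
        def plan_releases(ready_times, taxi_time, runway_separation):
            releases, next_runway_slot = [], 0.0
            for ready in sorted(ready_times):
                runway_time = max(ready + taxi_time, next_runway_slot)
                releases.append(runway_time - taxi_time)     # hold at the spot until then
                next_runway_slot = runway_time + runway_separation
            return releases

        # Hypothetical ready times (s), 5-minute taxi, 90 s departure separation
        print(plan_releases(ready_times=[0, 10, 15, 20], taxi_time=300, runway_separation=90))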

  8. Design and implementation of an automated compound management system in support of lead optimization.

    PubMed

    Quintero, Catherine; Kariv, Ilona

    2009-06-01

    To meet the needs of the increasingly rapid and parallelized lead optimization process, a fully integrated local compound storage and liquid handling system was designed and implemented to automate the generation of assay-ready plates directly from newly submitted and cherry-picked compounds. A key feature of the system is the ability to create project- or assay-specific compound-handling methods, which provide flexibility for any combination of plate types, layouts, and plate bar-codes. Project-specific workflows can be created by linking methods for processing new and cherry-picked compounds and control additions to produce a complete compound set for both biological testing and local storage in one uninterrupted workflow. A flexible cherry-pick approach allows for multiple, user-defined strategies to select the most appropriate replicate of a compound for retesting. Examples of custom selection parameters include available volume, compound batch, and number of freeze/thaw cycles. This adaptable and integrated combination of software and hardware provides a basis for reducing cycle time, fully automating compound processing, and ultimately increasing the rate at which accurate, biologically relevant results can be produced for compounds of interest in the lead optimization process.

  11. Parallel implementation and performance optimization of the configuration-interaction method

    DOE PAGES

    Shan, Hongzhang; Williams, Samuel; Johnson, Calvin; ...

    2015-11-20

    The configuration-interaction (CI) method, long a popular approach to describe quantum many-body systems, is cast as a very large sparse matrix eigenpair problem with matrices whose dimension can exceed one billion. Such formulations place high demands on memory capacity and memory bandwidth - - two quantities at a premium today. In this paper, we describe an efficient, scalable implementation, BIGSTICK, which, by factorizing both the basis and the interaction into two levels, can reconstruct the nonzero matrix elements on the fly, reduce the memory requirements by one or two orders of magnitude, and enable researchers to trade reduced resources for increased computational time. We optimize BIGSTICK on two leading HPC platforms - - the Cray XC30 and the IBM Blue Gene/Q. Specifically, we not only develop an empirically-driven load balancing strategy that can evenly distribute the matrix-vector multiplication across 256K threads, we also developed techniques that improve the performance of the Lanczos reorthogonalization. Combined, these optimizations improved performance by 1.3-8× depending on platform and configuration.

  12. Parallel implementation and performance optimization of the configuration-interaction method

    SciTech Connect

    Shan, Hongzhang; Williams, Samuel; Johnson, Calvin; McElvain, Kenneth; Ormand, W. Erich

    2015-11-20

    The configuration-interaction (CI) method, long a popular approach to describe quantum many-body systems, is cast as a very large sparse matrix eigenpair problem with matrices whose dimension can exceed one billion. Such formulations place high demands on memory capacity and memory bandwidth - - two quantities at a premium today. In this paper, we describe an efficient, scalable implementation, BIGSTICK, which, by factorizing both the basis and the interaction into two levels, can reconstruct the nonzero matrix elements on the fly, reduce the memory requirements by one or two orders of magnitude, and enable researchers to trade reduced resources for increased computational time. We optimize BIGSTICK on two leading HPC platforms - - the Cray XC30 and the IBM Blue Gene/Q. Specifically, we not only develop an empirically-driven load balancing strategy that can evenly distribute the matrix-vector multiplication across 256K threads, we also developed techniques that improve the performance of the Lanczos reorthogonalization. Combined, these optimizations improved performance by 1.3-8× depending on platform and configuration.
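
    The Lanczos reorthogonalization step whose performance the authors tune is sketched below for a small dense matrix in plain numpy; this is only a textbook illustration of the kernel, not BIGSTICK, which is a distributed code that never stores the matrix explicitly.

        # Minimal Lanczos iteration with full reorthogonalization (textbook version,
        # dense and serial, purely to illustrate the kernel that BIGSTICK optimizes).
        import numpy as np

        def lanczos(A, num_steps, seed=0):
            rng = np.random.default_rng(seed)
            q = rng.standard_normal(A.shape[0])
            q /= np.linalg.norm(q)
            Q, alphas, betas = [q], [], []
            for j in range(num_steps):
                w = A @ Q[j]
                alpha = Q[j] @ w
                w = w - alpha * Q[j] - (betas[-1] * Q[j - 1] if j > 0 else 0.0)
                for qi in Q:                     # reorthogonalize against all previous vectors
                    w -= (qi @ w) * qi
                beta = np.linalg.norm(w)
                alphas.append(alpha)
                betas.append(beta)
                if beta < 1e-12:
                    break
                Q.append(w / beta)
            m = len(alphas)
            T = np.diag(alphas) + np.diag(betas[:m - 1], 1) + np.diag(betas[:m - 1], -1)
            return np.linalg.eigvalsh(T)         # Ritz values approximate extreme eigenvalues

        A = np.diag(np.arange(1.0, 101.0)) + 0.01 * np.ones((100, 100))   # small symmetric test matrix
        print(lanczos(A, 30)[:3])                # lowest Ritz values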

  13. Optimal sensorimotor integration in recurrent cortical networks: a neural implementation of Kalman filters.

    PubMed

    Denève, Sophie; Duhamel, Jean-René; Pouget, Alexandre

    2007-05-23

    Several behavioral experiments suggest that the nervous system uses an internal model of the dynamics of the body to implement a close approximation to a Kalman filter. This filter can be used to perform a variety of tasks nearly optimally, such as predicting the sensory consequence of motor action, integrating sensory and body posture signals, and computing motor commands. We propose that the neural implementation of this Kalman filter involves recurrent basis function networks with attractor dynamics, a kind of architecture that can be readily mapped onto cortical circuits. In such networks, the tuning curves to variables such as arm velocity are remarkably noninvariant in the sense that the amplitude and width of the tuning curves of a given neuron can vary greatly depending on other variables such as the position of the arm or the reliability of the sensory feedback. This property could explain some puzzling properties of tuning curves in the motor and premotor cortex, and it leads to several new predictions.
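
    The computation such a network is proposed to approximate is the textbook Kalman filter; a one-dimensional version is sketched below with synthetic data (this is the standard filter, not the recurrent-network implementation discussed in the paper).

        # Textbook one-dimensional Kalman filter for a tracked state (e.g., arm position)
        # with known dynamics and noise models; the paper argues recurrent basis-function
        # networks can approximate this computation, it does not use this code.
        import numpy as np

        def kalman_1d(observations, a=1.0, q=0.01, r=0.5, x0=0.0, p0=1.0):
            x, p, estimates = x0, p0, []
            for z in observations:
                x, p = a * x, a * p * a + q          # predict through the dynamics model
                k = p / (p + r)                      # Kalman gain: weight by reliability
                x, p = x + k * (z - x), (1 - k) * p  # update with the noisy observation
                estimates.append(x)
            return np.array(estimates)

        rng = np.random.default_rng(1)
        true_state = np.cumsum(rng.normal(0, 0.1, 200))        # slowly drifting state
        obs = true_state + rng.normal(0, np.sqrt(0.5), 200)    # noisy sensory feedback
        print("filter error:", np.abs(kalman_1d(obs) - true_state).mean(),
              " raw error:", np.abs(obs - true_state).mean())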

  14. Implementation and optimization of ultrasound signal processing algorithms on mobile GPU

    NASA Astrophysics Data System (ADS)

    Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong

    2014-03-01

    A general-purpose graphics processing unit (GPGPU) has been used to improve computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to handle 3D games and videos at high frame rates on Full HD or HD resolution displays. This paper proposes a method to implement ultrasound signal processing on the mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) using programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, the shader design was optimized and the load was shared between the vertex and fragment shaders. The beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance was evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing the PSNR against a MATLAB gold standard with the same signal path; the CNR was also analyzed to verify the method. From the evaluations, the proposed mobile GPU-based processing method showed no significant difference from the MATLAB processing (i.e., PSNR<52.51 dB), and comparable CNR results were obtained from both processing methods (i.e., 11.31). The mobile GPU implementation achieved frame rates of 57.6 Hz, with a total execution time of 17.4 ms, faster than the acquisition time (i.e., 34.4 ms). These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on the smartphone.
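
    The PSNR and CNR figures used for verification are standard image metrics; a sketch of how they might be computed is shown below on synthetic images (illustrative only, not the authors' pipeline).

        # Standard PSNR / CNR computations of the kind used to compare the mobile-GPU
        # output against a reference implementation (synthetic images, illustrative only).
        import numpy as np

        def psnr(reference, test):
            mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
            return np.inf if mse == 0 else 10 * np.log10(reference.max() ** 2 / mse)

        def cnr(region_a, region_b):
            return abs(region_a.mean() - region_b.mean()) / np.sqrt(region_a.var() + region_b.var())

        rng = np.random.default_rng(0)
        ref = np.full((256, 256), 120.0)
        ref[100:150, 100:150] = 60.0                     # hypothetical low-echo inclusion
        ref += rng.normal(0, 10, ref.shape)              # speckle-like noise stand-in
        gpu = ref + rng.normal(0, 1.0, ref.shape)        # stand-in for the GPU-processed image
        print(f"PSNR = {psnr(ref, gpu):.1f} dB, CNR = {cnr(gpu[100:150, 100:150], gpu[:50, :50]):.2f}")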

  15. Programming cells: towards an automated 'Genetic Compiler'.

    PubMed

    Clancy, Kevin; Voigt, Christopher A

    2010-08-01

    One of the visions of synthetic biology is to be able to program cells using a language that is similar to that used to program computers or robotics. For large genetic programs, keeping track of the DNA on the level of nucleotides becomes tedious and error prone, requiring a new generation of computer-aided design (CAD) software. To push the size of projects, it is important to abstract the designer from the process of part selection and optimization. The vision is to specify genetic programs in a higher-level language, which a genetic compiler could automatically convert into a DNA sequence. Steps towards this goal include: defining the semantics of the higher-level language, algorithms to select and assemble parts, and biophysical methods to link DNA sequence to function. These will be coupled to graphic design interfaces and simulation packages to aid in the prediction of program dynamics, optimize genes, and scan projects for errors.

  16. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing

    SciTech Connect

    Amarasinghe, Saman

    2015-03-27

    This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
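
    The core idea of algorithmic choice plus empirical autotuning can be illustrated with a tiny tuner that times candidate implementations and keeps the fastest per input size; this is a generic sketch, not the ZettaBricks language or the OpenTuner API.

        # Tiny illustration of algorithmic-choice autotuning (not ZettaBricks or the
        # OpenTuner API): time each candidate implementation on a representative input
        # size and keep the fastest choice for that size.
        import random
        import time

        def insertion_sort(xs):
            xs = list(xs)
            for i in range(1, len(xs)):
                j, key = i - 1, xs[i]
                while j >= 0 and xs[j] > key:
                    xs[j + 1] = xs[j]
                    j -= 1
                xs[j + 1] = key
            return xs

        CANDIDATES = {"insertion_sort": insertion_sort, "builtin_sorted": sorted}

        def autotune(size, trials=3):
            data = [random.random() for _ in range(size)]
            best_name, best_time = None, float("inf")
            for name, fn in CANDIDATES.items():
                elapsed = []
                for _ in range(trials):
                    start = time.perf_counter()
                    fn(data)
                    elapsed.append(time.perf_counter() - start)
                if min(elapsed) < best_time:
                    best_name, best_time = name, min(elapsed)
            return best_name

        # Small inputs may favor the simple algorithm; large inputs favor the O(n log n) one.
        print({n: autotune(n) for n in (16, 4096)})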

  17. Recommendations for a Retargetable Compiler.

    DTIC Science & Technology

    1980-03-01

    Compiler Project", Computer Science Department, Carnegie-Mellon University, (Feb. 1979), CMU-CS-79-105 LRS74 Lewis , P. M., Rosencrantz, D. J. and Stearns, R...34Machine-Independent Register Allocation", SIGPLAN Notices 14, 8, (Aug. 1979). Ter78 Terman , Christopher J.: "The Specification of Code Generation

  18. 1988 Bulletin compilation and index

    SciTech Connect

    1989-02-01

    This document is published to provide current information about the national program for managing spent fuel and high-level radioactive waste. This document is a compilation of issues from the 1988 calendar year. A table of contents and one index have been provided to assist in finding information.

  19. Compiler validates units and dimensions

    NASA Technical Reports Server (NTRS)

    Levine, F. E.

    1980-01-01

    Software added to the compiler for the Space Shuttle automated test system decreases computer run errors by providing offline validation of the engineering units used in system command programs. Validation procedures are general, though originally written for GOAL, a free-form language that accepts "English-like" statements, and may be adapted to other programming languages.
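
    Offline validation of engineering units amounts to dimensional analysis; a minimal checker in that spirit is sketched below (illustrative, with made-up units, not the GOAL compiler's implementation).

        # Minimal dimensional-analysis checker in the spirit of the unit validation
        # described above (illustrative units; not the GOAL compiler's implementation).
        # Dimensions are exponent tuples over (length, mass, time).
        UNITS = {"m": (1, 0, 0), "kg": (0, 1, 0), "s": (0, 0, 1),
                 "m/s": (1, 0, -1), "N": (1, 1, -2)}

        def mul(a, b):
            return tuple(x + y for x, y in zip(a, b))

        def div(a, b):
            return tuple(x - y for x, y in zip(a, b))

        def check_assignment(target_unit, expr_dim):
            if UNITS[target_unit] != expr_dim:
                raise ValueError(f"unit mismatch: expression is not in {target_unit}")

        # force = mass * length / time / time  ->  newtons
        force_dim = div(div(mul(UNITS["kg"], UNITS["m"]), UNITS["s"]), UNITS["s"])
        check_assignment("N", force_dim)          # passes silently
        try:
            check_assignment("m/s", force_dim)    # a command using the wrong units
        except ValueError as err:
            print("validation error:", err)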

  1. ACS Compiles Chemical Manpower Data

    ERIC Educational Resources Information Center

    Chemical and Engineering News, 1975

    1975-01-01

    Describes a publication designed to serve as a statistical base from which various groups can develop policy recommendations on chemical manpower. This new series will be the first official effort by the society to compile, correlate, and present all data relevant to the economic status of chemists. (Author/GS)

  2. Optimization of the Implementation of Renewable Resources in a Municipal Electric Utility in Arizona

    NASA Astrophysics Data System (ADS)

    Cadorin, Anthony

    A municipal electric utility in Mesa, Arizona with a peak load of approximately 85 megawatts (MW) was analyzed to determine how the implementation of renewable resources (both wind and solar) would affect the overall cost of energy purchased by the utility. The utility currently purchases all of its energy through long term energy supply contracts and does not own any generation assets and so optimization was achieved by minimizing the overall cost of energy while adhering to specific constraints on how much energy the utility could purchase from the short term energy market. Scenarios were analyzed for a five percent and a ten percent penetration of renewable energy in the years 2015 and 2025. Demand Side Management measures (through thermal storage in the City's district cooling system, electric vehicles, and customers' air conditioning improvements) were evaluated to determine if they would mitigate some of the cost increases that resulted from the addition of renewable resources. In the 2015 simulation, wind energy was less expensive than solar to integrate to the supply mix. When five percent of the utility's energy requirements in 2015 are met by wind, this caused a 3.59% increase in the overall cost of energy. When that five percent is met by solar in 2015, it is estimated to cause a 3.62% increase in the overall cost of energy. A mix of wind and solar in 2015 caused a lower increase in the overall cost of energy of 3.57%. At the ten percent implementation level in 2015, solar, wind, and a mix of solar and wind caused increases of 7.28%, 7.51% and 7.27% respectively in the overall cost of energy. In 2025, at the five percent implementation level, wind and solar caused increases in the overall cost of energy of 3.07% and 2.22% respectively. In 2025, at the ten percent implementation level, wind and solar caused increases in the overall cost of energy of 6.23% and 4.67% respectively. Demand Side Management reduced the overall cost of energy by approximately 0
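
    The optimization described, minimizing total energy cost subject to a renewable-penetration target and a cap on short-term market purchases, has the shape of a small linear program; the sketch below uses made-up prices and limits and is not the study's model.

        # Toy linear program with made-up numbers, shaped like the problem above:
        # choose contract, short-term market, wind, and solar energy (MWh) to meet
        # load at minimum cost, with a renewable floor and a market-purchase cap.
        from scipy.optimize import linprog

        load = 100_000.0                                   # annual energy requirement, MWh
        cost = [45.0, 60.0, 55.0, 70.0]                    # $/MWh: contract, market, wind, solar
        A_eq, b_eq = [[1, 1, 1, 1]], [load]                # supply must meet load
        A_ub = [[0, 0, -1, -1],                            # renewables >= 5 % of load
                [0, 1, 0, 0]]                              # market purchases <= 15 % of load
        b_ub = [-0.05 * load, 0.15 * load]

        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        print("purchases (MWh):", res.x.round(0), " average cost:", round(res.fun / load, 2), "$/MWh")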

  3. Yes! An object-oriented compiler compiler (YOOCC)

    SciTech Connect

    Avotins, J.; Mingins, C.; Schmidt, H.

    1995-12-31

    Grammar-based processor generation is one of the most widely studied areas in language processor construction. However, there have been very few approaches to date that reconcile object-oriented principles, processor generation, and an object-oriented language. Pertinent here also is that developing a processor using the Eiffel Parse libraries currently requires far too much time to be expended on tasks that can be automated. For these reasons, we have developed YOOCC (Yes! an Object-Oriented Compiler Compiler), which produces a processor framework from a grammar using an enhanced version of the Eiffel Parse libraries, incorporating the ideas hypothesized by Meyer, and Grape and Walden, as well as many others. Various essential changes have been made to the Eiffel Parse libraries. Examples are presented to illustrate the development of a processor using YOOCC, and it is concluded that the Eiffel Parse libraries are now not only an intelligent, but also a productive option for processor construction.

  4. Obtaining correct compile results by absorbing mismatches between data types representations

    SciTech Connect

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2016-10-04

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.

  5. Obtaining correct compile results by absorbing mismatches between data types representations

    DOEpatents

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2017-03-21

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
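
    The conversion-table idea in the claims can be illustrated with a simplified sketch (not the patented implementation): first-language AST nodes are rewritten into second-language node types, and when no mapping applies the offending token is kept in a special error node so it can be emitted verbatim when unparsing.

        # Simplified illustration of the conversion-table idea (not the patented code):
        # rewrite first-language AST nodes into second-language node types; on failure,
        # keep the offending token in a special error node and emit it verbatim later.
        TYPE_TABLE = {"Int32": "int", "Float64": "double", "Text": "String"}

        def convert(node):
            kind, value, children = node
            if kind == "Decl":
                src_type, name = value
                if src_type not in TYPE_TABLE:
                    return ("ErrorNode", f"{src_type} {name}", [])     # store the error token
                return ("Decl", (TYPE_TABLE[src_type], name), [convert(c) for c in children])
            return (kind, value, [convert(c) for c in children])

        def unparse(node):
            kind, value, children = node
            if kind == "ErrorNode":
                return value                          # original first-language source text
            if kind == "Decl":
                return f"{value[0]} {value[1]};"
            return " ".join(unparse(c) for c in children)

        tree = ("Block", None, [("Decl", ("Int32", "x"), []), ("Decl", ("Quaternion", "q"), [])])
        print(unparse(convert(tree)))                 # -> "int x; Quaternion q"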

  6. Implementation

    EPA Pesticide Factsheets

    Describes elements for the set of activities to ensure that control strategies are put into effect and that air quality goals and standards are fulfilled, permitting programs, and additional resources related to implementation under the Clean Air Act.

  7. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.

    1993-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.

  8. Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader

    2004-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high-performance digital equivalent, the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image-processing algorithms, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low-cost, compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding, and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study greenhouse gases CO2, C2H, H2O, O3, O2, N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capabilities for future space exploration missions based on on-board image processing
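
    The FFT2/inverse-FFT2 kernel at the heart of such a control loop is shown below as a plain software round trip (a timing illustration only; the sizes and filtering step are arbitrary and the flight implementation is in DSP/FPGA hardware).

        # Plain software FFT2 round trip of the kind the DSP/FPGA cluster accelerates
        # (timing illustration only; sizes and the filtering step are arbitrary).
        import time
        import numpy as np

        image = np.random.default_rng(0).standard_normal((512, 512))

        start = time.perf_counter()
        spectrum = np.fft.fft2(image)                  # forward 2-D FFT
        filtered = np.zeros_like(spectrum)
        filtered[:32, :32] = spectrum[:32, :32]        # crude frequency-domain filtering step
        recovered = np.fft.ifft2(filtered).real        # inverse 2-D FFT
        elapsed = time.perf_counter() - start

        # An 800 Hz closed loop leaves only 1.25 ms per frame, which is what motivates
        # moving this kernel into reconfigurable hardware.
        print(f"FFT2 + filter + IFFT2 on 512x512: {elapsed * 1e3:.1f} ms")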

  9. Electronic control circuits: A compilation

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A compilation of technical R and D information on circuits and modular subassemblies is presented as a part of a technology utilization program. Fundamental design principles and applications are given. Electronic control circuits discussed include: anti-noise circuit; ground protection device for bioinstrumentation; temperature compensation for operational amplifiers; hybrid gatling capacitor; automatic signal range control; integrated clock-switching control; and precision voltage tolerance detector.

  10. Cables and connectors: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A compilation is presented of innovations in the use, adaptation, maintenance, and servicing of cables and connectors, derived from problem solutions in the space R and D programs, both in house and by NASA and AEC contractors. Data cover: (1) technology relevant to the employment of flat conductor cables and their adaptation to and within conventional systems, (2) connectors and various adaptations, and (3) maintenance and service technology, and shop hints useful in the installation and care of cables and connectors.

  11. Optimizing societal benefit using a systems engineering approach for implementation of the GEOSS space segment

    NASA Astrophysics Data System (ADS)

    Killough, Brian D., Jr.; Sandford, Stephen P.; Cecil, L. DeWayne; Stover, Shelley; Keith, Kim

    2008-12-01

    The Group on Earth Observations (GEO) is driving a paradigm shift in the Earth Observation community, refocusing Earth observing systems on GEO Societal Benefit Areas (SBA). Over the short history of space-based Earth observing systems, most decisions have been made based on improving our scientific understanding of the Earth with the implicit assumption that this would serve society well in the long run. The space agencies responsible for developing the satellites used for global Earth observations are typically science driven. The innovation of GEO is the call for investments by space agencies to be driven by global societal needs. This paper presents the preliminary findings of an analysis focused on the observational requirements of the GEO Energy SBA. The analysis was performed by the Committee on Earth Observation Satellites (CEOS) Systems Engineering Office (SEO) which is responsible for facilitating the development of implementation plans that have the maximum potential for success while optimizing the benefit to society. The analysis utilizes a new taxonomy for organizing requirements, assesses the current gaps in space-based measurements and missions, assesses the impact of the current and planned space-based missions, and presents a set of recommendations.

  12. Optimizing societal benefit using a systems engineering approach for implementation of the GEOSS space segment

    NASA Astrophysics Data System (ADS)

    Killough, Brian D., Jr.; Sandford, Stephen P.; Cecil, L. DeWayne; Stover, Shelley; Keith, Kim

    2009-01-01

    The Group on Earth Observations (GEO) is driving a paradigm shift in the Earth Observation community, refocusing Earth observing systems on GEO Societal Benefit Areas (SBA). Over the short history of space-based Earth observing systems, most decisions have been made based on improving our scientific understanding of the Earth with the implicit assumption that this would serve society well in the long run. The space agencies responsible for developing the satellites used for global Earth observations are typically science driven. The innovation of GEO is the call for investments by space agencies to be driven by global societal needs. This paper presents the preliminary findings of an analysis focused on the observational requirements of the GEO Energy SBA. The analysis was performed by the Committee on Earth Observation Satellites (CEOS) Systems Engineering Office (SEO) which is responsible for facilitating the development of implementation plans that have the maximum potential for success while optimizing the benefit to society. The analysis utilizes a new taxonomy for organizing requirements, assesses the current gaps in space-based measurements and missions, assesses the impact of the current and planned space-based missions, and presents a set of recommendations.

  13. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
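
    The kind of iterative deconvolution described above can be illustrated with the standard Richardson-Lucy update and a known motion-blur kernel; the kernel, image, and iteration count below are arbitrary, and the paper's ordered-subset algorithm and system matrix are not reproduced.

        # Standard Richardson-Lucy deconvolution with a known motion-blur kernel,
        # shown only to illustrate iterative deblurring; the paper's ordered-subset
        # algorithm and measured motion are not reproduced here.
        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(blurred, psf, iterations=30):
            estimate = np.full_like(blurred, blurred.mean())
            psf_mirror = psf[::-1, ::-1]
            for _ in range(iterations):
                reblurred = fftconvolve(estimate, psf, mode="same")
                ratio = blurred / np.maximum(reblurred, 1e-12)
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate

        psf = np.zeros((9, 9))
        psf[4, :] = 1.0 / 9.0                                  # hypothetical 9-pixel motion smear
        truth = np.zeros((64, 64))
        truth[24:40, 24:40] = 1.0                              # simple bright structure
        blurred = np.clip(fftconvolve(truth, psf, mode="same"), 0, None)
        restored = richardson_lucy(blurred, psf)
        print("mean error before:", np.abs(blurred - truth).mean(),
              " after:", np.abs(restored - truth).mean())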

  14. Overcoming obstacles in the implementation of factorial design for assay optimization.

    PubMed

    Shaw, Robert; Fitzek, Martina; Mouchet, Elizabeth; Walker, Graeme; Jarvis, Philip

    2015-03-01

    Factorial experimental design (FED) is a powerful approach for efficient optimization of robust in vitro assays: it enables cost and time savings while also improving the quality of assays. Although it is a well-known technique, there can be considerable barriers to overcome to fully exploit it within an industrial or academic organization. The article describes a tactical roll-out of FED to a scientist group through: training which demystifies the technical components and concentrates on principles and examples; a user-friendly Excel-based tool for deconvoluting plate data; and output which focuses on graphical display of data over complex statistics. The use of FED has historically been in conjunction with automated technology; however, we have demonstrated a much broader impact of FED on the assay development process. The standardized approaches we have rolled out have helped to integrate FED as a fundamental part of assay development best practice because it can be used independently of the automation and vendor-supplied software. The techniques are applicable to different types of assay, both enzyme and cell, and can be used flexibly in manual and automated processes. This article describes the application of FED for a cellular assay. The challenges of selling FED concepts and rolling them out to a wide bioscience community, together with recommendations for good working practices and effective implementation, are discussed. The accessible nature of these approaches means FED can be used by industrial as well as academic users.
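
    A minimal two-level full-factorial design with main-effect estimation, in the spirit of the FED approach described above, is sketched below; the factors and the assay response are synthetic stand-ins, and the Excel deconvolution tool itself is not reproduced.

        # Minimal two-level full-factorial design with main-effect estimation
        # (synthetic response; factor names and effect sizes are made up).
        import itertools
        import numpy as np

        factors = ["serum_pct", "cell_density", "incubation_h"]
        design = list(itertools.product([-1, +1], repeat=len(factors)))   # 2^3 = 8 runs

        rng = np.random.default_rng(2)
        def run_assay(serum, density, incubation):        # hypothetical assay response
            return 100 + 8 * serum + 3 * density - 1 * incubation + rng.normal(0, 1)

        responses = np.array([run_assay(*row) for row in design])
        coded = np.array(design)

        # Main effect of each factor: mean response at the high level minus the low level
        for i, name in enumerate(factors):
            effect = responses[coded[:, i] == 1].mean() - responses[coded[:, i] == -1].mean()
            print(f"{name}: main effect ~ {effect:+.1f}")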

  15. Optimizing Societal Benefit using a Systems Engineering Approach for Implementation of the GEOSS Space Segment

    NASA Technical Reports Server (NTRS)

    Killough, Brian D., Jr.; Sandford, Stephen P.; Cecil, L DeWayne; Stover, Shelley; Keith, Kim

    2008-01-01

    The Group on Earth Observations (GEO) is driving a paradigm shift in the Earth Observation community, refocusing Earth observing systems on GEO Societal Benefit Areas (SBA). Over the short history of space-based Earth observing systems, most decisions have been made based on improving our scientific understanding of the Earth with the implicit assumption that this would serve society well in the long run. The space agencies responsible for developing the satellites used for global Earth observations are typically science driven. The innovation of GEO is the call for investments by space agencies to be driven by global societal needs. This paper presents the preliminary findings of an analysis focused on the observational requirements of the GEO Energy SBA. The analysis was performed by the Committee on Earth Observation Satellites (CEOS) Systems Engineering Office (SEO) which is responsible for facilitating the development of implementation plans that have the maximum potential for success while optimizing the benefit to society. The analysis utilizes a new taxonomy for organizing requirements, assesses the current gaps in space-based measurements and missions, assesses the impact of the current and planned space-based missions, and presents a set of recommendations.

  16. Optimizing revenue cycle performance before, during, and after an EHR implementation.

    PubMed

    Schuler, Margaret; Berkebile, Jane; Vallozzi, Amanda

    2016-06-01

    An electronic health record implementation brings risks of adverse revenue cycle activity. Hospitals and health systems can mitigate that risk by taking a proactive, three-phase approach: Identify potential issues prior to implementation. Create teams to oversee operations during implementation. Hold regular meetings after implementation to ensure the system is running smoothly.

  17. Model compilation for embedded real-time planning and diagnosis

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2004-01-01

    This paper describes MEXEC, an implemented micro executive that compiles a device model into an internal structure. Not only does this structure facilitate computing the most likely current device mode from n sets of sensor measurements, but it also facilitates generating an n-step reconfiguration plan that is most likely not to result in reaching a target mode, if such a plan exists.
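
    The mode-estimation half of this idea can be illustrated with a toy most-likely-mode calculation over hypothetical modes and sensors (not MEXEC's compiled structure): each candidate mode is scored by the likelihood of the observed measurements and the best-scoring mode is reported.

        # Toy most-likely-mode estimate from n sets of sensor measurements (hypothetical
        # modes and sensors; not MEXEC's compiled structure or its planner).
        MODES = {"nominal":     {"current": 1.0, "temp": 30.0},
                 "valve_stuck": {"current": 1.0, "temp": 55.0},
                 "power_fault": {"current": 0.1, "temp": 25.0}}
        SIGMA = {"current": 0.1, "temp": 5.0}             # assumed sensor noise

        def log_likelihood(mode, measurements):
            return sum(-0.5 * ((m[s] - MODES[mode][s]) / SIGMA[s]) ** 2
                       for m in measurements for s in m)

        measurements = [{"current": 0.95, "temp": 52.0}, {"current": 1.05, "temp": 57.0}]
        best = max(MODES, key=lambda mode: log_likelihood(mode, measurements))
        print("most likely mode:", best)                  # -> valve_stuck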

  19. Pragmatic Randomized Optimal Platelet and Plasma Ratios (PROPPR) Trial: Design, rationale and implementation

    PubMed Central

    Baraniuk, Sarah; Tilley, Barbara C.; del Junco, Deborah J.; Fox, Erin E.; van Belle, Gerald; Wade, Charles E.; Podbielski, Jeanette M.; Beeler, Angela M.; Hess, John R.; Bulger, Eileen M.; Schreiber, Martin A.; Inaba, Kenji; Fabian, Timothy C.; Kerby, Jeffrey D.; Cohen, Mitchell J.; Miller, Christopher N.; Rizoli, Sandro; Scalea, Thomas M.; O’Keeffe, Terence; Brasel, Karen J.; Cotton, Bryan A.; Muskat, Peter; Holcomb, John B.

    2014-01-01

    Background Forty percent of in-hospital deaths among injured patients involve massive truncal hemorrhage. These deaths may be prevented with rapid hemorrhage control and improved resuscitation techniques. The Pragmatic Randomized Optimal Platelet and Plasma Ratios (PROPPR) Trial was designed to determine if there is a difference in mortality between subjects who received different ratios of FDA approved blood products. This report describes the design and implementation of PROPPR. Study Design PROPPR was designed as a randomized, two-group, Phase III trial conducted in subjects with the highest level of trauma activation and predicted to have a massive transfusion. Subjects at 12 North American level 1 trauma centers were randomized into one of two standard transfusion ratio interventions: 1:1:1 or 1:1:2, (plasma, platelets, and red blood cells). Clinical data and serial blood samples were collected under Exception from Informed Consent (EFIC) regulations. Co-primary mortality endpoints of 24 hours and 30 days were evaluated. Results Between August 2012 and December 2013, 680 patients were randomized. The overall median time from admission to randomization was 26 minutes. PROPPR enrolled at higher than expected rates with fewer than expected protocol deviations. Conclusion PROPPR is the largest randomized study to enroll severely bleeding patients. This study showed that rapidly enrolling and successfully providing randomized blood products to severely injured patients in an EFIC study is feasible. PROPPR was able to achieve these goals by utilizing a collaborative structure and developing successful procedures and design elements that can be part of future trauma studies. PMID:24996573

  20. Optimization on fixed low latency implementation of the GBT core in FPGA

    NASA Astrophysics Data System (ADS)

    Chen, K.; Chen, H.; Wu, W.; Xu, H.; Yao, L.

    2017-07-01

    In the upgrade of the ATLAS experiment [1], the front-end electronics components are subjected to a large radiation background. Meanwhile, high-speed optical links are required for the data transmission between the on-detector and off-detector electronics. The GBT architecture and the Versatile Link (VL) project are designed by CERN to support the 4.8 Gbps line rate bidirectional high-speed data transmission which is called the GBT link [2]. In the ATLAS upgrade, besides the link with the on-detector electronics, the GBT link is also used between different off-detector systems. The GBTX ASIC is designed for the on-detector front-end; correspondingly, for the off-detector electronics, the GBT architecture is implemented in Field Programmable Gate Arrays (FPGA). CERN launched the GBT-FPGA project to provide examples for different types of FPGA [3]. In the ATLAS upgrade framework, the Front-End LInk eXchange (FELIX) system [4, 5] is used to interface the front-end electronics of several ATLAS subsystems. The GBT link is used between them to transfer the detector data and the timing, trigger, control and monitoring information. The trigger signal distributed in the down-link from FELIX to the front-end requires a fixed and low latency. In this paper, several optimizations on the GBT-FPGA IP core are introduced to achieve a lower fixed latency. For FELIX, a common firmware will be used to interface different front-ends with support of both GBT modes: the forward error correction mode and the wide mode. The modified GBT-FPGA core has the ability to switch between the GBT modes without FPGA reprogramming. The system clock distribution of the multi-channel FELIX firmware is also discussed in this paper.

  1. Pragmatic Randomized Optimal Platelet and Plasma Ratios (PROPPR) Trial: design, rationale and implementation.

    PubMed

    Baraniuk, Sarah; Tilley, Barbara C; del Junco, Deborah J; Fox, Erin E; van Belle, Gerald; Wade, Charles E; Podbielski, Jeanette M; Beeler, Angela M; Hess, John R; Bulger, Eileen M; Schreiber, Martin A; Inaba, Kenji; Fabian, Timothy C; Kerby, Jeffrey D; Cohen, Mitchell Jay; Miller, Christopher N; Rizoli, Sandro; Scalea, Thomas M; O'Keeffe, Terence; Brasel, Karen J; Cotton, Bryan A; Muskat, Peter; Holcomb, John B

    2014-09-01

    Forty percent of in-hospital deaths among injured patients involve massive truncal haemorrhage. These deaths may be prevented with rapid haemorrhage control and improved resuscitation techniques. The Pragmatic Randomized Optimal Platelet and Plasma Ratios (PROPPR) Trial was designed to determine if there is a difference in mortality between subjects who received different ratios of FDA approved blood products. This report describes the design and implementation of PROPPR. PROPPR was designed as a randomized, two-group, Phase III trial conducted in subjects with the highest level of trauma activation and predicted to have a massive transfusion. Subjects at 12 North American level 1 trauma centres were randomized into one of two standard transfusion ratio interventions: 1:1:1 or 1:1:2, (plasma, platelets, and red blood cells). Clinical data and serial blood samples were collected under Exception from Informed Consent (EFIC) regulations. Co-primary mortality endpoints of 24h and 30 days were evaluated. Between August 2012 and December 2013, 680 patients were randomized. The overall median time from admission to randomization was 26min. PROPPR enrolled at higher than expected rates with fewer than expected protocol deviations. PROPPR is the largest randomized study to enrol severely bleeding patients. This study showed that rapidly enrolling and successfully providing randomized blood products to severely injured patients in an EFIC study is feasible. PROPPR was able to achieve these goals by utilizing a collaborative structure and developing successful procedures and design elements that can be part of future trauma studies. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Optimizing the Physical Implementation of an Eddy-covariance System to Minimize Flow Distortion

    NASA Astrophysics Data System (ADS)

    Durden, D.; Zulueta, R. C.; Durden, N. P.; Metzger, S.; Luo, H.; Duvall, B.

    2015-12-01

    The eddy-covariance technique is widely applied to observe the exchange of energy and scalars between the earth's surface and its atmosphere. In practice, fast (≥10 Hz) sonic anemometry and enclosed infrared gas spectroscopy are used to determine fluctuations in the 3-D wind vector and trace gas concentrations, respectively. Here, two contradicting requirements need to be fulfilled: (i) the sonic anemometer and trace gas analyzer should sample the same air volume, while (ii) the presence of the gas analyzer should not affect the wind field measured by the 3-D sonic anemometer. To determine the optimal positioning of these instruments with respect to each other, a trade-off study was performed. Theoretical formulations were used to determine a range of positions between the sonic anemometer and the gas analyzer that minimize the sum of (i) decorrelation error and (ii) wind blocking error. Subsequently, the blocking error induced by the presence of the gas sampling system was experimentally tested for a range of wind directions to verify the model-predicted placement: In a controlled environment the sonic anemometer was placed in the directed flow from a fan outfitted with a large shroud, with and without the presence of the enclosed gas analyzer and its sampling system. Blocking errors were enhanced by up to 10% for wind directions deviating ≥130° from frontal, when the flow was coming from the side where the enclosed gas analyzer was mounted. Consequently, we suggest a lateral position of the enclosed gas analyzer towards the aerodynamic wake of the tower, as data from this direction is likely affected by tower-induced flow distortion already. Ultimately, this physical implementation of the sonic anemometer and enclosed gas analyzer resulted in decorrelation and blocking errors ≤5% for ≥70% of all wind directions. These findings informed the design of the National Ecological Observatory Network's (NEON) eddy-covariance system, which is currently being
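
    The placement trade-off described above can be illustrated with a toy calculation: one error term grows with sensor separation while the other shrinks, and the optimum is the separation minimizing their sum. The error models below are hypothetical stand-ins, not the theoretical formulations used in the study.

        # Toy separation trade-off: decorrelation error grows and blocking error shrinks
        # with sensor separation; the optimum minimizes their sum. Both error models are
        # hypothetical stand-ins, not the study's theoretical formulations.
        import numpy as np

        separation_cm = np.linspace(5, 60, 111)
        decorrelation_error = 0.08 * separation_cm         # %, grows with separation
        blocking_error = 25.0 / separation_cm              # %, shrinks as the body moves away

        total = decorrelation_error + blocking_error
        best = separation_cm[np.argmin(total)]
        print(f"minimum combined error at ~{best:.0f} cm separation ({total.min():.1f} %)")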

  3. Branch recovery with compiler-assisted multiple instruction retry

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Chen, S.-K.; Li, C.-C.; Fuchs, W. K.; Hwu, W.-M.

    1992-01-01

    In processing systems where rapid recovery from transient faults is important, schemes for multiple instruction rollback recovery may be appropriate. Multiple instruction retry has been implemented in hardware by researchers and also in mainframe computers. This paper extends compiler-assisted instruction retry to a broad class of code execution failures. Five benchmarks were used to measure the performance penalty of hazard resolution. Results indicate that the enhanced pure software approach can produce performance penalties consistent with existing hardware techniques. A combined compiler/hardware resolution strategy is also described and evaluated. Experimental results indicate a lower performance penalty than with either a totally hardware or totally software approach.

  4. Developing an Onboard Traffic-Aware Flight Optimization Capability for Near-Term Low-Cost Implementation

    NASA Technical Reports Server (NTRS)

    Wing, David J.; Ballin, Mark G.; Koczo, Stefan, Jr.; Vivona, Robert A.; Henderson, Jeffrey M.

    2013-01-01

    The concept of Traffic Aware Strategic Aircrew Requests (TASAR) combines Automatic Dependent Surveillance Broadcast (ADS-B) IN and airborne automation to enable user-optimal in-flight trajectory replanning and to increase the likelihood of Air Traffic Control (ATC) approval for the resulting trajectory change request. TASAR is designed as a near-term application to improve flight efficiency or other user-desired attributes of the flight while not impacting ATC and potentially even benefiting it. Previous work has indicated the potential for significant benefits for each TASAR-equipped aircraft. This paper will discuss the approach to minimizing TASAR's implementation cost and accelerating readiness for near-term implementation.

  5. Utilizing object-oriented design to build advanced optimization strategies with generic implementation

    SciTech Connect

    Eldred, M.S.; Hart, W.E.; Bohnhoff, W.J.; Romero, V.J.; Hutchinson, S.A.; Salinger, A.G.

    1996-08-01

    The benefits of applying optimization to computational models are well known, but the range of widespread application to date has been limited. This effort attempts to extend the disciplinary areas to which optimization algorithms may be readily applied through the development and application of advanced optimization strategies capable of handling the computational difficulties associated with complex simulation codes. Towards this goal, a flexible software framework is under continued development for the application of optimization techniques to broad classes of engineering applications, including those with high computational expense and nonsmooth, nonconvex design space features. Object-oriented software design with C++ has been employed as a tool in providing a flexible, extensible, and robust multidisciplinary toolkit with computationally intensive simulations. In this paper, demonstrations of advanced optimization strategies using the software are presented in the hybridization and parallel processing research areas. Performance of the advanced strategies is compared with a benchmark nonlinear programming optimization.
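
    The object-oriented idea, optimization strategies sharing one interface so that simulations and strategies can be combined or hybridized without changing either side, is sketched below in Python (the cited toolkit is in C++; class names and algorithms here are illustrative).

        # Sketch of the object-oriented idea (illustrative Python; the cited toolkit is C++):
        # strategies share one interface, so an expensive simulation can be paired with, or
        # hybridized across, different optimizers without changing either side.
        import random
        from abc import ABC, abstractmethod

        class Optimizer(ABC):
            @abstractmethod
            def minimize(self, objective, x0):
                ...

        class RandomSearch(Optimizer):
            def minimize(self, objective, x0, iterations=300):
                best_x, best_f = list(x0), objective(x0)
                for _ in range(iterations):
                    x = [xi + random.uniform(-0.5, 0.5) for xi in best_x]
                    f = objective(x)
                    if f < best_f:
                        best_x, best_f = x, f
                return best_x, best_f

        class CoordinateDescent(Optimizer):
            def minimize(self, objective, x0, iterations=50):
                x, step = list(x0), 0.5
                for _ in range(iterations):
                    for i in range(len(x)):
                        for cand in (x[i] - step, x[i] + step):
                            trial = x[:i] + [cand] + x[i + 1:]
                            if objective(trial) < objective(x):
                                x = trial
                    step *= 0.8
                return x, objective(x)

        simulation = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2   # stand-in for an expensive code
        for strategy in (RandomSearch(), CoordinateDescent()):
            print(type(strategy).__name__, strategy.minimize(simulation, [0.0, 0.0]))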

  6. Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Alewine, Neal Jon

    1993-01-01

    Multiple instruction rollback (MIR) is a technique to provide rapid recovery from transient processor failures and was implemented in hardware by researchers and also in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations were conducted which indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.

  7. The Katydid system for compiling KEE applications to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    Components of a system known as Katydid are developed in an effort to compile knowledge-based systems developed in a multimechanism integrated environment (KEE) to Ada. The Katydid core is an Ada library supporting KEE object functionality, and the other elements include a rule compiler, a LISP-to-Ada translator, and a knowledge-base dumper. Katydid employs translation mechanisms that convert LISP knowledge structures and rules to Ada and utilizes basic prototypes of a run-time KEE object-structure library module for Ada. Preliminary results include the semiautomatic compilation of portions of a simple expert system to run in an Ada environment with the described algorithms. It is suggested that Ada can be employed for AI programming and implementation, and the Katydid system is being developed to include concurrency and synchronization mechanisms.

  8. Approximate knowledge compilation: The first order case

    SciTech Connect

    Val, A. del

    1996-12-31

    Knowledge compilation procedures make a knowledge base more explicit so as make inference with respect to the compiled knowledge base tractable or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) We present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm. (2) We show that both ground algorithms can be lifted to the first order case preserving their correctness for approximate compilation.

  9. Distributed memory compiler design for sparse problems

    NASA Technical Reports Server (NTRS)

    Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema

    1991-01-01

    A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.
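
    The following is a minimal sketch, not the actual compiler runtime described above, of the inspector/executor idea behind such a primitives library: an inspector examines the irregular global indices a process will access, groups the off-process ones by owner into a communication schedule, and an executor then gathers those values before the computation loop. The block distribution, the function names, and the fetch_remote stand-in for message passing are all hypothetical.

        # Hypothetical sketch of an inspector/executor runtime primitive for
        # irregular accesses to a block-distributed array (no real MPI calls;
        # fetch_remote stands in for the message-passing exchange).

        def owner_of(global_index, block_size):
            """Rank that owns a global index under a block distribution."""
            return global_index // block_size

        def inspector(accessed_indices, my_rank, block_size):
            """Build a communication schedule: for each remote rank, the list
            of global indices this rank needs from it."""
            schedule = {}
            for g in accessed_indices:
                r = owner_of(g, block_size)
                if r != my_rank:
                    schedule.setdefault(r, []).append(g)
            return schedule

        def executor(schedule, fetch_remote, local_part, my_rank, block_size):
            """Gather remote values named by the schedule into a ghost table,
            then hand back a uniform read function for the computation loop."""
            ghost = {}
            for rank, indices in schedule.items():
                for g, v in zip(indices, fetch_remote(rank, indices)):
                    ghost[g] = v

            def read(g):
                if owner_of(g, block_size) == my_rank:
                    return local_part[g - my_rank * block_size]
                return ghost[g]

            return read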

  10. Implementation of reactive and predictive real-time control strategies to optimize dry stormwater detention ponds

    NASA Astrophysics Data System (ADS)

    Gaborit, Étienne; Anctil, François; Vanrolleghem, Peter A.; Pelletier, Geneviève

    2013-04-01

    Dry detention ponds have been widely implemented in the U.S.A. (National Research Council, 1993) and Canada (Shammaa et al. 2002) to mitigate the impacts of urban runoff on receiving water bodies. The aim of such structures is to allow a temporary retention of the water during rainfall events, decreasing runoff velocities and volumes (by infiltration in the pond) as well as providing some water quality improvement from sedimentation. The management of dry detention ponds currently relies on static control through a fixed pre-designed limitation of their maximum outflow (Middleton and Barrett 2008), for example via a proper choice of their outlet pipe diameter. Because these ponds are designed for large storms, typically 1- or 2-hour duration rainfall events with return periods between 5 and 100 years, one of their main drawbacks is that they generally offer almost no retention for smaller rainfall events (Middleton and Barrett 2008), which are by definition much more common. Real-Time Control (RTC) has a high potential for optimizing retention time (Marsalek 2005) because it allows adopting operating strategies that are flexible and hence more suitable to the prevailing fluctuating conditions than static control. For dry ponds, this would basically imply adapting the outlet opening percentage to maximize water retention time, while being able to open it completely for severe storms. This study developed several enhanced RTC scenarios of a dry detention pond located at the outlet of a small urban catchment near Québec City, Canada, following the previous work of Muschalla et al. (2009). The catchment's runoff quantity and TSS concentration were simulated by a SWMM5 model with an improved wash-off formulation. The control procedures rely on rainfall detection and measurements of the pond's water height for the reactive schemes, and on rainfall forecasts in addition to these variables for the predictive schemes. The automatic reactive control schemes implemented

  11. Lower bound of optimization in radiological protection system taking account of practical implementation of clearance

    SciTech Connect

    Hattori, Takatoshi

    2007-07-01

    The dose criterion used to derive clearance and exemption levels is of the order of 0.01 mSv/y based on the Basic Safety Standard (BSS) of the International Atomic Energy Agency (IAEA), the use of which has been agreed upon by many countries. It is important for human beings, who are facing the fact that global resources for risk reduction are limited, to carefully consider the practical implementation of radiological protection systems, particularly for low-radiation-dose regions. For example, in direct gamma ray monitoring, to achieve clearance level compliance, difficult issues on how the uncertainty (error) of gamma measurement should be handled and also how the uncertainty (scattering) of the estimation of non-gamma emitters should be treated in clearance must be resolved. To resolve these issues, a new probabilistic approach has been proposed to establish an appropriate safety factor for compliance with the clearance level in Japan. This approach is based on the fundamental concept that 0.1 mSv/y should be complied with at the 97.5th percentile of the probability distribution for the uncertainties of both the measurement and the estimation of non-gamma emitters. The International Commission on Radiological Protection (ICRP) published a new concept of the representative person in Publication 101 Part I. The representative person is a hypothetical person exposed to a dose that is representative of those of highly exposed persons in a population. In a probabilistic dose assessment, the ICRP recommends that the representative person should be defined such that the probability is lower than about 5% that a person randomly selected from the population receives a higher dose. From the new concept of the ICRP, it is reasonable to consider that the 95th percentile of the dose distribution for the representative person is theoretically always lower than the dose constraint. Using this established relationship, it can be concluded that the minimum dose

  12. Linear optical implementation of ancilla-free 1{yields}3 optimal phase covariant quantum cloning machines for the equatorial qubits

    SciTech Connect

    Zou Xubo; Mathis, W.

    2005-08-15

    We propose experimental schemes to implement ancilla-free 1{yields}3 optimal phase covariant quantum cloning machines for x-y and x-z equatorial qubits by interfering a polarized photon, which we wish to clone, with different light resources at a six-port symmetric beam splitter. The scheme requires linear optical elements and three-photon coincidence detection, and is feasible with current experimental technology.

  13. An Implementation of a Mathematical Programming Approach to Optimal Enrollments. AIR 2001 Annual Forum Paper.

    ERIC Educational Resources Information Center

    DePaolo, Concetta A.

    This paper explores the application of a mathematical optimization model to the problem of optimal enrollments. The general model, which can be applied to any institution, seeks to enroll the "best" class of students (as defined by the institution) subject to constraints imposed on the institution (e.g., capacity, quality). Topics…
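
    As a toy illustration only (not the model from the paper, and with entirely made-up data), this kind of enrollment problem can be posed as a linear program: choose admits to maximize an institution-defined quality score subject to capacity and aid-budget constraints. The sketch uses scipy's linprog on a 0-1 relaxation.

        # Hypothetical LP relaxation of an enrollment-selection problem.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        n = 200                                  # applicant pool (made-up data)
        quality = rng.uniform(0, 1, n)           # institution-defined "quality" score
        aid_cost = rng.uniform(5, 25, n)         # financial-aid cost per admit (k$)

        capacity = 80                            # seats available
        aid_budget = 1200                        # total aid budget (k$)

        # Maximize sum(quality * x)  ==  minimize -quality @ x, with 0 <= x <= 1.
        res = linprog(
            c=-quality,
            A_ub=np.vstack([np.ones(n), aid_cost]),
            b_ub=[capacity, aid_budget],
            bounds=[(0, 1)] * n,
            method="highs",
        )
        admitted = np.flatnonzero(res.x > 0.5)
        print(len(admitted), "admitted; total quality =", quality[res.x > 0.5].sum())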

  14. Retargeting of existing FORTRAN program and development of parallel compilers

    NASA Technical Reports Server (NTRS)

    Agrawal, Dharma P.

    1988-01-01

    The software models used in implementing the parallelizing compiler for the B-HIVE multiprocessor system are described. The various models and strategies used in the compiler development are: flexible granularity model, which allows a compromise between two extreme granularity models; communication model, which is capable of precisely describing the interprocessor communication timings and patterns; loop type detection strategy, which identifies different types of loops; critical path with coloring scheme, which is a versatile scheduling strategy for any multicomputer with some associated communication costs; and loop allocation strategy, which realizes optimum overlapped operations between computation and communication of the system. Using these models, several sample routines of the AIR3D package are examined and tested. It may be noted that the automatically generated codes are highly parallelized to provide the maximum degree of parallelism, obtaining speedup for up to a 28- to 32-processor system. A comparison of parallel codes for both the existing and proposed communication models is performed, and the corresponding expected speedup factors are obtained. The experimentation shows that the B-HIVE compiler produces more efficient codes than existing techniques. Work is progressing well in completing the final phase of the compiler. Numerous enhancements are needed to improve the capabilities of the parallelizing compiler.

  15. Optimal speech codec implementation on ARM9E (v5E architecture) RISC processor for next-generation mobile multimedia

    NASA Astrophysics Data System (ADS)

    Bangla, Ajay Kumar; Vinay, M. K.; Suresh Babu, P. V.

    2004-01-01

    The mobile phone is undergoing a rapid evolution from a voice and limited text-messaging device to a complete multimedia client. RISC processors are predominantly used in these devices due to low cost, time to market and power consumption. The growing demand for signal processing performance on these platforms has triggered a convergence of RISC, CISC and DSP technologies on to a single core/system. This convergence leads to a multitude of challenges for optimal usage of available processing power. Voice codecs, which have been traditionally implemented on DSP platforms, have been adapted to sole RISC platforms as well. In this paper, the issues involved in optimizing a standard vocoder for RISC-DSP convergence platform (DSP enhanced RISC platforms) are addressed. Our optimization techniques are based on identification of algorithms, which could exploit either the DSP features or the RISC features or both. A few algorithmic modifications have also been suggested. By a systematic application of these optimization techniques for a GSM-AMR (NB) codec on ARM9E core, we could achieve more than 77% improvement over the baseline codec and almost 33% over that optimized for a RISC platform (ARM9T) alone in terms of processing cycle requirements. The optimization techniques outlined are generic in nature and are applicable to other vocoders on similar 'application-platform" combinations.

  16. Optimal implementation of green infrastructure practices to reduce adverse impacts of urban areas on hydrology and water quality

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Collingsworth, P.; Pijanowski, B. C.; Engel, B.

    2016-12-01

    Nutrient loading from Maumee River watershed is a significant reason for the harmful algal blooms (HABs) problem in Lake Erie. Although studies have explored strategies to reduce nutrient loading from agricultural areas in the Maumee River watershed, the nutrient loading in urban areas also needs to be reduced. Green infrastructure practices are popular approaches for stormwater management and useful for improving hydrology and water quality. In this study, the Long-Term Hydrologic Impact Assessment-Low Impact Development 2.1 (L-THIA-LID 2.1) model was used to determine how different strategies for implementing green infrastructure practices can be optimized to reduce impacts on hydrology and water quality in an urban watershed in the upper Maumee River system. Community inputs, such as the types of green infrastructure practices of greatest interest and environmental concerns for the community, were also considered during the study. Based on community input, the following environmental concerns were considered: runoff volume, Total Suspended Solids (TSS), Total Phosphorous (TP), Total Kjeldahl Nitrogen (TKN), and Nitrate+Nitrite (NOx); green infrastructure practices of interest included rain barrel, cistern, green roof, permeable patio, porous pavement, grassed swale, bioretention system, grass strip, wetland channel, detention basin, retention pond, and wetland basin. Spatial optimization of green infrastructure practice implementation was conducted to maximize environmental benefits while minimizing the cost of implementation. The green infrastructure practice optimization results can be used by the community to solve hydrology and water quality problems.

  17. Development and Implementation of an Optimization Model for Hydropower and Total Dissolved Gas in the Mid-Columbia River System

    DOE PAGES

    Witt, Adam; Magee, Timothy; Stewart, Kevin; ...

    2017-08-10

    Managing energy, water, and environmental priorities and constraints within a cascade hydropower system is a challenging multiobjective optimization effort that requires advanced modeling and forecasting tools. Within the mid-Columbia River system, there is currently a lack of specific solutions for predicting how coordinated operational decisions can mitigate the impacts of total dissolved gas (TDG) supersaturation while satisfying multiple additional policy and hydropower generation objectives. In this study, a reduced-order TDG uptake equation is developed that predicts tailrace TDG at seven hydropower facilities on the mid-Columbia River. The equation is incorporated into a general multiobjective river, reservoir, and hydropower optimization tool as a prioritized operating goal within a broader set of system-level objectives and constraints. A test case is presented to assess the response of TDG and hydropower generation when TDG supersaturation is optimized to remain under state water-quality standards. Satisfaction of TDG as an operating goal is highly dependent on whether constraints that limit TDG uptake are implemented at a higher priority than generation requests. According to the model, an opportunity exists to reduce TDG supersaturation and meet hydropower generation requirements by shifting spillway flows to different time periods. In conclusion, a coordinated effort between all project owners is required to implement systemwide optimized solutions that satisfy the operating policies of all stakeholders.

  18. Parallel in-vitro and in-vivo techniques for optimizing cellular microenvironments by implementing biochemical, biomechanical and electromagnetic stimulations.

    PubMed

    Shamloo, Amir; Heibatollahi, Motahare; Ghafar-Zadeh, Ebrahim

    2012-01-01

    Development of novel engineering techniques that can promote new clinical treatments requires implementing multidisciplinary in-vitro and in-vivo approaches. In this study, we have implemented microfluidic devices and in-vivo rat model to study the mechanism of neural stem cell migration and differentiation. These studies can result in the treatment of damages to the neuronal system. In this research, we have shown that by applying appropriate ranges of biochemical and biomechanical factors as well as by exposing the cells to electromagnetic fields, it is possible to improve viability, proliferation, directional migration and differentiation of neural stem cells. The results of this study can be implemented in the design of optimized platforms that can be transplanted into the damaged areas of the neuronal system.

  19. Implementation of hybrid optimization for time domain electromagnetic 1D inversion

    NASA Astrophysics Data System (ADS)

    Yogi, Ida Bagus Suananda; Widodo

    2017-07-01

    Time domain electromagnetic (TDEM) is a non-invasive geophysical method. It is an active method that makes use of electromagnetic wave properties so that the conductivity of the subsurface lithology can be derived. Inversion of TDEM data is usually performed with derivative-based least-squares methods. These methods have drawbacks: the results depend on good starting models, and the final results sometimes get stuck in a local minimum instead of the global minimum. These drawbacks can be overcome by using a global optimization approach, such as Monte Carlo, simulated annealing, or a genetic algorithm. However, these global optimization methods need long calculation times. Because of that, this research combines the two approaches into a hybrid optimization method. The hybrid optimization method is a combination of the conjugate gradient (CG) method and the very fast simulated reannealing (VFSA) method. The algorithm was applied to invert several synthetic models of TDEM data. Noise was added to the synthetic data to test the capabilities of the hybrid optimization algorithm. The combined method was more efficient than either the conjugate gradient or the simulated annealing method used separately. Levenberg-Marquardt and genetic algorithm inversion results for Volvi Basin TDEM data were compared with the hybrid optimization algorithm results. The hybrid optimization results were better than the Levenberg-Marquardt results, and the calculation times were faster than those of the genetic algorithm.
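
    A schematic sketch of the hybrid idea, not the authors' code: very fast simulated reannealing proposes temperature-scaled global moves, and accepted candidates are polished by a conjugate-gradient local search. The objective below is a placeholder for the TDEM forward-modeling misfit, and the schedule constants are arbitrary.

        # Sketch of a hybrid global/local optimizer: VFSA-style annealing moves
        # refined by conjugate-gradient local searches (placeholder objective).
        import numpy as np
        from scipy.optimize import minimize

        def objective(m):
            # Placeholder for the TDEM data-misfit functional.
            return np.sum((m - np.array([2.0, -1.0, 0.5]))**2) + 0.1*np.sum(np.sin(5*m)**2)

        def vfsa_step(m, temperature, rng):
            # VFSA-style Cauchy-like perturbation whose scale shrinks with temperature.
            u = rng.uniform(-1, 1, size=m.shape)
            step = np.sign(u) * temperature * ((1 + 1/temperature)**np.abs(u) - 1)
            return m + step

        def hybrid_invert(m0, n_outer=50, n_polish=5, t0=1.0, seed=0):
            rng = np.random.default_rng(seed)
            m_best, f_best = m0.copy(), objective(m0)
            for k in range(n_outer):
                temperature = t0 * np.exp(-0.5 * k)
                m_try = vfsa_step(m_best, temperature, rng)
                f_try = objective(m_try)
                # Metropolis acceptance on the annealed proposal.
                if f_try < f_best or rng.random() < np.exp(-(f_try - f_best) / temperature):
                    # Local polish with a conjugate-gradient method.
                    res = minimize(objective, m_try, method="CG",
                                   options={"maxiter": n_polish})
                    if res.fun < f_best:
                        m_best, f_best = res.x, res.fun
            return m_best, f_best

        print(hybrid_invert(np.zeros(3)))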

  20. Optimal Full Information Synthesis for Flexible Structures Implemented on Cray Supercomputers

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Balas, Gary J.

    1995-01-01

    This paper considers an algorithm for synthesis of optimal controllers for full information feedback. The synthesis procedure reduces to a single linear matrix inequality which may be solved via established convex optimization algorithms. The computational cost of the optimization is investigated. It is demonstrated the problem dimension and corresponding matrices can become large for practical engineering problems. This algorithm represents a process that is impractical for standard workstations for large order systems. A flexible structure is presented as a design example. Control synthesis requires several days on a workstation but may be solved in a reasonable amount of time using a Cray supercomputer.
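
    The paper's full-information synthesis LMI is not reproduced here; as a small illustration of the same pattern (controller synthesis posed as a linear matrix inequality and handed to a convex solver), the sketch below solves the standard state-feedback stabilization LMI with cvxpy on a made-up second-order plant.

        # Sketch: stabilizing state-feedback synthesis as a single LMI (cvxpy).
        # Find Y > 0 and Z with  A Y + Y A' + B Z + Z' B' < 0 ; then K = Z Y^{-1}.
        import numpy as np
        import cvxpy as cp

        A = np.array([[0.0, 1.0], [2.0, -0.5]])   # made-up unstable plant
        B = np.array([[0.0], [1.0]])

        n, m = A.shape[0], B.shape[1]
        Y = cp.Variable((n, n), symmetric=True)
        Z = cp.Variable((m, n))

        lmi = A @ Y + Y @ A.T + B @ Z + Z.T @ B.T
        constraints = [Y >> 1e-6 * np.eye(n), lmi << -1e-6 * np.eye(n)]
        cp.Problem(cp.Minimize(0), constraints).solve()

        K = Z.value @ np.linalg.inv(Y.value)
        print("K =", K, "closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))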

  1. Descriptional Composition of Compiler Components

    DTIC Science & Technology

    1996-09-01

    matching [47]. A more declarative description may permit widely different execution models. For example, the same description could be implemented both as a...object is described in a single place rather than requiring two specifications. In practice, however, the implementation model used by Prolog often makes...possible to directly model the attribution of trees in such languages. In fact, it is impossible to distinguish trees from directed acyclic graphs. This

  2. Rooted-tree network for optimal non-local gate implementation

    NASA Astrophysics Data System (ADS)

    Vyas, Nilesh; Saha, Debashis; Panigrahi, Prasanta K.

    2016-09-01

    A general quantum network for implementing non-local control-unitary gates, between remote parties at minimal entanglement cost, is shown to be a rooted-tree structure. Starting from a five-party scenario, we demonstrate the local implementation of simultaneous class of control-unitary(Hermitian) and multiparty control-unitary gates in an arbitrary n-party network. Previously, established networks are turned out to be special cases of this general construct.

  3. Stepwise optimization approach for improving LC-MS/MS analysis of zwitterionic antiepileptic drugs with implementation of experimental design.

    PubMed

    Kostić, Nađa; Dotsikas, Yannis; Malenović, Anđelija; Jančić Stojanović, Biljana; Rakić, Tijana; Ivanović, Darko; Medenica, Mirjana

    2013-07-01

    In this article, a step-by-step optimization procedure for improving analyte response with implementation of experimental design is described. Zwitterionic antiepileptics, namely vigabatrin, pregabalin and gabapentin, were chosen as model compounds to undergo chloroformate-mediated derivatization followed by liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) analysis. Application of a planned stepwise optimization procedure allowed responses of analytes, expressed as areas and signal-to-noise ratios, to be improved, enabling achievement of lower limit of detection values. Results from the current study demonstrate that optimization of parameters such as scan time, geometry of ion source, sheath and auxiliary gas pressure, capillary temperature, collision pressure and mobile phase composition can have a positive impact on sensitivity of LC-MS/MS methods. Optimization of LC and MS parameters led to a total increment of 53.9%, 83.3% and 95.7% in areas of derivatized vigabatrin, pregabalin and gabapentin, respectively, while for signal-to-noise values, an improvement of 140.0%, 93.6% and 124.0% was achieved, compared to autotune settings. After defining the final optimal conditions, a time-segmented method was validated for the determination of mentioned drugs in plasma. The method proved to be accurate and precise with excellent linearity for the tested concentration range (40.0 ng ml(-1)-10.0 × 10(3)  ng ml(-1)).

  4. A survey of compiler development aids. [concerning lexical, syntax, and semantic analysis

    NASA Technical Reports Server (NTRS)

    Buckles, B. P.; Hodges, B. C.; Hsia, P.

    1977-01-01

    A theoretical background was established for the compilation process by dividing it into five phases and explaining the concepts and algorithms that underpin each. The five selected phases were lexical analysis, syntax analysis, semantic analysis, optimization, and code generation. Graph theoretical optimization techniques were presented, and approaches to code generation were described for both one-pass and multipass compilation environments. Following the initial tutorial sections, more than 20 tools that were developed to aid in the process of writing compilers were surveyed. Eight of the more recent compiler development aids were selected for special attention - SIMCMP/STAGE2, LANG-PAK, COGENT, XPL, AED, CWIC, LIS, and JOCIT. The impact of compiler development aids was assessed, some of their shortcomings were identified, and some of the areas of research currently in progress were inspected.

  5. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  6. Optimizing Blocking and Nonblocking Reduction Operations for Multicore Systems: Hierarchical Design and Implementation

    SciTech Connect

    Gorentla Venkata, Manjunath; Shamis, Pavel; Graham, Richard L; Ladd, Joshua S; Sampath, Rahul S

    2013-01-01

    Many scientific simulations, using the Message Passing Interface (MPI) programming model, are sensitive to the performance and scalability of reduction collective operations such as MPI Allreduce and MPI Reduce. These operations are the most widely used abstractions to perform mathematical operations over all processes that are part of the simulation. In this work, we propose a hierarchical design to implement the reduction operations on multicore systems. This design aims to improve the efficiency of reductions by 1) tailoring the algorithms and customizing the implementations for various communication mechanisms in the system, 2) providing the ability to configure the depth of the hierarchy to match the system architecture, and 3) providing the ability to independently progress each level of this hierarchy. Using this design, we implement MPI Allreduce and MPI Reduce operations (and their nonblocking variants MPI Iallreduce and MPI Ireduce) for all message sizes, and evaluate them on multiple architectures including InfiniBand and Cray XT5. We leverage and enhance our existing infrastructure, Cheetah, which is a framework for implementing hierarchical collective operations, to implement these reductions. The experimental results show that the Cheetah reduction operations outperform production-grade MPI implementations such as Open MPI default, Cray MPI, and MVAPICH2, demonstrating its efficiency, flexibility and portability. On InfiniBand systems, with a microbenchmark, a 512-process Cheetah nonblocking Allreduce and Reduce achieve speedups of 23x and 10x, respectively, compared to the default Open MPI reductions. The blocking variants of the reduction operations also show similar performance benefits. A 512-process nonblocking Cheetah Allreduce achieves a speedup of 3x, compared to the default MVAPICH2 Allreduce implementation. On a Cray XT5 system, a 6144-process Cheetah Allreduce outperforms the Cray MPI by 145%. The evaluation with an application kernel, Conjugate
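
    This is not the Cheetah framework, but a minimal mpi4py sketch of the hierarchical pattern the abstract describes: reduce within each shared-memory node, allreduce across one leader per node, then broadcast the result back inside each node.

        # Sketch of a hierarchical allreduce (node-level reduce, leader allreduce,
        # node-level broadcast) using mpi4py; run with: mpirun -n <N> python this.py
        from mpi4py import MPI
        import numpy as np

        def hierarchical_allreduce(sendbuf, comm=MPI.COMM_WORLD):
            # Split off a communicator per shared-memory node.
            node = comm.Split_type(MPI.COMM_TYPE_SHARED)
            node_rank = node.Get_rank()

            partial = np.zeros_like(sendbuf)
            node.Reduce(sendbuf, partial, op=MPI.SUM, root=0)

            # One leader per node (node_rank == 0) participates in the global step.
            leaders = comm.Split(color=0 if node_rank == 0 else MPI.UNDEFINED,
                                 key=comm.Get_rank())
            result = np.zeros_like(sendbuf)
            if node_rank == 0:
                leaders.Allreduce(partial, result, op=MPI.SUM)
                leaders.Free()

            # Share the global result with every rank on the node.
            node.Bcast(result, root=0)
            node.Free()
            return result

        if __name__ == "__main__":
            x = np.full(4, MPI.COMM_WORLD.Get_rank(), dtype=np.float64)
            print(MPI.COMM_WORLD.Get_rank(), hierarchical_allreduce(x))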

  7. Compiler-assisted multiple instruction rollback recovery using a read buffer

    NASA Technical Reports Server (NTRS)

    Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.

    1995-01-01

    Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.

  8. Performance of Compiler-Assisted Memory Safety Checking

    DTIC Science & Technology

    2014-08-01

    versions of the Clang compiler. This second version of SAFECode was selected as one of the memory safety checkers examined in this study. Plum and Keaton...well as a static analysis method to optimize away some of the checks [Plum 2005]. The work presented in this technical note adapted part of the...to treat a bounds check in the called function as a precondition for that function (called a requirement by Plum and Keaton [Plum 2005]), in this

  9. FPGA implementation of a stochastic neural network for monotonic pseudo-Boolean optimization.

    PubMed

    Grossi, Giuliano; Pedersini, Federico

    2008-08-01

    In this paper an FPGA implementation of a novel neural stochastic model for solving constrained NP-hard problems is proposed and developed. The model exploits pseudo-Boolean functions both to express the constraints and to define the cost function, interpreted as energy of a neural network. A wide variety of NP-hard problems falls in the class of problems that can be solved by this model, particularly those having a quadratic pseudo-Boolean penalty function. The proposed hardware implementation provides high computation speed by exploiting parallelism, as the neuron update and the constraint violation check can be performed in parallel over the whole network. The neural system has been tested on random and benchmark graphs, showing good performance with respect to the same heuristic for the same problems. Furthermore, the computational speed of the FPGA implementation has been measured and compared to a software implementation. The developed architecture featured dramatically faster computation, with respect to the software implementation, even adopting a low-cost FPGA chip.
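
    A small software analogue (not the FPGA architecture) of the stochastic dynamics described above: binary neurons flip to reduce a quadratic pseudo-Boolean energy in which the cost function and constraint penalties are folded together, with occasional uphill moves to escape local minima. The toy instance and all constants are illustrative.

        # Software sketch of a stochastic binary network minimizing a quadratic
        # pseudo-Boolean energy E(x) = x'Qx + c'x (constraints folded in as penalties).
        import numpy as np

        def energy(x, Q, c):
            return x @ Q @ x + c @ x

        def stochastic_minimize(Q, c, n_steps=5000, temperature=1.0, cooling=0.999, seed=0):
            rng = np.random.default_rng(seed)
            n = len(c)
            x = rng.integers(0, 2, n)
            best_x, best_e = x.copy(), energy(x, Q, c)
            for _ in range(n_steps):
                i = rng.integers(n)
                x_new = x.copy()
                x_new[i] ^= 1                      # flip one binary neuron
                delta = energy(x_new, Q, c) - energy(x, Q, c)
                # Accept downhill moves always, uphill moves stochastically.
                if delta < 0 or rng.random() < np.exp(-delta / temperature):
                    x = x_new
                    if energy(x, Q, c) < best_e:
                        best_x, best_e = x.copy(), energy(x, Q, c)
                temperature *= cooling
            return best_x, best_e

        # Toy instance: pick as many of 6 items as possible (reward -1 each) while a
        # quadratic penalty forbids picking both ends of each "conflict" edge.
        n = 6
        c = -np.ones(n)
        Q = np.zeros((n, n))
        for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
            Q[i, j] = Q[j, i] = 2.0                # symmetric entries give penalty 4*x_i*x_j
        print(stochastic_minimize(Q, c))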

  10. Efficient implementation and application of the artificial bee colony algorithm to low-dimensional optimization problems

    NASA Astrophysics Data System (ADS)

    von Rudorff, Guido Falk; Wehmeyer, Christoph; Sebastiani, Daniel

    2014-06-01

    We adapt a swarm-intelligence-based optimization method (the artificial bee colony algorithm, ABC) to enhance its parallel scaling properties and to improve the escaping behavior from deep local minima. Specifically, we apply the approach to the geometry optimization of Lennard-Jones clusters. We illustrate the performance and the scaling properties of the parallelization scheme for several system sizes (5-20 particles). Our main findings are specific recommendations for ranges of the parameters of the ABC algorithm which yield maximal performance for Lennard-Jones clusters and Morse clusters. The suggested parameter ranges for these different interaction potentials turn out to be very similar; thus, we believe that our reported values are fairly general for the ABC algorithm applied to chemical optimization problems.
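
    A compact, serial artificial bee colony sketch (the paper's contribution is the parallelization and parameter study, which is not reproduced here), applied to a small Lennard-Jones-style energy; colony size, limit and cycle counts are illustrative only.

        # Generic artificial bee colony (ABC) sketch applied to a small
        # Lennard-Jones-like cluster energy (illustrative parameters).
        import numpy as np

        def lj_energy(flat):
            pts = flat.reshape(-1, 3)
            e = 0.0
            for i in range(len(pts)):
                for j in range(i + 1, len(pts)):
                    r = np.linalg.norm(pts[i] - pts[j]) + 1e-12
                    e += 4.0 * (r**-12 - r**-6)
            return e

        def abc_minimize(f, dim, n_sources=20, limit=30, n_cycles=500, bound=2.0, seed=1):
            rng = np.random.default_rng(seed)
            X = rng.uniform(-bound, bound, (n_sources, dim))      # food sources
            fx = np.array([f(x) for x in X])
            trials = np.zeros(n_sources, dtype=int)

            def neighbor(i):
                k = rng.integers(n_sources)
                while k == i:
                    k = rng.integers(n_sources)
                d = rng.integers(dim)
                v = X[i].copy()
                v[d] += rng.uniform(-1, 1) * (X[i, d] - X[k, d])
                return np.clip(v, -bound, bound)

            def greedy(i, v):
                fv = f(v)
                if fv < fx[i]:
                    X[i], fx[i], trials[i] = v, fv, 0
                else:
                    trials[i] += 1

            for _ in range(n_cycles):
                for i in range(n_sources):                        # employed bees
                    greedy(i, neighbor(i))
                fit = np.where(fx >= 0, 1.0 / (1.0 + fx), 1.0 + np.abs(fx))
                probs = fit / fit.sum()
                for i in rng.choice(n_sources, n_sources, p=probs):   # onlookers
                    greedy(i, neighbor(i))
                for i in np.flatnonzero(trials > limit):          # scouts
                    X[i] = rng.uniform(-bound, bound, dim)
                    fx[i], trials[i] = f(X[i]), 0
            best = np.argmin(fx)
            return X[best], fx[best]

        x_best, e_best = abc_minimize(lj_energy, dim=3 * 3)       # 3 particles
        print("best LJ-like energy:", e_best)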

  11. Implementation of a Low-Thrust Trajectory Optimization Algorithm for Preliminary Design

    NASA Technical Reports Server (NTRS)

    Sims, Jon A.; Finlayson, Paul A.; Rinderle, Edward A.; Vavrina, Matthew A.; Kowalkowski, Theresa D.

    2006-01-01

    A tool developed for the preliminary design of low-thrust trajectories is described. The trajectory is discretized into segments and a nonlinear programming method is used for optimization. The tool is easy to use, has robust convergence, and can handle many intermediate encounters. In addition, the tool has a wide variety of features, including several options for objective function and different low-thrust propulsion models (e.g., solar electric propulsion, nuclear electric propulsion, and solar sail). High-thrust, impulsive trajectories can also be optimized.

  12. Implementation of a Low-Thrust Trajectory Optimization Algorithm for Preliminary Design

    NASA Technical Reports Server (NTRS)

    Sims, Jon A.; Finlayson, Paul A.; Rinderle, Edward A.; Vavrina, Matthew A.; Kowalkowski, Theresa D.

    2006-01-01

    A tool developed for the preliminary design of low-thrust trajectories is described. The trajectory is discretized into segments and a nonlinear programming method is used for optimization. The tool is easy to use, has robust convergence, and can handle many intermediate encounters. In addition, the tool has a wide variety of features, including several options for objective function and different low-thrust propulsion models (e.g., solar electric propulsion, nuclear electric propulsion, and solar sail). High-thrust, impulsive trajectories can also be optimized.

  13. Ada Compiler Validation Summary Report. Certificate Number: 910626S1. 11178, U.S. Navy Ada/M, Version 4.0 (/OPTIMIZE) VAX 11/785 => AN/UYK-44 (EMR) (Bare Board).

    DTIC Science & Technology

    1991-07-30

    Number: 910626S1.11178 U.S. NAVY Ada/M, Version 4.0 (/OPTIMIZE) VAX 11/785 => AN/UYK-44 (EMR) (Bare Board) Prepared By: Software Standards Validation Group...Systems Manager, Software Standards Engineering Division (ISED) Validation Group Computer Systems Laboratory (CSL) National Institute of Standards and...Technology Building 225, Room A266 Gaithersburg, MD 20899 Ada Validation Organization Ada Joint Program Office 1 Dir ct Computer & Software Dr. John

  14. PDoublePop: An implementation of parallel genetic algorithm for function optimization

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Tzallas, Alexandros; Tsalikakis, Dimitris

    2016-12-01

    Software for the implementation of parallel genetic algorithms is presented in this article. The underlying genetic algorithm is aimed at locating the global minimum of a multidimensional function inside a rectangular hyperbox. The proposed software, named PDoublePop, implements a client-server model for parallel genetic algorithms with advanced features for the local genetic algorithms, such as an enhanced stopping rule, an advanced mutation scheme, and periodic application of a local search procedure. The user may code the objective function either in C++ or in Fortran77. The method is tested on a series of well-known test functions and the results are reported.
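
    Not PDoublePop itself (which is a C++/Fortran client-server system), but a minimal Python illustration of the parallel-GA pattern: fitness evaluation is farmed out to a process pool, with tournament selection, one-point crossover, Gaussian mutation and simple elitism.

        # Minimal genetic algorithm with parallel fitness evaluation (multiprocessing).
        import numpy as np
        from multiprocessing import Pool

        def objective(x):
            # Placeholder multidimensional test function (Rastrigin).
            return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        def run_ga(dim=5, pop_size=60, generations=100, bound=5.12,
                   p_mut=0.1, seed=0, workers=4):
            rng = np.random.default_rng(seed)
            pop = rng.uniform(-bound, bound, (pop_size, dim))
            with Pool(workers) as pool:
                for _ in range(generations):
                    fitness = np.array(pool.map(objective, list(pop)))
                    # Tournament selection.
                    a, b = rng.integers(pop_size, size=(2, pop_size))
                    parents = np.where((fitness[a] < fitness[b])[:, None], pop[a], pop[b])
                    # One-point crossover between consecutive parents.
                    children = parents.copy()
                    for i in range(0, pop_size - 1, 2):
                        cut = rng.integers(1, dim)
                        children[i, cut:] = parents[i + 1, cut:]
                        children[i + 1, cut:] = parents[i, cut:]
                    # Gaussian mutation, then clip back into the hyperbox.
                    mask = rng.random(children.shape) < p_mut
                    children[mask] += rng.normal(0, 0.3, mask.sum())
                    children = np.clip(children, -bound, bound)
                    # Elitism: keep the best individual of this generation.
                    children[0] = pop[np.argmin(fitness)]
                    pop = children
                fitness = np.array(pool.map(objective, list(pop)))
            best = np.argmin(fitness)
            return pop[best], fitness[best]

        if __name__ == "__main__":
            print(run_ga())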

  15. Design methodology for optimal hardware implementation of wavelet transform domain algorithms

    NASA Astrophysics Data System (ADS)

    Johnson-Bey, Charles; Mickens, Lisa P.

    2005-05-01

    The work presented in this paper lays the foundation for the development of an end-to-end system design methodology for implementing wavelet domain image/video processing algorithms in hardware using Xilinx field programmable gate arrays (FPGAs). With the integration of the Xilinx System Generator toolbox, this methodology will allow algorithm developers to design and implement their code using the familiar MATLAB/Simulink development environment. By using this methodology, algorithm developers will not be required to become proficient in the intricacies of hardware design, thus reducing the design cycle and time-to-market.

  16. Lean processes for optimizing OR capacity utilization: prospective analysis before and after implementation of value stream mapping (VSM).

    PubMed

    Schwarz, Patric; Pannes, Klaus Dieter; Nathan, Michel; Reimer, Hans Jorg; Kleespies, Axel; Kuhn, Nicole; Rupp, Anne; Zügel, Nikolaus Peter

    2011-10-01

    The decision to optimize the processes in the operating tract was based on two factors: competition among clinics and a desire to optimize the use of available resources. The aim of the project was to improve operating room (OR) capacity utilization by reduction of change and throughput time per patient. The study was conducted at Centre Hospitalier Emil Mayrisch Clinic for specialized care (n = 618 beds) Luxembourg (South). A prospective analysis was performed before and after the implementation of optimized processes. Value stream analysis and design (value stream mapping, VSM) were used as tools. VSM depicts patient throughput and the corresponding information flows. Furthermore it is used to identify process waste (e.g. time, human resources, materials, etc.). For this purpose, change times per patient (extubation of patient 1 until intubation of patient 2) and throughput times (inward transfer until outward transfer) were measured. VSM, change and throughput times for 48 patient flows (VSM A(1), actual state = initial situation) served as the starting point. Interdisciplinary development of an optimized VSM (VSM-O) was evaluated. Prospective analysis of 42 patients (VSM-A(2)) without and 75 patients (VSM-O) with an optimized process in place were conducted. The prospective analysis resulted in a mean change time of (mean ± SEM) VSM-A(2) 1,507 ± 100 s versus VSM-O 933 ± 66 s (p < 0.001). The mean throughput time VSM-A(2) (mean ± SEM) was 151 min (±8) versus VSM-O 120 min (±10) (p < 0.05). This corresponds to a 23% decrease in waiting time per patient in total. Efficient OR capacity utilization and the optimized use of human resources allowed an additional 1820 interventions to be carried out per year without any increase in human resources. In addition, perioperative patient monitoring was increased up to 100%.

  17. Exploration of Optimization Options for Increasing Performance of a GPU Implementation of a Three-dimensional Bilateral Filter

    SciTech Connect

    Bethel, E. Wes; Bethel, E. Wes

    2012-01-06

    This report explores using GPUs as a platform for performing high performance medical image data processing, specifically smoothing using a 3D bilateral filter, which performs anisotropic, edge-preserving smoothing. The algorithm consists of running a specialized 3D convolution kernel over a source volume to produce an output volume. Overall, our objective is to understand what algorithmic design choices and configuration options lead to optimal performance of this algorithm on the GPU. We explore the performance impact of using different memory access patterns, of using different types of device/on-chip memories, of using strictly aligned and unaligned memory, and of varying the size/shape of thread blocks. Our results reveal optimal configuration parameters for our algorithm when executed on a sample 3D medical data set, and show performance gains ranging from 30x to over 200x as compared to a single-threaded CPU implementation.
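
    A single-threaded NumPy reference of the 3D bilateral filter being ported is sketched below; it is the kind of baseline a GPU implementation would be validated and timed against. Window radius and the spatial/range sigmas are illustrative.

        # Naive CPU reference for a 3D bilateral filter (edge-preserving smoothing).
        # A GPU port would map each output voxel to one thread; this is the baseline.
        import numpy as np

        def bilateral_filter_3d(vol, radius=2, sigma_spatial=1.5, sigma_range=0.1):
            r = radius
            # Precompute the spatial (Gaussian) part of the kernel once.
            z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
            spatial = np.exp(-(x**2 + y**2 + z**2) / (2 * sigma_spatial**2))

            padded = np.pad(vol, r, mode="edge")
            out = np.empty_like(vol, dtype=np.float64)
            for k in range(vol.shape[0]):
                for j in range(vol.shape[1]):
                    for i in range(vol.shape[2]):
                        window = padded[k:k + 2*r + 1, j:j + 2*r + 1, i:i + 2*r + 1]
                        # Range weight depends on intensity difference to the center voxel.
                        range_w = np.exp(-(window - vol[k, j, i])**2 / (2 * sigma_range**2))
                        w = spatial * range_w
                        out[k, j, i] = np.sum(w * window) / np.sum(w)
            return out

        vol = np.random.default_rng(0).random((16, 16, 16))
        print(bilateral_filter_3d(vol).shape)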

  18. SVD-based optimal filtering for noise reduction in dual microphone hearing aids: a real time implementation and perceptual evaluation.

    PubMed

    Maj, Jean-Baptiste; Royackers, Liesbeth; Moonen, Marc; Wouters, Jan

    2005-09-01

    In this paper, the first real-time implementation and perceptual evaluation of a singular value decomposition (SVD)-based optimal filtering technique for noise reduction in a dual microphone behind-the-ear (BTE) hearing aid is presented. This evaluation was carried out for speech-weighted noise and multitalker babble, for single and multiple jammer sound source scenarios. Two basic microphone configurations in the hearing aid were used. The SVD-based optimal filtering technique was compared against an adaptive beamformer, which is known to give significant improvements in speech intelligibility in noisy environments. The optimal filtering technique works without assumptions about the speaker position, unlike the two-stage adaptive beamformer. However, this strategy needs a robust voice activity detector (VAD). A method to improve the performance of the VAD was presented and evaluated physically. By connecting the VAD to the output of the noise reduction algorithms, a good discrimination between the speech-and-noise periods and the noise-only periods of the signals was obtained. The perceptual experiments demonstrated that the SVD-based optimal filtering technique could perform as well as the adaptive beamformer in a single noise source scenario, i.e., the ideal scenario for the latter technique, and could outperform the adaptive beamformer in multiple noise source scenarios.
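
    The abstract's technique is a GSVD-based multichannel optimal filter; the sketch below shows only the closely related covariance-domain form of the idea on synthetic two-microphone data: estimate noise-only and speech-plus-noise second-order statistics (using a stand-in VAD) and build the optimal linear estimator of the speech component from them. It is an approximation for illustration, not the evaluated algorithm.

        # Covariance-domain sketch of a multichannel optimal (Wiener-type) filter.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 20000
        speech = np.sin(2 * np.pi * 0.01 * np.arange(n)) * (np.arange(n) > n // 2)
        noise = 0.5 * rng.standard_normal((2, n))
        y = np.vstack([speech, 0.8 * speech]) + noise      # two-microphone mixture

        vad = np.arange(n) > n // 2                        # stand-in voice activity detector
        Ryy = np.cov(y[:, vad])                            # speech-plus-noise statistics
        Rvv = np.cov(y[:, ~vad])                           # noise-only statistics

        # Optimal filter estimating the speech component in channel 0:
        # w = Ryy^{-1} (Ryy - Rvv) e0
        e0 = np.array([1.0, 0.0])
        w = np.linalg.solve(Ryy, (Ryy - Rvv) @ e0)
        enhanced = w @ y

        snr = lambda est: 10 * np.log10(np.sum(speech**2) / np.sum((est - speech)**2))
        print("input SNR %.1f dB -> output SNR %.1f dB" % (snr(y[0]), snr(enhanced)))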

  19. A real time, FEM based optimal control algorithm and its implementation using parallel processing hardware (transistors) in a microprocessor environment

    NASA Technical Reports Server (NTRS)

    Patten, William Neff

    1989-01-01

    There is an evident need to discover a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and, or uncertain, model dynamics. The development of a generic controller design tool for tough-to-control systems is reported. The method utilizes a moving grid, time infinite element based solution of the necessary conditions that describe an optimal controller for a system. The technique produces a discrete feedback controller. Real time laboratory experiments are now being conducted to demonstrate the viability of the method. The algorithm that results is being implemented in a microprocessor environment. Critical computational tasks are accomplished using a low cost, on-board, multiprocessor (INMOS T800 Transputers) and parallel processing. Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.

  20. A real time, FEM based optimal control algorithm and its implementation using parallel processing hardware (transistors) in a microprocessor environment

    NASA Technical Reports Server (NTRS)

    Patten, William Neff

    1989-01-01

    There is an evident need to discover a means of establishing reliable, implementable controls for systems that are plagued by nonlinear and, or uncertain, model dynamics. The development of a generic controller design tool for tough-to-control systems is reported. The method utilizes a moving grid, time infinite element based solution of the necessary conditions that describe an optimal controller for a system. The technique produces a discrete feedback controller. Real time laboratory experiments are now being conducted to demonstrate the viability of the method. The algorithm that results is being implemented in a microprocessor environment. Critical computational tasks are accomplished using a low cost, on-board, multiprocessor (INMOS T800 Transputers) and parallel processing. Progress to date validates the methodology presented. Applications of the technique to the control of highly flexible robotic appendages are suggested.

  1. Optimizing Implementation of Obesity Prevention Programs: A Qualitative Investigation Within a Large-Scale Randomized Controlled Trial.

    PubMed

    Kozica, Samantha L; Teede, Helena J; Harrison, Cheryce L; Klein, Ruth; Lombard, Catherine B

    2016-01-01

    The prevalence of obesity in rural and remote areas is elevated in comparison to urban populations, highlighting the need for interventions targeting obesity prevention in these settings. Implementing evidence-based obesity prevention programs is challenging. This study aimed to investigate factors influencing the implementation of obesity prevention programs, including adoption, program delivery, community uptake, and continuation, specifically within rural settings. Nested within a large-scale randomized controlled trial, a qualitative exploratory approach was adopted, with purposive sampling techniques utilized, to recruit stakeholders from 41 small rural towns in Australia. In-depth semistructured interviews were conducted with clinical health professionals, health service managers, and local government employees. Open coding was completed independently by 2 investigators and thematic analysis undertaken. In-depth interviews revealed that obesity prevention programs were valued by the rural workforce. Program implementation is influenced by interrelated factors across: (1) contextual factors and (2) organizational capacity. Key recommendations to manage the challenges of implementing evidence-based programs focused on reducing program delivery costs, aided by the provision of a suite of implementation and evaluation resources. Informing the scale-up of future prevention programs, stakeholders highlighted the need to build local rural capacity through developing supportive university partnerships, generating local program ownership and promoting active feedback to all program partners. We demonstrate that the rural workforce places a high value on obesity prevention programs. Our results inform the future scale-up of obesity prevention programs, providing an improved understanding of strategies to optimize implementation of evidence-based prevention programs. © 2015 National Rural Health Association.

  2. Optimization of the choice of unmanned aerial vehicles used to monitor the implementation of selected construction projects

    NASA Astrophysics Data System (ADS)

    Skorupka, Dariusz; Duchaczek, Artur; Waniewska, Agnieszka; Kowacka, Magdalena

    2017-07-01

    Due to their properties, unmanned aerial vehicles offer a huge number of possible applications in construction engineering. The nature and extent of the construction works performed make the decision to purchase the right equipment significant for its further use in monitoring the implementation of these works. Technical factors, such as the accuracy and quality of the applied measurement instruments, are especially important when monitoring the realization of construction projects. The paper presents the optimization of the choice of unmanned aerial vehicles using the Bellinger method. The decision-making analysis takes into account criteria that are particularly crucial for the range of monitoring of ongoing construction works.

  3. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  4. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  5. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  6. 36 CFR 705.6 - Compilation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false Compilation. 705.6 Section 705.6 Parks, Forests, and Public Property LIBRARY OF CONGRESS REPRODUCTION, COMPILATION, AND DISTRIBUTION OF NEWS TRANSMISSIONS UNDER THE PROVISIONS OF THE AMERICAN TELEVISION AND RADIO ARCHIVES ACT §...

  7. 14 CFR § 1203.302 - Compilation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....302 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION INFORMATION SECURITY PROGRAM... unclassified may be classified if the compiled information reveals an additional association or relationship... individual items of information. As used in the Order, compilations mean an aggregate of pre-existing...

  8. Reformulating Constraints for Compilability and Efficiency

    NASA Technical Reports Server (NTRS)

    Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin

    1992-01-01

    KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hillclimbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas, and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues that arise out of the choice of a schema-based approach are: (1) compilability-- Can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? and (2) Efficiency-- If the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.

  9. Static Semantics and Compiler Error Recovery

    DTIC Science & Technology

    1985-06-01

    C., "Yacc: Yet Another Compiler-Compiler," Computer Science Technical Report 32, Bell Laboratories, Murray Hill, N. J., 1975 [KR78] Kernighan, Brian W. and Ritchie, Dennis M., The C Programming Language, Prentice-Hall, Englewood Cliffs, N. J., 1978 [Knu68] Knuth, Donald E., "Semantics of

  10. Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices

    PubMed Central

    Marin, Leandro; Piotr Pawlowski, Marcin; Jara, Antonio

    2015-01-01

    The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communications. The building blocks of the Internet of Things are devices manufactured by various producers and are designed to fulfil different needs. There would be no common hardware platform that could be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized elliptic curve cryptography algorithms that address the security issues in the heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol. PMID:26343677

  11. Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices.

    PubMed

    Marin, Leandro; Pawlowski, Marcin Piotr; Jara, Antonio

    2015-08-28

    The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communications. The building blocks of the Internet of Things are devices manufactured by various producers and are designed to fulfil different needs. There would be no common hardware platform that could be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized elliptic curve cryptography algorithms that address the security issues in the heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol.
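
    The paper's optimized NXP/Jennic and MSP430 implementations are not shown here; as a self-contained illustration of the elliptic-curve arithmetic underlying such a key negotiation, the sketch below performs affine point addition and double-and-add scalar multiplication on a tiny textbook curve and uses it for a Diffie-Hellman-style exchange. The 17-element field offers no security and is for illustration only.

        # Toy prime-field elliptic-curve arithmetic and a Diffie-Hellman-style exchange.
        P, A, B = 17, 2, 2                 # curve y^2 = x^3 + 2x + 2 over GF(17)
        G = (5, 1)                         # a point on the curve (textbook example)
        INF = None                         # point at infinity

        def point_add(p1, p2):
            if p1 is INF: return p2
            if p2 is INF: return p1
            (x1, y1), (x2, y2) = p1, p2
            if x1 == x2 and (y1 + y2) % P == 0:
                return INF
            if p1 == p2:
                lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
            else:
                lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
            x3 = (lam * lam - x1 - x2) % P
            return (x3, (lam * (x1 - x3) - y1) % P)

        def scalar_mult(k, point):
            # Double-and-add: the hot loop that optimized embedded kernels speed up.
            result, addend = INF, point
            while k:
                if k & 1:
                    result = point_add(result, addend)
                addend = point_add(addend, addend)
                k >>= 1
            return result

        a_priv, b_priv = 3, 7                        # the two devices' secrets
        a_pub, b_pub = scalar_mult(a_priv, G), scalar_mult(b_priv, G)
        shared_a = scalar_mult(a_priv, b_pub)        # a * (b * G)
        shared_b = scalar_mult(b_priv, a_pub)        # b * (a * G)
        assert shared_a == shared_b
        print("shared point:", shared_a)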

  12. Optimal parameters for clinical implementation of breast cancer patient setup using Varian DTS software.

    PubMed

    Ng, Sook Kien; Zygmanski, Piotr; Jeung, Andrew; Mostafavi, Hassan; Hesser, Juergen; Bellon, Jennifer R; Wong, Julia S; Lyatskaya, Yulia

    2012-05-10

    Digital tomosynthesis (DTS) was evaluated as an alternative to cone-beam computed tomography (CBCT) for patient setup. DTS is preferable when there are constraints with setup time, gantry-couch clearance, and imaging dose using CBCT. This study characterizes DTS data acquisition and registration parameters for the setup of breast cancer patients using nonclinical Varian DTS software. DTS images were reconstructed from CBCT projections acquired on phantoms and patients with surgical clips in the target volume. A shift-and-add algorithm was used for DTS volume reconstructions, while automated cross-correlation matches were performed within Varian DTS software. Triangulation on two short DTS arcs separated by various angular spread was done to improve 3D registration accuracy. Software performance was evaluated on two phantoms and ten breast cancer patients using the registration result as an accuracy measure; investigated parameters included arc lengths, arc orientations, angular separation between two arcs, reconstruction slice spacing, and number of arcs. The shifts determined from DTS-to-CT registration were compared to the shifts based on CBCT-to-CT registration. The difference between these shifts was used to evaluate the software accuracy. After findings were quantified, optimal parameters for the clinical use of DTS technique were determined. It was determined that at least two arcs were necessary for accurate 3D registration for patient setup. Registration accuracy of 2 mm was achieved when the reconstruction arc length was > 5° for clips with HU ≥ 1000; larger arc length (≥ 8°) was required for very low HU clips. An optimal arc separation was found to be ≥ 20° and optimal arc length was 10°. Registration accuracy did not depend on DTS slice spacing. DTS image reconstruction took 10-30 seconds and registration took less than 20 seconds. The performance of Varian DTS software was found suitable for the accurate setup of breast cancer patients

  13. Parallel Implementations of Gradient Based Iterative Algorithms for a Class of Discrete Optimal Control Problems.

    DTIC Science & Technology

    1987-02-28

    7, July 1975, pp. 701-717. [CHE78] Chen, S.C., Kuck, D.J. and Sameh, A.H., Practical Parallel Band Triangular System Solvers, ACM Trans. on...Polak, E., Computational Methods in Optimization. A Unified Approach, Academic Press, New York and London, 1971. [SAM77] Sameh, A.H., and Brent, R.P

  14. Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark

    SciTech Connect

    Williams, Samuel; Kalamkar, Dhiraj; Singh, Amik; Deshpande, Anand M.; Straalen, Brian Van; Smelyanskiy, Mikhail; Almgren, Ann; Dubey, Pradeep; Shalf, John; Oliker, Leonid

    2012-12-01

    Multigrid methods are widely used to accelerate the convergence of iterative solvers for linear systems used in a number of different application areas. In this report, we describe miniGMG, our compact geometric multigrid benchmark designed to proxy the multigrid solves found in AMR applications. We explore optimization techniques for geometric multigrid on existing and emerging multicore systems including the Opteron-based Cray XE6, Intel Sandy Bridge and Nehalem-based Infiniband clusters, as well as manycore-based architectures including NVIDIA's Fermi and Kepler GPUs and Intel's Knights Corner (KNC) co-processor. This report examines a variety of novel techniques including communication-aggregation, threaded wavefront-based DRAM communication-avoiding, dynamic threading decisions, SIMDization, and fusion of operators. We quantify performance through each phase of the V-cycle for both single-node and distributed-memory experiments and provide detailed analysis for each class of optimization. Results show our optimizations yield significant speedups across a variety of subdomain sizes while simultaneously demonstrating the potential of multi- and manycore processors to dramatically accelerate single-node performance. However, our analysis also indicates that improvements in networks and communication will be essential to reap the potential of manycore processors in large-scale multigrid calculations.
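
    As a reference for what a miniGMG-style benchmark exercises on each level (smooth, residual, restrict, prolong), here is a compact 1D geometric multigrid V-cycle for the Poisson equation with weighted-Jacobi smoothing; it is a toy analogue, not the benchmark code.

        # Minimal 1D geometric multigrid V-cycle for -u'' = f with zero Dirichlet BCs.
        import numpy as np

        def smooth(u, f, h, sweeps=3, omega=2/3):
            for _ in range(sweeps):                       # weighted Jacobi
                u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1] - 2*u[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
            return r

        def restrict(r):                                  # full weighting to the coarse grid
            return np.concatenate(([0], 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2], [0]))

        def prolong(e_coarse, n_fine):                    # linear interpolation to the fine grid
            e = np.zeros(n_fine)
            e[2:-1:2] = e_coarse[1:-1]
            e[1::2] = 0.5 * (e[0:-1:2] + e[2::2])
            return e

        def v_cycle(u, f, h):
            n = len(u)
            if n <= 3:                                    # coarsest grid: solve directly
                u[1] = f[1] * h * h / 2
                return u
            u = smooth(u, f, h)
            r_c = restrict(residual(u, f, h))
            e_c = v_cycle(np.zeros_like(r_c), r_c, 2*h)
            u += prolong(e_c, n)
            return smooth(u, f, h)

        n = 129                                           # 2^k + 1 points
        x = np.linspace(0, 1, n)
        f = np.pi**2 * np.sin(np.pi * x)                  # exact solution: sin(pi x)
        u = np.zeros(n)
        for _ in range(10):
            u = v_cycle(u, f, 1/(n-1))
        print("max error:", np.abs(u - np.sin(np.pi*x)).max())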

  15. Optimization of a hardware implementation for pulse coupled neural networks for image applications

    NASA Astrophysics Data System (ADS)

    Gimeno Sarciada, Jesús; Lamela Rivera, Horacio; Warde, Cardinal

    2010-04-01

    Pulse coupled neural networks (PCNNs) are a very useful tool for image processing and visual applications, since they have the advantage of being invariant to image changes such as rotation, scaling, or certain distortions. Among other characteristics, the PCNN changes a given image input into a temporal representation which can easily be analyzed later for pattern recognition. The structure of a PCNN, though, makes it necessary to determine all of its parameters very carefully in order for it to function optimally, so that the responses to the kinds of inputs it will be subjected to are clearly discriminated, allowing for easy and fast post-processing that yields useful results. This tweaking of the system is a taxing process. In this paper we analyze and compare two methods for modeling PCNNs. A purely mathematical model is programmed and a similar circuit-level model is also designed. Both are then used to determine the optimal values of the several parameters of a PCNN (gain, threshold, time constants for feeding and threshold, and linking strength), leading to an optimal design for image recognition. The results are compared for usefulness, accuracy and speed, as well as the performance and time requirements for fast and easy design, thus providing a tool for future ease of management of a PCNN for different tasks.
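
    A standard discrete PCNN iteration is sketched below; its constants (gains, decay rates, linking strength, kernel) are exactly the kind of parameters the two models in the abstract are used to tune, and the values shown are illustrative defaults rather than optimized ones.

        # Basic pulse coupled neural network (PCNN) iteration producing a temporal
        # signature (number of firing neurons per step) for an input image.
        import numpy as np
        from scipy.ndimage import convolve

        def pcnn_signature(img, steps=30, alpha_f=0.1, alpha_l=1.0, alpha_t=0.5,
                           v_f=0.5, v_l=0.2, v_t=20.0, beta=0.1):
            w = np.array([[0.5, 1.0, 0.5],
                          [1.0, 0.0, 1.0],
                          [0.5, 1.0, 0.5]])        # linking/feeding kernel (illustrative)
            F = np.zeros_like(img, dtype=float)    # feeding input
            L = np.zeros_like(F)                   # linking input
            Y = np.zeros_like(F)                   # pulse output
            T = np.ones_like(F)                    # dynamic threshold
            signature = []
            for _ in range(steps):
                conv = convolve(Y, w, mode="constant")
                F = np.exp(-alpha_f) * F + v_f * conv + img
                L = np.exp(-alpha_l) * L + v_l * conv
                U = F * (1.0 + beta * L)           # internal activity (modulation)
                Y = (U > T).astype(float)          # neurons fire when activity exceeds threshold
                T = np.exp(-alpha_t) * T + v_t * Y # firing raises the threshold sharply
                signature.append(int(Y.sum()))
            return signature

        img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # toy input pattern
        print(pcnn_signature(img))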

  16. Lattice dynamical wavelet neural networks implemented using particle swarm optimization for spatio-temporal system identification.

    PubMed

    Wei, Hua-Liang; Billings, Stephen A; Zhao, Yifan; Guo, Lingzhong

    2009-01-01

    In this brief, by combining an efficient wavelet representation with a coupled map lattice model, a new family of adaptive wavelet neural networks, called lattice dynamical wavelet neural networks (LDWNNs), is introduced for spatio-temporal system identification. A new orthogonal projection pursuit (OPP) method, coupled with a particle swarm optimization (PSO) algorithm, is proposed for augmenting the proposed network. A novel two-stage hybrid training scheme is developed for constructing a parsimonious network model. In the first stage, by applying the OPP algorithm, significant wavelet neurons are adaptively and successively recruited into the network, where adjustable parameters of the associated wavelet neurons are optimized using a particle swarm optimizer. The resultant network model, obtained in the first stage, however, may be redundant. In the second stage, an orthogonal least squares algorithm is then applied to refine and improve the initially trained network by removing redundant wavelet neurons from the network. An example for a real spatio-temporal system identification problem is presented to demonstrate the performance of the proposed new modeling framework.

  17. Design, decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-06-17

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.
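
    A loose reading of the filter-then-select procedure, specialized to binary (GF(2)) distance-4 SECDED codes in which any three check-matrix columns must be linearly independent, might look like the Python sketch below; the candidate ordering and the odd-weight pre-filter are illustrative choices for this sketch, not the specific filters claimed in the patent.

        from itertools import combinations

        def violates(candidate, chosen, d=4):
            """True if adding 'candidate' makes some d-1 columns linearly dependent over GF(2)."""
            if candidate == 0:
                return True
            for k in range(1, d - 1):                      # XOR of k already-chosen columns
                for combo in combinations(chosen, k):
                    acc = candidate
                    for c in combo:
                        acc ^= c
                    if acc == 0:                           # candidate lies in their span
                        return True
            return False

        def build_check_columns(r, n, d=4):
            """Greedily populate n columns of an r-row check matrix for a distance-d code."""
            candidates = [v for v in range(1, 1 << r)]
            # Illustrative pre-filter: odd-weight columns can never XOR in pairs to another
            # odd-weight column, which helps satisfy the distance-4 requirement.
            candidates = [v for v in candidates if bin(v).count("1") % 2 == 1]
            chosen = []
            for v in candidates:
                if len(chosen) == n:
                    break
                if not violates(v, chosen, d):
                    chosen.append(v)
            return chosen if len(chosen) == n else None

        cols = build_check_columns(r=8, n=64, d=4)         # e.g. a (72,64) SECDED-style layout
        print(len(cols), [format(c, "08b") for c in cols[:4]])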

  18. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H Lee; Ganti, Anand; Resnick, David R

    2014-11-18

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  19. Decoding and optimized implementation of SECDED codes over GF(q)

    DOEpatents

    Ward, H. Lee; Ganti, Anand; Resnick, David R

    2013-10-22

    A plurality of columns for a check matrix that implements a distance d linear error correcting code are populated by providing a set of vectors from which to populate the columns, and applying to the set of vectors a filter operation that reduces the set by eliminating therefrom all vectors that would, if used to populate the columns, prevent the check matrix from satisfying a column-wise linear independence requirement associated with check matrices of distance d linear codes. One of the vectors from the reduced set may then be selected to populate one of the columns. The filtering and selecting repeats iteratively until either all of the columns are populated or the number of currently unpopulated columns exceeds the number of vectors in the reduced set. Columns for the check matrix may be processed to reduce the amount of logic needed to implement the check matrix in circuit logic.

  20. Optimization of Series Expressions. Part 2. Overview of the Theory and Implementation

    DTIC Science & Technology

    1989-12-01

    (The abstract text in this record is garbled PDF extraction. What survives are fragments of the report's Common Lisp series examples, such as calls to collect-sum, choose-if, catenate, make-series, collect-first, subseries, choose, positions and map-fn, taken from Figure 4.1, "Illustration of the Lisp implementation of unoptimized series," and the section "A Common Lisp Implementation.")

  1. Compilation for critically constrained knowledge bases

    SciTech Connect

    Schrag, R.

    1996-12-31

    We show that many "critically constrained" Random 3SAT knowledge bases (KBs) can be compiled into disjunctive normal form easily by using a variant of the "Davis-Putnam" proof procedure. From these compiled KBs we can answer all queries about entailment of conjunctive normal formulas, also easily - compared to a "brute-force" approach to approximate knowledge compilation into unit clauses for the same KBs. We exploit this fact to develop an aggressive hybrid approach which attempts to compile a KB exactly until a given resource limit is reached, then falls back to approximate compilation into unit clauses. The resulting approach handles all of the critically constrained Random 3SAT KBs with average savings of an order of magnitude over the brute-force approach.
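
    To make the idea of compiling a CNF knowledge base into disjunctive normal form with a Davis-Putnam-style procedure concrete, here is a minimal, hedged sketch: a splitting search that collects the satisfying (partial) assignments of a small CNF formula as DNF terms. It illustrates only the general idea, not the specific variant or the unit-clause approximation fallback evaluated in the paper.

        def simplify(clauses, lit):
            """Assign literal 'lit' true: drop satisfied clauses, remove the negated literal."""
            out = []
            for clause in clauses:
                if lit in clause:
                    continue
                reduced = [l for l in clause if l != -lit]
                if not reduced:
                    return None                  # empty clause: contradiction in this branch
                out.append(reduced)
            return out

        def compile_to_dnf(clauses, assignment=()):
            """Davis-Putnam-style splitting; returns DNF terms covering all models."""
            if clauses is None:
                return []
            if not clauses:
                return [assignment]              # all clauses satisfied: one DNF disjunct
            for clause in clauses:               # unit propagation before splitting
                if len(clause) == 1:
                    lit = clause[0]
                    return compile_to_dnf(simplify(clauses, lit), assignment + (lit,))
            var = abs(clauses[0][0])             # split on the first unassigned variable
            return (compile_to_dnf(simplify(clauses, var), assignment + (var,)) +
                    compile_to_dnf(simplify(clauses, -var), assignment + (-var,)))

        # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
        kb = [[1, 2], [-1, 3], [-2, -3]]
        dnf = compile_to_dnf(kb)
        print(dnf)                               # each tuple of literals is one disjunct of the DNF
        # Entailment queries can then be answered by checking the query against every DNF term.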

  2. The RHNumtS compilation: Features and bioinformatics approaches to locate and quantify Human NumtS

    PubMed Central

    Lascaro, Daniela; Castellana, Stefano; Gasparre, Giuseppe; Romeo, Giovanni; Saccone, Cecilia; Attimonelli, Marcella

    2008-01-01

    Background: To a greater or lesser extent, eukaryotic nuclear genomes contain fragments of their mitochondrial genome counterpart, deriving from the random insertion of damaged mtDNA fragments. NumtS (Nuclear mt Sequences) are not equally abundant in all species, and are redundant and polymorphic in terms of copy number. In population and clinical genetics, it is important to have a complete overview of NumtS quantity and location. Searching PubMed for NumtS or mitochondrial pseudo-genes yields hundreds of papers reporting Human NumtS compilations produced by in silico or wet-lab approaches. A comparison of published compilations clearly shows significant discrepancies among data, due both to unwise application of bioinformatics methods and to a not yet correctly assembled nuclear genome. To optimize quantification and location of NumtS, we produced a consensus compilation of Human NumtS by applying various bioinformatics approaches. Results: Location and quantification of NumtS may be achieved by applying database similarity searching methods: we have applied various methods such as Blastn, MegaBlast and BLAT, changing both parameters and database; the results were compared, further analysed and checked against the already published compilations, thus producing the Reference Human Numt Sequences (RHNumtS) compilation. The resulting NumtS total 190. Conclusion: The RHNumtS compilation represents a highly reliable reference basis, which may allow designing a lab protocol to test the actual existence of each NumtS. Here we report preliminary results based on PCR amplification and sequencing on 41 NumtS selected from RHNumtS among those with lower score. In parallel, we are currently designing the RHNumtS database structure for implementation in the HmtDB resource. In the future, the same database will host NumtS compilations from other organisms, but these will be generated only when the nuclear genome of a specific organism has reached a high-quality level of assembly.

  3. Optimization of ion exchange sigmoidal gradients using hybrid models: Implementation of quality by design in analytical method development.

    PubMed

    Joshi, Varsha S; Kumar, Vijesh; Rathore, Anurag S

    2017-03-31

    Thorough product understanding is one of the basic tenets for successful implementation of Quality by Design (QbD). Complexity encountered in analytical characterization of biotech therapeutics such as monoclonal antibodies (mAbs) requires novel, simpler, and generic approaches towards product characterization. This paper presents a methodology for implementation of QbD for analytical method development. Optimization of an analytical cation exchange high performance liquid chromatography (CEX-HPLC) method utilizing a sigmoidal gradient has been performed using a hybrid mechanistic model built on design of experiments (DOE) studies. Since sigmoidal gradients are much more complex than the traditional linear gradients and have a large number of input parameters (five) for optimization, the number of DOE experiments required for a full factorial design to estimate all the main effects as well as the interactions would be too large (243). To address this problem, a mechanistic model was used to simulate the analytical separation for the DOE, and the results were then used to build an empirical model. The mechanistic model used in this work is a more versatile general rate model in combination with modified Langmuir binding kinetics. The modified Langmuir model is capable of modelling the impact of nonlinear changes in the concentration of the salt modifier. Further, to obtain the input and output profiles of the mAb and salts/buffers, the HPLC system, consisting of the mixer, detectors, and tubing, was modelled as a sequence of dispersed plug flow reactors and continuous stirred tank reactors (CSTRs). The experimental work was limited to calibration of the HPLC system and finding the model parameters through three linear gradients. To simplify the optimization process, only three peaks in the centre of the profile (the main product and the adjacent acidic and basic variants) were chosen to determine the final operating condition. The regression model made from the DoE data

  4. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as a gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter's coefficients is also proposed; we focus on the implementation and on the enhancement of the filter's parameters by taking the neighborhood of the current voxel into account more accurately. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature.
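
    For context on what the filter's coefficients are, the sketch below is a plain (unoptimized, CPU-only) non-local means filter for a 2-D image; it illustrates only the patch-similarity weighting, not the proposed coefficient enhancement or the shared-memory strategy described in the paper.

        import numpy as np

        def nlm_denoise(img, patch=3, search=7, h=0.1):
            """Plain non-local means: each pixel becomes a weighted average of pixels whose
            surrounding patches look similar; h controls the decay of the weights."""
            pr, sr = patch // 2, search // 2
            padded = np.pad(img, pr + sr, mode='reflect')
            out = np.zeros_like(img)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    ci, cj = i + pr + sr, j + pr + sr
                    ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
                    weights, values = [], []
                    for di in range(-sr, sr + 1):
                        for dj in range(-sr, sr + 1):
                            ni, nj = ci + di, cj + dj
                            cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                            d2 = np.mean((ref - cand) ** 2)      # patch similarity
                            weights.append(np.exp(-d2 / (h * h)))
                            values.append(padded[ni, nj])
                    out[i, j] = np.dot(weights, values) / np.sum(weights)
            return out

        noisy = np.clip(np.eye(32) * 0.8 + 0.05 * np.random.randn(32, 32), 0.0, 1.0)
        print(nlm_denoise(noisy, patch=3, search=5, h=0.15).shape)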

  5. Optimal FPGA implementation of CL multiwavelets architecture for signal denoising application

    NASA Astrophysics Data System (ADS)

    Mohan Kumar, B.; Vidhya Lavanya, R.; Sumesh, E. P.

    2013-03-01

    The wavelet transform is considered one of the most efficient transforms of this decade for real-time signal processing. Due to implementation constraints, scalar wavelets cannot simultaneously possess properties such as compact support, regularity, orthogonality and symmetry, which are desirable qualities for providing a good signal-to-noise ratio (SNR) in signal denoising. This has led to a new class of wavelets called 'multiwavelets', which possess more than one scaling and wavelet filter. The architecture implementation of multiwavelets is an emerging area of research. In real time, the signals are in scalar form, which demands that the processing architecture be scalar. However, the conventional Donovan-Geronimo-Hardin-Massopust (DGHM) and Chui-Lian (CL) multiwavelets are vector-valued and are also unbalanced. In this article, the vectored multiwavelet transforms are converted into a scalar form and the resulting architecture is implemented in an FPGA (Field Programmable Gate Array) for a signal denoising application. The architecture is compared with the DGHM multiwavelet architecture in terms of several objective and performance measures. The CL multiwavelet architecture is further optimised for best performance by using DSP48Es. The results show that the CL multiwavelet architecture is better suited to the signal denoising application.

  6. Optimized OpenCL implementation of the Elastodynamic Finite Integration Technique for viscoelastic media

    NASA Astrophysics Data System (ADS)

    Molero-Armenta, M.; Iturrarán-Viveros, Ursula; Aparicio, S.; Hernández, M. G.

    2014-10-01

    Development of parallel codes that are both scalable and portable across different processor architectures is a challenging task. To overcome this challenge we investigate the acceleration of the Elastodynamic Finite Integration Technique (EFIT) to model 2-D wave propagation in viscoelastic media by using modern parallel computing devices (PCDs), such as multi-core CPUs (central processing units) and GPUs (graphics processing units). For that purpose we choose the industry open standard Open Computing Language (OpenCL) and an open-source toolkit called PyOpenCL. The implementation is platform independent and can be used on AMD or NVIDIA GPUs as well as classical multi-core CPUs. The code is based on the Kelvin-Voigt mechanical model, which has the advantage of not requiring additional field variables. OpenCL performance can, in principle, be improved by using local memory to hide global memory access latency. Our main contribution is the implementation of local memory and an analysis of the performance of the local versus the global memory version using eight different computing devices (including Kepler, one of the fastest and most efficient high performance computing technologies) with various operating systems. The full implementation of the code is included.

  7. Expected treatment dose construction and adaptive inverse planning optimization: Implementation for offline head and neck cancer adaptive radiotherapy

    SciTech Connect

    Yan Di; Liang Jian

    2013-02-15

    Adaptive treatment modification can be implemented by including the expected treatment dose in the adaptive inverse planning optimization. The retrospective evaluation results demonstrate that, utilizing the weekly adaptive inverse planning optimization, the dose distribution of head and neck cancer treatment can be largely improved.

  8. Optimized FPGA Implementation of Multi-Rate FIR Filters Through Thread Decomposition

    NASA Technical Reports Server (NTRS)

    Kobayashi, Kayla N.; He, Yutao; Zheng, Jason X.

    2011-01-01

    Multi-rate finite impulse response (MRFIR) filters are among the essential signal-processing components in spaceborne instruments where finite impulse response filters are often used to minimize nonlinear group delay and finite precision effects. Cascaded (multistage) designs of MRFIR filters are further used for large rate change ratio in order to lower the required throughput, while simultaneously achieving comparable or better performance than single-stage designs. Traditional representation and implementation of MRFIR employ polyphase decomposition of the original filter structure, whose main purpose is to compute only the needed output at the lowest possible sampling rate. In this innovation, an alternative representation and implementation technique called TD-MRFIR (Thread Decomposition MRFIR) is presented. The basic idea is to decompose MRFIR into output computational threads, in contrast to a structural decomposition of the original filter as done in the polyphase decomposition. A naive implementation of a decimation filter consisting of a full FIR followed by a downsampling stage is very inefficient, as most of the computations performed by the FIR stage are discarded through downsampling. In fact, only 1/M of the total computations are useful (M being the decimation factor). Polyphase decomposition provides an alternative view of decimation filters, where the downsampling occurs before the FIR stage, and the outputs are viewed as the sum of M sub-filters of length N/M taps. Although this approach leads to more efficient filter designs, in general the implementation is not straightforward if the number of multipliers needs to be minimized. In TD-MRFIR, each thread represents an instance of the finite convolution required to produce a single output of the MRFIR. The filter is thus viewed as a finite collection of concurrent threads. Each of the threads completes when a convolution result (filter output value) is computed, and activated when the first
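
    The polyphase identity described above is easy to check numerically; the hedged Python sketch below compares a naive filter-then-downsample decimator with an equivalent polyphase form. It illustrates the polyphase view only, not the thread-decomposition FPGA mapping proposed in this innovation.

        import numpy as np

        def decimate_naive(x, taps, M):
            """Filter at the full rate, then throw away M-1 of every M outputs."""
            full = np.convolve(x, taps, mode='full')[:len(x)]
            return full[::M]

        def decimate_polyphase(x, taps, M):
            """Polyphase form: M short sub-filters running at the low output rate."""
            n_out = (len(x) + M - 1) // M
            pad = np.concatenate([x, np.zeros(n_out * M - len(x))])
            y = np.zeros(n_out)
            for k in range(M):
                sub = taps[k::M]                       # k-th polyphase component h[k], h[k+M], ...
                if k == 0:
                    xk = pad[0::M]                     # x[0], x[M], ...
                else:
                    # branch k needs x[nM - k], i.e. x[(n-1)M + (M-k)] with a one-sample delay
                    xk = np.concatenate([[0.0], pad[M - k::M]])
                y += np.convolve(xk, sub)[:n_out]
            return y

        rng = np.random.default_rng(1)
        x = rng.standard_normal(64)
        taps = np.hanning(16)                          # illustrative low-pass prototype
        print(np.allclose(decimate_naive(x, taps, 4), decimate_polyphase(x, taps, 4)))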

  9. Using psychological theory to inform methods to optimize the implementation of a hand hygiene intervention.

    PubMed

    Boscart, Veronique M; Fernie, Geoff R; Lee, Jae H; Jaglal, Susan B

    2012-08-28

    Careful hand hygiene (HH) is the single most important factor in preventing the transmission of infections to patients, but compliance is difficult to achieve and maintain. A lack of understanding of the processes involved in changing staff behaviour may contribute to the failure to achieve success. The purpose of this study was to identify nurses' and administrators' perceived barriers and facilitators to current HH practices and the implementation of a new electronic monitoring technology for HH. Ten key informant interviews (three administrators and seven nurses) were conducted to explore barriers and facilitators related to HH and the impact of the new technology on outcomes. The semi-structured interviews were based on the Theoretical Domains Framework by Michie et al. and conducted prior to intervention implementation. Data were explored using an inductive qualitative analysis approach. Data between administrators and nurses were compared. In 9 of the 12 domains, nurses and administrators differed in their responses. Administrators believed that nurses have insufficient knowledge and skills to perform HH, whereas the nurses were confident they had the required knowledge and skills. Nurses focused on immediate consequences, whereas administrators highlighted long-term outcomes of the system. Nurses concentrated foremost on their personal safety and their families' safety as a source of motivation to perform HH, whereas administrators identified professional commitment, incentives, and goal setting. Administrators stated that the staff do not have the decision processes in place to judge whether HH is necessary or not. They also highlighted the positive aspects of teams as a social influence, whereas nurses were not interested in group conformity or being compared to others. Nurses described the importance of individual feedback and self-monitoring in order to increase their performance, whereas administrators reported different views. This study highlights the

  10. Design and implementation of a delay-optimized universal programmable routing circuit for FPGAs

    NASA Astrophysics Data System (ADS)

    Fang, Wu; Huowen, Zhang; Jinmei, Lai; Yuan, Wang; Liguang, Chen; Lei, Duan; Jiarong, Tong

    2009-06-01

    This paper presents a universal field programmable gate array (FPGA) programmable routing circuit, focusing primarily on delay optimization. While preserving the flexibility and routability of the routing resources, the number of programmable interconnect points (PIPs) is reduced, and a multiplexer (MUX) plus BUFFER structure is adopted as the programmable switch. Also, the method of offset lines and the method of complementary hanged end-lines are applied to the TILE routing circuit and the I/O routing circuit, respectively. All of the above features ensure that the whole FPGA chip is highly repeatable, and that the signal delay is uniform and predictable over the entire chip. Meanwhile, the BUFFER driver is optimized to decrease the signal delay by up to 5%. The proposed routing circuit is applied to the Fudan programmable device (FDP) FPGA, which has been taped out with an SMIC 0.18-μm logic 1P6M process. The test result shows that the programmable routing resource works correctly, and that the signal delay over the chip is highly uniform and predictable.

  11. Toward a Fundamental Theory of Optimal Feature Selection: Part II-Implementation and Computational Complexity.

    PubMed

    Morgera, S D

    1987-01-01

    Certain algorithms and their computational complexity are examined for use in a VLSI implementation of the real-time pattern classifier described in Part I of this work. The most computationally intensive processing is found in the classifier training mode wherein subsets of the largest and smallest eigenvalues and associated eigenvectors of the input data covariance pair must be computed. It is shown that if the matrix of interest is centrosymmetric and the method for eigensystem decomposition is operator-based, the problem architecture assumes a parallel form. Such a matrix structure is found in a wide variety of pattern recognition and speech and signal processing applications. Each of the parallel channels requires only two specialized matrix-arithmetic modules. These modules may be implemented as linear arrays of processing elements having at most O(N) elements where N is the input data vector dimension. The computations may be done in O(N) time steps. This compares favorably to O(N^3) operations for a conventional, or general, rotation-based eigensystem solver and even the O(2N^2) operations using an approach incorporating the fast Levinson algorithm for a matrix of Toeplitz structure, since the underlying matrix in this work does not possess a Toeplitz structure. Some examples are provided on the convergence of a conventional iterative approach and a novel two-stage iterative method for eigensystem decomposition.

  12. Direct Methods for Predicting Movement Biomechanics Based Upon Optimal Control Theory with Implementation in OpenSim.

    PubMed

    Porsa, Sina; Lin, Yi-Chung; Pandy, Marcus G

    2016-08-01

    The aim of this study was to compare the computational performances of two direct methods for solving large-scale, nonlinear, optimal control problems in human movement. Direct shooting and direct collocation were implemented on an 8-segment, 48-muscle model of the body (24 muscles on each side) to compute the optimal control solution for maximum-height jumping. Both algorithms were executed on a freely-available musculoskeletal modeling platform called OpenSim. Direct collocation converged to essentially the same optimal solution up to 249 times faster than direct shooting when the same initial guess was assumed (3.4 h of CPU time for direct collocation vs. 35.3 days for direct shooting). The model predictions were in good agreement with the time histories of joint angles, ground reaction forces and muscle activation patterns measured for subjects jumping to their maximum achievable heights. Both methods converged to essentially the same solution when started from the same initial guess, but computation time was sensitive to the initial guess assumed. Direct collocation demonstrates exceptional computational performance and is well suited to performing predictive simulations of movement using large-scale musculoskeletal models.

  13. Optimization of pre-emptive isolations in a polyvalent ICU through implementation of an intervention strategy.

    PubMed

    Álvarez Lerma, F; Granado Solano, J; García Sanz, A; López Martínez, C; Herrera Sebastián, R; Salvat Cobeta, C; Rey Pérez, A; Balaguer Blasco, R M; Plasencia, V; Horcajada, J P

    2015-12-01

    Pre-emptive isolation refers to the application of contact precaution measures in patients with strongly suspected colonization by multiresistant bacteria. To assess the impact of an intervention program involving the implementation of a consensus-based protocol of pre-emptive isolation (CPPI) on admission to a polyvalent ICU of a general hospital. A comparative analysis of 2 patient cohorts was made: a historical cohort including patients in whom pre-emptive isolation was established according to physician criterion prior to starting CPPI (from January 2010 to February 2011), and a prospective cohort including patients in whom CPPI was implemented (from March to November 2011). CPPI included the identification and diffusion of pre-emptive isolation criteria, the definition of sampling methodology, the evaluation of results, and the development of criteria for discontinuation of pre-emptive isolation. Pre-emptive isolation was indicated by the medical staff, and follow-up was conducted by the nursing staff. Pre-emptive isolation was defined as "adequate" when at least one multiresistant bacterium was identified in any of the samples. Comparison of data between the 2 periods was made with the chi-square test for categorical variables and the Student t-test for quantitative variables. Statistical significance was set at P<.05. Among the 1,740 patients admitted to the ICU (1,055 during the first period and 685 during the second period), pre-emptive isolation was indicated in 199 (11.4%); 111 (10.5%) of these subjects corresponded to the historical cohort (control group) and 88 (12.8%) to the period after the implementation of CPPI (intervention group). No differences were found in age, APACHE II score or patient characteristics between the 2 periods. The implementation of CPPI was related to decreases in non-indicated pre-emptive isolations (29.7 vs. 6.8%, P<.001), time of requesting surveillance cultures (1.56 vs. 0.37 days, P<.001), and days of duration of

  14. Optimization and Implementation of Scaling-Free CORDIC-Based Direct Digital Frequency Synthesizer for Body Care Area Network Systems

    PubMed Central

    Juang, Ying-Shen; Ko, Lu-Ting; Chen, Jwu-E.; Sung, Tze-Yun; Hsin, Hsi-Chin

    2012-01-01

    Coordinate rotation digital computer (CORDIC) is an efficient algorithm for computations of trigonometric functions. Scaling-free-CORDIC is one of the famous CORDIC implementations with advantages of speed and area. In this paper, a novel direct digital frequency synthesizer (DDFS) based on scaling-free CORDIC is presented. The proposed multiplier-less architecture with small ROM and pipeline data path has advantages of high data rate, high precision, high performance, and less hardware cost. The design procedure with performance and hardware analysis for optimization has also been given. It is verified by Matlab simulations and then implemented with field programmable gate array (FPGA) by Verilog. The spurious-free dynamic range (SFDR) is over 86.85 dBc, and the signal-to-noise ratio (SNR) is more than 81.12 dB. The scaling-free CORDIC-based architecture is suitable for VLSI implementations for the DDFS applications in terms of hardware cost, power consumption, SNR, and SFDR. The proposed DDFS is very suitable for medical instruments and body care area network systems. PMID:23251230

  15. Optimization and implementation of scaling-free CORDIC-based direct digital frequency synthesizer for body care area network systems.

    PubMed

    Juang, Ying-Shen; Ko, Lu-Ting; Chen, Jwu-E; Sung, Tze-Yun; Hsin, Hsi-Chin

    2012-01-01

    Coordinate rotation digital computer (CORDIC) is an efficient algorithm for computations of trigonometric functions. Scaling-free-CORDIC is one of the famous CORDIC implementations with advantages of speed and area. In this paper, a novel direct digital frequency synthesizer (DDFS) based on scaling-free CORDIC is presented. The proposed multiplier-less architecture with small ROM and pipeline data path has advantages of high data rate, high precision, high performance, and less hardware cost. The design procedure with performance and hardware analysis for optimization has also been given. It is verified by Matlab simulations and then implemented with field programmable gate array (FPGA) by Verilog. The spurious-free dynamic range (SFDR) is over 86.85 dBc, and the signal-to-noise ratio (SNR) is more than 81.12 dB. The scaling-free CORDIC-based architecture is suitable for VLSI implementations for the DDFS applications in terms of hardware cost, power consumption, SNR, and SFDR. The proposed DDFS is very suitable for medical instruments and body care area network systems.
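
    For readers unfamiliar with CORDIC-based DDFS, the sketch below pairs a phase accumulator with a conventional (gain-compensated) rotation-mode CORDIC to generate sine samples in Python; it is a behavioral illustration only, not the scaling-free, multiplier-less pipeline proposed in the paper, and the word lengths are arbitrary choices.

        import math

        def cordic_sin_cos(angle, iterations=16):
            """Rotation-mode CORDIC: rotate (1/K, 0) by 'angle' (|angle| <= pi/2) with shifts and adds."""
            K = 1.0
            for i in range(iterations):
                K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))     # pre-computed gain compensation
            x, y, z = K, 0.0, angle
            for i in range(iterations):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** (-i), y + d * x * 2.0 ** (-i)
                z -= d * math.atan(2.0 ** (-i))
            return y, x                                          # (sin, cos)

        def ddfs(freq_word, n_samples, phase_bits=32):
            """Direct digital frequency synthesis: phase accumulator + CORDIC phase-to-sine mapping."""
            acc, out = 0, []
            for _ in range(n_samples):
                acc = (acc + freq_word) % (1 << phase_bits)
                phase = 2.0 * math.pi * acc / (1 << phase_bits)  # map accumulator to [0, 2*pi)
                # Fold the phase into [-pi/2, pi/2], where CORDIC converges.
                if phase > 1.5 * math.pi:
                    s, _ = cordic_sin_cos(phase - 2.0 * math.pi)
                elif phase > 0.5 * math.pi:
                    s, _ = cordic_sin_cos(math.pi - phase)
                else:
                    s, _ = cordic_sin_cos(phase)
                out.append(s)
            return out

        samples = ddfs(freq_word=1 << 26, n_samples=8)           # output frequency = fclk / 64
        print([round(s, 4) for s in samples])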

  16. Regulatory and technical reports compilation for 1980

    SciTech Connect

    Oliu, W.E.; McKenzi, L.

    1981-04-01

    This compilation lists formal regulatory and technical reports and conference proceedings issued in 1980 by the US Nuclear Regulatory Commission. The compilation is divided into four major sections. The first major section consists of a sequential listing of all NRC reports in report-number order. The second major section of this compilation consists of a key-word index to report titles. The third major section contains an alphabetically arranged listing of contractor report numbers cross-referenced to their corresponding NRC report numbers. Finally, the fourth section is an errata supplement.

  17. Compiling Planning into Scheduling: A Sketch

    NASA Technical Reports Server (NTRS)

    Bedrax-Weiss, Tania; Crawford, James M.; Smith, David E.

    2004-01-01

    Although there are many approaches for compiling a planning problem into a static CSP or a scheduling problem, current approaches essentially preserve the structure of the planning problem in the encoding. In this paper, we present a fundamentally different encoding that more accurately resembles a scheduling problem. We sketch the approach and argue, based on an example, that it is possible to automate the generation of such an encoding for problems with certain properties and thus produce a compiler of planning into scheduling problems. Furthermore, we argue that many NASA problems exhibit these properties and that such a compiler would provide benefits to both theory and practice.

  18. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and reconfigurable computing hardware (RC) technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.

  19. Optimized hierarchical equations of motion theory for Drude dissipation and efficient implementation to nonlinear spectroscopies.

    PubMed

    Ding, Jin-Jin; Xu, Jian; Hu, Jie; Xu, Rui-Xue; Yan, YiJing

    2011-10-28

    Hierarchical equations of motion theory for Drude dissipation is optimized, with a convenient convergence criterion proposed in advance of numerical propagations. The theoretical construction is on the basis of a Padé spectrum decomposition that has been qualified to be the best sum-over-poles scheme for quantum distribution function. The resulting hierarchical dynamics under the a priori convergence criterion are exemplified with a benchmark spin-boson system, and also the transient absorption and related coherent two-dimensional spectroscopy of a model exciton dimer system. We combine the present theory with several advanced techniques such as the block hierarchical dynamics in mixed Heisenberg-Schrödinger picture and the on-the-fly filtering algorithm for the efficient evaluation of third-order optical response functions.

  20. Optimization of the Coupled Cluster Implementation in NWChem on Petascale Parallel Architectures

    SciTech Connect

    Anisimov, Victor; Bauer, Gregory H.; Chadalavada, Kalyana; Olson, Ryan M.; Glenski, Joseph W.; Kramer, William T.; Apra, Edoardo; Kowalski, Karol

    2014-09-04

    The coupled cluster singles and doubles (CCSD) algorithm has been optimized in the NWChem software package. This modification alleviated the communication bottleneck and provided a 2- to 5-fold speedup in the CCSD iteration time, depending on the problem size and available memory. Sustained 0.60 petaflop/sec performance on a CCSD(T) calculation has been obtained on NCSA Blue Waters. This figure includes all stages of the calculation from initialization to termination: the iterative computation of single and double excitations and the perturbative accounting for triple excitations. In the perturbative-triples section alone, the computation maintained a 1.18 petaflop/sec performance level. CCSD computations have been performed on Guanine-Cytosine deoxydinucleotide monophosphate (GC-dDMP) to probe the conformational energy difference of a DNA single strand in the A- and B-conformations. The computation revealed a significant discrepancy between CCSD and classical force fields in the prediction of the relative energy of the A- and B-conformations of GC-dDMP.

  1. Implementation and optimization of sub-pixel motion estimation on BWDSP platform

    NASA Astrophysics Data System (ADS)

    Jia, Shangzhu; Lang, Wenhui; Zeng, Feiyang; Liu, Yufu

    2017-08-01

    The sub-pixel motion estimation algorithm is a key technology in the inter-frame prediction stage of video coding and has an important influence on video coding performance. In the latest video coding standard, H.265/HEVC, DCT-based interpolation filters are used for sub-pixel motion estimation, but they have very high computational complexity. In order to ensure the real-time performance of hardware coding, we exploit the characteristics of the BWDSP architecture and use code-level optimization techniques to implement the sub-pixel motion estimation algorithm. Experimental results demonstrate that, in the BWDSP simulation environment, the proposed method significantly decreases the required clock cycles and thus improves the performance of the encoder.
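
    For context on what the DCT-based interpolation filters compute, here is a small Python sketch that applies the widely cited 8-tap HEVC luma half-sample filter to one row of pixels; it is a simplified model for illustration (the normative filtering, intermediate bit depths and clipping are defined in the H.265/HEVC specification), and it shows none of the BWDSP code-level optimizations. In an encoder, such interpolated planes feed the sub-pixel SAD/SATD search around the best integer-pel motion vector.

        import numpy as np

        # Widely cited HEVC luma half-sample DCT-IF taps (normalized by 64).
        HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

        def half_pel_row(samples):
            """Interpolate the half-sample positions between integer samples of one row (8-bit)."""
            padded = np.pad(samples.astype(np.int32), 4, mode='edge')
            out = np.empty(len(samples) - 1, dtype=np.int32)
            for i in range(len(out)):
                window = padded[i + 1:i + 9]              # 8 integer samples around position i + 0.5
                out[i] = np.clip(((window * HALF_PEL_TAPS).sum() + 32) >> 6, 0, 255)
            return out

        row = np.array([100, 102, 108, 120, 140, 150, 152, 150, 149], dtype=np.int32)
        print(half_pel_row(row))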

  2. Optimized FPGA Implementation of the Thyroid Hormone Secretion Mechanism Using CAD Tools.

    PubMed

    Alghazo, Jaafar M

    2017-02-01

    The goal of this paper is to implement the secretion mechanism of the Thyroid Hormone (TH), described by bio-mathematical differential equations (DEs), on an FPGA chip. A Hardware Description Language (HDL) is used to develop a behavioral model of the mechanism derived from the DEs. The Thyroid Hormone secretion mechanism is simulated with the interaction of the related stimulating and inhibiting hormones. The design is synthesized with the aid of CAD tools and downloaded onto a Field Programmable Gate Array (FPGA) chip. In simulation, the chip output shows behavior identical to that of the designed algorithm. It is concluded that the chip mimics the Thyroid Hormone secretion mechanism. The chip, operating in real time, is a computer-independent, stand-alone system.

  3. The paradigm compiler: Mapping a functional language for the connection machine

    NASA Technical Reports Server (NTRS)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.

  4. Automating Visualization Service Generation with the WATT Compiler

    NASA Astrophysics Data System (ADS)

    Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.

    2007-12-01

    As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (ruby, perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the Soap 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials. Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web

  5. Operational and Strategic Implementation of Dynamic Line Rating for Optimized Wind Energy Generation Integration

    SciTech Connect

    Gentle, Jake Paul

    2016-12-01

    One primary goal of rendering today’s transmission grid “smarter” is to optimize and better manage its power transfer capacity in real time. Power transfer capacity is affected by three main elements: stability, voltage limits, and thermal ratings. All three are critical, but thermal ratings represent the greatest opportunity to quickly, reliably and economically utilize the grid’s true capacity. With the “Smarter Grid”, new solutions have been sought to give operators a better grasp on real time conditions, allowing them to manage and extend the usefulness of existing transmission infrastructure in a safe and reliable manner. The objective of the INL Wind Program is to provide industry a Dynamic Line Rating (DLR) solution that is state of the art as measured by cost, accuracy and dependability, to enable human operators to make informed decisions and take appropriate actions without human or system overloading and impacting the reliability of the grid. In addition to mitigating transmission line congestion to better integrate wind, DLR also offers the opportunity to improve the grid with optimized utilization of transmission lines to relieve congestion in general. As wind-generated energy has become a bigger part of the nation’s energy portfolio, researchers have learned that wind not only turns turbine blades to generate electricity, but can cool transmission lines and increase transfer capabilities significantly, sometimes up to 60 percent. INL’s DLR development supports EERE and The Wind Energy Technology Office’s goals by informing system planners and grid operators of available transmission capacity, beyond typical Static Line Ratings (SLR). SLRs are based on a fixed set of conservative environmental conditions to establish a limit on the amount of current lines can safely carry without overheating. Using commercially available weather monitors mounted on industry informed custom brackets developed by INL in combination with Computational

  6. Analytical and test equipment: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A compilation is presented of innovations in testing and measuring technology for both the laboratory and industry. Topics discussed include spectrometers, radiometers, and descriptions of analytical and test equipment in several areas including thermodynamics, fluid flow, electronics, and materials testing.

  7. Testing methods and techniques: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Mechanical testing techniques, electrical and electronics testing techniques, thermal testing techniques, and optical testing techniques are the subject of the compilation which provides technical information and illustrations of advanced testing devices. Patent information is included where applicable.

  8. A Compilation of Internship Reports - 2012

    SciTech Connect

    Stegman M.; Morris, M.; Blackburn, N.

    2012-08-08

    This compilation documents all research projects undertaken by the 2012 summer Department of Energy - Workforce Development for Teachers and Scientists interns during their internship program at Brookhaven National Laboratory.

  9. Extension of Alvis compiler front-end

    NASA Astrophysics Data System (ADS)

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr

    2015-12-01

    Alvis is a formal modelling language that enables verification of distributed concurrent systems. The semantics of an Alvis model is expressed as an LTS graph (labelled transition system): execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is used as a part of the Alvis language to define parameters' types and operations on them. Thanks to the compiler's modular construction, many aspects of the compilation of an Alvis model may be modified. Providing new plugins for the Alvis compiler that support languages like Java or C makes it possible to use these languages as a part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.

  10. Evaluation of HDPE and LDPE degradation by fungus, implemented by statistical optimization

    PubMed Central

    Ojha, Nupur; Pradhan, Neha; Singh, Surjit; Barla, Anil; Shrivastava, Anamika; Khatua, Pradip; Rai, Vivek; Bose, Sutapa

    2017-01-01

    Plastic in any form is a nuisance to the well-being of the environment. The ‘pestilence’ caused by it is mainly due to its non-degradable nature. With the industrial boom and the population explosion, the usage of plastic products has increased. A steady increase has been observed in the use of plastic products, and this has accelerated the pollution. Several attempts have been made to curb the problem at large by resorting to both chemical and biological methods. Chemical methods have only resulted in furthering the pollution by releasing toxic gases into the atmosphere, whereas biological methods have been found to be eco-friendly; however, they are not cost-effective. This paves the way for the current study, where fungal isolates have been used to degrade polyethylene sheets (HDPE, LDPE). Two potential fungal strains, namely Penicillium oxalicum NS4 (KU559906) and Penicillium chrysogenum NS10 (KU559907), were isolated and identified to have plastic-degrading abilities. Further, the growth medium for the strains was optimized with the help of RSM. The plastic sheets were subjected to treatment with microbial culture for 90 days. The extent of degradation was analyzed by FE-SEM, AFM and FTIR. Morphological changes in the plastic sheet were determined. PMID:28051105

  11. Development and implementation of a coupled computational muscle force optimization bone shape adaptation modeling method.

    PubMed

    Florio, C S

    2015-04-01

    Improved methods to analyze and compare the muscle-based influences that drive bone strength adaptation can aid in the understanding of the wide array of experimental observations about the effectiveness of various mechanical countermeasures to losses in bone strength that result from age, disuse, and reduced gravity environments. In this work, the coupling of gradient-based and gradientless numerical optimization routines with finite element methods results in a modeling technique that determines the individual magnitudes of the muscle forces acting in a multisegment musculoskeletal system and predicts the improvement in stress-state uniformity, and therefore strength, of a targeted bone through simulated local cortical material accretion and resorption. With a performance-based stopping criterion, no experimentally based or system-based parameters, and a design that includes the direct and indirect effects of muscles attached to the targeted bone as well as to its neighbors, shape and strength alterations resulting from a wide range of boundary conditions can be consistently quantified. As demonstrated in a representative parametric study, the developed technique effectively provides a clearer foundation for the study of the relationships between muscle forces and the induced changes in bone strength. Its use can lead to the better control of such adaptive phenomena.

  12. Evaluation of HDPE and LDPE degradation by fungus, implemented by statistical optimization

    NASA Astrophysics Data System (ADS)

    Ojha, Nupur; Pradhan, Neha; Singh, Surjit; Barla, Anil; Shrivastava, Anamika; Khatua, Pradip; Rai, Vivek; Bose, Sutapa

    2017-01-01

    Plastic in any form is a nuisance to the well-being of the environment. The ‘pestilence’ caused by it is mainly due to its non-degradable nature. With the industrial boom and the population explosion, the usage of plastic products has increased. A steady increase has been observed in the use of plastic products, and this has accelerated the pollution. Several attempts have been made to curb the problem at large by resorting to both chemical and biological methods. Chemical methods have only resulted in furthering the pollution by releasing toxic gases into the atmosphere, whereas biological methods have been found to be eco-friendly; however, they are not cost-effective. This paves the way for the current study, where fungal isolates have been used to degrade polyethylene sheets (HDPE, LDPE). Two potential fungal strains, namely Penicillium oxalicum NS4 (KU559906) and Penicillium chrysogenum NS10 (KU559907), were isolated and identified to have plastic-degrading abilities. Further, the growth medium for the strains was optimized with the help of RSM. The plastic sheets were subjected to treatment with microbial culture for 90 days. The extent of degradation was analyzed by FE-SEM, AFM and FTIR. Morphological changes in the plastic sheet were determined.

  13. Optimizing Oceanographic Big Data Browse and Visualization Response Times by Implementing the Lambda Architecture

    NASA Astrophysics Data System (ADS)

    Currier, R. D.; Howard, M.; Kirkpatrick, B. A.

    2016-02-01

    Visualizing large-scale data sets using standard web-based mapping tools can result in significant delays and response time issues for users. Load times for data sets comprised of millions of records can be in excess of thirty seconds when the data sets are served using traditional architectures and techniques. In this paper we demonstrate the efficiency gains created by utilizing the Lambda Architecture on a low velocity, high volume hypoxia-nutrient decision support system with 25M records. While traditionally employed on high velocity, high volume data we demonstrate significant improvements in data load times and the user browse experience on low velocity, high volume data. Optimizing query and visualization response times becomes increasingly important as data sets grow in size. Time series data from extended autonomous underwater vehicle deployments can exceed 500M records. Applying the Lambda Architecture to these data sets will allow users to browse, visualize and fuse data in a manner not possible using traditional methodologies.

  14. Evaluation of HDPE and LDPE degradation by fungus, implemented by statistical optimization.

    PubMed

    Ojha, Nupur; Pradhan, Neha; Singh, Surjit; Barla, Anil; Shrivastava, Anamika; Khatua, Pradip; Rai, Vivek; Bose, Sutapa

    2017-01-04

    Plastic in any form is a nuisance to the well-being of the environment. The 'pestilence' caused by it is mainly due to its non-degradable nature. With the industrial boom and the population explosion, the usage of plastic products has increased. A steady increase has been observed in the use of plastic products, and this has accelerated the pollution. Several attempts have been made to curb the problem at large by resorting to both chemical and biological methods. Chemical methods have only resulted in furthering the pollution by releasing toxic gases into the atmosphere, whereas biological methods have been found to be eco-friendly; however, they are not cost-effective. This paves the way for the current study, where fungal isolates have been used to degrade polyethylene sheets (HDPE, LDPE). Two potential fungal strains, namely Penicillium oxalicum NS4 (KU559906) and Penicillium chrysogenum NS10 (KU559907), were isolated and identified to have plastic-degrading abilities. Further, the growth medium for the strains was optimized with the help of RSM. The plastic sheets were subjected to treatment with microbial culture for 90 days. The extent of degradation was analyzed by FE-SEM, AFM and FTIR. Morphological changes in the plastic sheet were determined.

  15. Electronic circuits for communications systems: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The compilation of electronic circuits for communications systems is divided into thirteen basic categories, each representing an area of circuit design and application. The compilation items are moderately complex and, as such, would appeal to the applications engineer. However, the rationale for the selection criteria was tailored so that the circuits would reflect fundamental design principles and applications, with an additional requirement for simplicity whenever possible.

  16. Systems test facilities existing capabilities compilation

    NASA Technical Reports Server (NTRS)

    Weaver, R.

    1981-01-01

    Systems test facilities (STFs) to test total photovoltaic systems and their interfaces are described. The systems development (SD) plan is a compilation of existing and planned STFs, as well as subsystem and key component testing facilities. It is recommended that the existing capabilities compilation be updated annually to provide an assessment of STF activity and to disseminate STF capabilities, status and availability to the photovoltaics program.

  17. Globin gene server: a prototype E-mail database server featuring extensive multiple alignments and data compilation for electronic genetic analysis.

    PubMed

    Hardison, R; Chao, K M; Schwartz, S; Stojanovic, N; Ganetsky, M; Miller, W

    1994-05-15

    The sequence of virtually the entire cluster of beta-like globin genes has been determined from several mammals, and many regulatory regions have been analyzed by mutagenesis, functional assays, and nuclear protein binding studies. This very large amount of sequence and functional data needs to be compiled in a readily accessible and usable manner to optimize data analysis, hypothesis testing, and model building. We report a Globin Gene Server that will provide this service in a constantly updated manner when fully implemented. The Server has two principal functions. The first (currently available) provides an annotated multiple alignment of the DNA sequences throughout the gene cluster from representatives of all species analyzed. The second compiles data on functional and protein binding assays throughout the gene cluster. A prototype of this compilation using the aligned 5' flanking region of beta-globin genes from five species shows examples of (1) well-conserved regions that have demonstrated functions, including cases in which the functional data are in apparent conflict, (2) proposed functional regions that are not well conserved, and (3) conserved regions with no currently assigned function. Such an electronic genetic analysis leads to many readily testable hypotheses that were not immediately apparent without the multiple alignment and compilation. The Server is accessible via E-mail on computer networks, and printed results can be obtained by request to the authors. This prototype will be a helpful guide for developing similar tools for many genomic loci.

  18. Optimization of semi-global stereo matching for hardware module implementation

    NASA Astrophysics Data System (ADS)

    Roszkowski, Mikołaj

    2014-11-01

    Stereo vision is one of the most intensively studied areas in the field of computer vision. It allows the creation of a 3D model of a scene given two images of the scene taken with optical cameras. Although the number of stereo algorithms keeps increasing, not many are suitable candidates for hardware implementations that could guarantee real-time processing in embedded systems. One such algorithm is semi-global matching, which balances the quality of the disparity map well against computational complexity. However, it still has quite high memory requirements, which can be a problem if low-cost FPGAs are to be used, because they often suffer from low external DRAM memory throughput. In this article, a few methods to reduce both the computational complexity and the memory usage, and thus the required bandwidth, of the semi-global matching algorithm are proposed. First, it is shown that a simple pyramid matching scheme can be used to efficiently reduce the number of disparities checked per pixel. Second, a method of dividing the image into independent blocks is proposed, which allows a reduction of the amount of memory required by the algorithm. Finally, the exact requirements for the bandwidth and the size of the on-chip memories are given.
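
    To make the aggregation step concrete, the sketch below implements the standard SGM cost update along a single scan direction for a pre-computed cost volume; it illustrates the recurrence only, with none of the pyramid or block-partitioning memory reductions proposed in the article, and the penalties P1 and P2 are illustrative values.

        import numpy as np

        def aggregate_left_to_right(cost, P1=10, P2=120):
            """SGM path cost along one direction (left-to-right) for an H x W x D cost volume."""
            H, W, D = cost.shape
            L = np.zeros_like(cost, dtype=np.float64)
            L[:, 0, :] = cost[:, 0, :]
            for x in range(1, W):
                prev = L[:, x - 1, :]                       # H x D costs of the previous pixel
                prev_min = prev.min(axis=1, keepdims=True)  # best disparity of the previous pixel
                minus = np.roll(prev, 1, axis=1); minus[:, 0] = np.inf     # d-1 neighbour
                plus = np.roll(prev, -1, axis=1); plus[:, -1] = np.inf     # d+1 neighbour
                best = np.minimum(np.minimum(prev, minus + P1),
                                  np.minimum(plus + P1, prev_min + P2))
                L[:, x, :] = cost[:, x, :] + best - prev_min
            return L

        # Tiny random cost volume standing in for a matching-cost computation (e.g. census).
        rng = np.random.default_rng(0)
        cost = rng.integers(0, 64, size=(4, 6, 8)).astype(np.float64)
        L = aggregate_left_to_right(cost)
        disparity = np.argmin(L, axis=2)                    # winner-takes-all after aggregation
        print(disparity)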

  19. The Columbia-Presbyterian Medical Center decision-support system as a model for implementing the Arden Syntax.

    PubMed Central

    Hripcsak, G.; Cimino, J. J.; Johnson, S. B.; Clayton, P. D.

    1991-01-01

    Columbia-Presbyterian Medical Center is implementing a decision-support system based on the Arden Syntax for Medical Logic Modules (MLM's). The system uses a compiler-interpreter pair. MLM's are first compiled into pseudo-codes, which are instructions for a virtual machine. The MLM's are then executed using an interpreter that emulates the virtual machine. This design has resulted in increased portability, easier debugging and verification, and more compact compiled MLM's. The time spent interpreting the MLM pseudo-codes has been found to be insignificant compared to database accesses. The compiler, which is written using the tools "lex" and "yacc," optimizes MLM's by minimizing the number of database accesses. The interpreter emulates a stack-oriented machine. A phased implementation of the syntax was used to speed the development of the system. PMID:1807598
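
    As a loose illustration of the compile-to-pseudo-code-then-interpret design the abstract describes (not the actual Arden Syntax pseudo-codes or the CPMC system), here is a minimal stack-oriented virtual machine in Python, with a hand-compiled expression standing in for a compiled MLM.

        def run(pseudo_code, bindings):
            """Interpret pseudo-code for a tiny stack machine: a list of (opcode, operand) pairs."""
            stack = []
            for op, arg in pseudo_code:
                if op == "PUSH":                 # push a constant
                    stack.append(arg)
                elif op == "LOAD":               # push a value fetched earlier (e.g. from the database)
                    stack.append(bindings[arg])
                elif op == "GT":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a > b)
                elif op == "AND":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a and b)
                elif op == "CONCLUDE":           # MLM-style conclusion: return the value on top
                    return stack.pop()
            return None

        # Hand-compiled equivalent of: conclude (potassium > 5.0 and creatinine > 2.0)
        program = [("LOAD", "potassium"), ("PUSH", 5.0), ("GT", None),
                   ("LOAD", "creatinine"), ("PUSH", 2.0), ("GT", None),
                   ("AND", None), ("CONCLUDE", None)]
        print(run(program, {"potassium": 5.6, "creatinine": 2.4}))   # True -> the MLM would fire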

  20. Optimizing the business and IT relationship--a structured approach to implementing a business relationship management framework.

    PubMed

    Mohrmann, Gregg; Kraatz, Drew; Sessa, Bonnie

    2009-01-01

    The relationship between the business and the IT organization is an area where many healthcare providers experience challenges. IT is often perceived as a service provider rather than a partner in delivering quality patient care. Organizations are finding that building a stronger partnership between business and IT leads to increased understanding and appreciation of the technology, process changes and services that can enhance the delivery of care and maximize organizational success. This article will provide a detailed description of valuable techniques for optimizing the healthcare organization's business and IT relationship; considerations on how to implement those techniques; and a description of the key benefits an organization should realize. Using a case study of a healthcare provider that leveraged these techniques, the article will show how an organization can promote this paradigm shift and create a tighter integration between the business and IT.

  1. Optimism

    PubMed Central

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  2. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging. This is because it is the best linear unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n. This is infeasible for large n. In practice, kriging is solved approximately by local approaches that are based on considering only a relatively small number of points that lie close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky, and is usually solved by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points lying within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently, a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. The problems arise from the fact that the covariance functions that are used in kriging have global support. Our implementations combine, utilize, and enhance a number of different
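
    For readers unfamiliar with why exact kriging is expensive, the sketch below sets up and solves the dense ordinary-kriging system for a single query point in NumPy. The Gaussian covariance model, its parameters, and the nugget term are placeholder assumptions, not the tapered covariances discussed in the paper; the point is the O(n^3) solve of the (n+1) x (n+1) augmented system.

      import numpy as np

      def gaussian_cov(h, sill=1.0, length=0.2):
          return sill * np.exp(-(h / length) ** 2)

      def ordinary_kriging(points, values, query, cov=gaussian_cov, nugget=1e-10):
          n = len(points)
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          # augmented (n+1) x (n+1) system; the last row/column enforces the
          # unbiasedness constraint (kriging weights sum to one)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = cov(d) + nugget * np.eye(n)   # small nugget for numerical stability
          A[n, n] = 0.0
          b = np.ones(n + 1)
          b[:n] = cov(np.linalg.norm(points - query, axis=-1))
          w = np.linalg.solve(A, b)                 # the O(n^3) step that dominates for large n
          return w[:n] @ values

      gen = np.random.default_rng(0)
      pts = gen.random((100, 2))
      vals = np.sin(4 * pts[:, 0]) + np.cos(3 * pts[:, 1])
      print(ordinary_kriging(pts, vals, np.array([0.5, 0.5])))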

  3. Optimization of the Implementation of Managed Aquifer Recharge - Effects of Aquifer Heterogeneity

    NASA Astrophysics Data System (ADS)

    Maliva, Robert; Missimer, Thomas; Kneppers, Angeline

    2010-05-01

    more successful MAR implementation as a tool for improved water resources management.

  4. Implementation of spot scanning dose optimization and dose calculation for helium ions in Hyperion.

    PubMed

    Fuchs, Hermann; Alber, Markus; Schreiner, Thomas; Georg, Dietmar

    2015-09-01

    Helium ions ((4)He) may supplement current particle beam therapy strategies as they possess advantages in physical dose distribution over protons. To assess potential clinical advantages, a dose calculation module accounting for relative biological effectiveness (RBE) was developed and integrated into the treatment planning system Hyperion. Current knowledge on RBE of (4)He together with linear energy transfer considerations motivated an empirical depth-dependent "zonal" RBE model. In the plateau region, a RBE of 1.0 was assumed, followed by an increasing RBE up to 2.8 at the Bragg-peak region, which was then kept constant over the fragmentation tail. To account for a variable proton RBE, the same model concept was also applied to protons with a maximum RBE of 1.6. Both RBE models were added to a previously developed pencil beam algorithm for physical dose calculation and included into the treatment planning system Hyperion. The implementation was validated against Monte Carlo simulations within a water phantom using γ-index evaluation. The potential benefits of (4)He based treatment plans were explored in a preliminary treatment planning comparison (against protons) for four treatment sites, i.e., a prostate, a base-of-skull, a pediatric, and a head-and-neck tumor case. Separate treatment plans taking into account physical dose calculation only or using biological modeling were created for protons and (4)He. Comparison of Monte Carlo and Hyperion calculated doses resulted in a γ mean of 0.3, with 3.4% of the values above 1 and γ 1% of 1.5 and better. Treatment plan evaluation showed comparable planning target volume coverage for both particles, with slightly increased coverage for (4)He. Organ at risk (OAR) doses were generally reduced using (4)He, some by more than 30%. Improvements of (4)He over protons were more pronounced for treatment plans taking biological effects into account. All OAR doses were within tolerances specified in the QUANTEC report. The
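
    The zonal RBE model can be pictured as a simple depth-dependent lookup. The sketch below is a schematic Python rendering of the description above (RBE 1.0 in the plateau, rising to 2.8 at the Bragg peak, constant over the fragmentation tail); the linear ramp and the depth at which it starts are assumptions, since the abstract fixes only the plateau and peak values.

      def zonal_rbe(depth_mm, bragg_peak_mm, ramp_start_frac=0.8,
                    rbe_plateau=1.0, rbe_peak=2.8):
          ramp_start = ramp_start_frac * bragg_peak_mm
          if depth_mm <= ramp_start:
              return rbe_plateau                  # plateau region
          if depth_mm >= bragg_peak_mm:
              return rbe_peak                     # Bragg peak and fragmentation tail
          frac = (depth_mm - ramp_start) / (bragg_peak_mm - ramp_start)
          return rbe_plateau + frac * (rbe_peak - rbe_plateau)

      # RBE-weighted dose for a (hypothetical) physical depth-dose curve would be
      # d_rbe(z) = d_phys(z) * zonal_rbe(z)
      print([round(zonal_rbe(z, 150.0), 2) for z in (50, 130, 150, 200)])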

  5. Implementation of spot scanning dose optimization and dose calculation for helium ions in Hyperion

    SciTech Connect

    Fuchs, Hermann; Schreiner, Thomas; Georg, Dietmar

    2015-09-15

    Purpose: Helium ions ({sup 4}He) may supplement current particle beam therapy strategies as they possess advantages in physical dose distribution over protons. To assess potential clinical advantages, a dose calculation module accounting for relative biological effectiveness (RBE) was developed and integrated into the treatment planning system Hyperion. Methods: Current knowledge on RBE of {sup 4}He together with linear energy transfer considerations motivated an empirical depth-dependent “zonal” RBE model. In the plateau region, a RBE of 1.0 was assumed, followed by an increasing RBE up to 2.8 at the Bragg-peak region, which was then kept constant over the fragmentation tail. To account for a variable proton RBE, the same model concept was also applied to protons with a maximum RBE of 1.6. Both RBE models were added to a previously developed pencil beam algorithm for physical dose calculation and included into the treatment planning system Hyperion. The implementation was validated against Monte Carlo simulations within a water phantom using γ-index evaluation. The potential benefits of {sup 4}He based treatment plans were explored in a preliminary treatment planning comparison (against protons) for four treatment sites, i.e., a prostate, a base-of-skull, a pediatric, and a head-and-neck tumor case. Separate treatment plans taking into account physical dose calculation only or using biological modeling were created for protons and {sup 4}He. Results: Comparison of Monte Carlo and Hyperion calculated doses resulted in a γ{sub mean} of 0.3, with 3.4% of the values above 1 and γ{sub 1%} of 1.5 and better. Treatment plan evaluation showed comparable planning target volume coverage for both particles, with slightly increased coverage for {sup 4}He. Organ at risk (OAR) doses were generally reduced using {sup 4}He, some by more than 30%. Improvements of {sup 4}He over protons were more pronounced for treatment plans taking biological effects into account. All

  6. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    PubMed

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.
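
    As an illustration of one of the optimization protocols named above, the sketch below implements a generic simulated-annealing loop over a black-box objective. The objective function here is a stand-in for an on-line sensor read-back; none of the hardware, Gadgeteer, or microfluidic specifics of the paper appear in it.

      import math
      import random

      def simulated_annealing(objective, x0, step=0.5, t0=1.0, cooling=0.95,
                              iters=500, seed=0):
          rnd = random.Random(seed)
          x, fx, t = x0, objective(x0), t0
          best_x, best_f = x, fx
          for _ in range(iters):
              cand = x + rnd.uniform(-step, step)
              fc = objective(cand)
              # accept downhill moves always, uphill moves with Boltzmann probability
              if fc < fx or rnd.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
                  x, fx = cand, fc
                  if fx < best_f:
                      best_x, best_f = x, fx
              t *= cooling
          return best_x, best_f

      # placeholder objective with several local minima
      obj = lambda x: math.sin(3 * x) + 0.1 * (x - 2.0) ** 2
      print(simulated_annealing(obj, x0=0.0))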

  7. Analysis, optimization, and implementation of a hybrid DS/FFH spread-spectrum technique for smart grid communications

    SciTech Connect

    Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; Kuruganti, Teja; Smith, Stephen F.; Djouadi, Seddik M.

    2015-03-12

    In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error performance of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.

  8. Microscopically-Based Energy Density Functionals for Nuclei Using the Density Matrix Expansion. I: Implementation and Pre-Optimization

    SciTech Connect

    Stoitsov, M. V.; Kortelainen, Erno M; Bogner, S. K.; Duguet, T.; Furnstahl, R. J.; Gebremariam, B.; Schunck, N.

    2010-01-01

    In a recent series of papers, Gebremariam, Bogner, and Duguet derived a microscopically-based nuclear energy density functional by applying the Density Matrix Expansion (DME) to the Hartree-Fock energy obtained from chiral effective field theory (EFT) two- and three-nucleon interactions. Due to the structure of the chiral interactions, each coupling in the DME functional is given as the sum of a coupling constant arising from zero-range contact interactions and a coupling function of the density arising from the finite-range pion exchanges. Since the contact contributions have essentially the same structure as those entering empirical Skyrme functionals, a microscopically guided Skyrme phenomenology has been suggested in which the contact terms in the DME functional are released for optimization to finite-density observables to capture short-range correlation energy contributions from beyond Hartree-Fock. The present paper is the first attempt to assess the ability of the newly suggested DME functional, which has a much richer set of density dependencies than traditional Skyrme functionals, to generate sensible and stable results for nuclear applications. The results of the first proof-of-principle calculations are given, and numerous practical issues related to the implementation of the new functional in existing Skyrme codes are discussed. Using a restricted singular value decomposition (SVD) optimization procedure, it is found that the new DME functional gives numerically stable results and exhibits a small but systematic reduction in {chi}^{2} compared to standard Skyrme functionals, thus justifying its suitability for future global optimizations and large-scale calculations.

  9. Microscopically based energy density functionals for nuclei using the density matrix expansion: Implementation and pre-optimization

    SciTech Connect

    Stoitsov, M.; Kortelainen, M.; Schunck, N.; Bogner, S. K.; Gebremariam, B.; Duguet, T.

    2010-11-15

    In a recent series of articles, Gebremariam, Bogner, and Duguet derived a microscopically based nuclear energy density functional by applying the density matrix expansion (DME) to the Hartree-Fock energy obtained from chiral effective field theory two- and three-nucleon interactions. Owing to the structure of the chiral interactions, each coupling in the DME functional is given as the sum of a coupling constant arising from zero-range contact interactions and a coupling function of the density arising from the finite-range pion exchanges. Because the contact contributions have essentially the same structure as those entering empirical Skyrme functionals, a microscopically guided Skyrme phenomenology has been suggested in which the contact terms in the DME functional are released for optimization to finite-density observables to capture short-range correlation energy contributions from beyond Hartree-Fock. The present article is the first attempt to assess the ability of the newly suggested DME functional, which has a much richer set of density dependencies than traditional Skyrme functionals, to generate sensible and stable results for nuclear applications. The results of the first proof-of-principle calculations are given, and numerous practical issues related to the implementation of the new functional in existing Skyrme codes are discussed. Using a restricted singular value decomposition optimization procedure, it is found that the new DME functional gives numerically stable results and exhibits a small but systematic reduction of our test {chi}{sup 2} function compared to standard Skyrme functionals, thus justifying its suitability for future global optimizations and large-scale calculations.

  10. Compiling software for a hierarchical distributed processing system

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendants; sending to the selected node only the compiled software to be executed by the selected node or the selected node's descendants.
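
    The distribution step can be pictured as a recursive walk of the node hierarchy in which each compiling node keeps its own binaries and forwards to each child only those addressed to the child or the child's descendants. The Python sketch below is purely schematic; the tree encoding and "binary" placeholders are invented for illustration and do not reflect the patented implementation.

      def descendants(tree, node):
          out = set(tree.get(node, []))
          for child in tree.get(node, []):
              out |= descendants(tree, child)
          return out

      def distribute(tree, node, binaries):
          """binaries: dict mapping target node -> compiled artifact."""
          keep = {n: b for n, b in binaries.items() if n == node}
          plan = {"node": node, "keep": keep, "children": []}
          for child in tree.get(node, []):
              targets = {child} | descendants(tree, child)
              subset = {n: b for n, b in binaries.items() if n in targets}
              if subset:                      # send only what the child's subtree needs
                  plan["children"].append(distribute(tree, child, subset))
          return plan

      tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
      bins = {"root": "r.o", "a1": "a1.o", "b": "b.o"}
      print(distribute(tree, "root", bins))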

  11. Compiled MPI: Cost-Effective Exascale Applications Development

    SciTech Connect

    Bronevetsky, G; Quinlan, D; Lumsdaine, A; Hoefler, T

    2012-04-10

    's lifetime. It includes: (1) New set of source code annotations, inserted either manually or automatically, that will clarify the application's use of MPI to the compiler infrastructure, enabling greater accuracy where needed; (2) A compiler transformation framework that leverages these annotations to transform the original MPI source code to improve its performance and scalability; (3) Novel MPI runtime implementation techniques that will provide a rich set of functionality extensions to be used by applications that have been transformed by our compiler; and (4) A novel compiler analysis that leverages simple user annotations to automatically extract the application's communication structure and synthesize most complex code annotations.

  12. Compiling high-level languages for configurable computers: applying lessons from heterogeneous processing

    NASA Astrophysics Data System (ADS)

    Weaver, Glen E.; Weems, Charles C.; McKinley, Kathryn S.

    1996-10-01

    Configurable systems offer increased performance by providing hardware that matches the computational structure of a problem. This hardware is currently programmed with CAD tools and explicit library calls. To attain widespread acceptance, configurable computing must become transparently accessible from high-level programming languages, but the changeable nature of the target hardware presents a major challenge to traditional compiler technology. A compiler for a configurable computer should optimize the use of functions embedded in hardware and schedule hardware reconfigurations. The hurdles to be overcome in achieving this capability are similar in some ways to those facing compilation for heterogeneous systems. For example, current traditional compilers have neither an interface to accept new primitive operators, nor a mechanism for applying optimizations to new operators. We are building a compiler for heterogeneous computing, called Scale, which replaces the traditional monolithic compiler architecture with a flexible framework. Scale has three main parts: translation director, compilation library, and a persistent store which holds our intermediate representation as well as other data structures. The translation director exploits the framework's flexibility by using architectural information to build a plan to direct each compilation. The translation library serves as a toolkit for use by the translation director. Our compiler intermediate representation, Score, facilitates the addition of new IR nodes by distinguishing features used in defining nodes from properties on which transformations depend. In this paper, we present an overview of the Scale architecture and its capabilities for dealing with heterogeneity, followed by a discussion of how those capabilities apply to problems in configurable computing. We then address aspects of configurable computing that are likely to require extensions to our approach and propose some extensions.

  13. Compiling Planning into Quantum Optimization Problems: A Comparative Study

    DTIC Science & Technology

    2015-06-07

    Figure 1: The main steps in our approach of using a quantum annealer to solve a planning problem. For each embedded instance, we check how many times the ground state energy was obtained, which gives us the probability of solution r for a 20 µsec anneal.

  14. Compilation and Environment Optimizations for LogLisp.

    DTIC Science & Technology

    1984-07-01

  15. A small evaluation suite for Ada compilers

    NASA Technical Reports Server (NTRS)

    Wilke, Randy; Roy, Daniel M.

    1986-01-01

    After completing a small Ada pilot project (OCC simulator) for the Multi Satellite Operations Control Center (MSOCC) at Goddard last year, the use of Ada to develop OCCs was recommended. To help MSOCC transition toward Ada, a suite of about 100 evaluation programs was developed which can be used to assess Ada compilers. These programs compare the overall quality of the compilation system, compare the relative efficiencies of the compilers and the environments in which they work, and compare the size and execution speed of generated machine code. Another goal of the benchmark software was to provide MSOCC system developers with rough timing estimates for the purpose of predicting performance of future systems written in Ada.

  16. Compilation of data on elementary particles

    SciTech Connect

    Trippe, T.G.

    1984-09-01

    The most widely used data compilation in the field of elementary particle physics is the Review of Particle Properties. The origin, development and current state of this compilation are described with emphasis on the features which have contributed to its success: active involvement of particle physicists; critical evaluation and review of the data; completeness of coverage; regular distribution of reliable summaries including a pocket edition; heavy involvement of expert consultants; and international collaboration. The current state of the Review and new developments such as providing interactive access to the Review's database are described. Problems and solutions related to maintaining a strong and supportive relationship between compilation groups and the researchers who produce and use the data are discussed.

  17. Extension of Alvis compiler front-end

    SciTech Connect

    Wypych, Michał; Szpyrka, Marcin; Matyasik, Piotr E-mail: mszpyrka@agh.edu.pl

    2015-12-31

    Alvis is a formal modelling language that enables verification of distributed concurrent systems. The semantics of an Alvis model is expressed as an LTS graph (labelled transition system): execution of any language statement is expressed as a transition between formally defined states of such a model. An LTS graph is generated using a middle-stage Haskell representation of an Alvis model. Moreover, Haskell is part of the Alvis language and is used to define parameter types and operations on them. Thanks to the compiler's modular construction, many aspects of the compilation of an Alvis model may be modified. Providing new plugins for the Alvis Compiler that support languages like Java or C makes it possible to use these languages as part of Alvis instead of Haskell. The paper presents the compiler's internal model and describes how the default specification language can be altered by new plugins.

  18. Machine tools and fixtures: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    As part of NASA's Technology Utilizations Program, a compilation was made of technological developments regarding machine tools, jigs, and fixtures that have been produced, modified, or adapted to meet requirements of the aerospace program. The compilation is divided into three sections that include: (1) a variety of machine tool applications that offer easier and more efficient production techniques; (2) methods, techniques, and hardware that aid in the setup, alignment, and control of machines and machine tools to further quality assurance in finished products: and (3) jigs, fixtures, and adapters that are ancillary to basic machine tools and aid in realizing their greatest potential.

  19. COMPILATION OF CURRENT HIGH ENERGY PHYSICS EXPERIMENTS

    SciTech Connect

    Wohl, C.G.; Kelly, R.L.; Armstrong, F.E.; Horne, C.P.; Hutchinson, M.S.; Rittenberg, A.; Trippe, T.G.; Yost, G.P.; Addis, L.; Ward, C.E.W.; Baggett, N.; Goldschmidt-Clermong, Y.; Joos, P.; Gelfand, N.; Oyanagi, Y.; Grudtsin, S.N.; Ryabov, Yu.G.

    1981-05-01

    This is the fourth edition of our compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about April 1981, and (2) had not completed taking of data by 1 January 1977. We emphasize that only approved experiments are included.

  20. Defining the Optimal Surgeon Experience for Breast Cancer Sentinel Lymph Node Biopsy: A Model for Implementation of New Surgical Techniques

    PubMed Central

    McMasters, Kelly M.; Wong, Sandra L.; Chao, Celia; Woo, Claudine; Tuttle, Todd M.; Noyes, R. Dirk; Carlson, David J.; Laidley, Alison L.; McGlothin, Terre Q.; Ley, Philip B.; Brown, C. Matthew; Glaser, Rebecca L.; Pennington, Robert E.; Turk, Peter S.; Simpson, Diana; Edwards, Michael J.

    2001-01-01

    Objective To determine the optimal experience required to minimize the false-negative rate of sentinel lymph node (SLN) biopsy for breast cancer. Summary Background Data Before abandoning routine axillary dissection in favor of SLN biopsy for breast cancer, each surgeon and institution must document acceptable SLN identification and false-negative rates. Although some studies have examined the impact of individual surgeon experience on the SLN identification rate, minimal data exist to determine the optimal experience required to minimize the more crucial false-negative rate. Methods Analysis was performed of a large prospective multiinstitutional study involving 226 surgeons. SLN biopsy was performed using blue dye, radioactive colloid, or both. SLN biopsy was performed with completion axillary LN dissection in all patients. The impact of surgeon experience on the SLN identification and false-negative rates was examined. Logistic regression analysis was performed to evaluate independent factors in addition to surgeon experience associated with these outcomes. Results A total of 2,148 patients were enrolled in the study. Improvement in the SLN identification and false-negative rates was found after 20 cases had been performed. Multivariate analysis revealed that patient age, nonpalpable tumors, and injection of blue dye alone for SLN biopsy were independently associated with decreased SLN identification rates, whereas upper outer quadrant tumor location was the only factor associated with an increased false-negative rate. Conclusions Surgeons should perform at least 20 SLN cases with acceptable results before abandoning routine axillary dissection. This study provides a model for surgeon training and experience that may be applicable to the implementation of other new surgical technologies. PMID:11524582

  1. Implementation of CFD modeling in the performance assessment and optimization of secondary clarifiers: the PVSC case study.

    PubMed

    Xanthos, S; Ramalingam, K; Lipke, S; McKenna, B; Fillos, J

    2013-01-01

    The water industry, and especially the wastewater treatment sector, has come under steadily increasing pressure to optimize existing and new facilities to meet discharge limits and reduce overall cost. Gravity separation of solids, producing clarified overflow and thickened solids underflow, has long been one of the principal separation processes used in treating secondary effluent. Final settling tanks (FSTs) are a central link in the treatment process and often act as the limiting step to the maximum solids handling capacity when high throughput requirements need to be met. The Passaic Valley Sewerage Commission (PVSC) is interested in using a computational fluid dynamics (CFD) modeling approach to explore any further FST retrofit alternatives to sustain significantly higher plant influent flows, especially under wet weather conditions. In particular, there is interest in modifying and/or upgrading/optimizing the existing FSTs to handle flows in the range of 280-720 million gallons per day (MGD) (12.25-31.55 m(3)/s) in compliance with the plant's effluent discharge limits for total suspended solids (TSS). The CFD model development for this specific plant will be discussed, 2D and 3D simulation results will be presented, and initial results of a sensitivity study between two FST effluent weir structure designs will be reviewed at a flow of 550 MGD (∼24 m(3)/s) and 1,800 mg/L MLSS (mixed liquor suspended solids). The latter will provide useful information in determining whether the existing retrofit of one of the FSTs would enable compliance under wet weather conditions and warrants further consideration for implementing it in the remaining FSTs.

  2. Analysis, optimization, and implementation of a hybrid DS/FFH spread-spectrum technique for smart grid communications

    DOE PAGES

    Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; ...

    2015-03-12

    In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error performance of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.

  3. A ROSE-based OpenMP 3.0 Research Compiler Supporting Multiple Runtime Libraries

    SciTech Connect

    Liao, C; Quinlan, D; Panas, T

    2010-01-25

    OpenMP is a popular and evolving programming model for shared-memory platforms. It relies on compilers for optimal performance and to target modern hardware architectures. A variety of extensible and robust research compilers are key to OpenMP's sustainable success in the future. In this paper, we present our efforts to build an OpenMP 3.0 research compiler for C, C++, and Fortran, using the ROSE source-to-source compiler framework. Our goal is to support OpenMP research for ourselves and others. We have extended ROSE's internal representation to handle all of the OpenMP 3.0 constructs and facilitate their manipulation. Since OpenMP research is often complicated by the tight coupling of the compiler translations and the runtime system, we present a set of rules to define a common OpenMP runtime library (XOMP) on top of multiple runtime libraries. These rules additionally define how to build a set of translations targeting XOMP. Our work demonstrates how to reuse OpenMP translations across different runtime libraries. This work simplifies OpenMP research by decoupling the problematic dependence between the compiler translations and the runtime libraries. We present an evaluation of our work by demonstrating an analysis tool for OpenMP correctness. We also show how XOMP can be defined using both GOMP and Omni and present comparative performance results against other OpenMP compilers.
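
    The decoupling idea behind XOMP can be illustrated, very loosely, with a single dispatch layer that "compiler translations" target while the underlying runtime is swapped freely. The Python sketch below is only an analogy: the backends are stand-ins, not GOMP or Omni, and no attempt is made to mirror the real XOMP API.

      from concurrent.futures import ThreadPoolExecutor

      def backend_threads(body, n_iters, n_workers):
          with ThreadPoolExecutor(max_workers=n_workers) as pool:
              list(pool.map(body, range(n_iters)))

      def backend_serial(body, n_iters, n_workers):
          for i in range(n_iters):
              body(i)

      _BACKENDS = {"threads": backend_threads, "serial": backend_serial}

      def xomp_parallel_for(body, n_iters, n_workers=4, runtime="threads"):
          """Single entry point that the 'compiler translations' would target."""
          _BACKENDS[runtime](body, n_iters, n_workers)

      out = [0] * 8

      def body(i):
          out[i] = i * i                 # the 'translated' loop body

      xomp_parallel_for(body, len(out))                     # threaded backend
      xomp_parallel_for(body, len(out), runtime="serial")   # same translation, other runtime
      print(out)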

  4. AC Optimal Power Flow

    SciTech Connect

    2016-10-04

    In this work, we have developed simulation software that implements the mathematical model of an AC Optimal Power Flow (OPF) problem. The objective is to minimize the total cost of generation subject to constraints of node power balance (both real and reactive) and line power flow limits (MW, MVAr, and MVA). We have currently implemented the polar coordinate version of the problem. In the present work, we have used the optimization solver Knitro (proprietary and not included in this software) to solve the problem, and we have kept the option for both native numerical derivative evaluation (working satisfactorily now) and analytical formulas for the derivatives being provided to Knitro (currently in the debugging stage). Since the AC OPF is a highly non-convex optimization problem, we have also kept the option for a multistart solution. All of these can be decided by the user at run-time in an interactive manner. The software has been developed in the C++ programming language, running with the GCC compiler on a Linux machine. We have tested for satisfactory results against Matpower for the IEEE 14-bus system.
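
    To make the problem statement concrete, the sketch below sets up a 2-bus AC OPF in polar coordinates and solves it with SciPy's SLSQP; it is an illustration only. The released software is C++ and uses Knitro, and the line-flow (MVA) limits and multistart options mentioned above are omitted here; the impedance, load, bounds, and cost coefficients are arbitrary.

      import numpy as np
      from scipy.optimize import minimize

      r, x = 0.02, 0.10                       # line impedance (p.u.)
      y = 1.0 / complex(r, x)
      Y = np.array([[y, -y], [-y, y]])        # bus admittance matrix for one line
      G, B = Y.real, Y.imag
      Pd2, Qd2 = 1.0, 0.3                     # load at bus 2 (p.u.)

      def injections(v, th):
          """Net injected P, Q at each bus for voltage magnitudes v and angles th."""
          P, Q = np.zeros(2), np.zeros(2)
          for i in range(2):
              for k in range(2):
                  dth = th[i] - th[k]
                  P[i] += v[i] * v[k] * (G[i, k] * np.cos(dth) + B[i, k] * np.sin(dth))
                  Q[i] += v[i] * v[k] * (G[i, k] * np.sin(dth) - B[i, k] * np.cos(dth))
          return P, Q

      def unpack(z):
          pg, qg, v1, v2, th2 = z
          return pg, qg, np.array([v1, v2]), np.array([0.0, th2])

      def cost(z):
          pg = z[0]
          return 0.1 * pg ** 2 + 5.0 * pg     # quadratic generation cost at bus 1

      def balance(z):
          pg, qg, v, th = unpack(z)
          P, Q = injections(v, th)
          # generation minus load must equal the network injection at each bus
          return [pg - P[0], qg - Q[0], -Pd2 - P[1], -Qd2 - Q[1]]

      z0 = np.array([1.0, 0.2, 1.0, 1.0, 0.0])
      bounds = [(0, 3), (-1, 1), (0.95, 1.05), (0.95, 1.05), (-0.5, 0.5)]
      sol = minimize(cost, z0, bounds=bounds, method="SLSQP",
                     constraints={"type": "eq", "fun": balance})
      print(sol.success, dict(zip(["Pg", "Qg", "V1", "V2", "th2"], sol.x.round(4))))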

  5. Medical History: Compiling Your Medical Family Tree

    MedlinePlus

    ... history. Or, you can compile your family's health history on your computer or in a paper file. If you encounter reluctance from your family, consider these strategies: Share your ... have a family history of certain diseases or health conditions. Offer to ...

  6. Electronic test and calibration circuits, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A wide variety of simple test calibration circuits are compiled for the engineer and laboratory technician. The majority of circuits were found inexpensive to assemble. Testing electronic devices and components, instrument and system test, calibration and reference circuits, and simple test procedures are presented.

  7. Heat Transfer and Thermodynamics: a Compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A compilation is presented for the dissemination of information on technological developments which have potential utility outside the aerospace and nuclear communities. Studies include theories and mechanical considerations in the transfer of heat and the thermodynamic properties of matter and the causes and effects of certain interactions.

  8. Compilation of State Alternate Assessment Participation Guidelines

    ERIC Educational Resources Information Center

    Ly, Thuy

    2004-01-01

    The purpose of this compilation of alternate assessment participation guidelines is two fold: to provide a resource for states and to establish a data source and baseline for the evolution of such guidelines. This report is primarily intended for staff of the State Departments of Education in Delaware, Kentucky, Maryland, North Carolina, South…

  9. Note on Conditional Compilation in Standard ML

    DTIC Science & Technology

    1993-06-01

    Note on Conditional Compilation in Standard ML. Nicholas Haines, Edoardo Biagioni, Robert Harper, Brian G. Milnes. June 1993. CMU-CS-93-11.

  10. Safety and maintenance engineering: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A compilation is presented for the dissemination of information on technological developments which have potential utility outside the aerospace and nuclear communities. Safety of personnel engaged in the handling of hazardous materials and equipment, protection of equipment from fire, high wind, or careless handling by personnel, and techniques for the maintenance of operating equipment are reported.

  11. The New Southern FIA Data Compilation System

    Treesearch

    V. Clark Baldwin; Larry Royer

    2001-01-01

    In general, the major national Forest Inventory and Analysis annual inventory emphasis has been on data-base design and not on data processing and calculation of various new attributes. Two key programming techniques required for efficient data processing are indexing and modularization. The Southern Research Station Compilation System utilizes modular and indexing...

  12. The dc power circuits: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A compilation of reports concerning power circuits is presented for the dissemination of aerospace information to the general public as part of the NASA Technology Utilization Program. The descriptions for the electronic circuits are grouped as follows: dc power supplies, power converters, current-voltage power supply regulators, overload protection circuits, and dc constant current power supplies.

  13. Communications techniques and equipment: A compilation

    NASA Technical Reports Server (NTRS)

    1975-01-01

    This Compilation is devoted to equipment and techniques in the field of communications. It contains three sections. One section is on telemetry, including articles on radar and antennas. The second section describes techniques and equipment for coding and handling data. The third and final section includes descriptions of amplifiers, receivers, and other communications subsystems.

  14. Electronic switches and control circuits: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The innovations in this updated series of compilations dealing with electronic technology represents a carefully selected collection of items on electronic switches and control circuits. Most of the items are based on well-known circuit design concepts that have been simplified or refined to meet NASA's demanding requirement for reliability, simplicity, fail-safe characteristics, and the capability of withstanding environmental extremes.

  15. Proving Correctness for Pointer Programs in a Verifying Compiler

    NASA Technical Reports Server (NTRS)

    Kulczycki, Gregory; Singh, Amrinder

    2008-01-01

    This research describes a component-based approach to proving the correctness of programs involving pointer behavior. The approach supports modular reasoning and is designed to be used within the larger context of a verifying compiler. The approach consists of two parts. When a system component requires the direct manipulation of pointer operations in its implementation, we implement it using a built-in component specifically designed to capture the functional and performance behavior of pointers. When a system component requires pointer behavior via a linked data structure, we ensure that the complexities of the pointer operations are encapsulated within the data structure and are hidden to the client component. In this way, programs that rely on pointers can be verified modularly, without requiring special rules for pointers. The ultimate objective of a verifying compiler is to prove-with as little human intervention as possible-that proposed program code is correct with respect to a full behavioral specification. Full verification for software is especially important for an agency like NASA that is routinely involved in the development of mission critical systems.

  16. Runtime support and compilation methods for user-specified data distributions

    NASA Technical Reports Server (NTRS)

    Ponnusamy, Ravi; Saltz, Joel; Choudhury, Alok; Hwang, Yuan-Shin; Fox, Geoffrey

    1993-01-01

    This paper describes two new ideas by which an HPF compiler can deal with irregular computations effectively. The first mechanism invokes a user specified mapping procedure via a set of compiler directives. The directives allow use of program arrays to describe graph connectivity, spatial location of array elements, and computational load. The second mechanism is a simple conservative method that in many cases enables a compiler to recognize that it is possible to reuse previously computed information from inspectors (e.g. communication schedules, loop iteration partitions, information that associates off-processor data copies with on-processor buffer locations). We present performance results for these mechanisms from a Fortran 90D compiler implementation.
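
    The second mechanism, reusing previously computed inspector results, can be pictured with a toy inspector/executor pair: the inspector derives a communication schedule from the irregular indirection array once, and the executor reuses it across iterations as long as that array is unchanged. The Python sketch below is only illustrative; the paper's setting is a Fortran 90D/HPF compiler on distributed-memory machines, and the "fetch" callback stands in for real communication.

      def inspector(indices, my_range, owner):
          """Build a 'schedule' of off-processor elements that must be fetched."""
          off_proc = sorted({i for i in indices if not (my_range[0] <= i < my_range[1])})
          return {"gather": off_proc, "owners": {i: owner(i) for i in off_proc}}

      def executor(x, indices, schedule, fetch):
          """Apply an irregular gather y[j] = x[indices[j]] using the schedule."""
          ghost = {i: fetch(schedule["owners"][i], i) for i in schedule["gather"]}
          return [ghost[i] if i in ghost else x[i] for i in indices]

      # toy setup: this "processor" owns global elements [0, 4); others are fetched
      x_local = [10, 11, 12, 13]
      idx = [0, 5, 2, 7, 5]                   # irregular indirection array
      sched = inspector(idx, (0, 4), owner=lambda i: i // 4)
      fetch = lambda proc, i: 100 * proc + i  # stand-in for real communication

      for _ in range(3):                      # schedule computed once, reused here
          y = executor(x_local, idx, sched, fetch)
      print(y)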

  17. The Fault Tree Compiler (FTC): Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Martensen, Anna L.

    1989-01-01

    The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and m OF n gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer precisely (within the limits of double precision floating point arithmetic) within a user-specified number of digits of accuracy. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation (DEC) VAX computer with the VMS operating system.
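
    The gate algebra behind a tool like this can be sketched for the simple case of independent, non-repeated basic events. The Python fragment below evaluates the five gate types named above bottom-up; it is not the FTC's solution technique (which bounds the answer to a user-specified number of digits), just an illustration of what each gate computes.

      from itertools import combinations

      def eval_gate(kind, probs, m=None):
          if kind == "AND":
              p = 1.0
              for q in probs:
                  p *= q
              return p
          if kind == "OR":                    # 1 - product of complements
              p = 1.0
              for q in probs:
                  p *= (1.0 - q)
              return 1.0 - p
          if kind == "XOR":                   # exactly one of two inputs
              a, b = probs
              return a * (1 - b) + b * (1 - a)
          if kind == "INVERT":
              (a,) = probs
              return 1.0 - a
          if kind == "M_OF_N":                # at least m of the n inputs occur
              n = len(probs)
              total = 0.0
              for k in range(m, n + 1):
                  for subset in combinations(range(n), k):
                      term = 1.0
                      for i in range(n):
                          term *= probs[i] if i in subset else (1 - probs[i])
                      total += term
              return total
          raise ValueError(kind)

      # top = OR(AND(e1, e2), 2-of-3(e3, e4, e5)) with illustrative event probabilities
      e = {"e1": 1e-3, "e2": 2e-3, "e3": 5e-3, "e4": 5e-3, "e5": 1e-2}
      top = eval_gate("OR", [eval_gate("AND", [e["e1"], e["e2"]]),
                             eval_gate("M_OF_N", [e["e3"], e["e4"], e["e5"]], m=2)])
      print(f"{top:.3e}")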

  18. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
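
    The dual-version idea can be mimicked at a much higher level with a try-the-fast-version, fall-back-on-failure wrapper. The Python sketch below is only an analogy for the mechanism described above; the patented scheme operates on compiled code versions inside the compiler and runtime, not on Python exceptions, and the failure counter stands in loosely for its predictive mechanism.

      def with_fallback(aggressive, conservative):
          state = {"failures": 0}
          def run(*args, **kwargs):
              # crude predictive check: stop trying the aggressive version after
              # repeated failures (a stand-in for the patent's predictive mechanism)
              if state["failures"] < 3:
                  try:
                      return aggressive(*args, **kwargs)
                  except Exception:
                      state["failures"] += 1
              return conservative(*args, **kwargs)
          return run

      # toy example: a "speculative" reciprocal sum that assumes no zeros
      def fast_recip_sum(xs):
          return sum(1.0 / x for x in xs)            # may raise ZeroDivisionError

      def safe_recip_sum(xs):
          return sum(1.0 / x for x in xs if x != 0)  # guarded version

      recip_sum = with_fallback(fast_recip_sum, safe_recip_sum)
      print(recip_sum([1.0, 2.0, 4.0]))   # aggressive path succeeds
      print(recip_sum([1.0, 0.0, 4.0]))   # rolls back to the conservative path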

  19. Optimizing Suicide Prevention Programs and Their Implementation in Europe (OSPI Europe): an evidence-based multi-level approach

    PubMed Central

    2009-01-01

    Background Suicide and non-fatal suicidal behaviour are significant public health issues in Europe requiring effective preventive interventions. However, the evidence for effective preventive strategies is scarce. The protocol of a European research project to develop an optimized evidence based program for suicide prevention is presented. Method The groundwork for this research has been established by a regional community based intervention for suicide prevention that focuses on improving awareness and care for depression performed within the European Alliance Against Depression (EAAD). The EAAD intervention consists of (1) training sessions and practice support for primary care physicians, (2) public relations activities and mass media campaigns, (3) training sessions for community facilitators who serve as gatekeepers for depressed and suicidal persons in the community, and (4) treatment, outreach and support for high-risk and self-help groups (e.g. helplines). The intervention has been shown to be effective in reducing suicidal behaviour in an earlier study, the Nuremberg Alliance Against Depression. In the context of the current research project described in this paper (OSPI-Europe) the EAAD model is enhanced by other evidence based interventions and implemented simultaneously and in a standardised way in four regions in Ireland, Portugal, Hungary and Germany. The enhanced intervention will be evaluated using a prospective controlled design with the primary outcomes being composite suicidal acts (fatal and non-fatal), and with intermediate outcomes being the effect of training programs, changes in public attitudes, and guideline-consistent media reporting. In addition, an analysis of the economic costs and consequences will be undertaken, while a process evaluation will monitor implementation of the interventions within the different regions with varying organisational and healthcare contexts. Discussion This multi-centre research seeks to overcome major challenges


  20. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    SciTech Connect

    Nataf, J.M.; Winkelmann, F.

    1992-09-01

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.
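
    The symbolic-interface idea, generating solution code from an equation entered in symbolic form, can be illustrated with SymPy in place of SPARK's own computer-algebra machinery. Everything below (the heat-flow equation, symbol names, and the generated-function naming) is an invented example, not SPARK output.

      import sympy as sp

      T1, T2, R, Q = sp.symbols("T1 T2 R Q")
      # a symbolic model equation: heat flow Q through a thermal resistance R
      equation = sp.Eq(Q, (T1 - T2) / R)

      # solve the equation for the chosen unknown and emit Python source for it
      unknown = T2
      solution = sp.solve(equation, unknown)[0]
      src = f"def solve_{unknown}(T1, R, Q):\n    return {sp.pycode(solution)}\n"
      print(src)

      # compile and use the generated solver
      namespace = {}
      exec(src, namespace)
      print(namespace[f"solve_{unknown}"](T1=300.0, R=2.0, Q=10.0))   # -> 280.0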

  2. Optimization of multisource information fusion for resource management with remote sensing imagery: an aggregate regularization method with neural network implementation

    NASA Astrophysics Data System (ADS)

    Shkvarko, Yuriy, IV; Butenko, Sergiy

    2006-05-01

    We address a new approach to the problem of improvement of the quality of multi-grade spatial-spectral images provided by several remote sensing (RS) systems as required for environmental resource management with the use of multisource RS data. The problem of multi-spectral reconstructive imaging with multisource information fusion is stated and treated as an aggregated ill-conditioned inverse problem of reconstruction of a high-resolution image from the data provided by several sensor systems that employ the same or different image formation methods. The proposed fusion-optimization technique aggregates the experiment design regularization paradigm with neural-network-based implementation of the multisource information fusion method. The maximum entropy (ME) requirement and projection regularization constraints are posed as prior knowledge for fused reconstruction and the experiment-design regularization methodology is applied to perform the optimization of multisource information fusion. Computationally, the reconstruction and fusion are accomplished via minimization of the energy function of the proposed modified multistate Hopfield-type neural network (NN) that integrates the model parameters of all systems incorporating a priori information, aggregate multisource measurements and calibration data. The developed theory proves that the designed maximum entropy neural network (MENN) is able to solve the multisource fusion tasks without substantial complication of its computational structure, independently of the number of systems to be fused. For each particular case, only the proper adjustment of the MENN's parameters (i.e. interconnection strengths and bias inputs) should be accomplished. Simulation examples are presented to illustrate the good overall performance of the fused reconstruction achieved with the developed MENN algorithm applied to the real-world multi-spectral environmental imagery.

  3. Compilation of Physicochemical and Toxicological Information ...

    EPA Pesticide Factsheets

    The purpose of this product is to make accessible the information about the 1,173 hydraulic fracturing-related chemicals that were listed in the external review draft of the Hydraulic Fracturing Drinking Water Assessment that was released recently. The product consists of a series of spreadsheets with physicochemical and toxicological information pulled from several sources of information, including: EPI Suite, LeadScope, QikiProp, Reaxys, IRIS, PPRTV, ATSDR, among other sources. The spreadsheets also contain background information about how the list of chemicals were compiled, what the different sources of chemical information are, and definitions and descriptions of the values presented. The purpose of this product is to compile and make accessible information about the 1,173 hydraulic fracturing-related chemicals listed in the external review draft of the Hydraulic Fracturing Drinking Water Assessment.

  4. Compilation of DNA sequences of Escherichia coli

    PubMed Central

    Kröger, Manfred

    1989-01-01

    We have compiled the DNA sequence data for E.coli K12 available from the GENBANK and EMBO databases and, over a period of several years, independently from the literature. We have introduced all available genetic map data and have arranged the sequences accordingly. As far as possible the overlaps are deleted, and a total of 940,449 individual bp had been determined by the beginning of 1989. This corresponds to 19.92% of the entire E.coli chromosome of about 4,720 kbp. This number may actually be higher by some extra 2% derived from the sequence of lysogenic bacteriophage lambda and the various insertion sequences. This compilation may become available in machine-readable form from one of the international databanks in the future. PMID:2654890

  5. Using MaxCompiler for the high level synthesis of trigger algorithms

    NASA Astrophysics Data System (ADS)

    Summers, S.; Rose, A.; Sanders, P.

    2017-02-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  6. 1991 OCRWM bulletin compilation and index

    SciTech Connect

    1992-05-01

    The OCRWM Bulletin is published by the Department of Energy, Office of Civilian Radioactive Waste Management, to provide current information about the national program for managing spent fuel and high-level radioactive waste. The document is a compilation of issues from the 1991 calendar year. A table of contents and an index have been provided to reference information contained in this year's Bulletins.

  7. System, apparatus and methods to implement high-speed network analyzers

    DOEpatents

    Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E

    2015-11-10

    Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such dispatcher facility can also be used as a cache of policies, wherein if a policy is found, then packet manipulations associated with the policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.
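
    The fast-path/slow-path dispatcher with a policy cache can be sketched schematically: if a policy for a packet's flow is already cached, it is applied directly; otherwise the full analysis runs and installs one. The Python below is an illustration with invented packet fields and a toy "analysis"; it does not reflect the patent's DFA-based signature engine.

      def make_dispatcher(full_analysis):
          policy_cache = {}
          def dispatch(packet):
              flow = (packet["src"], packet["dst"], packet["proto"])
              if flow in policy_cache:                 # fast path: cached policy
                  return policy_cache[flow](packet)
              action = full_analysis(packet)           # slow path: full inspection
              policy_cache[flow] = action
              return action(packet)
          return dispatch

      # toy "full analysis": drop anything matching a blocked signature
      def full_analysis(packet):
          if b"EVIL" in packet["payload"]:
              return lambda p: ("drop", p["dst"])
          return lambda p: ("forward", p["dst"])

      dispatch = make_dispatcher(full_analysis)
      pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6, "payload": b"hello"}
      print(dispatch(pkt))        # slow path, installs a policy
      print(dispatch(pkt))        # fast path, policy served from the cache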

  8. A compiler and validator for flight operations on NASA space missions

    NASA Astrophysics Data System (ADS)

    Fonte, Sergio; Politi, Romolo; Capria, Maria Teresa; Giardino, Marco; De Sanctis, Maria Cristina

    2016-07-01

    In NASA missions, the management and programming of the flight systems is performed with a specific scripting language, the SASF (Spacecraft Activity Sequence File). In order to check the syntax and grammar, a compiler is needed that flags any errors found in the sequence file produced for an instrument on board the flight system. In our experience on the Dawn mission, we developed VIRV (VIR Validator), a tool that checks the syntax and grammar of SASF, runs simulations of VIR acquisitions, and finds any violations of the flight rules in the sequences produced. The SASF compiler project (SSC - Spacecraft Sequence Compiler) is ready for a new implementation: generalization to different NASA missions. In fact, VIRV is a compiler for a dialect of SASF; it includes VIR commands as part of the SASF language. Our goal is to produce a general compiler for SASF, in which every instrument has a library to be introduced into the compiler. The SSC can analyze a SASF, produce a log of events, perform a simulation of the instrument acquisition, and check the flight rules for the selected instrument. The output of the program can be produced in GRASS GIS format and may help the operator analyze the geometry of the acquisition.

  9. Efficient RTL-based code generation for specified DSP C-compiler

    NASA Astrophysics Data System (ADS)

    Pan, Qiaohai; Liu, Peng; Shi, Ce; Yao, Qingdong; Zhu, Shaobo; Yan, Li; Zhou, Ying; Huang, Weibing

    2001-12-01

    A C-compiler is a basic tool for most embedded systems programmers. It is the tool by which the ideas and algorithms in an application (expressed as C source code) are transformed into machine code executable by the target processor. Our research was to develop an optimizing C-compiler for a specified 16-bit DSP. As one of the most important parts of the C-compiler, the code generator's efficiency and performance directly affect the resulting target assembly code. Thus, in order to improve the performance of the C-compiler, we constructed an efficient code generator based on RTL, an intermediate language used in GNU CC. The code generator accepts RTL as its main input, takes advantage of features specific to RTL and to the specified DSP's architecture, and generates compact assembly code for the specified DSP. In this paper, the features of RTL are first briefly introduced. Then, the basic principle of constructing the code generator is presented in detail. Following this principle, the paper discusses the architecture of the code generator, including syntax tree construction/reconstruction, basic RTL instruction extraction, behavior description at the RTL level, and instruction description at the assembly level. The optimization strategies used in the code generator to produce compact assembly code are also given. Finally, we conclude that the C-compiler using this code generator achieves the high efficiency we expected.
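
    As a rough illustration of the RTL-pattern-to-instruction mapping described above (the RTL shapes and DSP mnemonics below are invented for the sketch, not taken from the paper):

```python
# Toy RTL-expression -> assembly selection by pattern matching.
# RTL is modeled as nested tuples, e.g. ("set", "r0", ("plus", "r1", "r2")).
# The patterns and mnemonics are invented; a real DSP code generator would
# match against the machine description of the target processor.

def select(rtl):
    op, dest, expr = rtl
    assert op == "set"
    kind = expr[0]
    if kind == "plus":
        return [f"ADD {dest}, {expr[1]}, {expr[2]}"]
    if kind == "mult":
        return [f"MPY {dest}, {expr[1]}, {expr[2]}"]
    if kind == "mem":                       # load from a memory address
        return [f"LD  {dest}, @{expr[1]}"]
    raise NotImplementedError(kind)

print("\n".join(select(("set", "r0", ("plus", "r1", "r2")))))   # -> ADD r0, r1, r2
```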

  10. Current status of the HAL/S compiler on the Modcomp classic 7870 computer

    NASA Technical Reports Server (NTRS)

    Lytle, P. J.

    1981-01-01

    A brief history of the HAL/S language, including the experience of other users of the language at the Jet Propulsion Laboratory, is presented. The current status of the compiler, as implemented on the Modcomp Classic 7870 computer, and future applications in the Deep Space Network (DSN) are discussed. The primary applications in the DSN will be in the Mark IVA network.

  12. Mystic: Implementation of the Static Dynamic Optimal Control Algorithm for High-Fidelity, Low-Thrust Trajectory Design

    NASA Technical Reports Server (NTRS)

    Whiffen, Gregory J.

    2006-01-01

    Mystic software is designed to compute, analyze, and visualize optimal high-fidelity, low-thrust trajectories. The software can be used to analyze interplanetary, planetocentric, and combination trajectories, and Mystic also provides utilities to assist in the operation and navigation of low-thrust spacecraft. Mystic will be used to design and navigate NASA's Dawn Discovery mission to orbit the two largest asteroids. The underlying optimization algorithm used in the Mystic software is called Static/Dynamic Optimal Control (SDC). SDC is a nonlinear optimal control method designed to optimize both 'static variables' (parameters) and dynamic variables (functions of time) simultaneously. SDC is a general nonlinear optimal control algorithm based on Bellman's principle.

  14. Implementation of an antimicrobial stewardship pathway with daptomycin for optimal treatment of methicillin-resistant Staphylococcus aureus bacteremia.

    PubMed

    Kullar, Ravina; Davis, Susan L; Kaye, Keith S; Levine, Donald P; Pogue, Jason M; Rybak, Michael J

    2013-01-01

    To evaluate a clinical pathway using daptomycin in patients with bacteremia caused by methicillin-resistant Staphylococcus aureus (MRSA) isolates exhibiting vancomycin minimum inhibitory concentrations (MICs) greater than 1 mg/L. Two-phase quasi-experimental study. Level I trauma center in Detroit, Michigan. The study population consisted of a total of 170 patients with MRSA bacteremia susceptible to vancomycin: 70 patients who had initial blood MRSA isolates exhibiting a vancomycin MIC > 1 mg/L and were treated with vancomycin were included in phase I (retrospective baseline period [2005-2007]) and 100 patients who were switched to daptomycin after initial vancomycin therapy according to the institutional MRSA bacteremia treatment pathway were included in phase II (the period after implementation of the treatment pathway [2008-2010]). The MRSA bacteremia treatment pathway was as follows: vancomycin therapy was initiated, optimizing target trough concentrations to 15-20 mg/L; for isolates demonstrating vancomycin MICs greater than 1 mg/L, therapy was switched to daptomycin, initiated at dosages of 6 mg/kg/day or higher. Infection characteristics, patient outcomes, and costs were evaluated. Patient characteristics were similar between the phase I and phase II groups. Phase II patients were more likely to achieve clinical success than were phase I patients (75.0% vs 41.4%, p<0.001). Phase II patients demonstrated a shorter total hospital length of stay and shorter durations of inpatient therapy, fever, and bacteremia. Treatment during phase I was independently associated with failure. Nine patients during phase I experienced nephrotoxicity, and two patients during phase II experienced increases in creatine kinase level. Costs were similar between phases I and II ($18,385 vs $19,755, p>0.05), although the hospital readmission rate was higher in phase I (33% vs 21%, p=0.08). Among the patients with bacteremia who had MRSA isolates that exhibited elevated

  15. A quantum logic network for implementing optimal symmetric universal and phase-covariant telecloning of a bipartite entangled state

    NASA Astrophysics Data System (ADS)

    Meng, Fanyu; Zhu, Aidong

    2008-10-01

    A quantum logic network to implement quantum telecloning is presented in this paper. The network includes two parts: the first part is used to create the telecloning channel and the second part to teleport the state. It can be used not only to implement universal telecloning for a bipartite entangled state which is completely unknown, but also to implement the phase-covariant telecloning for one that is partially known. Furthermore, the network can also be used to construct a tele-triplicator. It can easily be implemented in experiment because only single- and two-qubit operations are used in the network.

  16. Compiler-based code generation and autotuning for geometric multigrid on GPU-accelerated supercomputers

    DOE PAGES

    Basu, Protonu; Williams, Samuel; Van Straalen, Brian; ...

    2017-04-05

    GPUs, with their high bandwidths and computational capabilities, are an increasingly popular target for scientific computing. Unfortunately, to date, harnessing the power of the GPU has required use of a GPU-specific programming model like CUDA, OpenCL, or OpenACC. Thus, in order to deliver portability across CPU-based and GPU-accelerated supercomputers, programmers are forced to write and maintain two versions of their applications or frameworks. In this paper, we explore the use of a compiler-based autotuning framework based on CUDA-CHiLL to deliver not only portability, but also performance portability across CPU- and GPU-accelerated platforms for the geometric multigrid linear solvers found in many scientific applications. We also show that with autotuning we can attain near Roofline (a performance bound for a computation and target architecture) performance across the key operations in the miniGMG benchmark for both CPU- and GPU-based architectures, as well as for multiple stencil discretizations and smoothers. We show that our technology is readily interoperable with MPI, resulting in performance at scale equal to that obtained via a hand-optimized MPI+CUDA implementation.
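
    A much-reduced sketch of the autotuning idea follows, timing a blocked smoother over candidate tile sizes and keeping the fastest; the real framework generates and tunes CUDA variants through CUDA-CHiLL rather than Python loops.

```python
import time
import numpy as np

def jacobi_smooth(u, f, h2, tile):
    """One blocked Jacobi sweep on a 2-D grid; `tile` is the tuning knob."""
    out = u.copy()
    n = u.shape[0]
    for i0 in range(1, n - 1, tile):
        for j0 in range(1, n - 1, tile):
            i1, j1 = min(i0 + tile, n - 1), min(j0 + tile, n - 1)
            out[i0:i1, j0:j1] = 0.25 * (u[i0-1:i1-1, j0:j1] + u[i0+1:i1+1, j0:j1]
                                        + u[i0:i1, j0-1:j1-1] + u[i0:i1, j0+1:j1+1]
                                        - h2 * f[i0:i1, j0:j1])
    return out

u = np.random.rand(512, 512)
f = np.zeros_like(u)
best = None
for tile in (16, 32, 64, 128):              # candidate code variants
    t0 = time.perf_counter()
    jacobi_smooth(u, f, 1.0, tile)
    dt = time.perf_counter() - t0
    best = min(best or (dt, tile), (dt, tile))
print("fastest tile size:", best[1])
```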

  17. The Union3 Supernova Ia Compilation

    NASA Astrophysics Data System (ADS)

    Rubin, David; Aldering, Greg Scott; Amanullah, Rahman; Barbary, Kyle H.; Bruce, Adam; Chappell, Greta; Currie, Miles; Dawson, Kyle S.; Deustua, Susana E.; Doi, Mamoru; Fakhouri, Hannah; Fruchter, Andrew S.; Gibbons, Rachel A.; Goobar, Ariel; Hsiao, Eric; Huang, Xiaosheng; Ihara, Yutaka; Kim, Alex G.; Knop, Robert A.; Kowalski, Marek; Krechmer, Evan; Lidman, Chris; Linder, Eric; Meyers, Joshua; Morokuma, Tomoki; Nordin, Jakob; Perlmutter, Saul; Ripoche, Pascal; Rykoff, Eli S.; Saunders, Clare; Spadafora, Anthony L.; Suzuki, Nao; Takanashi, Naohiro; Yasuda, Naoki; Supernova Cosmology Project

    2015-01-01

    High-redshift supernovae observed with the Hubble Space Telescope (HST) are crucial for constraining any time variation in dark energy. In a forthcoming paper (Rubin+, in prep), we will present a cosmological analysis incorporating existing supernovae with improved calibrations, and new HST-observed supernovae. We combine these data with most of the world's current literature data, and fit using SALT2-4 to create the Union3 Supernova compilation. We present a new analysis framework that allows non-linear light-curve width and color corrections, direct modeling of color dispersion, and a redshift-dependent host-mass correction.

  18. Dual compile strategy for parallel heterogeneous execution.

    SciTech Connect

    Smith, Tyler Barratt; Perry, James Thomas

    2012-06-01

    The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second and custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine works on the application while the other monitors and checks it in parallel, instead of both engines performing the same work at the same time.

  19. Digital circuits for computer applications: A compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The innovations in this updated series of compilations dealing with electronic technology represent a carefully selected collection of digital circuits which have direct application in computer oriented systems. In general, the circuits have been selected as representative items of each section and have been included on their merits of having universal applications in digital computers and digital data processing systems. As such, they should have wide appeal to the professional engineer and scientist who encounter the fundamentals of digital techniques in their daily activities. The circuits are grouped as digital logic circuits, analog to digital converters, and counters and shift registers.

  20. HAL/S-360 compiler system specification

    NASA Technical Reports Server (NTRS)

    Johnson, A. E.; Newbold, P. N.; Schulenberg, C. W.; Avakian, A. E.; Varga, S.; Helmers, P. H.; Helmers, C. T., Jr.; Hotz, R. L.

    1974-01-01

    A three phase language compiler is described which produces IBM 360/370 compatible object modules and a set of simulation tables to aid in run time verification. A link edit step augments the standard OS linkage editor. A comprehensive run time system and library provide the HAL/S operating environment, error handling, a pseudo real time executive, and an extensive set of mathematical, conversion, I/O, and diagnostic routines. The specifications of the information flow and content for this system are also considered.

  1. Piping and tubing technology: A compilation

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A compilation on the devices, techniques, and methods used in piping and tubing technology is presented. Data cover the following: (1) a number of fittings, couplings, and connectors that are useful in joining tubing and piping and various systems, (2) a family of devices used where flexibility and/or vibration damping are necessary, (3) a number of devices found useful in the regulation and control of fluid flow, and (4) shop hints to aid in maintenance and repair procedures such as cleaning, flaring, and swaging of tubes.

  2. A Framework for an Automated Compilation System for Reconfigurable Architectures

    DTIC Science & Technology

    1997-03-01

    Excerpt from the report's list of figures: C source for a simple bit reversal program; optimized assembly code for the bit reversal loop; source code for a software function identified for hardware implementation; and source code for the dilation filter in the IRMW application.

  3. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  4. Clover: Compiler directed lightweight soft error resilience

    SciTech Connect

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; Tiwari, Devesh

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoint. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experiment results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
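
    A minimal sketch of the recovery idea only: because the protected region is idempotent, control can simply be redirected to its beginning when a (simulated) detector flags an error. The region and detector below are toy stand-ins, not Clover's compiler-generated code.

```python
import random

def run_idempotent_region(region, detector, max_retries=3):
    """Run an idempotent code region, re-executing it from the start
    whenever the (simulated) soft-error detector flags a strike.

    Because the region is idempotent -- it overwrites none of its live
    inputs -- redirecting control back to its beginning is always safe.
    """
    for _ in range(max_retries + 1):
        result = region()
        if not detector():          # no error sensed during this execution
            return result
    raise RuntimeError("unrecoverable error: retries exhausted")

# toy region: a pure function of its inputs, hence idempotent
region = lambda: sum(i * i for i in range(1000))
detector = lambda: random.random() < 0.3   # 30% chance a strike was sensed
print(run_idempotent_region(region, detector))
```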

  5. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  6. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Agency Reports § 146.600 Semi-annual compilation. (a) The head of each agency shall collect and compile... Clerk of the House of Representatives. (h) Agencies shall keep the originals of all disclosure reports... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Semi-annual compilation. 146.600...

  7. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Agency Reports § 146.600 Semi-annual compilation. (a) The head of each agency shall collect and compile... Clerk of the House of Representatives. (h) Agencies shall keep the originals of all disclosure reports... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Semi-annual compilation. 146.600...

  8. 13 CFR 146.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Agency Reports § 146.600 Semi-annual compilation. (a) The head of each agency shall collect and compile... Clerk of the House of Representatives. (h) Agencies shall keep the originals of all disclosure reports... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Semi-annual compilation. 146.600...

  9. 22 CFR 311.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B) and...

  10. 22 CFR 227.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Semi-annual compilation. 227.600 Section 227.600 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT NEW RESTRICTIONS ON LOBBYING Agency Reports § 227.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the...

  11. 22 CFR 227.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Semi-annual compilation. 227.600 Section 227.600 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT NEW RESTRICTIONS ON LOBBYING Agency Reports § 227.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the...

  12. 22 CFR 227.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Semi-annual compilation. 227.600 Section 227.600 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT NEW RESTRICTIONS ON LOBBYING Agency Reports § 227.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the...

  13. 22 CFR 227.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Semi-annual compilation. 227.600 Section 227.600 Foreign Relations AGENCY FOR INTERNATIONAL DEVELOPMENT NEW RESTRICTIONS ON LOBBYING Agency Reports § 227.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the...

  14. 22 CFR 311.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 2 2011-04-01 2009-04-01 true Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B)...

  15. 22 CFR 311.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 2 2014-04-01 2014-04-01 false Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B)...

  16. 22 CFR 311.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 2 2012-04-01 2009-04-01 true Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B)...

  17. 22 CFR 311.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 2 2013-04-01 2009-04-01 true Semi-annual compilation. 311.600 Section 311.600 Foreign Relations PEACE CORPS NEW RESTRICTIONS ON LOBBYING Agency Reports § 311.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the disclosure reports (see appendix B)...

  18. NPDES CAFO Regulations Implementation Status Reports

    EPA Pesticide Factsheets

    EPA compiles annual summaries on the implementation status of the NPDES CAFO regulations. Reports include, for each state: total number of CAFOs, number and percentage of CAFOs with NPDES permits, and other information associated with implementation of the NPDES CAFO regulations.

  19. Ada Compiler Validation Summary Report: Certificate Number: 890504W1. 10079 Hewlett Packard Co. HP 9000 Series 300 Ada Compiler, Version 4.35 HP 9000 Series 300 Model 370

    DTIC Science & Technology

    1989-05-04

    Consistent with the national laws of the originating country, the AVO may make full and free public disclosure of this report. In the United States, this...implementation and maintenance of the Ada compiler listed above, and agree to the public disclosure of the final Validation Summary Report. I further agree to... continue to comply with the Ada trademark policy, as defined by the Ada Joint Program Office. I declare that all of the Ada language compilers listed

  20. [Consciousness and the unconscious in self-regulation: the effects of conscious compilation on goal priming].

    PubMed

    Oikawa, Masanori; Oikawa, Haruka

    2010-12-01

    The present study explores the division of labor for consciousness and the unconscious by examining the effect that the conscious mental compilation of implementation intentions has on unconscious goal priming. Temptations (e.g., leisure activities) that compete with goals (e.g., to study) inhibit relevant goal pursuit. However, forming an implementation intention to pursue a goal without succumbing to temptations may set off automatic self-regulation based on renewed associations where activation of temptation triggers goal pursuit. An experiment with undergraduates (N=143) revealed that in the "no conscious compilation" control condition, goal priming facilitated and temptation priming inhibited subsequent task performance. However, in the "conscious compilation" condition, temptation priming facilitated subsequent task performance equally as much as goal priming did. These results are consistent with the notion that automatic goal pursuit in the direction counter to existing mental associations could be achieved following conscious compilation of implementation intentions. Implications of these findings for effective coordination of consciousness and the unconscious in self-regulation are discussed.

  1. A simple way to build an ANSI-C like compiler from scratch and embed it on the instrument's software

    NASA Astrophysics Data System (ADS)

    Rodríguez Trinidad, Alicia; Morales Muñoz, Rafael; Abril Martí, Miguel; Costillo Iciarra, Luis Pedro; Cárdenas Vázquez, M. C.; Rabaza Castillo, Ovidio; Ramón Ballesta, Alejandro; Sánchez Carrasco, Miguel A.; Becerril Jarque, Santiago; Amado González, Pedro J.

    2010-07-01

    This paper examines the reasons for building a compiled language embedded in an instrument's software. Starting from scratch and proceeding step by step, all the compiler stages of an ANSI-C like language are analyzed, simplified, and implemented. The result is a compiler and a runner with a small footprint that are easily transferable and can be embedded into an instrument's software. Both occupy about 75 KBytes, whereas similar solutions take hundreds. Finally, the possibilities that arise from embedding the runner inside an instrument's software are explored.
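
    As a flavour of the stages the paper walks through, a deliberately tiny tokenizer/parser/code-emitter for integer expressions is sketched below; it illustrates the classic pipeline, not the instrument compiler itself.

```python
import re

# Tiny illustration of the classic compiler stages -- tokenizing, parsing,
# and emitting code -- for integer expressions with + and * only.

def tokenize(src):
    return re.findall(r"\d+|[+*()]", src)

def parse_expr(toks):            # expr := term ('+' term)*
    node = parse_term(toks)
    while toks and toks[0] == "+":
        toks.pop(0)
        node = ("+", node, parse_term(toks))
    return node

def parse_term(toks):            # term := factor ('*' factor)*
    node = parse_factor(toks)
    while toks and toks[0] == "*":
        toks.pop(0)
        node = ("*", node, parse_factor(toks))
    return node

def parse_factor(toks):          # factor := NUMBER | '(' expr ')'
    tok = toks.pop(0)
    if tok == "(":
        node = parse_expr(toks)
        toks.pop(0)              # drop ')'
        return node
    return ("num", int(tok))

def emit(node, code):
    """Emit stack-machine code: PUSH n, ADD, MUL."""
    if node[0] == "num":
        code.append(f"PUSH {node[1]}")
    else:
        emit(node[1], code)
        emit(node[2], code)
        code.append("ADD" if node[0] == "+" else "MUL")
    return code

print("\n".join(emit(parse_expr(tokenize("2*(3+4)")), [])))
```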

  2. Affordable Design: A Methodology to Implement Process-Based Manufacturing Cost into the Traditional Performance-Focused Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Bao, Han P.; Samareh, J. A.

    2000-01-01

    The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly cost into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.
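
    A minimal sketch of the idea of using cost as the objective: an invented process-based cost model is minimized subject to an invented performance constraint. The variables, coefficients, and constraint are placeholders standing in for the paper's wing configurations, not its actual models.

```python
from scipy.optimize import minimize

# Hypothetical design variables: skin thickness t (in) and spar spacing s (in).
# The cost and performance models below are invented placeholders for
# process-based manufacturing/assembly cost and structural analyses.

def total_cost(x):
    t, s = x
    material = 120.0 * t                 # material cost grows with thickness
    fabrication = 300.0 / s + 40.0 * t   # closely spaced spars cost more to make
    assembly = 150.0 / s
    return material + fabrication + assembly

def performance_margin(x):               # must stay >= 0 (e.g. a stiffness requirement)
    t, s = x
    return t * (30.0 / s) - 1.0

res = minimize(total_cost, x0=[0.5, 10.0],
               bounds=[(0.1, 2.0), (5.0, 30.0)],
               constraints=[{"type": "ineq", "fun": performance_margin}])
print(res.x, total_cost(res.x))
```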

  3. Automatic compilation from high-level biologically-oriented programming language to genetic regulatory networks.

    PubMed

    Beal, Jacob; Lu, Ting; Weiss, Ron

    2011-01-01

    The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high level specification is compiled, using a regulatory motif based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes (~ 50%) and latency of the optimized engineered gene networks. Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems.

  4. Impact of Definitions of FIA Variables and Compilation Procedures on Inventory Compilation Results in Georgia

    Treesearch

    Brock Stewart; Chris J. Cieszewski; Michal Zasada

    2005-01-01

    This paper presents a sensitivity analysis of the impact of various definitions and inclusions of different variables in the Forest Inventory and Analysis (FIA) inventory on data compilation results. FIA manuals have been changing recently to make the inventory consistent between all the States. Our analysis demonstrates the importance (or insignificance) of different...

  5. The Union3 Supernova Ia Compilation

    NASA Astrophysics Data System (ADS)

    Rubin, David; Aldering, Greg Scott; Amanullah, Rahman; Barbary, Kyle H.; Bruce, Adam; Chappell, Greta; Currie, Miles; Dawson, Kyle S.; Deustua, Susana E.; Doi, Mamoru; Fakhouri, Hannah; Fruchter, Andrew S.; Gibbons, Rachel A.; Goobar, Ariel; Hsiao, Eric; Huang, Xiaosheng; Ihara, Yutaka; Kim, Alex G.; Knop, Robert A.; Kowalski, Marek; Krechmer, Evan; Lidman, Chris; Linder, Eric; Meyers, Joshua; Morokuma, Tomoki; Nordin, Jakob; Perlmutter, Saul; Ripoche, Pascal; Ruiz-Lapuente, Pilar; Rykoff, Eli S.; Saunders, Clare; Spadafora, Anthony L.; Suzuki, Nao; Takanashi, Naohiro; Yasuda, Naoki; Supernova Cosmology Project

    2016-01-01

    High-redshift supernovae observed with the Hubble Space Telescope (HST) are crucial for constraining any time variation in dark energy. In a forthcoming paper (Rubin+, in prep), we will present a cosmological analysis incorporating existing supernovae with improved calibrations, and new HST-observed supernovae (six above z=1). We combine these data with current literature data, and fit them using SALT2-4 to create the Union3 Supernova compilation. We build on the Unified Inference for Type Ia cosmologY (UNITY) framework (Rubin+ 2015b), incorporating non-linear light-curve width and color relations, a model for unexplained dispersion, an outlier model, and a redshift-dependent host-mass correction.
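
    For background, the standard linear standardization that such analyses generalize relates the distance modulus to the SALT2 light-curve parameters; a minimal sketch with typical literature coefficient values (not the Union3 best-fit values) is shown below.

```python
# Standard linear (Tripp-style) SN Ia standardization that Union-style
# analyses generalize with non-linear width and color terms:
#     mu = m_B - M_B + alpha * x1 - beta * c
# The coefficient values are typical of the literature, purely illustrative.
alpha, beta, M_B = 0.14, 3.1, -19.3

def distance_modulus(m_B, x1, c):
    """Distance modulus from peak magnitude m_B, stretch x1, and color c."""
    return m_B - M_B + alpha * x1 - beta * c

# one illustrative supernova: peak magnitude 23.0, stretch 0.5, color -0.05
print(distance_modulus(23.0, 0.5, -0.05))
```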

  6. Compilation of requests for nuclear data

    SciTech Connect

    Weston, L.W.; Larson, D.C.

    1993-02-01

    This compilation represents the current needs for nuclear data measurements and evaluations as expressed by interested fission and fusion reactor designers, medical users of nuclear data, nuclear data evaluators, CSEWG members and other interested parties. The requests and justifications are reviewed by the Data Request and Status Subcommittee of CSEWG as well as most of the general CSEWG membership. The basic format and computer programs for the Request List were produced by the National Nuclear Data Center (NNDC) at Brookhaven National Laboratory. The NNDC produced the Request List for many years. The Request List is compiled from a computerized data file. Each request has a unique isotope, reaction type, requestor and identifying number. The first two digits of the identifying number are the year in which the request was initiated. Every effort has been made to restrict the notations to those used in common nuclear physics textbooks. Most requests are for individual isotopes as are most ENDF evaluations, however, there are some requests for elemental measurements. Each request gives a priority rating which will be discussed in Section 2, the neutron energy range for which the request is made, the accuracy requested in terms of one standard deviation, and the requested energy resolution in terms of one standard deviation. Also given is the requestor with the comments which were furnished with the request. The addresses and telephone numbers of the requestors are given in Appendix 1. ENDF evaluators who may be contacted concerning evaluations are given in Appendix 2. Experimentalists contemplating making one of the requested measurements are encouraged to contact both the requestor and evaluator who may provide valuable information. This is a working document in that it will change with time. New requests or comments may be submitted to the editors or a regular CSEWG member at any time.

  8. An Innovative Compiler For Programming And Designing Real-Time Signal Processors

    NASA Astrophysics Data System (ADS)

    Petruschka, Orni; Torng, H. C.

    1986-04-01

    Real-time signal processing tasks impose stringent requirements on computing systems. One approach to satisfying these demands is to employ intelligently interconnected multiple arithmetic units, such as multipliers, adders, and logic units, to implement concurrent computations. Two problems emerge: 1) Programming: programs with wide instruction words have to be developed to exercise the multiple arithmetic units fully and efficiently to meet real-time processing loads; 2) Design: given a set of real-time signal processing tasks, design procedures are needed to specify multiple arithmetic units and their interconnection schemes for the processor. This paper presents a compiler which provides a solution to the programming and design problems. The compiler that has been developed translates blocks of RISC-like instructions into programs of wide microinstructions; each of these microinstructions initiates many concurrently executable operations. In so doing, we seek to achieve the maximum utilization of execution resources and to complete processing tasks in minimum time. The compiler is based on an innovative "Dispatch Stack" concept, and has been applied to program Floating Point Systems (FPS) processors; the resulting programs for computing inner products and other signal processing tasks are as good as those obtained by laborious hand-compilation. We then show that the compiler developed for programming can be used advantageously to design real-time signal processing systems with multiple arithmetic units.
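
    A toy sketch of the packing step such a compiler performs, greedily filling wide instruction words with operations whose inputs are already available; the Dispatch Stack mechanism itself is not reproduced here.

```python
def pack_wide_instructions(ops, inputs, width=4):
    """Greedily pack (dest, srcs) operations into wide instruction words.

    An operation is ready once every source it reads has been produced by an
    earlier word (or is a program input); each word holds at most `width`
    ready operations. A toy stand-in for the compiler's packing step.
    """
    available, remaining, words = set(inputs), list(ops), []
    while remaining:
        ready = [op for op in remaining if set(op[1]) <= available][:width]
        if not ready:
            raise ValueError("cyclic or undefined dependence")
        words.append(ready)
        for dest, _ in ready:
            available.add(dest)
            remaining = [op for op in remaining if op[0] != dest]
    return words

# inner-product style example: t_i = a_i*b_i (independent), then a reduction
ops = [("t0", ("a0", "b0")), ("t1", ("a1", "b1")),
       ("t2", ("a2", "b2")), ("t3", ("a3", "b3")),
       ("s0", ("t0", "t1")), ("s1", ("t2", "t3")),
       ("sum", ("s0", "s1"))]
inputs = ["a0", "b0", "a1", "b1", "a2", "b2", "a3", "b3"]
for word in pack_wide_instructions(ops, inputs):
    print([dest for dest, _ in word])
```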

  9. ROSE: Compiler Support for Object-Oriented Frameworks

    SciTech Connect

    Quinlan, D.

    1999-11-17

    ROSE is a preprocessor generation tool for the support of compile-time performance optimizations in Overture. The Overture framework is an object-oriented environment for solving partial differential equations in two and three space dimensions. It is a collection of C++ libraries that enables the use of finite difference and finite volume methods at a level that hides the details of the associated data structures. Overture can be used to solve problems in complicated, moving geometries using the method of overlapping grids. It has support for grid generation, difference operators, boundary conditions, database access and graphics. In this paper we briefly present Overture and discuss our approach toward performance within Overture and the A++P++ array class abstractions upon which Overture depends; this work represents some of the newest work in Overture. The results we present show that the abstractions represented within Overture and the A++P++ array class library can be used to obtain application codes with performance equivalent to that of optimized C and Fortran 77. ROSE, the preprocessor generation tool, is general in its application to any object-oriented framework or application and is not specific to Overture.

  10. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849

  11. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB.

    PubMed

    Biyikli, Emre; To, Albert C

    2015-01-01

    A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org.
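
    A much-reduced sketch of the proportional distribution step at the heart of PTO: a target material volume is allocated to elements in proportion to a power of their stress, with box constraints handled by redistributing whatever gets clipped. The FEM stress analysis that the real PTO loop wraps around this step is omitted, and the stress values below are invented.

```python
import numpy as np

def proportional_distribution(stress, target_volume, q=2.0,
                              x_min=0.001, x_max=1.0, tol=1e-6):
    """Distribute `target_volume` (sum of element densities) in proportion
    to stress**q, redistributing whatever the box constraints clip off.

    `stress` would come from an FEM solve in the real PTO loop; here it is
    simply taken as given.
    """
    x = np.zeros_like(stress)
    remaining = target_volume
    weights = stress ** q
    while remaining > tol:
        x = np.clip(x + remaining * weights / weights.sum(), x_min, x_max)
        remaining = target_volume - x.sum()
    return x

stress = np.array([1.0, 2.0, 4.0, 8.0, 0.5])       # invented element stresses
x = proportional_distribution(stress, target_volume=2.5)
print(x, x.sum())
```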

  12. Design and implementation of fuzzy-PD controller based on relation models: A cross-entropy optimization approach

    NASA Astrophysics Data System (ADS)

    Anisimov, D. N.; Dang, Thai Son; Banerjee, Santo; Mai, The Anh

    2017-07-01

    In this paper, an intelligent system using a fuzzy-PD controller based on relation models is developed for a two-wheeled self-balancing robot. The scaling factors of the fuzzy-PD controller are optimized by a cross-entropy optimization method. A Linear Quadratic Regulator is designed to provide a comparison with the fuzzy-PD controller in terms of control quality parameters. The controllers are ported to and run on an STM32F4 Discovery Kit based on a real-time operating system. The experimental results indicate that the proposed fuzzy-PD controller runs correctly on the embedded system and has the desired performance in terms of fast response, good balance, and stability.
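
    The paper's cost function and parameter ranges are not reproduced in the abstract; a generic sketch of the cross-entropy method tuning two scaling factors against a stand-in cost might look like this.

```python
import numpy as np

def cross_entropy_optimize(cost, mean, std, n_samples=50, n_elite=10, iters=30):
    """Generic cross-entropy method: sample candidates from a Gaussian,
    keep the elite fraction, and refit the Gaussian to the elites."""
    rng = np.random.default_rng(0)
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(n_samples, len(mean)))
        elites = samples[np.argsort([cost(s) for s in samples])[:n_elite]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean

# stand-in cost for two fuzzy-PD scaling factors; the real cost would be a
# control-quality index evaluated by simulating the balancing robot
cost = lambda k: (k[0] - 2.0) ** 2 + (k[1] - 0.5) ** 2
print(cross_entropy_optimize(cost, mean=np.array([1.0, 1.0]), std=np.array([1.0, 1.0])))
```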

  13. An integrated runtime and compile-time approach for parallelizing structured and block structured applications

    NASA Technical Reports Server (NTRS)

    Agrawal, Gagan; Sussman, Alan; Saltz, Joel

    1993-01-01

    Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid codes) and/or irregularly coupled (called multiblock or irregularly coupled regular mesh problems). A combined runtime and compile-time approach for parallelizing these applications on distributed memory parallel machines in an efficient and machine-independent fashion was described. A runtime library which can be used to port these applications to distributed memory machines was designed and implemented. The library is currently implemented on several different systems. To further ease the task of application programmers, methods were developed for integrating this runtime library with compilers for HPF-like parallel programming languages. How this runtime library was integrated with the Fortran 90D compiler being developed at Syracuse University is discussed. Experimental results demonstrating the efficacy of our approach are presented for a multiblock Navier-Stokes solver template and a multigrid code. Our experimental results show that our primitives have low runtime communication overheads. Further, the compiler-parallelized codes perform within 20 percent of the codes parallelized by manually inserting calls to the runtime library.
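
    A tiny illustration of the kind of communication such a runtime library automates, exchanging ghost cells between two adjacent structured blocks; plain numpy copies stand in for the actual message passing.

```python
import numpy as np

def exchange_ghost_cells(left, right):
    """Fill the one-cell-wide ghost columns that couple two adjacent blocks.

    In a distributed-memory runtime library these copies become messages
    between the processors owning each block; numpy slices stand in here.
    """
    left[:, -1] = right[:, 1]     # left block's right ghost <- right block's first interior column
    right[:, 0] = left[:, -2]     # right block's left ghost <- left block's last interior column

# two 6x6 blocks with a one-cell ghost layer on the shared edge
left = np.arange(36, dtype=float).reshape(6, 6)
right = 100 + np.arange(36, dtype=float).reshape(6, 6)
exchange_ghost_cells(left, right)
print(left[:, -1], right[:, 0])
```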

  14. OpenARC: Extensible OpenACC Compiler Framework for Directive-Based Accelerator Programming Study

    SciTech Connect

    Lee, Seyong; Vetter, Jeffrey S

    2014-01-01

    Directive-based, accelerator programming models such as OpenACC have arisen as an alternative solution to program emerging Scalable Heterogeneous Computing (SHC) platforms. However, the increased complexity in the SHC systems incurs several challenges in terms of portability and productivity. This paper presents an open-sourced OpenACC compiler, called OpenARC, which serves as an extensible research framework to address those issues in the directive-based accelerator programming. This paper explains important design strategies and key compiler transformation techniques needed to implement the reference OpenACC compiler. Moreover, this paper demonstrates the efficacy of OpenARC as a research framework for directive-based programming study, by proposing and implementing OpenACC extensions in the OpenARC framework to 1) support hybrid programming of the unified memory and separate memory and 2) exploit architecture-specific features in an abstract manner. Porting thirteen standard OpenACC programs and three extended OpenACC programs to CUDA GPUs shows that OpenARC performs similarly to a commercial OpenACC compiler, while it serves as a high-level research framework.

  15. Non-vitamin K antagonist oral anticoagulants and atrial fibrillation guidelines in practice: barriers to and strategies for optimal implementation.

    PubMed

    Camm, A John; Pinto, Fausto J; Hankey, Graeme J; Andreotti, Felicita; Hobbs, F D Richard

    2015-07-01

    Stroke is a leading cause of morbidity and mortality worldwide. Atrial fibrillation (AF) is an independent risk factor for stroke, increasing the risk five-fold. Strokes in patients with AF are more likely than other embolic strokes to be fatal or cause severe disability and are associated with higher healthcare costs, but they are also preventable. Current guidelines recommend that all patients with AF who are at risk of stroke should receive anticoagulation. However, despite this guidance, registry data indicate that anticoagulation is still widely underused. With a focus on the 2012 update of the European Society of Cardiology (ESC) guidelines for the management of AF, the Action for Stroke Prevention alliance writing group have identified key reasons for the suboptimal implementation of the guidelines at a global, regional, and local level, with an emphasis on access restrictions to guideline-recommended therapies. Following identification of these barriers, the group has developed an expert consensus on strategies to augment the implementation of current guidelines, including practical, educational, and access-related measures. The potential impact of healthcare quality measures for stroke prevention on guideline implementation is also explored. By providing practical guidance on how to improve implementation of the ESC guidelines, or region-specific modifications of these guidelines, the aim is to reduce the potentially devastating impact that stroke can have on patients, their families and their carers.

  16. Non-vitamin K antagonist oral anticoagulants and atrial fibrillation guidelines in practice: barriers to and strategies for optimal implementation

    PubMed Central

    Camm, A. John; Pinto, Fausto J.; Hankey, Graeme J.; Andreotti, Felicita; Hobbs, F.D. Richard

    2015-01-01

    Stroke is a leading cause of morbidity and mortality worldwide. Atrial fibrillation (AF) is an independent risk factor for stroke, increasing the risk five-fold. Strokes in patients with AF are more likely than other embolic strokes to be fatal or cause severe disability and are associated with higher healthcare costs, but they are also preventable. Current guidelines recommend that all patients with AF who are at risk of stroke should receive anticoagulation. However, despite this guidance, registry data indicate that anticoagulation is still widely underused. With a focus on the 2012 update of the European Society of Cardiology (ESC) guidelines for the management of AF, the Action for Stroke Prevention alliance writing group have identified key reasons for the suboptimal implementation of the guidelines at a global, regional, and local level, with an emphasis on access restrictions to guideline-recommended therapies. Following identification of these barriers, the group has developed an expert consensus on strategies to augment the implementation of current guidelines, including practical, educational, and access-related measures. The potential impact of healthcare quality measures for stroke prevention on guideline implementation is also explored. By providing practical guidance on how to improve implementation of the ESC guidelines, or region-specific modifications of these guidelines, the aim is to reduce the potentially devastating impact that stroke can have on patients, their families and their carers. PMID:26116685

  17. Qcompiler: Quantum compilation with the CSD method

    NASA Astrophysics Data System (ADS)

    Chen, Y. G.; Wang, J. B.

    2013-03-01

    In this paper, we present a general quantum computation compiler, which maps any given quantum algorithm to a quantum circuit consisting of a sequential set of elementary quantum logic gates based on recursive cosine-sine decomposition. The resulting quantum circuit diagram is provided by directly linking the package output written in LaTeX to Qcircuit.tex. We illustrate the use of the Qcompiler package through various examples with full details of the derived quantum circuits. Besides its accuracy, generality and simplicity, Qcompiler produces quantum circuits with a significantly reduced number of gates when the systems under study have a high degree of symmetry.
    Catalogue identifier: AENX_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENX_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 4321
    No. of bytes in distributed program, including test data, etc.: 50943
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: Any computer with a Fortran compiler
    Operating system: Linux, Mac OS X 10.5 (and later)
    RAM: Depends on the size of the unitary matrix to be decomposed
    Classification: 4.15
    External routines: Lapack (http://www.netlib.org/lapack/)
    Nature of problem: Decompose any given unitary operation into a quantum circuit with only elementary quantum logic gates.
    Solution method: This package decomposes an arbitrary unitary matrix, by applying the CSD algorithm recursively, into a series of block-diagonal matrices, which can then be readily associated with elementary quantum gates to form a quantum circuit.
    Restrictions: The only limitation is imposed by the available memory on the user's computer.
    Additional comments: This package is applicable to any arbitrary unitary matrices, both real and complex. If the
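
    One level of the recursion can be reproduced with SciPy's cosine-sine decomposition; the sketch below only shows the CSD building block, not the package's recursive gate synthesis.

```python
import numpy as np
from scipy.linalg import cossin
from scipy.stats import unitary_group

# The cosine-sine decomposition splits a unitary into two block-diagonal
# factors and a central block of cosine/sine rotations, which is the step
# a CSD-based compiler applies recursively to synthesize gates.
U = unitary_group.rvs(4, random_state=1)          # random 4x4 unitary
u, cs, vdh = cossin(U, p=2, q=2)                  # U = u @ cs @ vdh
print(np.allclose(u @ cs @ vdh, U))               # True: exact reconstruction
```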

  18. Compiling quantum algorithms for architectures with multi-qubit gates

    NASA Astrophysics Data System (ADS)

    Martinez, Esteban A.; Monz, Thomas; Nigg, Daniel; Schindler, Philipp; Blatt, Rainer

    2016-06-01

    In recent years, small-scale quantum information processors have been realized in multiple physical architectures. These systems provide a universal set of gates that allow one to implement any given unitary operation. The decomposition of a particular algorithm into a sequence of these available gates is not unique. Thus, the fidelity of the implementation of an algorithm can be increased by choosing an optimized decomposition into available gates. Here, we present a method to find such a decomposition, where a small-scale ion trap quantum information processor is used as an example. We demonstrate a numerical optimization protocol that minimizes the number of required multi-qubit entangling gates by design. Furthermore, we adapt the method for state preparation, and quantum algorithms including in-sequence measurements.

  19. On-line re-optimization of prostate IMRT plan for adaptive radiation therapy: A feasibility study and implementation

    NASA Astrophysics Data System (ADS)

    Thongphiew, Danthai

    Prostate cancer is a disease that affected approximately 200,000 men in the United States in 2006. Radiation therapy is a non-invasive treatment option for this disease and is highly effective. The goal of radiation therapy is to deliver the prescription dose to the tumor (prostate) while sparing the surrounding healthy organs (e.g., bladder, rectum, and femoral heads). One limitation of radiation therapy is organ position and shape variation from day to day. These variations can be as large as half an inch. The conventional solution to this problem is to include margins around the target when planning the treatment. The development of image guided radiation therapy techniques allows in-room correction, which potentially eliminates patient setup error; however, the uncertainty due to organ deformation still remains. Performing a full plan re-optimization takes about half an hour, which makes online correction infeasible. A technique for performing online re-optimization of intensity modulated radiation therapy was developed for adaptive radiation therapy of prostate cancer. The technique is capable of correcting both organ positioning and shape changes within a few minutes. The proposed technique involves (1) 3D on-board imaging of the daily anatomy, (2) registering the daily images with the original planning CT images and mapping the original dose distribution to the daily anatomy, and (3) real-time re-optimization of the plan. Finally, the leaf sequences are calculated for treatment delivery. The feasibility of this online adaptive radiation therapy scheme was evaluated with clinical cases. The results demonstrate that it is feasible to perform online re-optimization of the original plan when large position or shape variations occur.

  20. Implementation of a genetically tuned neural platform in optimizing fluorescence from receptor-ligand binding interactions on microchips.

    PubMed

    Alvarado, Judith; Hanrahan, Grady; Nguyen, Huong T H; Gomez, Frank A

    2012-09-01

    This paper describes the use of a genetically tuned neural network platform to optimize the fluorescence realized upon binding 5-carboxyfluorescein-D-Ala-D-Ala-D-Ala (5-FAM-(D-Ala)(3) ) (1) to the antibiotic teicoplanin from Actinoplanes teichomyceticus electrostatically attached to a microfluidic channel originally modified with 3-aminopropyltriethoxysilane. Here, three parameters are examined at a constant concentration of 1: (i) the length of time teicoplanin was in the microchannel; (ii) the length of time 1 was in the microchannel and thereby in equilibrium with teicoplanin; and (iii) the amount of time buffer was flushed through the microchannel to wash out any unbound 1 remaining in the channel. Neural network methodology is applied to optimize the fluorescence. The optimal neural structure provided a best-fit model for both the training set (r(2) = 0.985) and testing set (r(2) = 0.967) data. Simulated results were experimentally validated, demonstrating the efficiency of the neural network approach, which proved superior to the use of multiple linear regression and neural networks using standard back propagation.

  1. A new algorithm for computing theory prime implicates compilations

    SciTech Connect

    Marquis, P.; Sadaoui, S.

    1996-12-31

    We present a new algorithm (called TPI/BDD) for computing the theory prime implicates compilation of a knowledge base {Sigma}. In contrast to many compilation algorithms, TPI/BDD does not require the prime implicates of {Sigma} to be generated. Since their number can easily be exponential in the size of {Sigma}, TPI/BDD can save a lot of computing. Thanks to TPI/BDD, we can now conceive of compiling knowledge bases that were impossible to compile before.

  2. Ground Operations Aerospace Language (GOAL). Volume 2: Compiler

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The principal elements and functions of the Ground Operations Aerospace Language (GOAL) compiler are presented. The technique used to transcribe the syntax diagrams into machine processable format for use by the parsing routines is described. An explanation of the parsing technique used to process GOAL source statements is included. The compiler diagnostics and the output reports generated during a GOAL compilation are explained. A description of the GOAL program package is provided.

  3. Fringe pattern demodulation using the one-dimensional continuous wavelet transform: field-programmable gate array implementation.

    PubMed

    Abid, Abdulbasit

    2013-03-01

    This paper presents a thorough discussion of the proposed field-programmable gate array (FPGA) implementation for fringe pattern demodulation using the one-dimensional continuous wavelet transform (1D-CWT) algorithm. This algorithm is also known as wavelet transform profilometry. Initially, the 1D-CWT is programmed using the C programming language and compiled into VHDL using the ImpulseC tool. This VHDL code is implemented on the Altera Cyclone IV GX EP4CGX150DF31C7 FPGA. A fringe pattern image with a size of 512×512 pixels is presented to the FPGA, which processes the image using the 1D-CWT algorithm. The FPGA requires approximately 100 ms to process the image and produce a wrapped phase map. For performance comparison purposes, the 1D-CWT algorithm is programmed in C and compiled using the Intel compiler version 13.0. The compiled code is run on a Dell Precision state-of-the-art workstation, which requires approximately 1 s to process the fringe pattern image. In order to further reduce the execution time, the 1D-CWT is reprogrammed using the Intel Integrated Performance Primitives (IPP) library, version 7.1, which brings the execution time down to approximately 650 ms. This confirms that at least a sixfold speedup was gained using the FPGA implementation over a state-of-the-art workstation executing a heavily optimized implementation of the 1D-CWT algorithm.
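
    A compact sketch of the underlying algorithm (wavelet transform profilometry) on a single synthetic fringe row, using PyWavelets rather than the paper's C/VHDL implementation; the carrier frequency and scale range are illustrative.

```python
import numpy as np
import pywt

# Wavelet transform profilometry on one fringe row: take the 1D complex-Morlet
# CWT, pick the ridge (scale of maximum modulus) at every pixel, and read the
# wrapped phase off the ridge coefficient.
x = np.arange(512)
row = 128 + 100 * np.cos(2 * np.pi * 0.05 * x + 1.5 * np.sin(2 * np.pi * x / 512))

scales = np.arange(4, 64)
coeffs, _ = pywt.cwt(row - row.mean(), scales, "cmor1.5-1.0")
ridge = np.argmax(np.abs(coeffs), axis=0)                  # best scale per pixel
wrapped_phase = np.angle(coeffs[ridge, np.arange(len(x))]) # wrapped phase map (1 row)
print(wrapped_phase[:5])
```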

  4. Roughness parameter optimization using Land Parameter Retrieval Model and Soil Moisture Deficit: Implementation using SMOS brightness temperatures

    NASA Astrophysics Data System (ADS)

    Srivastava, Prashant K.; O'Neill, Peggy; Han, Dawei; Rico-Ramirez, Miguel A.; Petropoulos, George P.; Islam, Tanvir; Gupta, Manika

    2015-04-01

    Roughness parameterization is necessary for nearly all soil moisture retrieval algorithms such as single or dual channel algorithms, L-band Microwave Emission of the Biosphere (L-MEB), Land Parameter Retrieval Model (LPRM), etc. At present, roughness parameters can be obtained either by field experiments, although obtaining field measurements all over the globe is nearly impossible, or by using a land cover-based look-up table, which is not always accurate everywhere for individual fields. From the catalogue of models available in the technical literature, the LPRM model was used here because of its robust nature and applicability to a wide range of frequencies. LPRM needs several parameters for soil moisture retrieval -- in particular, the roughness parameters (h and Q) are important for calculating reflectivity. In this study, the h and Q parameters are optimized using the soil moisture deficit (SMD) estimated from the probability distributed model (PDM) and Soil Moisture and Ocean Salinity (SMOS) brightness temperatures, following the Levenberg-Marquardt (LM) algorithm, over the Brue catchment in southwest England, UK. The catchment is predominantly pasture land with moderate topography. The PDM-based SMD is used because it is calibrated and validated against locally available ground-based information and is suitable for large-scale areas such as catchments. The optimal h and Q parameters are determined by maximizing the correlation between SMD and LPRM-retrieved soil moisture. After optimization, the values of h and Q were found to be 0.32 and 0.15, respectively. To test the usefulness of the estimated roughness parameters, a separate set of SMOS data is used for soil moisture retrieval with the LPRM model and the optimized roughness parameters. The overall analysis indicates a satisfactory result when compared against the SMD information. This work provides quantitative values of roughness parameters suitable for large-scale applications. The
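
    A rough sketch of the calibration loop described above: treat (h, Q) as free parameters, retrieve soil moisture for each candidate pair, and maximize the absolute correlation against the PDM-based SMD series. Here lprm_retrieve is only a placeholder for the actual LPRM retrieval, the bounds and starting point are assumptions, and a generic bound-constrained minimizer stands in for the Levenberg-Marquardt scheme used in the study.

        # Hypothetical calibration sketch for the LPRM roughness parameters (h, Q).
        import numpy as np
        from scipy.optimize import minimize

        def lprm_retrieve(tb, h, Q):
            """Placeholder for the LPRM retrieval of soil moisture from one SMOS Tb record."""
            raise NotImplementedError("plug in the actual LPRM forward/retrieval code here")

        def calibrate_roughness(tb_series, smd_series, x0=(0.3, 0.1)):
            """Fit (h, Q) by maximizing |corr(retrieved soil moisture, SMD)|."""
            def neg_abs_corr(params):
                h, Q = params
                sm = np.array([lprm_retrieve(tb, h, Q) for tb in tb_series])
                return -abs(np.corrcoef(sm, smd_series)[0, 1])
            res = minimize(neg_abs_corr, x0=np.asarray(x0),
                           bounds=[(0.0, 1.0), (0.0, 0.5)], method="L-BFGS-B")
            return res.x   # the paper reports h = 0.32 and Q = 0.15 for the Brue catchment

        # calibrate_roughness(smos_tb_series, pdm_smd_series) once real data and the
        # real retrieval code are available (both names here are placeholders).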

  5. Model compilation for real-time planning and diagnosis with feedback

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2005-01-01

    This paper describes MEXEC, an implemented micro executive that compiles a device model that can have feedback into a structure for subsequent evaluation. This system computes both the most likely current device mode from n sets of sensor measurements and the n-1 step reconfiguration plan that is most likely to result in reaching a target mode - if such a plan exists. A user tunes the system by increasing n to improve system capability at the cost of real-time performance.

  6. Basic circuit compilation techniques for an ion-trap quantum machine

    NASA Astrophysics Data System (ADS)

    Maslov, Dmitri

    2017-02-01

    We study the problem of compilation of quantum algorithms into optimized physical-level circuits executable in a quantum information processing (QIP) experiment based on trapped atomic ions. We report a complete strategy: starting with an algorithm in the form of a quantum computer program, we compile it into a high-level logical circuit that goes through multiple stages of decomposition into progressively lower-level circuits until we reach the physical execution-level specification. We skip the fault-tolerance layer, as it is not within the scope of this work. The different stages are structured so as to best assist with the overall optimization while taking into account numerous optimization criteria, including minimizing the number of expensive two-qubit gates, minimizing the number of less expensive single-qubit gates, optimizing the runtime, minimizing the overall circuit error, and optimizing classical control sequences. Our approach allows a trade-off between circuit runtime and quantum error, as well as to accommodate future changes in the optimization criteria that may likely arise as a result of the anticipated improvements in the physical-level control of the experiment.

  7. Route Optimization for Mobile IPV6 Using the Return Routability Procedure: Test Bed Implementation and Security Analysis

    DTIC Science & Technology

    2007-03-01

    Linux [http://www.mipl.mediapoli.com/ Last visited on January 10, 2007]), "KAME" project (Mobile IPv6 for BSD-based OSs [http://www.kame.net Last...conformance testing events such as the ETSI IPv6 Plugtests and TAHI Interoperability events. The "KAME" and "USAGI" projects are working on research...and development on the implementation of the IPv6 and IPsec protocols, which operate on BSD-based OSs for the "KAME" project and on a Linux-based

  8. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    SciTech Connect

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.
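
    The essence of the optimization-based formulation can be illustrated on a tiny serial example: instead of solving the discrete system K u = f directly, one solves the equivalent bound-constrained problem min 0.5 u'Ku - f'u subject to u >= 0. The SciPy sketch below shows only this reformulation on a one-dimensional stencil; it does not reproduce the anisotropic failure mode that motivates the paper, nor its parallel PETSc/TAO setting.

        # Tiny serial illustration of the bound-constrained reformulation.
        import numpy as np
        from scipy.optimize import minimize

        n = 50
        K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D diffusion stencil
        f = np.sin(np.linspace(0, 4 * np.pi, n))                  # source with sign changes

        def energy(u):
            return 0.5 * u @ K @ u - f @ u

        def grad(u):
            return K @ u - f

        unconstrained = np.linalg.solve(K, f)                      # may go negative
        res = minimize(energy, x0=np.zeros(n), jac=grad,
                       bounds=[(0.0, None)] * n, method="L-BFGS-B")
        print("min of unconstrained solution:", unconstrained.min())
        print("min of constrained solution:  ", res.x.min())       # non-negative by construction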

  9. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.

  11. Designing a strategy to implement optimal conservative treatments in patients with knee or hip osteoarthritis in orthopedic practice: a study protocol of the BART-OP study

    PubMed Central

    2014-01-01

    Background: National and international evidence-based guidelines for hip and knee osteoarthritis recommend starting with (a combination of) conservative treatments, followed by surgical intervention if a patient does not respond sufficiently to conservative treatment options. Despite these recommendations, there are strong indications that conservative treatments are not optimally used in orthopedic practice. Our study aims to quantify the use of conservative treatments in Dutch orthopedic practice and to explore the barriers and facilitators for the use of conservative treatments that should be taken into account in a strategy to improve the embedding of conservative treatments for hip and knee osteoarthritis in orthopedic practice. Methods: This study consists of three phases. First, current use of conservative treatments in patients with hip and knee osteoarthritis will be explored using an internet-based survey among at least 100 patients to identify the underused conservative treatments. Second, barriers and facilitators for the use of conservative treatments in orthopedic practice will be identified using semi-structured interviews among 10 orthopedic surgeons and 5 patients. The interviews will be followed by an internet-based survey among approximately 450 orthopedic surgeons and at least 100 patients in which the identified barriers and facilitators will be ranked by importance. Finally, an implementation strategy will be developed based on the results of the previous phases using intervention mapping. Discussion: The developed strategy is likely to result in an optimal and standardized use of conservative treatment options in hip and knee osteoarthritis in orthopedic practice, because it is focused on identified barriers and facilitators. In addition, the results of this study can be used as an example for optimizing the use of conservative care in other patient groups. In a subsequent study, the developed implementation strategy will be assessed on its

  12. Optimization environments and the NEOS server

    SciTech Connect

    Gropp, W.; More, J.J.

    1997-03-01

    The authors are interested in the development of problem-solving environments that simplify the formulation of optimization problems, and the access to computational resources. Once the problem has been formulated, the first step in solving an optimization problem in a typical computational environment is to identify and obtain the appropriate piece of optimization software. Once the software has been installed and tested in the local environment, the user must read the documentation and write code to define the optimization problem in the manner required by the software. Typically, Fortran or C code must be written to define the problem, compute function values and derivatives, and specify sparsity patterns. Finally, the user must debug, compile, link, and execute the code. The Network-Enabled Optimization System (NEOS) is an Internet-based service for optimization providing information, software, and problem-solving services for optimization. The main components of NEOS are the NEOS Guide and the NEOS Server. The current version of the NEOS Server is described in Section 2. The authors emphasize nonlinear optimization problems, but NEOS does handle linear and nonlinearly constrained optimization problems, and solvers for optimization problems subject to integer variables are being added. In Section 4 the authors begin to explore possible extensions to the NEOS Server by discussing the addition of solvers for global optimization problems. Section 5 discusses how a remote procedure call (RPC) interface to NEOS addresses some of the limitations of NEOS in the areas of security and usability. The detailed implementation of such an interface raises a number of questions, such as exactly how the RPC is implemented, what security or authentication approaches are used, and what techniques are used to improve the efficiency of the communication. They outline some of the issues in network computing that arise from the emerging style of computing used by NEOS.
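
    The "local" workflow that NEOS is meant to replace can be pictured with a small example of the kind of problem-definition code the passage describes (objective, derivatives, constraints handed to a solver); the sketch below uses SciPy purely for illustration rather than the Fortran/C interfaces that NEOS solvers typically expect.

        # The kind of problem-definition code the text describes, written with SciPy.
        import numpy as np
        from scipy.optimize import minimize

        def objective(x):                      # Rosenbrock test function
            return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

        def gradient(x):                       # analytic derivatives, as the text mentions
            return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                             200.0 * (x[1] - x[0] ** 2)])

        constraint = {"type": "ineq", "fun": lambda x: 1.5 - x[0] - x[1]}   # x0 + x1 <= 1.5

        res = minimize(objective, x0=[-1.2, 1.0], jac=gradient,
                       constraints=[constraint], method="SLSQP")
        print(res.x, res.fun)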

  13. Implementation and performance of the pseudoknot problem in sisal

    SciTech Connect

    Feo, J.; Ivory, M.

    1994-12-01

    The Pseudoknot Problem is an application from molecular biology that computes all possible three-dimensional structures of one section of a nucleic acid molecule. The problem spans two important application domains: it includes a deterministic, backtracking search algorithm and floating-point intensive computations. Recently, the application has been used to compare and to contrast functional languages. In this paper, we describe a sequential and parallel implementation of the problem in Sisal. We present a method for writing recursive, floating-point intensive applications in Sisal that preserves performance and parallelism. We discuss compiler optimizations, runtime execution, and performance on several multiprocessor systems.

  14. SU-E-T-500: Initial Implementation of GPU-Based Particle Swarm Optimization for 4D IMRT Planning in Lung SBRT

    SciTech Connect

    Modiri, A; Hagan, A; Gu, X; Sawant, A

    2015-06-15

    Purpose: 4D-IMRT planning, combined with dynamic MLC tracking delivery, utilizes the temporal dimension as an additional degree of freedom to achieve improved OAR-sparing. The computational complexity of such optimization increases exponentially with dimensionality. In order to accomplish this task in a clinically feasible time frame, we present an initial implementation of GPU-based 4D-IMRT planning based on particle swarm optimization (PSO). Methods: The target and normal structures were manually contoured on ten phases of a 4DCT scan of a NSCLC patient with a 54 cm^3 right-lower-lobe tumor (1.5 cm motion). Ten corresponding 3D-IMRT plans were created in the Eclipse treatment planning system (Ver-13.6). A vendor-provided scripting interface was used to export 3D-dose matrices corresponding to each control point (10 phases × 9 beams × 166 control points = 14,940), which served as input to PSO. The optimization task was to iteratively adjust the weights of each control point and scale the corresponding dose matrices. In order to handle the large amount of data in GPU memory, dose matrices were sparsified and placed in contiguous memory blocks with the 14,940 weight variables. PSO was implemented on CPU (dual Xeon, 3.1 GHz) and GPU (dual Tesla K20, 2496 cores, 3.52 Tflops each) platforms. NiftyReg, an open-source deformable image registration package, was used to calculate the summed dose. Results: The 4D-PSO plan yielded PTV coverage comparable to the clinical ITV-based plan and significantly higher OAR-sparing, as follows: lung Dmean=33%; lung V20=27%; spinal cord Dmax=26%; esophagus Dmax=42%; heart Dmax=0%; heart Dmean=47%. The GPU-PSO processing time for 14,940 variables and 7 PSO particles was 41% that of CPU-PSO (199 vs. 488 minutes). Conclusion: Truly 4D-IMRT planning can yield significant OAR dose-sparing while preserving PTV coverage. The corresponding optimization problem is large-scale, non-convex and computationally rigorous. Our initial results
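
    The weight-optimization loop described in the Methods can be sketched in a few lines of NumPy: each particle is a vector of control-point weights, and the objective trades off an OAR dose penalty against target coverage. The sketch below uses random matrices in place of the exported per-control-point dose distributions and toy structure sets; it is illustrative only and bears no relation to the GPU implementation or the clinical objective function.

        # Minimal particle-swarm sketch of the control-point weight optimization.
        import numpy as np

        rng = np.random.default_rng(1)
        n_cp, n_vox = 200, 1000                      # control points, voxels (toy sizes)
        dose_per_cp = rng.random((n_cp, n_vox))      # placeholder dose matrices
        target = np.arange(100)                      # toy PTV voxel indices
        oar = np.arange(100, 300)                    # toy OAR voxel indices

        def objective(w):
            d = w @ dose_per_cp
            coverage_penalty = np.maximum(0.0, 60.0 - d[target]).mean()   # keep PTV dose up
            oar_penalty = d[oar].mean()                                    # spare the OAR
            return oar_penalty + 10.0 * coverage_penalty

        n_particles, n_iter = 7, 200
        pos = rng.random((n_particles, n_cp))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([objective(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, n_cp))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, None)                  # weights stay non-negative
            vals = np.array([objective(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()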

  15. An optimal scheme for numerical evaluation of Eshelby tensors and its implementation in a MATLAB package for simulating the motion of viscous ellipsoids in slow flows

    NASA Astrophysics Data System (ADS)

    Qu, Mengmeng; Jiang, Dazhi; Lu, Lucy X.

    2016-11-01

    To address the multiscale deformation and fabric development in Earth's ductile lithosphere, micromechanics-based self-consistent homogenization is commonly used to obtain macroscale rheological properties from properties of constituent elements. The homogenization is heavily based on the solution of an Eshelby viscous inclusion in a linear viscous medium and the extension of the solution to nonlinear viscous materials. The homogenization requires repeated numerical evaluation of Eshelby tensors for constituent elements and becomes ever more computationally challenging as the elements are deformed to more elongate or flattened shapes. In this paper, we develop an optimal scheme for evaluating Eshelby tensors, using a combination of a product Gaussian quadrature and the Lebedev quadrature. We first establish, through numerical experiments, an empirical relationship between the inclusion shape and the computational time it takes to evaluate its Eshelby tensors. We then use the relationship to develop an optimal scheme for selecting the most efficient quadrature to obtain the Eshelby tensors. The optimal scheme is applicable to general homogenizations. In this paper, it is implemented in a MATLAB package for investigating the evolution of solitary rigid or deformable inclusions and the development of shape preferred orientations in multi-inclusion systems during deformation. The MATLAB package, upgrading an earlier effort written in MathCad, can be downloaded online.

  16. Soil erosion evaluation in a rapidly urbanizing city (Shenzhen, China) and implementation of spatial land-use optimization.

    PubMed

    Zhang, Wenting; Huang, Bo

    2015-03-01

    Soil erosion has become a pressing environmental concern worldwide. In addition to such natural factors as slope, rainfall, vegetation cover, and soil characteristics, land-use changes, a direct reflection of human activities, also exert a huge influence on soil erosion. In recent years, such dramatic changes, in conjunction with the increasing trend toward urbanization worldwide, have led to severe soil erosion. Against this backdrop, geographic information system-assisted research on the effects of land-use changes on soil erosion has become increasingly common, producing a number of meaningful results. In most of these studies, however, even when the spatial and temporal effects of land-use changes are evaluated, knowledge of how the resulting data can be used to formulate sound land-use plans is generally lacking. At the same time, land-use decisions are driven by social, environmental, and economic factors and thus cannot be made solely with the goal of controlling soil erosion. To address these issues, a genetic algorithm (GA)-based multi-objective optimization (MOO) approach has been proposed to find a balance among various land-use objectives, including soil erosion control, to achieve sound land-use plans. GA-based MOO offers decision-makers and land-use planners a set of Pareto-optimal solutions from which to choose. Shenzhen, a fast-developing Chinese city that has long suffered from severe soil erosion, is selected as a case study area to validate the efficacy of the GA-based MOO approach for controlling soil erosion. Based on the MOO results, three multiple land-use objectives are proposed for Shenzhen: (1) to minimize soil erosion, (2) to minimize the incompatibility of neighboring land-use types, and (3) to minimize the cost of changes to the status quo. In addition to these land-use objectives, several constraints are also defined: (1) the provision of sufficient built-up land to accommodate a growing population, (2) restrictions on the development of
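
    The sketch below illustrates, with placeholder data, how the three stated objectives can be scored for a candidate land-use raster and how Pareto dominance is tested when a GA-based MOO assembles its solution set; the land-use encoding, erosion factors, and compatibility matrix are invented for illustration and are not the study's calibrated model.

        # Hypothetical scoring of the three land-use objectives plus a Pareto-dominance test.
        import numpy as np

        rng = np.random.default_rng(0)
        n_types = 4                                               # e.g. built-up, forest, farm, water
        erosion_factor = np.array([0.9, 0.1, 0.5, 0.0])           # per land-use type (invented)
        incompat = rng.random((n_types, n_types))
        incompat += incompat.T                                    # symmetric incompatibility scores
        current = rng.integers(0, n_types, size=(50, 50))         # status-quo land use

        def objectives(plan):
            f1 = erosion_factor[plan].mean()                      # soil-erosion proxy
            f2 = (incompat[plan[:, :-1], plan[:, 1:]].sum() +     # neighbour incompatibility
                  incompat[plan[:-1, :], plan[1:, :]].sum())
            f3 = (plan != current).mean()                         # cost of change from status quo
            return np.array([f1, f2, f3])

        def dominates(a, b):
            """True if solution a is at least as good in every objective and better in one."""
            return np.all(a <= b) and np.any(a < b)

        plan_a = rng.integers(0, n_types, size=(50, 50))
        plan_b = current.copy()
        print(objectives(plan_a), objectives(plan_b),
              dominates(objectives(plan_b), objectives(plan_a)))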

  17. Cross-Compiler for Modeling Space-Flight Systems

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    Ripples is a computer program that makes it possible to specify arbitrarily complex space-flight systems in an easy-to-learn, high-level programming language and to have the specification automatically translated into LibSim, which is a text-based computing language in which such simulations are implemented. LibSim is a very powerful simulation language, but learning it takes considerable time, and it requires that models of systems and their components be described at a very low level of abstraction. To construct a model in LibSim, it is necessary to go through a time-consuming process that includes modeling each subsystem, including defining its fault-injection states, input and output conditions, and the topology of its connections to other subsystems. Ripples makes it possible to describe the same models at a much higher level of abstraction, thereby enabling the user to build models faster and with fewer errors. Ripples can be executed in a variety of computers and operating systems, and can be supplied in either source code or binary form. It must be run in conjunction with a Lisp compiler.

  18. Compiling Utility Requirements For New Nuclear Power Plant Project

    SciTech Connect

    Patrakka, Eero

    2002-07-01

    Teollisuuden Voima Oy (TVO) submitted in November 2000 to the Finnish Government an application for a Decision-in-Principle concerning the construction of a new nuclear power plant in Finland. The actual investment decision can be made only after a positive decision has been made by the Government and the Parliament. Parallel to the licensing process, technical preparedness has been maintained so that the procurement process can be commenced without delay when needed. This includes the definition of requirements for the plant and preliminary preparation of bid inquiry specifications. The core of the technical requirements corresponds to the specifications presented in the European Utility Requirements (EUR) document, compiled by major European electricity producers. Quite naturally, a number of modifications to the EUR document are needed to take into account the country- and site-specific conditions as well as the experience gained in the operation of the existing NPP units. Along with the EUR-related requirements concerning the nuclear island and the power generation plant, requirements are specified for the scope of supply as well as for a variety of issues related to project implementation. (author)

  19. Towards Implementation of a Generalized Architecture for High-Level Quantum Programming Language

    NASA Astrophysics Data System (ADS)

    Ameen, El-Mahdy M.; Ali, Hesham A.; Salem, Mofreh M.; Badawy, Mahmoud

    2017-08-01

    This paper investigates a novel architecture for the problem of quantum computer programming. A generalized architecture for a high-level quantum programming language is proposed, so that the evolution from complicated quantum-based programming to high-level, quantum-independent programming can be achieved. The proposed architecture receives high-level source code and automatically transforms it into the equivalent quantum representation. The architecture involves two layers: the programmer layer and the compilation layer. These layers have been implemented in three main stages: pre-classification, classification, and post-classification. The basic building block of each stage is divided into subsequent phases, and each phase performs the required transformation from one representation to another. A verification process using a case study investigated the ability of the compiler to perform all transformation processes. Experimental results showed that the proposed compiler achieves a correspondence correlation coefficient of about R ≈ 1 between outputs and targets. A clear improvement was also obtained in the time consumed by the optimization process compared to other techniques: in online optimization, the consumed time increases exponentially with the amount of accuracy required, whereas in the proposed offline optimization process it increases only gradually.

  20. Implementing focal-plane phase masks optimized for real telescope apertures with SLM-based digital adaptive coronagraphy.

    PubMed

    Kühn, Jonas; Patapis, Polychronis; Ruane, Garreth; Lu, Xin

    2017-07-10

    Direct imaging of exoplanets or circumstellar disk material requires extreme contrast at the 10^-6 to 10^-12 levels at <100 mas angular separation from the star. Focal-plane mask (FPM) coronagraphic imaging has played a key role in this field, taking advantage of progress in Adaptive Optics on ground-based 8+ m class telescopes. However, large telescope entrance pupils usually consist of complex, sometimes segmented, non-ideal apertures, which include a central obstruction for the secondary mirror and its support structure. In practice, this negatively impacts wavefront quality and coronagraphic performance, in terms of achievable contrast and inner working angle. Recent theoretical works on structured darkness have shown that solutions for FPM phase profiles, optimized for non-ideal apertures, can be numerically derived. Here we present and discuss a first experimental validation of this concept, using reflective liquid crystal spatial light modulators as adaptive FPM coronagraphs.

  1. An optimized DSP implementation of adaptive filtering and ICA for motion artifact reduction in ambulatory ECG monitoring.

    PubMed

    Berset, Torfinn; Geng, Di; Romero, Iñaki

    2012-01-01

    Noise from motion artifacts is currently one of the main challenges in the field of ambulatory ECG recording. To address this problem, we propose two different approaches. First, an adaptive filter with the electrode-skin impedance as a reference signal is described. Second, a multi-channel ECG algorithm based on Independent Component Analysis is introduced. Both algorithms have been designed and further optimized for real-time operation, embedded in a dedicated Digital Signal Processor. We show that both algorithms improve the performance of a beat detection algorithm when applied in high-noise conditions. In addition, an efficient way of choosing between these methods is suggested, with the aim of reducing the overall system power consumption.
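
    The first approach can be pictured with a standard LMS adaptive noise canceller that uses the measured electrode-skin impedance as the noise reference and subtracts its filtered version from the contaminated ECG; the sketch below is a generic NumPy illustration with arbitrary filter length, step size, and synthetic signals, not the optimized DSP code.

        # Generic LMS adaptive noise cancellation with an impedance reference signal.
        import numpy as np

        def lms_cancel(ecg_noisy, impedance_ref, n_taps=16, mu=0.01):
            w = np.zeros(n_taps)
            out = np.zeros_like(ecg_noisy)
            for n in range(n_taps, len(ecg_noisy)):
                x = impedance_ref[n - n_taps:n][::-1]       # most recent reference samples
                noise_est = w @ x                            # estimate of the motion artifact
                e = ecg_noisy[n] - noise_est                 # error = cleaned ECG sample
                w += 2.0 * mu * e * x                        # LMS weight update
                out[n] = e
            return out

        # Synthetic demo: crude beat-like signal plus artifact correlated with the reference
        rng = np.random.default_rng(0)
        t = np.arange(4000) / 500.0
        clean = np.sin(2 * np.pi * 1.2 * t) ** 15
        ref = rng.standard_normal(t.size)
        artifact = np.convolve(ref, np.ones(8) / 8.0, mode="same")
        cleaned = lms_cancel(clean + artifact, ref)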

  2. 10 CFR 1045.46 - Classification by association or compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Classification by association or compilation. 1045.46 Section 1045.46 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NUCLEAR CLASSIFICATION AND....46 Classification by association or compilation. (a) If two pieces of unclassified information reveal...

  3. 22 CFR 227.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Senate and the Clerk of the House of Representatives a report containing a compilation of the information... 30 days after receipt of the report by the Secretary and the Clerk. (c) Information that involves... information shall not be available for public inspection. (e) The first semi-annual compilation shall...

  4. 38 CFR 45.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) NEW RESTRICTIONS ON LOBBYING Agency Reports § 45.600 Semi-annual compilation. (a) The head of... Representatives a report containing a compilation of the information contained in the disclosure reports received... report by the Secretary and the Clerk. (c) Information that involves intelligence matters shall...

  5. 40 CFR 34.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... NEW RESTRICTIONS ON LOBBYING Agency Reports § 34.600 Semi-annual compilation. (a) The head of each... report containing a compilation of the information contained in the disclosure reports received during... the Secretary and the Clerk. (c) Information that involves intelligence matters shall be reported...

  6. 12 CFR 411.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Reports § 411.600 Semi-annual compilation. (a) The head of each agency shall collect and compile the... information contained in the disclosure reports received during the six-month period ending on March 31 or... public inspection 30 days after receipt of the report by the Secretary and the Clerk. (c)...

  7. 22 CFR 138.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Senate and the Clerk of the House of Representatives a report containing a compilation of the information... 30 days after receipt of the report by the Secretary and the Clerk. (c) Information that involves... information shall not be available for public inspection. (e) The first semi-annual compilation shall...

  8. 32 CFR 28.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... REGULATIONS NEW RESTRICTIONS ON LOBBYING Agency Reports § 28.600 Semi-annual compilation. (a) The head of each... report containing a compilation of the information contained in the disclosure reports received during... the Secretary and the Clerk. (c) Information that involves intelligence matters shall be reported...

  9. 45 CFR 93.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... LOBBYING Agency Reports § 93.600 Semi-annual compilation. (a) The head of each agency shall collect and... compilation of the information contained in the disclosure reports received during the six-month period ending... Clerk. (c) Information that involves intelligence matters shall be reported only to the Select...

  10. 15 CFR 28.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agency Reports § 28.600 Semi-annual compilation. (a) The head of each agency shall collect and compile... the information contained in the disclosure reports received during the six-month period ending on.... (c) Information that involves intelligence matters shall be reported only to the Select Committee...

  11. 5 CFR 9701.524 - Compilation and publication of data.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... SECURITY HUMAN RESOURCES MANAGEMENT SYSTEM Labor-Management Relations § 9701.524 Compilation and... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Compilation and publication of data. 9701.524 Section 9701.524 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN RESOURCES...

  12. 29 CFR 70.5 - Compilation of new records.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 1 2014-07-01 2013-07-01 true Compilation of new records. 70.5 Section 70.5 Labor Office... Compilation of new records. Nothing in 5 U.S.C. 552 or this part requires that any agency or component create a new record in order to respond to a request for records. A component must, however, make...

  13. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of...

  14. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of...

  15. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of...

  16. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of...

  17. 24 CFR 87.600 - Semi-annual compilation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 1 2013-04-01 2013-04-01 false Semi-annual compilation. 87.600 Section 87.600 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development NEW RESTRICTIONS ON LOBBYING Agency Reports § 87.600 Semi-annual compilation. (a) The head of...

  18. 7 CFR 1.21 - Compilation of new records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 1 2010-01-01 2010-01-01 false Compilation of new records. 1.21 Section 1.21 Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Official Records § 1.21 Compilation of new records. Nothing in 5 U.S.C. 552 or this subpart requires that any agency create a...

  19. 7 CFR 1.21 - Compilation of new records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 1 2013-01-01 2013-01-01 false Compilation of new records. 1.21 Section 1.21 Agriculture Office of the Secretary of Agriculture ADMINISTRATIVE REGULATIONS Official Records § 1.21 Compilation of new records. Nothing in 5 U.S.C. 552 or this subpart requires that any agency create a...

  20. Texture compilation for example-based synthesis

    NASA Astrophysics Data System (ADS)

    J, Amjath Ali; J, Janet

    2011-10-01

    In this paper, a new exemplar-based framework is presented, which treats image completion, texture synthesis and texture analysis in a unified manner. In order to avoid visually inconsistent results, we pose all of these image-editing tasks in the form of a discrete global optimization problem. The objective function of this problem is always well-defined, and corresponds to the energy of a discrete Markov Random Field (MRF). For efficiently optimizing this MRF, a novel optimization scheme, called Priority-BP, is proposed, which carries two very important extensions over the standard Belief Propagation (BP) algorithm: "priority-based message scheduling" and "dynamic label pruning". These two extensions work in cooperation to deal with the intolerable computational cost of BP, which is caused by the huge number of labels associated with our MRF. Experimental results on a wide variety of input images are presented, demonstrating the effectiveness of our image-completion framework for tasks such as object removal, texture synthesis, text removal and texture analysis.
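
    A heavily simplified sketch of the two extensions named above is given below, on a toy chain-structured MRF: nodes with the most confident beliefs send their min-sum messages first (priority-based message scheduling), and labels whose belief falls far behind the best are discarded (dynamic label pruning). Costs, thresholds, and topology are toy placeholders; the actual method operates on patch labels over an image grid.

        # Toy min-sum BP on a chain, with priority scheduling and label pruning.
        import numpy as np

        rng = np.random.default_rng(0)
        n_nodes, n_labels = 8, 20
        unary = rng.random((n_nodes, n_labels))                    # data costs (toy)

        def pairwise(a, b):
            return 0.3 * abs(a - b)                                # smoothness cost (toy)

        labels = [list(range(n_labels)) for _ in range(n_nodes)]   # surviving labels per node
        msgs = {(i, j): np.zeros(n_labels) for i in range(n_nodes)
                for j in (i - 1, i + 1) if 0 <= j < n_nodes}

        def belief(i):
            b = unary[i].copy()
            for j in (i - 1, i + 1):
                if (j, i) in msgs:
                    b += msgs[(j, i)]
            return b

        for _ in range(5):                                         # a few priority-ordered sweeps
            conf = []
            for i in range(n_nodes):                               # confidence = gap between two best beliefs
                b = np.sort(belief(i)[labels[i]])
                conf.append(b[1] - b[0] if b.size > 1 else np.inf)
            for i in np.argsort(conf)[::-1]:                       # most confident nodes send first
                i = int(i)
                b = belief(i)
                margin = b[labels[i]].min() + 1.0                  # label-pruning threshold (toy)
                labels[i] = [l for l in labels[i] if b[l] <= margin]
                for j in (i - 1, i + 1):
                    if (i, j) in msgs:                             # min-sum message over surviving labels
                        base = b - msgs[(j, i)]
                        msgs[(i, j)] = np.array([min(base[l] + pairwise(l, q) for l in labels[i])
                                                 for q in range(n_labels)])

        solution = [labels[i][int(np.argmin(belief(i)[labels[i]]))] for i in range(n_nodes)]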

  1. Efficient and portable acceleration of quantum chemical many-body methods in mixed floating point precision using OpenACC compiler directives

    NASA Astrophysics Data System (ADS)

    Eriksen, Janus J.

    2017-09-01

    It is demonstrated how the non-proprietary OpenACC standard of compiler directives may be used to compactly and efficiently accelerate the rate-determining steps of two of the most routinely applied many-body methods of electronic structure theory, namely the second-order Møller-Plesset (MP2) model in its resolution-of-the-identity (RI) approximated form and the (T) triples correction to the coupled cluster singles and doubles model (CCSD(T)). By means of compute directives as well as the use of optimized device math libraries, the operations involved in the energy kernels have been ported to graphics processing unit (GPU) accelerators, and the associated data transfers correspondingly optimized to such a degree that the final implementations (using double- and/or single-precision arithmetic) are capable of scaling to systems as large as allowed by the capacity of the host central processing unit (CPU) main memory. The performance of the hybrid CPU/GPU implementations is assessed through calculations on test systems of alanine amino acid chains using one-electron basis sets of increasing size (ranging from double- to pentuple-zeta quality). For all but the smallest problem sizes of the present study, the optimized accelerated codes (using a single multi-core CPU host node in conjunction with six GPUs) are found to be capable of reducing the total time-to-solution by at least an order of magnitude over optimized, OpenMP-threaded CPU-only reference implementations.

  2. Automatic controls and regulators: A compilation

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Devices, methods, and techniques for control and regulation of the mechanical/physical functions involved in implementing the space program are discussed. Section one deals with automatic controls considered to be, essentially, start-stop operations or those holding the activity in a desired constraint. Devices that may be used to regulate activities within desired ranges or subject them to predetermined changes are dealt with in section two.

  3. Compiling Polymorphism Using Intensional Type Analysis

    DTIC Science & Technology

    1994-09-02

    Fagan. Soft typing. In Proc. SIGPLAN '91 Conference on Programming Language Design and Implementation, pages 278-292. ACM, June 1991. [12] Dominique...ACM Transactions on Programming Languages and Systems, 10(3):470-502, 1988. [39] Greg Morrisett, Matthias Felleisen, and Robert Harper. Abstract models...seek to minimize boxing by taking advantage of whatever type information is manifest in the program. Despite these recent improvements, current

  4. Global Compilation of Marine Varve Records

    NASA Astrophysics Data System (ADS)

    Schimmelmann, A.; Lange, C.; Schieber, J.; Francus, P.; Ojala, A.; Zolitschka, B.

    2016-02-01

    Marine varves contain highly resolved records of geochemical and other paleoceanographic and paleoenvironmental proxies with annual to seasonal resolution. We present a global compilation of marine varved sedimentary records throughout the Holocene and Quaternary covering more than 50 sites worldwide. Marine varve deposition and preservation typically depend on environmental and sedimentological principles, such as a sufficiently high sedimentation rate, severe depletion of dissolved oxygen in bottom water to exclude bioturbation by macrobenthos, and a seasonally varying sedimentary input to yield a recognizable rhythmic varve pattern. Additional oceanographic factors may include the strength and depth range of the Oxygen Minimum Zone (OMZ) and regional anthropogenic eutrophication. Modern to Quaternary marine varves are not only found in those parts of the open ocean that comply with these principles, but also in fjords, embayments and estuaries with thermohaline density stratification, and nearshore `saline lakes' with strong hydrologic connections to ocean water. Marine varves have also been postulated in pre-Quaternary rocks. In the case of non-evaporitic laminations in fine-grained ancient marine rocks, laminations may not be varves but instead may have multiple alternative origins such as event beds or formation via bottom currents that transported and sorted silt-sized particles, clay floccules, and organic-mineral aggregates in the form of migrating bedload ripples. Modern marine ecosystems on continental shelves and slopes, in coastal zones and in estuaries are susceptible to stress by various factors that may result in oxygen-depletion in bottom waters. Sensitive laminated sites may play the important role of a `canary in the coal mine' where monitoring the character and geographical extent of laminations/varves serves as a diagnostic tool to judge environmental trends. Analyses of modern varve records will gain importance for simultaneously providing

  5. Global compilation of marine varve records

    NASA Astrophysics Data System (ADS)

    Schimmelmann, Arndt; Lange, Carina B.; Schieber, Juergen; Francus, Pierre; Ojala, Antti E. K.; Zolitschka, Bernd

    2017-04-01

    Marine varves contain highly resolved records of geochemical and other paleoceanographic and paleoenvironmental proxies with annual to seasonal resolution. We present a global compilation of marine varved sedimentary records throughout the Holocene and Quaternary covering more than 50 sites worldwide. Marine varve deposition and preservation typically depend on environmental and sedimentological conditions, such as a sufficiently high sedimentation rate, severe depletion of dissolved oxygen in bottom water to exclude bioturbation by macrobenthos, and a seasonally varying sedimentary input to yield a recognizable rhythmic varve pattern. Additional oceanographic factors may include the strength and depth range of the Oxygen Minimum Zone (OMZ) and regional anthropogenic eutrophication. Modern to Quaternary marine varves are not only found in those parts of the open ocean that comply with these conditions, but also in fjords, embayments and estuaries with thermohaline density stratification, and nearshore 'marine lakes' with strong hydrologic connections to ocean water. Marine varves have also been postulated in pre-Quaternary rocks. In the case of non-evaporitic laminations in fine-grained ancient marine rocks, such as banded iron formations and black shales, laminations may not be varves but instead may have multiple alternative origins such as event beds or formation via bottom currents that transported and sorted silt-sized particles, clay floccules, and organic-mineral aggregates in the form of migrating bedload ripples. Modern marine ecosystems on continental shelves and slopes, in coastal zones and in estuaries are susceptible to stress by anthropogenic pressures, for example in the form of eutrophication, enhanced OMZs, and expanding ranges of oxygen-depletion in bottom waters. Sensitive laminated sites may play the important role of a 'canary in the coal mine' where monitoring the character and geographical extent of laminations/varves serves as a diagnostic

  6. Implementations of the optimal multigrid algorithm for the cell-centered finite difference on equilateral triangular grids

    SciTech Connect

    Ewing, R.E.; Saevareid, O.; Shen, J.

    1994-12-31

    A multigrid algorithm for the cell-centered finite difference on equilateral triangular grids for solving second-order elliptic problems is proposed. This finite difference is a four-point star stencil in a two-dimensional domain and a five-point star stencil in a three-dimensional domain. According to the authors' analysis, the advantages of this finite difference are that it is an O(h^2)-accurate numerical scheme for both the solution and derivatives on equilateral triangular grids, the structure of the scheme is perhaps the simplest, and its corresponding multigrid algorithm is easily constructed with an optimal convergence rate. They are interested in relaxation of the equilateral triangular grid condition to certain general triangular grids and in the application of this multigrid algorithm as a numerically reasonable preconditioner for the lowest-order Raviart-Thomas mixed triangular finite element method. Numerical test results are presented to demonstrate their analytical results and to investigate the applications of this multigrid algorithm on general triangular grids.
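
    For orientation, the structure of one multigrid correction cycle can be sketched generically; the code below is a plain two-grid cycle for a one-dimensional model problem with weighted-Jacobi smoothing and simple cell-centered-style transfer operators, and it does not model the equilateral-triangular stencils analyzed in the paper.

        # Generic two-grid correction cycle (1-D Poisson, weighted Jacobi), for illustration only.
        import numpy as np

        def laplacian(n, h):
            return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h ** 2

        def jacobi(A, u, f, sweeps=3, omega=2.0 / 3.0):
            d = np.diag(A)
            for _ in range(sweeps):
                u = u + omega * (f - A @ u) / d
            return u

        def two_grid(u, f, n, h):
            A = laplacian(n, h)
            u = jacobi(A, u, f)                                    # pre-smoothing
            r = f - A @ u                                          # fine-grid residual
            rc = 0.5 * (r[0::2] + r[1::2])                         # restriction by cell averaging
            ec = np.linalg.solve(laplacian(n // 2, 2 * h), rc)     # coarse-grid solve
            u = u + np.repeat(ec, 2)                               # piecewise-constant prolongation
            return jacobi(A, u, f)                                 # post-smoothing

        n, h = 64, 1.0 / 64
        f = np.ones(n)
        u = np.zeros(n)
        for _ in range(10):
            u = two_grid(u, f, n, h)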

  7. Complexity Optimization and High-Throughput Low-Latency Hardware Implementation of a Multi-Electrode Spike-Sorting Algorithm

    PubMed Central

    Dragas, Jelena; Jäckel, David; Hierlemann, Andreas; Franke, Felix

    2017-01-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction. PMID:25415989

  8. Complexity optimization and high-throughput low-latency hardware implementation of a multi-electrode spike-sorting algorithm.

    PubMed

    Dragas, Jelena; Jackel, David; Hierlemann, Andreas; Franke, Felix

    2015-03-01

    Reliable real-time low-latency spike sorting with large data throughput is essential for studies of neural network dynamics and for brain-machine interfaces (BMIs), in which the stimulation of neural networks is based on the networks' most recent activity. However, the majority of existing multi-electrode spike-sorting algorithms are unsuited for processing high quantities of simultaneously recorded data. Recording from large neuronal networks using large high-density electrode sets (thousands of electrodes) imposes high demands on the data-processing hardware regarding computational complexity and data transmission bandwidth; this, in turn, entails demanding requirements in terms of chip area, memory resources and processing latency. This paper presents computational complexity optimization techniques, which facilitate the use of spike-sorting algorithms in large multi-electrode-based recording systems. The techniques are then applied to a previously published algorithm, on its own, unsuited for large electrode set recordings. Further, a real-time low-latency high-performance VLSI hardware architecture of the modified algorithm is presented, featuring a folded structure capable of processing the activity of hundreds of neurons simultaneously. The hardware is reconfigurable “on-the-fly” and adaptable to the nonstationarities of neuronal recordings. By transmitting exclusively spike time stamps and/or spike waveforms, its real-time processing offers the possibility of data bandwidth and data storage reduction.
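
    The data-reduction idea mentioned at the end of the abstract, transmitting only spike time stamps and short waveform cut-outs instead of the raw multi-electrode stream, can be sketched as follows; the threshold rule (a multiple of a robust noise estimate) and the cut-out length are illustrative choices, not the paper's hardware design.

        # Sketch of spike detection with time-stamp/cut-out transmission for data reduction.
        import numpy as np

        def detect_spikes(trace, fs, k=5.0, pre=10, post=22):
            sigma = np.median(np.abs(trace)) / 0.6745          # robust noise estimate
            above = trace < -k * sigma                          # negative-going threshold crossings
            onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
            stamps, cutouts = [], []
            for t in onsets:
                if pre <= t < len(trace) - post:
                    stamps.append(t / fs)                       # time stamp in seconds
                    cutouts.append(trace[t - pre:t + post])     # short waveform snippet
            return np.array(stamps), np.array(cutouts)

        # Synthetic demo: noise plus a few injected spikes
        rng = np.random.default_rng(0)
        fs, n = 20000, 200000
        trace = rng.standard_normal(n)
        for t in (5000, 60000, 150000):
            trace[t:t + 10] -= np.hanning(10) * 12.0
        stamps, cutouts = detect_spikes(trace, fs)
        # raw stream: n samples; reduced stream: one stamp + 32 samples per detected spike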

  9. Optimization of Surface-Enhanced Raman Spectroscopy Conditions for Implementation into a Microfluidic Device for Drug Detection.

    PubMed

    Kline, Neal D; Tripathi, Ashish; Mirsafavi, Rustin; Pardoe, Ian; Moskovits, Martin; Meinhart, Carl; Guicheteau, Jason A; Christesen, Steven D; Fountain, Augustus W

    2016-11-01

    A microfluidic device is being developed by the University of California-Santa Barbara as part of a joint effort with the United States Army to develop a portable, rapid drug detection device. Surface-enhanced Raman spectroscopy (SERS) is used to provide a sensitive, selective detection technique within the microfluidic platform, employing metallic nanoparticles as the SERS medium. Using several illicit drugs as analytes, the work presented here describes the efforts of the Edgewood Chemical Biological Center to optimize the microfluidic platform by investigating the role of nanoparticle material, nanoparticle size, excitation wavelength, and capping agents on the performance and on the drug concentration detection limits achievable with the Ag and Au nanoparticles that will ultimately be incorporated into the final design. This study is particularly important as it lays out a systematic comparison of limits of detection and potential interferences from working with several nanoparticle capping agents, such as tannate, citrate, and borate, which does not seem to have been done previously, as the majority of studies concentrate only on citrate as the capping agent. Morphine, cocaine, and methamphetamine were chosen as test analytes for this study and were observed to have limits of detection (LOD) in the range of (1.5-4.7) × 10^-8 M (4.5-13 ng/mL), with the borate capping agent having the best performance.
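
    As a plausibility check on the reported numbers (not taken from the paper), a conventional 3*sigma/slope LOD estimate and a simple molar-mass conversion reproduce the quoted mass concentrations from the molar LODs; the blank intensities and calibration slope below are made up, and the assignment of the range endpoints to particular drugs is an assumption.

        # LOD arithmetic and molar -> ng/mL conversion (illustrative numbers only).
        import numpy as np

        def lod_molar(blank_intensities, slope):
            """Conventional LOD = 3 * std(blank) / calibration slope (intensity per mol/L)."""
            return 3.0 * np.std(blank_intensities, ddof=1) / slope

        molar_mass = {"cocaine": 303.35, "morphine": 285.34, "methamphetamine": 149.23}  # g/mol, free base

        def molar_to_ng_per_ml(c_molar, compound):
            return c_molar * molar_mass[compound] * 1e9 / 1e3    # mol/L -> g/L -> ng/mL

        blank = np.array([102.0, 98.5, 101.2, 99.8, 100.6])      # hypothetical blank signal
        slope = 2.0e8                                            # hypothetical intensity per (mol/L)
        print(lod_molar(blank, slope))                           # ~2e-8 M with these made-up numbers

        print(molar_to_ng_per_ml(1.5e-8, "cocaine"))   # ~4.6 ng/mL, consistent with the 4.5 ng/mL low end
        print(molar_to_ng_per_ml(4.7e-8, "morphine"))  # ~13.4 ng/mL; which drug the 13 ng/mL end refers to is not stated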

  10. Implementation of quality by design approach in manufacturing process optimization of dry granulated, immediate release, coated tablets - a case study.

    PubMed

    Teżyk, Michał; Jakubowska, Emilia; Milanowski, Bartłomiej; Lulek, Janina

    2017-10-01

    The aim of this study was to optimize the tablet compression process and to identify the film-coating critical process parameters (CPPs) affecting critical quality attributes (CQAs) using a quality by design (QbD) approach. Design of experiments (DOE) and regression methods were employed to investigate the hardness, disintegration time, and thickness of uncoated tablets as functions of the slugging and tableting compression forces (CPPs). A Plackett-Burman experimental design was applied to identify, among the selected coating process parameters (drying and preheating time, atomization air pressure, spray rate, air volume, inlet air temperature, and drum pressure), those that may influence the hardness and disintegration time of coated tablets. As a result of the research, a design space was established to facilitate an in-depth understanding of the relationship between the CPPs and the CQAs of the intermediate product (uncoated tablets). Screening revealed that spray rate and inlet air temperature are the two most important factors affecting the hardness of coated tablets, while none of the tested coating factors influence disintegration time. The observation was confirmed by film coating of pilot-size batches.
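
    The screening analysis described above can be sketched generically: code each coating factor at two levels, run the design, and estimate the main effects on hardness by least squares. The sketch below uses a small full-factorial matrix over three of the factors and invented responses purely for illustration; the study itself used a seven-factor Plackett-Burman layout.

        # Generic two-level screening analysis with main effects estimated by least squares.
        import numpy as np
        from itertools import product

        factors = ["spray_rate", "inlet_temp", "atomization_pressure"]   # subset, illustrative
        X = np.array(list(product([-1, 1], repeat=len(factors))), dtype=float)   # full 2^3 design
        hardness = np.array([92, 95, 88, 90, 101, 106, 97, 100], dtype=float)    # invented responses

        design = np.column_stack([np.ones(len(X)), X])           # intercept + main effects
        coef, *_ = np.linalg.lstsq(design, hardness, rcond=None)
        for name, effect in zip(factors, 2.0 * coef[1:]):        # effect = 2 * regression coefficient
            print(f"{name}: {effect:+.1f} hardness units from low to high level")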

  11. Computer-assisted school bus routing and scheduling optimization. An evaluation of potential fuel savings and implementation alternatives

    SciTech Connect

    McCoy, G.A.; Mandlebaum, R.

    1985-11-01

    School Bus Routing and Scheduling Optimization (SBRSO) systems can substantially reduce school bus fleet operating costs. Fuel savings in excess of 450,000 gallons per year are achievable and a 10% decrease in route miles is attainable given computerized or computer-assisted SBRSO system use by the 32 Washington school districts operating bus fleets of at least 30 vehicles. Additional annual savings in excess of $3 million are possible assuming an 8% reduction in bus fleet size is made possible due to routing efficiency improvements. Three computerized SBRSO programs are examined, differing in the degree of state involvement and level of decentralization. We recommend the Washington State Energy Office (WSEO) acquire available low cost public domain SBRSO systems, convert the software to IBM and DEC compatibility, and demonstrate the software capabilities with at least one school district fleet. The most acceptable SBRSO system would then be disseminated and training offered to interested school districts, Educational Service Districts, and the Superintendent of Public Instruction's regional pupil transportation coordinators. If the existing public domain SBRSO systems prove unsatisfactory, or suitable only for rural districts, we recommend that the WSEO allocate oil company rebate monies for the development of a suitable SBRSO system. Training workshops would then be held when the SBRSO software was completed.

  12. Optimization of ethylene glycol production from (D)-xylose via a synthetic pathway implemented in Escherichia coli.

    PubMed

    Alkim, Ceren; Cam, Yvan; Trichez, Debora; Auriol, Clément; Spina, Lucie; Vax, Amélie; Bartolo, François; Besse, Philippe; François, Jean Marie; Walther, Thomas

    2015-09-04

    Ethylene glycol (EG) is a bulk chemical that is mainly used as an anti-freezing agent and a raw material in the synthesis of plastics. Production of commercial EG currently relies exclusively on chemical synthesis using fossil resources. Biochemical production of ethylene glycol from renewable resources may be more sustainable. Herein, a synthetic pathway is described that produces EG in Escherichia coli through the action of (D)-xylose isomerase, (D)-xylulose-1-kinase, (D)-xylulose-1-phosphate aldolase, and glycolaldehyde reductase. These reactions were successively catalyzed by the endogenous xylose isomerase (XylA), the heterologously expressed human hexokinase (Khk-C) and aldolase (Aldo-B), and an endogenous glycolaldehyde reductase activity, respectively, which we showed to be encoded by yqhD. The production strain was optimized by deleting the genes encoding (D)-xylulose-5-kinase (xylB) and glycolaldehyde dehydrogenase (aldA), and by overexpressing the candidate glycolaldehyde reductases YqhD, GldA, and FucO. The strain overproducing FucO was the best EG producer, reaching a molar yield of 0.94 in shake flasks and accumulating 20 g/L EG with a molar yield and productivity of 0.91 and 0.37 g/(L·h), respectively, in a controlled bioreactor under aerobic conditions. We have demonstrated the feasibility of producing EG from (D)-xylose via a synthetic pathway in E. coli at approximately 90% of the theoretical yield.

  13. Optimizing symmetry-based recoupling sequences in solid-state NMR by pulse-transient compensation and asynchronous implementation

    NASA Astrophysics Data System (ADS)

    Hellwagner, Johannes; Sharma, Kshama; Tan, Kong Ooi; Wittmann, Johannes J.; Meier, Beat H.; Madhu, P. K.; Ernst, Matthias

    2017-06-01

    Pulse imperfections like pulse transients and radio-frequency field maladjustment or inhomogeneity are the main sources of performance degradation and limited reproducibility in solid-state nuclear magnetic resonance experiments. We quantitatively analyze the influence of such imperfections on the performance of symmetry-based pulse sequences and describe how they can be compensated. Based on a triple-mode Floquet analysis, we develop a theoretical description of symmetry-based dipolar recoupling sequences, in particular R26_4^11, calculating first- and second-order effective Hamiltonians using real pulse shapes. We discuss the various origins of effective fields, namely, pulse transients, deviation from the ideal flip angle, and fictitious fields, and develop strategies to counteract them for the restoration of full transfer efficiency. We compare experimental applications of transient-compensated pulses and an asynchronous implementation of the sequence to a supercycle, SR26, which is known to be efficient in compensating higher-order error terms. We are able to show the superiority of R26 compared to the supercycle, SR26, given the ability to reduce experimental error on the pulse sequence by pulse-transient compensation and a complete theoretical understanding of the sequence.

  14. Automatic Compilation from High-Level Biologically-Oriented Programming Language to Genetic Regulatory Networks

    PubMed Central

    Beal, Jacob; Lu, Ting; Weiss, Ron

    2011-01-01

    Background: The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of recently reported synthetic systems has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. Methodology/Principal Findings: To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high-level specification is compiled, using a regulatory-motif-based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes and in the latency of the optimized engineered gene networks. Conclusions/Significance: Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems. PMID:21850228
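
    As a toy illustration of the kind of network-level optimization such a compiler can perform (a hypothetical Python sketch, not code from the platform described above), a "double negation" pass removes a gene that merely relays two successive repressions, replacing the chain with a direct activation:

        # Toy gene-network optimization: eliminate a pure "relay" gene that sits
        # between two repression edges, since two repressions compose to activation.
        # Edges are (source, target, sign) with sign +1 = activation, -1 = repression.

        def eliminate_double_negation(edges):
            edges = set(edges)
            changed = True
            while changed:
                changed = False
                for b in {n for e in edges for n in e[:2]}:
                    incoming = [e for e in edges if e[1] == b]
                    outgoing = [e for e in edges if e[0] == b]
                    # b is a pure relay: one repressive input, one repressive output
                    if (len(incoming) == 1 and len(outgoing) == 1
                            and incoming[0][2] == -1 and outgoing[0][2] == -1):
                        a, c = incoming[0][0], outgoing[0][1]
                        edges -= {incoming[0], outgoing[0]}
                        edges.add((a, c, +1))
                        changed = True
                        break
            return edges

        network = {("sensor", "inv1", -1), ("inv1", "inv2", -1), ("inv2", "gfp", +1)}
        print(eliminate_double_negation(network))
        # -> {('sensor', 'inv2', 1), ('inv2', 'gfp', 1)} (set order may vary)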

  15. Compiler Directed Memory Management for Numerical Programs.

    DTIC Science & Technology

    1986-08-01

    fault rate of a process depends on the intrinsic behavior of the ... and on the interaction of the multiprogramming mix through swapping. Whether the ... enough memory to accommodate one of its localities, no matter how small the locality is. Various schemes of partial swapping could be implemented. For ... pages if Sc > P (Figure 3-7b). In the first case, no matter how many row indexes are used to designate a particular element, only one page could be
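
    The locality argument that is only fragmentarily legible above is the standard one: for an array stored row-major, the traversal order chosen by the programmer or compiler determines how many pages each sweep touches. The Python sketch below is a generic illustration of that effect under a small FIFO page cache; it is not code from the report, and the page and frame sizes are arbitrary.

        # Count page faults for row-order vs. column-order sweeps of an n x n
        # matrix stored row-major, with a tiny FIFO cache of page frames.
        from collections import deque

        def page_faults(n, elems_per_page, frames, column_sweep=False):
            cache, faults = deque(), 0
            addr = (lambda i, j: j * n + i) if column_sweep else (lambda i, j: i * n + j)
            for i in range(n):
                for j in range(n):
                    page = addr(i, j) // elems_per_page
                    if page not in cache:
                        faults += 1
                        cache.append(page)
                        if len(cache) > frames:
                            cache.popleft()
            return faults

        print(page_faults(64, elems_per_page=64, frames=8))                     # 64 faults
        print(page_faults(64, elems_per_page=64, frames=8, column_sweep=True))  # 4096 faults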

  16. [Process optimization by central control of acute pain therapy: implementation of standardized treatment concepts and central pain management in hospitals].

    PubMed

    Erlenwein, J; Stüder, D; Lange, J-P; Bauer, M; Petzke, F; Przemeck, M

    2012-11-01

    The aim of this investigation was to describe the effects of standardization and central control of the processes involved in postoperative pain management from the patient and employee perspectives. Patients (n = 282/307) and the respective hospital staff (n = 149/119) evaluated the processes, the quality of postoperative pain management, and outcome parameters 3 months before and 12 months after the introduction of standardization of the postoperative pain therapy process, using a set of standardized questionnaires. Pain levels and the waiting period for an analgesic partially decreased, and a higher subjective effectiveness of medication was achieved in patients after the standardization. Patients felt that their pain was taken more seriously and contacted the staff for additional medication more frequently. From the employee viewpoint, the quality of care and the individual competence and ability to treat pain increased after the introduction of standardization. Pain assessment improved, and employees rated their knowledge and education level as higher than before the intervention. Patients with pre-existing chronic pain and patients with special regional therapy benefited only partially from the introduction, and an increase in pain intensity was even observed. The quality of care was improved by standardization of postoperative pain management. The legal and practical ability of the nursing staff to administer pain medication within well-defined margins reduced the dependence on the ward doctor and, at the same time, patient pain levels. Patients received analgesics more quickly and experienced increased effectiveness. These results should be an incentive to reconsider the importance of the organization of postoperative pain management, because the quality of care, with all its potential medical and economic advantages, can easily be optimized by such simple mechanisms. They also show that the quality assessment of acute pain and the selection of appropriate indicators

  17. Design and implementation of a calibrated hyperspectral small-animal imager: Practical and theoretical aspects of system optimization

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas Josiah

    Pre-clinical imaging has been an important development within the bioscience and pharmacology fields. A rapidly growing area within these fields is small-animal fluorescence imaging, in which molecularly targeted fluorescent probes are used to non-invasively image internal events on a gross anatomical scale. Small-animal fluorescence imaging has transitioned from a research technique to a pre-clinical technology very quickly, due to its molecular specificity, low cost, and relative ease of use. In addition, its potential uses in gene therapy and as a translational technology are becoming evident. This thesis outlines the development of an alternative modality for small animal/tissue imaging, using hyperspectral techniques to enable the collection of fluorescence images at different excitation and emission wavelengths. Specifically, acousto-optical tunable filters (AOTFs) were used to construct emission-wavelength-scanning and excitation-wavelength-scanning small-animal fluorescence imagers. Statistical, classification, and unmixing algorithms have been employed to extract specific fluorescent-dye information from hyperspectral image sets. In this work, we have designed and implemented hyperspectral imaging and analysis techniques to remove background autofluorescence from the desired fluorescence signal, resulting in highly specific and localized fluorescence. Therefore, in practice, it is possible to more accurately pinpoint the location and size of diagnostic anatomical markers (e.g. tumors) labeled with fluorescent probes. Furthermore, multiple probes can be individually distinguished. In addition to imaging hardware and acquisition and analysis software, we have designed an optical tissue phantom for quality control and inter-system comparison. The phantom has been modeled using Monte Carlo techniques. The culmination of this work results in an understanding of the advantages and complexities in applying hyperspectral techniques to small animal fluorescence
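
    As a generic sketch of the unmixing step (using synthetic endmember spectra, not the thesis's calibration data or its specific algorithms), non-negative least squares can separate a measured pixel spectrum into probe and autofluorescence contributions:

        # Linear spectral unmixing of one pixel with non-negativity constraints.
        import numpy as np
        from scipy.optimize import nnls

        wavelengths = np.linspace(500, 700, 50)  # nm
        def gauss(center, width):
            return np.exp(-((wavelengths - center) / width) ** 2)

        # Endmember (reference) spectra: a narrow probe dye and broad autofluorescence.
        E = np.column_stack([gauss(620, 15), gauss(560, 60)])

        # Simulated measured spectrum: mostly autofluorescence plus some probe, with noise.
        measured = 0.3 * E[:, 0] + 1.0 * E[:, 1] + 0.01 * np.random.randn(len(wavelengths))

        abundances, residual = nnls(E, measured)  # min ||E a - measured||, a >= 0
        print("probe, autofluorescence abundances:", abundances)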

  18. Implementation of technetium-99m MIBI SPECT imaging guidelines: optimizing the two day stress-rest protocol.

    PubMed

    Lavalaye, J M; Schroeder-Tanka, J M; Tiel-van Buul, M M; van der Wall, E E; Lie, K I; van Royen, E A

    1997-08-01

    In a previous study of 460 patients, we found that in patients with suspected or known coronary artery disease undergoing stress-rest technetium-99m sestamibi (MIBI) SPECT myocardial perfusion imaging, rest SPECT imaging could be withheld in approximately 20% of patients because of a completely normal stress study. The present study was set up to evaluate the consequences of implementing this finding in a subsequent population of patients, and to set standards for the variety of protocols now used for MIBI SPECT imaging. Within a period of 4 months, 235 consecutive patients referred for MIBI SPECT scintigraphy were studied. All patients had stable cardiac chest pain and underwent symptom-limited exercise MIBI SPECT perfusion imaging. The stress SPECT images were reconstructed and evaluated immediately after acquisition. In case of a clearly normal stress SPECT study, rest imaging was cancelled. Twenty-six of 235 patients (11%) had a completely normal stress MIBI SPECT study, and the rest SPECT imaging procedure could subsequently be cancelled. In 20 patients (9%) the stress SPECT was inconclusive, and in 189 patients (80%) stress imaging was clearly abnormal. In the first month of the study, the nuclear medicine physicians and cardiologists interpreted only 6% of the stress images as normal, while this number increased to 13% after 9 weeks, with a mean of 11% for the whole 4-month investigation period. In patients undergoing stress MIBI SPECT imaging, cancelling rest MIBI SPECT imaging was found to be justified in at least 11% of patients because of a completely normal stress SPECT. As 9% of the images were inconclusive, the number of normal stress images could theoretically increase to 20% if reliable measures are taken to improve reading accuracy. This number is in close agreement with the number of normal stress studies previously reported by our institution and would lead to a considerable reduction of radiation dose, costs, and
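
    The percentages quoted above follow directly from the reported counts; as a quick illustrative check:

        # Proportion of rest scans that could be cancelled outright, and the
        # theoretical ceiling if inconclusive stress studies were read as normal.
        normal, inconclusive, abnormal = 26, 20, 189
        total = normal + inconclusive + abnormal                             # 235 patients
        print(f"cancellable now:   {normal / total:.1%}")                    # ~11.1%
        print(f"theoretical limit: {(normal + inconclusive) / total:.1%}")   # ~19.6%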

  19. Strategies for Optimal Implementation of Simulated Clients for Measuring Quality of Care in Low- and Middle-Income Countries

    PubMed Central

    Fitzpatrick, Anne; Tumlinson, Katherine

    2017-01-01

    The use of simulated clients or “mystery clients” is a data collection approach in which a study team member presents at a health care facility or outlet pretending to be a real customer, patient, or client. Following the visit, the shopper records her observations. The use of mystery clients can overcome challenges of obtaining accurate measures of health care quality and improve the validity of quality assessments, particularly in low- and middle-income countries. However, mystery client studies should be carefully designed and monitored to avoid problems inherent to this data collection approach. In this article, we discuss our experiences with the mystery client methodology in studies conducted in public- and private-sector health facilities in Kenya and in private-sector facilities in Uganda. We identify both the benefits and the challenges in using this methodology to guide other researchers interested in using this technique. Recruitment of appropriate mystery clients who accurately represent the facility's clientele, have strong recall of recent events, and are comfortable in their role as undercover data collectors are key to successful implementation of this methodology. Additionally, developing detailed training protocols can help ensure mystery clients behave identically and mimic real patrons accurately while short checklists can help ensure mystery client responses are standardized. Strict confidentiality and protocols to avoid unnecessary exams or procedures should also be stressed during training and monitored carefully throughout the study. Despite these challenges, researchers should consider mystery client designs to measure actual provider behavior and to supplement self-reported provider behavior. Data from mystery client studies can provide critical insight into the quality of service provision unavailable from other data collection methods. The unique information available from the mystery client approach far outweighs the cost

  20. Strategies for Optimal Implementation of Simulated Clients for Measuring Quality of Care in Low- and Middle-Income Countries.

    PubMed

    Fitzpatrick, Anne; Tumlinson, Katherine

    2017-01-26

    The use of simulated clients or "mystery clients" is a data collection approach in which a study team member presents at a health care facility or outlet pretending to be a real customer, patient, or client. Following the visit, the shopper records her observations. The use of mystery clients can overcome challenges of obtaining accurate measures of health care quality and improve the validity of quality assessments, particularly in low- and middle-income countries. However, mystery client studies should be carefully designed and monitored to avoid problems inherent to this data collection approach. In this article, we discuss our experiences with the mystery client methodology in studies conducted in public- and private-sector health facilities in Kenya and in private-sector facilities in Uganda. We identify both the benefits and the challenges in using this methodology to guide other researchers interested in using this technique. Recruitment of appropriate mystery clients who accurately represent the facility's clientele, have strong recall of recent events, and are comfortable in their role as undercover data collectors are key to successful implementation of this methodology. Additionally, developing detailed training protocols can help ensure mystery clients behave identically and mimic real patrons accurately while short checklists can help ensure mystery client responses are standardized. Strict confidentiality and protocols to avoid unnecessary exams or procedures should also be stressed during training and monitored carefully throughout the study. Despite these challenges, researchers should consider mystery client designs to measure actual provider behavior and to supplement self-reported provider behavior. Data from mystery client studies can provide critical insight into the quality of service provision unavailable from other data collection methods. The unique information available from the mystery client approach far outweighs the cost.