Sample records for compiler backend optimization

  1. ProjectQ: Compiling quantum programs for various backends

    NASA Astrophysics Data System (ADS)

    Haener, Thomas; Steiger, Damian S.; Troyer, Matthias

    In order to control quantum computers beyond the current generation, a high level quantum programming language and optimizing compilers will be essential. Therefore, we have developed ProjectQ - an open source software framework to facilitate implementing and running quantum algorithms both in software and on actual quantum hardware. Here, we introduce the backends available in ProjectQ. This includes a high-performance simulator and emulator to test and debug quantum algorithms, tools for resource estimation, and interfaces to several small-scale quantum devices. We demonstrate the workings of the framework and show how easily it can be further extended to control upcoming quantum hardware.

  2. The RAJA Performance Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornung, Richard D.; Hones, Holger E.

    The RAJA Performance Suite is designed to evaluate performance of the RAJA performance portability library on a wide variety of important high performance computing (HPC) algorithmic kernels. These kernels assess compiler optimizations and various parallel programming model backends accessible through RAJA, such as OpenMP, CUDA, etc. The initial version of the suite contains 25 computational kernels, each of which appears in 6 variants: Baseline Sequential, RAJA Sequential, Baseline OpenMP, RAJA OpenMP, Baseline CUDA, RAJA CUDA. All variants of each kernel perform essentially the same mathematical operations, and the loop body code for each kernel is identical across all variants. There are a few kernels, such as those that contain reduction operations, that require CUDA-specific coding for their CUDA variants. The actual computer instructions executed, and how they run in parallel, differ depending on the parallel programming model backend used and which optimizations are performed by the compiler used to build the Performance Suite executable. The Suite will be used primarily by RAJA developers to perform regular assessments of RAJA performance across a range of hardware platforms and compilers as RAJA features are being developed. It will also be used by LLNL hardware and software vendor partners for defining requirements for future computing platform procurements and acceptance testing. In particular, the RAJA Performance Suite will be used for compiler acceptance testing of the upcoming CORAL/Sierra machine (initial LLNL delivery expected in late 2017/early 2018) and the CORAL-2 procurement. The Suite will also be used to generate concise source code reproducers of compiler and runtime issues we uncover so that we may provide them to relevant vendors to be fixed.
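
    A minimal sketch of what a Baseline Sequential versus RAJA OpenMP variant pair might look like, assuming a RAJA build with OpenMP enabled (the daxpy-style loop and function names are illustrative assumptions, not kernels from the Suite):

      #include <RAJA/RAJA.hpp>

      // Baseline Sequential variant: a plain C++ loop.
      void daxpy_baseline_seq(double* y, const double* x, double a, int N) {
        for (int i = 0; i < N; ++i) y[i] += a * x[i];
      }

      // RAJA OpenMP variant: identical loop body; the execution policy
      // template parameter selects the parallel programming model backend.
      void daxpy_raja_omp(double* y, const double* x, double a, int N) {
        RAJA::forall<RAJA::omp_parallel_for_exec>(
            RAJA::RangeSegment(0, N),
            [=](RAJA::Index_type i) { y[i] += a * x[i]; });
      }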

  3. Semantic Language Extensions for Implicit Parallel Programming

    DTIC Science & Technology

    2013-09-01

    mobile CPU interacts with a GPU on the same device and a cloud based backend at a remote location presents endless possibilities for solving com...for his contribution to the compiler infrastructure. His creativity in solving research problems and expertise in architecting and implementing...5.5.1 Frontend...5.5.2 Backend

  4. Protocol Programmability

    DTIC Science & Technology

    2013-12-01

    First, any subproject that involved an implementation shared some implementation infrastructure with other subprojects. For example, the Plaid backend ...very same language. We followed this advice in Plaid, and we therefore implemented the compiler backend in Plaid (code generation, type checker, Æminim...programming language aimed at enforcing security properties in web and mobile applications [Nistor et al., 2013]. Wyvern therefore provides an excellent

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Seyong; Kim, Jungwon; Vetter, Jeffrey S

    This paper presents a directive-based, high-level programming framework for high-performance reconfigurable computing. It takes a standard, portable OpenACC C program as input and generates a hardware configuration file for execution on FPGAs. We implemented this prototype system using our open-source OpenARC compiler; it performs source-to-source translation and optimization of the input OpenACC program into an OpenCL code, which is further compiled into an FPGA program by the backend Altera Offline OpenCL compiler. Internally, the design of OpenARC uses a high-level intermediate representation that separates concerns of program representation from underlying architectures, which facilitates portability of OpenARC. In fact, this design allowed us to create the OpenACC-to-FPGA translation framework with minimal extensions to our existing system. In addition, we show that our proposed FPGA-specific compiler optimizations and novel OpenACC pragma extensions assist the compiler in generating more efficient FPGA hardware configuration files. Our empirical evaluation on an Altera Stratix V FPGA with eight OpenACC benchmarks demonstrates the benefits of our strategy. To demonstrate the portability of OpenARC, we show results for the same benchmarks executing on other heterogeneous platforms, including NVIDIA GPUs, AMD GPUs, and Intel Xeon Phis. This initial evidence helps support the goal of using a directive-based, high-level programming strategy for performance portability across heterogeneous HPC architectures.
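
    As an illustration of the kind of input such a flow accepts, a minimal OpenACC kernel is sketched below (the vector-add loop and the particular data clauses are illustrative assumptions, not a benchmark from the paper):

      // A simple OpenACC kernel: an OpenARC-style flow translates such
      // directives into OpenCL for the backend FPGA compiler.
      void vadd(const float* a, const float* b, float* c, int n) {
        #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
        for (int i = 0; i < n; ++i)
          c[i] = a[i] + b[i];
      }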

  6. An extension of the OpenModelica compiler for using Modelica models in a discrete event simulation

    DOE PAGES

    Nutaro, James

    2014-11-03

    In this article, a new back-end and run-time system is described for the OpenModelica compiler. This new back-end transforms a Modelica model into a module for the adevs discrete event simulation package, thereby extending adevs to encompass complex, hybrid dynamical systems. The new run-time system built within the adevs simulation package supports models with state-events and time-events, including models comprising high-index differential-algebraic systems. Although the procedure for effecting this transformation is based on adevs and the Discrete Event System Specification (DEVS), it can be adapted to any discrete event simulation package.
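
    For orientation, a module of the kind this back-end targets implements the adevs atomic-model interface; a minimal sketch follows (the trivial blinker dynamics are an assumption for illustration, and member signatures may differ between adevs versions):

      #include "adevs.h"

      // Minimal adevs-style atomic model; a generated Modelica module would
      // implement these same transition, time-advance, and output functions.
      class Blinker : public adevs::Atomic<double> {
       public:
        Blinker() : adevs::Atomic<double>(), on(false) {}
        void delta_int() override { on = !on; }            // internal transition
        void delta_ext(double, const adevs::Bag<double>&) override {}
        void delta_conf(const adevs::Bag<double>&) override { delta_int(); }
        double ta() override { return 1.0; }               // time advance
        void output_func(adevs::Bag<double>& yb) override {
          yb.insert(on ? 1.0 : 0.0);                       // emit current state
        }
        void gc_output(adevs::Bag<double>&) override {}
       private:
        bool on;
      };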

  7. OSCAR API for Real-Time Low-Power Multicores and Its Performance on Multicores and SMP Servers

    NASA Astrophysics Data System (ADS)

    Kimura, Keiji; Mase, Masayoshi; Mikami, Hiroki; Miyamoto, Takamichi; Shirako, Jun; Kasahara, Hironori

    OSCAR (Optimally Scheduled Advanced Multiprocessor) API has been designed for real-time embedded low-power multicores to generate parallel programs for various multicores from different vendors by using the OSCAR parallelizing compiler. The OSCAR API has been developed by Waseda University in collaboration with Fujitsu Laboratory, Hitachi, NEC, Panasonic, Renesas Technology, and Toshiba in a METI/NEDO project entitled "Multicore Technology for Realtime Consumer Electronics." By using the OSCAR API as an interface between the OSCAR compiler and backend compilers, the OSCAR compiler enables hierarchical multigrain parallel processing with memory optimization under capacity restriction for cache memory, local memory, distributed shared memory, and on-chip/off-chip shared memory; data transfer using a DMA controller; and power reduction control using DVFS (Dynamic Voltage and Frequency Scaling), clock gating, and power gating for various embedded multicores. In addition, a parallelized program automatically generated by the OSCAR compiler with the OSCAR API can be compiled by ordinary OpenMP compilers, since the OSCAR API is designed as a subset of OpenMP. This paper describes the OSCAR API and its compatibility with the OSCAR compiler by showing code examples. Performance evaluations of the OSCAR compiler and the OSCAR API are carried out using an IBM Power5+ workstation, an IBM Power6 high-end SMP server, and RP2, a consumer electronics multicore chip newly developed by Renesas, Hitachi, and Waseda. The scalability evaluation shows that, on average, the OSCAR compiler with the OSCAR API achieves a 5.8-times speedup over sequential execution on the Power5+ workstation with eight cores and a 2.9-times speedup on RP2 with four cores. In addition, the OSCAR compiler accelerates the IBM XL Fortran compiler by up to 3.3 times on the Power6 SMP server. Due to low-power optimization on RP2, the OSCAR compiler with the OSCAR API achieves a maximum power reduction of 84% in the real-time execution mode.
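
    Because the OSCAR API is designed as a subset of OpenMP, its output remains compilable by ordinary OpenMP compilers. The fragment below illustrates that idea with plain OpenMP directives (a generic sketch under that assumption, not actual OSCAR compiler output):

      #include <omp.h>

      // Coarse-grain macro-tasks expressed with standard OpenMP sections; an
      // ordinary OpenMP compiler can build this with no knowledge of OSCAR.
      void macro_tasks(double* a, double* b, int n) {
        #pragma omp parallel sections
        {
          #pragma omp section
          for (int i = 0; i < n; ++i) a[i] = a[i] * 2.0;  // macro-task 1
          #pragma omp section
          for (int i = 0; i < n; ++i) b[i] = b[i] + 1.0;  // macro-task 2
        }
      }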

  8. DTS: Building custom, intelligent schedulers

    NASA Technical Reports Server (NTRS)

    Hansson, Othar; Mayer, Andrew

    1994-01-01

    DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.

  9. RPython high-level synthesis

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw; Linczuk, Maciej

    2016-09-01

    The development of FPGA technology and the increasing complexity of applications in recent decades have forced compilers to move to higher abstraction levels. A compiler interprets an algorithmic description of a desired behavior written in a High-Level Language (HLL) and translates it to a Hardware Description Language (HDL). This paper presents an RPython-based High-Level Synthesis (HLS) compiler. The compiler takes configuration parameters and maps an RPython program to VHDL; the VHDL code can then be used to program FPGA chips. Compared with other technologies, FPGAs have the potential to achieve far greater performance than software as a result of omitting the fetch-decode-execute cycle of general-purpose processors and of introducing more parallel computation, which can be exploited by utilizing many resources at the same time. Creating parallel algorithms for FPGAs in pure HDL is difficult and time consuming; implementation time can be greatly reduced with a high-level synthesis compiler. This article describes the design methodologies and tools, the implementation, and first results of the VHDL backend created for the RPython compiler.

  10. ORA User’s Guide 2013

    DTIC Science & Technology

    2013-06-03

    and a C++ computational backend. The most current version of ORA (3.0.8.5) software is available on the CASOS website: http://casos.cs.cmu.edu...optimizing a network's design structure. ORA uses a Java interface for ease of use, and a C++ computational backend...Eigenvector Centrality: node most connected to other highly connected nodes. Assists in identifying those who can mobilize others. Entity Class

  11. Compilation of Abstracts of Theses Submitted by Candidates for Degrees

    DTIC Science & Technology

    1987-09-30

    Parallel, Multiple Backend Database Systems. Feudo, C.V., MAJ, USA: Modern Hardware Technologies and Software Techniques for Online Database Storage...and its Application in the Wargaming, Research and Analysis (W.A.R.) Lab. Waltensberger, G.M., CPT, USAF: On Limited War, Escalation Control, and...TECHNIQUES FOR ONLINE DATABASE STORAGE AND ACCESS, Christopher V. Feudo, Major, United States Army, B.S., United States Military Academy, 1972

  12. Architectural-level power estimation and experimentation

    NASA Astrophysics Data System (ADS)

    Ye, Wu

    With the emergence of a plethora of embedded and portable applications and ever-increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend toward designs starting at the architectural (or RT) level, which demands accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural-level power estimation. An execution-driven, cycle-accurate RT-level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath for both 0.35 μm and 0.8 μm technologies and can execute the integer subset of the SimpleScalar instruction set. SimplePower measures the energy consumed in the datapath, memory, and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated against HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce datapath energy consumption by 18--36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce switching activity on the on-chip instruction cache data buses; its impact, evaluated with SimplePower, is a reduction of 3.55--16.90% in the energy consumed in the instruction data buses. A quantitative evaluation was conducted of the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption. The experimental results provide valuable insight for designers developing future power-aware compilation frameworks for embedded systems.
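
    A transition-sensitive model of this kind charges energy per bit toggle on a bus; the sketch below computes bus switching activity as the Hamming distance between consecutive words (the word stream and per-line constants are illustrative assumptions, not SimplePower internals):

      #include <bit>       // std::popcount (C++20)
      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // Count bit transitions on a bus across a stream of words; a
      // transition-sensitive power model scales this count by the
      // energy per transition.
      uint64_t bus_transitions(const std::vector<uint32_t>& words) {
        uint64_t toggles = 0;
        for (std::size_t i = 1; i < words.size(); ++i)
          toggles += std::popcount(words[i - 1] ^ words[i]);
        return toggles;
      }
      // E_bus ≈ toggles * 0.5 * C_line * Vdd^2, with C_line assumed known.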

  13. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    PubMed

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics Open Web Services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered with BOWS can be accessed from virtually any programming language through web services, or by using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.

  14. ADA Implementation Issues as Discovered through a Literature Survey of Applications Outside the United States

    DTIC Science & Technology

    1992-03-01

    compile time, ensuring that operations conducted are appropriate for the object type. Each implementation requires a database known as the program...Finnish bank being developed by Nokia • Oil drilling control system managed by Sedco-Forex • Vigile, an industrial installation supervisor project by...user interface and Oracle database backend control. The software is being developed in Ada under DOD-STD-2167 under OS/2. BELGIUM BATS S.A. Project title

  15. The Implementation of a Multi-Backend Database System (MDBS). Part I. Software Engineering Strategies and Efforts Towards a Prototype MDBS.

    DTIC Science & Technology

    1983-06-01

    for DEC PDP-11 systems. MAINSAIL was developed and is marketed with a set of integrated tools for program development. The syntax of the language is...stack, and to test for stack-full and stack-empty conditions. This technique is useful in enforcing data integrity and in controlling concurrent...and market MAINSAIL. The language is distinguished by its portability. The same compiler and runtime system, both written in MAINSAIL, are the basis

  16. a Framework for Distributed Mixed Language Scientific Applications

    NASA Astrophysics Data System (ADS)

    Quarrie, D. R.

    The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.

  17. The Development of Design Tools for Fault Tolerant Quantum Dot Cellular Automata Based Logic

    NASA Technical Reports Server (NTRS)

    Armstrong, Curtis D.; Humphreys, William M.

    2003-01-01

    We are developing software to explore the fault tolerance of quantum dot cellular automata gate architectures in the presence of manufacturing variations and device defects. The Topology Optimization Methodology using Applied Statistics (TOMAS) framework extends the capabilities of AQUINAS (A Quantum Interconnected Network Array Simulator) by adding front-end and back-end software and creating an environment that integrates all of these components. The front-end tools establish all simulation parameters, configure the simulation system, automate the Monte Carlo generation of simulation files, and execute the simulation of these files. The back-end tools perform automated data parsing, statistical analysis, and report generation.

  18. A three-dimensional bucking system for optimal bucking of Central Appalachian hardwoods

    Treesearch

    Jingxin Wang; Jingang Liu; Chris B. LeDoux

    2009-01-01

    An optimal tree stem-bucking system was developed for central Appalachian hardwood species using three-dimensional (3D) modeling techniques. ActiveX Data Objects were implemented via MS Visual C++/OpenGL to manipulate tree data, which were supported by a backend relational data model with five data entity types for stems, grades and prices, logs, defects, and stem shapes...

  19. Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations

    DOEpatents

    Gschwind, Michael K

    2013-07-23

    Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
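
    A conceptual sketch of this dual-version scheme follows; the function names, the exception-based failure signal, and the simple dispatch are illustrative assumptions, not the patented mechanism itself:

      #include <stdexcept>

      // Two compiled versions of the same region: an aggressively optimized
      // one that may raise a new exception when its speculative assumption
      // fails, and a conservative fallback.
      int hot_region_aggressive(const int* p) {
        if (p == nullptr) throw std::runtime_error("speculation failed");
        return *p * 2;                        // speculatively hoisted load
      }
      int hot_region_conservative(const int* p) {
        return p ? *p * 2 : 0;                // fully guarded version
      }

      int run_hot_region(const int* p) {
        try {
          return hot_region_aggressive(p);    // prefer the fast version
        } catch (const std::runtime_error&) {
          return hot_region_conservative(p);  // roll back to the safe version
        }
      }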

  20. 77 FR 26736 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... an Internet Push methodology, in an effort to obtain early response rate indicators for the 2020... contact strategies involving optimizing the Internet push strategy are proposed, such as implementing... reducing and/or eliminating back-end processing. Affected Public: Individuals or households. Frequency: One...

  1. Prototype of the novel CAMEA concept—A backend for neutron spectrometers

    NASA Astrophysics Data System (ADS)

    Markó, Márton; Groitl, Felix; Birk, Jonas Okkels; Freeman, Paul Gregory; Lefmann, Kim; Christensen, Niels Bech; Niedermayer, Christof; Jurányi, Fanni; Lass, Jakob; Hansen, Allan; Rønnow, Henrik M.

    2018-01-01

    The Continuous Angle Multiple Energy Analysis (CAMEA) concept is a backend for both time-of-flight and analyzer-based neutron spectrometers, optimized for neutron spectroscopy with highly efficient mapping in the horizontal scattering plane. The design employs a series of upward-scattering analyzer arcs placed behind one another and set to different final energies, allowing wide angular coverage with multiple energies recorded simultaneously. For validation of the concept and of the model calculations, a prototype was installed at the Swiss neutron source SINQ, Paul Scherrer Institut. The design of the prototype, the alignment and calibration procedures, experimental results of background measurements, and proof-of-concept inelastic measurements on LiHoF4 and h-YMnO3 are presented here.

  2. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    SOL is a computer language geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN, but it also includes the additional power of non-linear mathematical programming methods (i.e., numerical optimization) at the language level, as opposed to the subroutine level. The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description are facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability, which effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of over 100 ADS optimization choices, such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function, and variable metric methods. Default choices of the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. SOL's syntax was defined precisely by an LALR(1) grammar, and the SOL compiler's parser was generated automatically from that grammar with a parser-generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler recognizes all legal SOL programs, can recover from and correct many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute.
    Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.
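
    A language-level optimization description of the kind SOL provides ultimately expresses a standard nonlinear programming problem; in generic form (this is the general mathematical statement, not SOL syntax):

      \min_{x \in \mathbb{R}^n} \; f(x) \quad \text{subject to} \quad g_j(x) \le 0, \; j = 1, \dots, m, \qquad x^{L} \le x \le x^{U}

    where f is the objective function, the g_j are the constraints, and the design variables x lie between lower and upper bounds.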

  3. A Language for Specifying Compiler Optimizations for Generic Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcock, Jeremiah J.

    2007-01-01

    Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization, because the libraries are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.

  4. PALM-3000: EXOPLANET ADAPTIVE OPTICS FOR THE 5 m HALE TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dekany, Richard; Bouchez, Antonin; Baranec, Christoph

    2013-10-20

    We describe and report first results from PALM-3000, the second-generation astronomical adaptive optics (AO) facility for the 5.1 m Hale telescope at Palomar Observatory. PALM-3000 has been engineered for high-contrast imaging and emission spectroscopy of brown dwarfs and large planetary mass bodies at near-infrared wavelengths around bright stars, but also supports general natural guide star use to V ≈ 17. Using its unique 66 × 66 actuator deformable mirror, PALM-3000 has thus far demonstrated residual wavefront errors of 141 nm rms under ∼1'' seeing conditions. PALM-3000 can provide phase conjugation correction over a 6.4'' × 6.4'' working region at λ = 2.2 μm, or full electric field (amplitude and phase) correction over approximately one-half of this field. With optimized back-end instrumentation, PALM-3000 is designed to enable 10^-7 contrast at 1'' angular separation, including post-observation speckle suppression processing. While continued optimization of the AO system is ongoing, we have already successfully commissioned five back-end instruments and begun a major exoplanet characterization survey, Project 1640.

  5. Context-sensitive trace inlining for Java.

    PubMed

    Häubl, Christian; Wimmer, Christian; Mössenböck, Hanspeter

    2013-12-01

    Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases performance. However, if method inlining is used too frequently, compilation time increases and too much machine code is generated, which has negative effects on performance. Trace-based JIT compilers compile only frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better optimized machine code. In previous work, we implemented a trace recording infrastructure and a trace-based compiler for Java by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on performance and on the amount of generated machine code. Trace inlining has several major advantages over method inlining. First, trace inlining is more selective than method inlining, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplifies inlining. A third advantage is that trace information is context sensitive, so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while the amount of generated machine code remains reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves an up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding and null check elimination.

  6. Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.

    PubMed

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk

    2009-07-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.
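
    The throughput test described above, which records the time required to transfer each image, can be sketched as follows (the transfer_image stub and frame count are assumptions for illustration, not the authors' code):

      #include <chrono>
      #include <cstdio>

      // Hypothetical stand-in for moving one B-mode frame to the host PC.
      void transfer_image() { /* DMA/USB transfer would happen here */ }

      int main() {
        const int frames = 1000;
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < frames; ++i) transfer_image();
        std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
        // Compare with the 433 images per second reported in the paper.
        std::printf("%.1f images/s\n", frames / dt.count());
        return 0;
      }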

  7. Low-Cost, High-Speed Back-End Processing System for High-Frequency Ultrasound B-Mode Imaging

    PubMed Central

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T.; Shung, K. Kirk

    2009-01-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution. PMID:19574160

  8. Crowd-Sourced Help with Emergent Knowledge for Optimized Formal Verification (CHEKOFV)

    DTIC Science & Technology

    2016-03-01

    up game Binary Fission, which was deployed during Phase Two of CHEKOFV. Xylem: The Code of Plants is a casual game for players using mobile ...there are the design and engineering challenges of building a game infrastructure that integrates verification technology with crowd participation...the backend processes that annotate the originating software. Allowing players to construct their own equations opened up the flexibility to receive

  9. Temporal Planning for Compilation of Quantum Approximate Optimization Algorithm Circuits

    NASA Technical Reports Server (NTRS)

    Venturelli, Davide; Do, Minh Binh; Rieffel, Eleanor Gilbert; Frank, Jeremy David

    2017-01-01

    We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus our initial experiments on Quantum Approximate Optimization Algorithm (QAOA) circuits, which have few ordering constraints and therefore allow highly parallel plans. We report on experiments using several temporal planners to compile circuits of various sizes to a realistic hardware architecture. This early empirical evaluation suggests that temporal planning is a viable approach to quantum circuit compilation.

  10. HAL/S-FC compiler system functional specification

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Compiler organization is discussed, including overall compiler structure, internal data transfer, compiler development, and code optimization. The user, system, and SDL interfaces are described, along with compiler system requirements. The run-time software support package, as well as the restrictions and dependencies of the HAL/S-FC system, are also considered.

  11. 40 CFR 63.497 - Back-end process provisions-monitoring provisions for control and recovery devices.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Back-end process provisions-monitoring... Polymers and Resins § 63.497 Back-end process provisions—monitoring provisions for control and recovery devices. (a) An owner or operator complying with the residual organic HAP limitations in § 63.494(a) using...

  12. Cyber-Physical Multi-Core Optimization for Resource and Cache Effects (C2ORES)

    DTIC Science & Technology

    2014-03-01

    DoD-sponsored ATAACK mobile cloud testbed funded through the DURIP program, which is deployed at Virginia Tech and Vanderbilt University to conduct...0.9.2. Jug was configured to use a filesystem (network file system (nfs)) backend for locking and task synchronization. 4.1.7.2 Experiment 1...and performance-aware virtual machine placement technique that is realized as cloud infrastructure middleware. The key contributions of iPlace include

  13. A survey of compiler development aids. [concerning lexical, syntax, and semantic analysis

    NASA Technical Reports Server (NTRS)

    Buckles, B. P.; Hodges, B. C.; Hsia, P.

    1977-01-01

    A theoretical background was established for the compilation process by dividing it into five phases and explaining the concepts and algorithms that underpin each. The five selected phases were lexical analysis, syntax analysis, semantic analysis, optimization, and code generation. Graph-theoretical optimization techniques were presented, and approaches to code generation were described for both one-pass and multipass compilation environments. Following the initial tutorial sections, more than 20 tools that were developed to aid in the process of writing compilers were surveyed. Eight of the more recent compiler development aids were selected for special attention: SIMCMP/STAGE2, LANG-PAK, COGENT, XPL, AED, CWIC, LIS, and JOCIT. The impact of compiler development aids was assessed, some of their shortcomings were identified, and some areas of research currently in progress were examined.

  14. A software methodology for compiling quantum programs

    NASA Astrophysics Data System (ADS)

    Häner, Thomas; Steiger, Damian S.; Svore, Krysta; Troyer, Matthias

    2018-04-01

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. To achieve scalable quantum computation, optimizing compilers and a corresponding software design flow will be essential. We present a software architecture for compiling quantum programs from a high-level language program to hardware-specific instructions. We describe the necessary layers of abstraction and their differences and similarities to classical layers of a computer-aided design flow. For each layer of the stack, we discuss the underlying methods for compilation and optimization. Our software methodology facilitates more rapid innovation among quantum algorithm designers, quantum hardware engineers, and experimentalists. It enables scalable compilation of complex quantum algorithms and can be targeted to any specific quantum hardware implementation.

  15. 40 CFR 63.497 - Back-end process provisions-monitoring provisions for control and recovery devices used to comply...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Back-end process provisions-monitoring... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.497 Back-end process... limitations. (a) An owner or operator complying with the residual organic HAP limitations in § 63.494(a)(1...

  16. Compiling quantum circuits to realistic hardware architectures using temporal planners

    NASA Astrophysics Data System (ADS)

    Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy

    2018-04-01

    To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits, whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations, but it also means there is a greater potential win from optimized compilation than for less flexible circuits. We mapped this quantum circuit compilation problem to a temporal planning problem and generated a test suite of compilation problems for QAOA circuits of various sizes targeting a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.

  17. New Abstractions for Mobile Connectivity and Resource Management

    DTIC Science & Technology

    2016-05-01

    networked systems, consisting of replicated backend services and mobile, multi-homed clients. We derive a state machine for ECCP supporting migration...makes ECCP useful not only for mobility of client devices, but also for backend services, which are increasingly run in VMs or containers on platforms...layers of the network stack, instead of the traditional IP/port, improve mobility for clients and backend services and reduce unnecessary coupling of

  18. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models.

    PubMed

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A

    2014-01-01

    Multiple software programs are available for designing and running large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage, and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients.
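
    The design in which every model parameter becomes an input of the self-contained executable can be sketched like this (the parameter names and the name=value parsing scheme are illustrative assumptions, not the ViSP implementation):

      #include <cstdio>
      #include <cstdlib>
      #include <map>
      #include <string>

      // Self-contained model executable: all model parameters arrive as
      // name=value arguments, so no modeling tool is needed at run time.
      int main(int argc, char** argv) {
        std::map<std::string, double> p = {{"dose_mg", 500.0}, {"ka", 0.8}};
        for (int i = 1; i < argc; ++i) {
          std::string arg = argv[i];
          auto eq = arg.find('=');
          if (eq != std::string::npos)
            p[arg.substr(0, eq)] = std::atof(arg.c_str() + eq + 1);
        }
        std::printf("running model with dose_mg=%g ka=%g\n",
                    p["dose_mg"], p["ka"]);
        return 0;  // the simulation would run here and report its results
      }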

  19. Virtual Systems Pharmacology (ViSP) software for simulation from mechanistic systems-level models

    PubMed Central

    Ermakov, Sergey; Forster, Peter; Pagidala, Jyotsna; Miladinov, Marko; Wang, Albert; Baillie, Rebecca; Bartlett, Derek; Reed, Mike; Leil, Tarek A.

    2014-01-01

    Multiple software programs are available for designing and running large-scale system-level pharmacology models used in the drug development process. Depending on the problem, scientists may be forced to use several modeling tools, which can increase model development time, IT costs, and so on. Therefore, it is desirable to have a single platform that allows setting up and running large-scale simulations for models that have been developed with different modeling tools. We developed a workflow and a software platform in which a model file is compiled into a self-contained executable that is no longer dependent on the software that was used to create the model. At the same time, the full model specification is preserved by presenting all model parameters as input parameters for the executable. This platform was implemented as a model-agnostic, therapeutic-area-agnostic, web-based application with a database back-end that can be used to configure, manage, and execute large-scale simulations for multiple models by multiple users. The user interface is designed to be easily configurable to reflect the specifics of the model and the user's particular needs, and the back-end database has been implemented to store and manage all aspects of the system, such as Models, Virtual Patients, User Interface Settings, and Results. The platform can be adapted and deployed on an existing cluster or cloud computing environment. Its use was demonstrated with a metabolic disease systems pharmacology model that simulates the effects of two antidiabetic drugs, metformin and fasiglifam, in type 2 diabetes mellitus patients. PMID:25374542

  20. Big Data Analytics Test Bed

    DTIC Science & Technology

    2013-09-01

    Backend Database Support...Installing...SETUP VIRTUAL INFRASTRUCTURE...APPENDIX F. INSTALLING AND CONFIGURING BACKEND DATABASE SUPPORT FOR VCENTER

  1. Reactive Aggregate Model Protecting Against Real-Time Threats

    DTIC Science & Technology

    2014-09-01

    on the underlying functionality of three core components: • MS SQL Server 2008 backend database • Microsoft IIS running on Windows Server 2008...services. The capstone tested a Linux-based Apache web server with the following software implementations: • MySQL as a Linux-based backend server for...malicious compromise. 1. Assumptions • GINA could connect to a backend MS SQL database through proper configuration of DotNetNuke. • GINA had access

  2. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    DTIC Science & Technology

    1981-12-01

    library-file.library-unit(.subunit).SYMAP Statement Map: library-file.library-unit(.subunit).SMAP Type Map: library-file.library-unit(.subunit).TMAP The library...generator SYMAP Symbol Map code generator SMAP Updated Statement Map code generator TMAP Type Map code generator A.3.5 The PUNIT Command The PUNIT...Core.Stmtmap) NAME Tmap (Core.Typemap) END Example A-3 Compiler Command Stream for the Code Generator Texas Instruments A-5 Ada Optimizing Compiler

  3. RASDR: Benchtop Demonstration of SDR for Radio Astronomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vacaliuc, Bogdan; Oxley, Paul; Fields, David

    The Society of Amateur Radio Astronomers (SARA) members present the benchtop version of RASDR, a Software Defined Radio (SDR) that is optimized for Radio Astronomy. RASDR has the potential to be a common digital receiver interface useful to many SARA members. This document describes RASDR 0.0, which provides digitized radio data to a backend computer through a USB 2.0 interface. A primary component of RASDR is the Lime Microsystems Femtocell chip, which tunes from a 0.4-4 GHz center frequency with several selectable bandwidths from 0.75 MHz to 14 MHz. A second component is a board with a Complex Programmable Logic Device (CPLD) chip that connects to the Femtocell and provides two USB connections to the backend computer. A third component is an analog balanced-mixer up-conversion section. Together these three components enable RASDR to tune from 0.015 MHz through 3.8 GHz of the radio frequency (RF) spectrum. We will demonstrate and discuss capabilities of the breadboard system, and SARA members will be able to operate the unit hands-on throughout the workshop.

  4. Optimization guide for programs compiled under IBM FORTRAN H (OPT=2)

    NASA Technical Reports Server (NTRS)

    Smith, D. M.; Dobyns, A. H.; Marsh, H. M.

    1977-01-01

    Guidelines are given to provide the programmer with various techniques for optimizing programs when the FORTRAN IV H compiler is used with OPT=2. Subroutines and programs are described in the appendices along with a timing summary of all the examples given in the manual.

  5. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan

    This report summarizes our effort and results of building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.

  6. A survey of compiler optimization techniques

    NASA Technical Reports Server (NTRS)

    Schneck, P. B.

    1972-01-01

    Major optimization techniques of compilers are described and grouped into three categories: machine dependent, architecture dependent, and architecture independent. Machine-dependent optimizations tend to be local and are performed upon short spans of generated code by using particular properties of an instruction set to reduce the time or space required by a program. Architecture-dependent optimizations are global and are performed while generating code. These optimizations consider the structure of a computer, but not its detailed instruction set. Architecture-independent optimizations are also global but are based on analysis of the program flow graph and the dependencies among statements of the source program. A conceptual review of a universal optimizer that performs architecture-independent optimizations at source-code level is also presented.
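
    As a concrete instance of an architecture-independent, source-level optimization of the kind such a universal optimizer performs, the snippet below shows loop-invariant code motion applied by hand (the function and variable names are illustrative):

      // Before: the subexpression a*b is recomputed on every iteration
      // even though it never changes inside the loop.
      double sum_before(const double* x, int n, double a, double b) {
        double s = 0.0;
        for (int i = 0; i < n; ++i) s += x[i] * (a * b);
        return s;
      }

      // After loop-invariant code motion: a*b is hoisted out of the loop.
      double sum_after(const double* x, int n, double a, double b) {
        const double ab = a * b;   // computed once; independent of the loop
        double s = 0.0;
        for (int i = 0; i < n; ++i) s += x[i] * ab;
        return s;
      }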

  7. Backend Control Processor for a Multi-Processor Relational Database Computer System.

    DTIC Science & Technology

    1984-12-01

    SCHOOL OF ENGINEERING, DEC 84, AFIT/GCS/ENG/84D-22...THESIS Presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in Partial Fulfillment of the...development of a Backend Multi-Processor Relational Database Computer System. This thesis addresses a single component of this system, the Backend Control

  8. Wide-bandwidth high-resolution search for extraterrestrial intelligence

    NASA Technical Reports Server (NTRS)

    Horowitz, Paul

    1992-01-01

    Research accomplished in the following areas is discussed: the antenna configuration; HEMT low-noise amplifiers; the downconverter; the Fast Fourier Transform Array; the backend array; and the backend and workstation.

  9. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    PubMed

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as the Intel Fortran compiler (ifc/efc) 7.1 and the PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about a 3% improvement on 32-bit machines compared with the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options of the compiler enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further accelerated by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler performs better optimization. Hardware-level tuning can improve memory bandwidth by adjusting the DRAM timing, and efficiency in the CL2 mode is a further 2.6% better than in the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each test job. The resulting performance impact suggests that the IA64 and AMD64 architectures can deliver significantly higher throughput than IA32, which is consistent with the SPECfp_rate2000 benchmarks.
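
    The libraries benchmarked here (ATLAS, MKL, GOTO, ACML) are interchangeable at link time because they implement the same BLAS interface; a minimal CBLAS call is sketched below (the matrix sizes are illustrative, and the link line depends on the chosen library):

      #include <cblas.h>   // provided by ATLAS, MKL, GotoBLAS, or ACML builds

      int main() {
        const int n = 2;
        double A[] = {1, 2, 3, 4}, B[] = {5, 6, 7, 8}, C[] = {0, 0, 0, 0};
        // C = 1.0*A*B + 0.0*C; the tuned library supplies the fast kernel.
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);
        return 0;  // link with, e.g., -lcblas -latlas (library-specific)
      }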

  10. ATDM LANL FleCSI: Topology and Execution Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergen, Benjamin Karl

    FleCSI is a compile-time configurable C++ framework designed to support multi-physics application development. As such, FleCSI attempts to provide a very general set of infrastructure design patterns that can be specialized and extended to suit the needs of a broad variety of solver and data requirements. This means that FleCSI is potentially useful to many different ECP projects. Current support includes multidimensional mesh topology, mesh geometry, and mesh adjacency information, n-dimensional hashed-tree data structures, graph partitioning interfaces, and dependency closures (to identify data dependencies between distributed-memory address spaces). FleCSI introduces a functional programming model with control, execution, and data abstractions that are consistent with state-of-the-art task-based runtimes such as Legion and Charm++. The model also provides support for fine-grained, data-parallel execution with backend support for runtimes such as OpenMP and C++17. The FleCSI abstraction layer provides the developer with insulation from the underlying runtimes, while allowing support for multiple runtime systems, including conventional models like asynchronous MPI. The intent is to give developers a concrete set of user-friendly programming tools that can be used now, while allowing flexibility in choosing runtime implementations and optimizations that can be applied to architectures and runtimes that arise in the future. This project is essential to the ECP Ristra Next-Generation Code project, part of ASC ATDM, because it provides a hierarchically parallel programming model that is consistent with the design of modern system architectures, but which allows for the straightforward expression of algorithmic parallelism in a portably performant manner.

  11. Kokkos GPU Compiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, Nicholas

    The Kokkos Clang compiler is a version of the Clang C++ compiler that has been modified to perform targeted code generation for Kokkos constructs, such as parallel for and parallel reduce, with the goal of generating highly optimized code and of providing semantic (domain) awareness of these constructs throughout the compilation toolchain. This approach is taken to explore the possibilities of exposing the developer's intentions to the underlying compiler infrastructure (e.g., optimization and analysis passes within the middle stages of the compiler) instead of relying solely on the restricted capabilities of C++ template metaprogramming. To date our activities have focused on correct GPU code generation; we have not yet focused on improving overall performance. The compiler is implemented by recognizing specific (syntactic) Kokkos constructs in order to bypass normal template expansion mechanisms and instead use the semantic knowledge of Kokkos to directly generate code in the compiler's intermediate representation (IR), which is then translated into an NVIDIA-centric GPU program and supporting runtime calls. In addition, capturing and maintaining the higher-level semantics of Kokkos directly within the lower levels of the compiler has the potential to significantly improve the ability of the compiler to communicate with the developer in the terms of their original programming model and semantics.

  12. 40 CFR 63.493 - Back-end process provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards for Hazardous Air Pollutant Emissions: Group I Polymers and Resins § 63.493 Back-end process provisions. Owners and operators of new and existing affected sources shall comply with the requirements in...

  13. DARMA v. Beta 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollman, David; Lifflander, Jonathon; Wilke, Jeremiah

    2017-03-14

    DARMA is a portability layer for asynchronous many-task (AMT) runtime systems. AMT runtime systems show promise to mitigate challenges imposed by next-generation high performance computing architectures; however, current runtime system technologies are not production-ready. DARMA seeks to insulate application developers from the idiosyncrasies of individual runtime systems, thereby facilitating application-developer use of these technologies. DARMA comprises a frontend application programming interface (API) for application developers, a backend API for runtime system developers, and a translation layer that translates frontend API calls into backend API calls. Application developers use C++ abstractions to annotate both data and tasks in their code. The DARMA translation layer uses C++ template metaprogramming to capture data-task dependencies, and provides this information to a potential backend runtime system via a series of backend API calls.

  14. Final report: Compiled MPI. Cost-Effective Exascale Application Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gropp, William Douglas

    2015-12-21

    This is the final report on Compiled MPI: Cost-Effective Exascale Application Development, and it summarizes the results of the project. The project investigated runtime environments that improve the performance of MPI (Message-Passing Interface) programs; work at Illinois in the last period of the project looked at optimizing data accesses expressed with MPI datatypes.
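
    As an illustration of what "data access expressed with MPI datatypes" means, the hedged sketch below uses mpi4py's standard derived-datatype API to describe a strided column of a row-major array; the array sizes and ranks are illustrative only.

    ```python
    # Hedged sketch: an MPI derived datatype describing a strided column of a
    # row-major 2-D array, so the library (or a datatype-aware compiler) can
    # generate the packing code. Run with e.g. `mpiexec -n 2 python demo.py`.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rows, cols = 4, 8
    a = np.arange(rows * cols, dtype='d').reshape(rows, cols)

    # One column: `rows` blocks of 1 double, separated by a stride of `cols`.
    column_t = MPI.DOUBLE.Create_vector(rows, 1, cols).Commit()

    if comm.rank == 0 and comm.size > 1:
        comm.Send([a, 1, column_t], dest=1, tag=0)   # column 0, no manual packing
    elif comm.rank == 1:
        col = np.empty(rows, dtype='d')              # arrives as contiguous doubles
        comm.Recv(col, source=0, tag=0)
        print(col)                                   # [ 0.  8. 16. 24.]
    column_t.Free()
    ```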

  15. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser

    PubMed Central

    Almeida, Jonas S.; Iriabho, Egiebade E.; Gorrepati, Vijaya L.; Wilkinson, Sean R.; Grüneberg, Alexander; Robbins, David E.; Hackney, James R.

    2012-01-01

    Background: Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. Materials and Methods: ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Results: Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. Conclusions: The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local “download and installation”. PMID:22934238

  16. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser.

    PubMed

    Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R

    2012-01-01

    Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".

  17. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
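
    The following toy sketch (illustrative only, not the Tensor Contraction Engine) shows the empirical flavor of the approach: candidate layout/library-call combinations are timed directly, and the measured costs drive the choice.

    ```python
    # Illustrative sketch: measure constituent operations empirically and let
    # the measured costs choose among candidate layout/call combinations.
    import time
    import numpy as np

    def measure(fn, trials=5):
        """Best-of-N wall-clock time of a candidate implementation."""
        best = float('inf')
        for _ in range(trials):
            t0 = time.perf_counter()
            fn()
            best = min(best, time.perf_counter() - t0)
        return best

    n = 512
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    # Two mathematically equivalent candidates with different data layouts.
    candidates = {
        'row-major dgemm': lambda: np.ascontiguousarray(a) @ np.ascontiguousarray(b),
        'transposed dgemm': lambda: (b.T @ a.T).T,
    }
    costs = {name: measure(fn) for name, fn in candidates.items()}
    print('chosen:', min(costs, key=costs.get), costs)
    ```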

  18. On Fusing Recursive Traversals of K-d Trees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram

    Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement.
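
    To make the transformation concrete, here is a language-agnostic sketch in Python (not FuseT's actual input or output) of fusing two producer-consumer recursive traversals into a single pass.

    ```python
    # Conceptual sketch of traversal fusion: a producer pass (scale) and a
    # consumer pass (reduce) over a tree are fused into one traversal,
    # touching each node once and improving locality. Simplified binary tree.
    class Node:
        def __init__(self, value, left=None, right=None):
            self.value, self.left, self.right = value, left, right

    def scale(node, c):            # producer: first full traversal
        if node is None:
            return
        node.value *= c
        scale(node.left, c)
        scale(node.right, c)

    def total(node):               # consumer: second full traversal
        if node is None:
            return 0.0
        return node.value + total(node.left) + total(node.right)

    def scale_and_total(node, c):  # fused operator: one traversal does both
        if node is None:
            return 0.0
        node.value *= c
        return node.value + scale_and_total(node.left, c) + scale_and_total(node.right, c)

    t = Node(1.0, Node(2.0), Node(3.0))
    assert scale_and_total(t, 2.0) == 12.0   # same result as scale() then total()
    ```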

  19. Development of management information system for land in mine area based on MapInfo

    NASA Astrophysics Data System (ADS)

    Wang, Shi-Dong; Liu, Chuang-Hua; Wang, Xin-Chuang; Pan, Yan-Yu

    2008-10-01

    MapInfo is currently a popular GIS package. This paper introduces the characteristics of MapInfo and the GIS second-development methods it offers, which comprise three approaches based on MapBasic, OLE automation, and the MapX control, respectively. Taking the development of a land management information system for a mining area as an example, the paper discusses the method of developing GIS applications based on MapX and describes the development of the system in detail, including the development environment, overall design, the design and realization of every function module, and simple applications of the system. The system uses MapX 5.0 and Visual Basic 6.0 as the development platform, takes SQL Server 2005 as the back-end database, and adopts Matlab 6.5 for back-end numerical calculation. On the basis of an integrated design, the system provides eight modules: start-up, layer control, spatial query, spatial analysis, data editing, application model, document management, and results output. The system can be used in mining areas for cadastral management, land use structure optimization, land reclamation, land evaluation, analysis and forecasting of land and environmental disruption, thematic mapping, and so on.

  20. Snowflake: A Lightweight Portable Stencil DSL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Nathan; Driscoll, Michael; Markley, Charles

    Stencil computations are not well optimized by general-purpose production compilers and the increased use of multicore, manycore, and accelerator-based systems makes the optimization problem even more challenging. In this paper we present Snowflake, a Domain Specific Language (DSL) for stencils that uses a 'micro-compiler' approach, i.e., small, focused, domain-specific code generators. The approach is similar to that used in image processing stencils, but Snowflake handles the much more complex stencils that arise in scientific computing, including complex boundary conditions, higher-order operators (larger stencils), higher dimensions, variable coefficients, non-unit-stride iteration spaces, and multiple input or output meshes. Snowflake is embedded in the Python language, allowing it to interoperate with popular scientific tools like SciPy and iPython; it also takes advantage of built-in Python libraries for powerful dependence analysis as part of a just-in-time compiler. We demonstrate the power of the Snowflake language and the micro-compiler approach with a complex scientific benchmark, HPGMG, that exercises the generality of stencil support in Snowflake. By generating OpenMP comparable to, and OpenCL within a factor of 2x of hand-optimized HPGMG, Snowflake demonstrates that a micro-compiler can support diverse processor architectures and is performance-competitive whilst preserving a high-level Python implementation.
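
    For readers unfamiliar with the domain, the sketch below shows, in plain NumPy rather than Snowflake's own syntax, the kind of variable-coefficient stencil sweep that such a DSL abstracts and a micro-compiler would specialize.

    ```python
    # The flavor of computation a stencil DSL describes (plain NumPy here,
    # not Snowflake syntax): one sweep of a 2-D variable-coefficient
    # 5-point stencil over the interior of a mesh.
    import numpy as np

    def five_point(u, coef):
        """One Jacobi-like sweep of a 5-point stencil over the interior."""
        out = u.copy()
        out[1:-1, 1:-1] = coef[1:-1, 1:-1] * (
            u[:-2, 1:-1] + u[2:, 1:-1]      # north + south neighbors
            + u[1:-1, :-2] + u[1:-1, 2:]    # west + east neighbors
            - 4.0 * u[1:-1, 1:-1]           # center
        )
        return out

    u = np.random.rand(64, 64)
    c = np.full_like(u, 0.25)               # variable coefficient array
    u = five_point(u, c)
    ```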

  1. Snowflake: A Lightweight Portable Stencil DSL

    DOE PAGES

    Zhang, Nathan; Driscoll, Michael; Markley, Charles; ...

    2017-05-01

    Stencil computations are not well optimized by general-purpose production compilers and the increased use of multicore, manycore, and accelerator-based systems makes the optimization problem even more challenging. In this paper we present Snowflake, a Domain Specific Language (DSL) for stencils that uses a 'micro-compiler' approach, i.e., small, focused, domain-specific code generators. The approach is similar to that used in image processing stencils, but Snowflake handles the much more complex stencils that arise in scientific computing, including complex boundary conditions, higher-order operators (larger stencils), higher dimensions, variable coefficients, non-unit-stride iteration spaces, and multiple input or output meshes. Snowflake is embedded inmore » the Python language, allowing it to interoperate with popular scientific tools like SciPy and iPython; it also takes advantage of built-in Python libraries for powerful dependence analysis as part of a just-in-time compiler. We demonstrate the power of the Snowflake language and the micro-compiler approach with a complex scientific benchmark, HPGMG, that exercises the generality of stencil support in Snowflake. By generating OpenMP comparable to, and OpenCL within a factor of 2x of hand-optimized HPGMG, Snowflake demonstrates that a micro-compiler can support diverse processor architectures and is performance-competitive whilst preserving a high-level Python implementation.« less

  2. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut

    The file layout of array data is a critical factor that affects the behavior of storage caches, and it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O-intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  3. Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    DOE PAGES

    Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...

    2013-01-01

    The file layout of array data is a critical factor that affects the behavior of storage caches, and it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O-intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.

  4. Tele-healthcare for diabetes management: A low cost automatic approach.

    PubMed

    Benaissa, M; Malik, B; Kanakis, A; Wright, N P

    2012-01-01

    In this paper, a telemedicine system for better care and management of diabetic patients is presented. The system is an end-to-end solution that relies on the integration of a front end (the patient unit) and a back-end web server. A key feature of the system is its very low-cost automated approach. The front end is capable of reading glucose measurements from any glucose meter and sending them automatically via existing networks to the back-end server. The back end is designed and developed using an n-tier web client architecture based on the model-view-controller design pattern and open source technology, a cost-effective solution. The back end helps the health-care provider with data analysis, data visualization, and decision support, and allows them to send feedback and therapeutic advice to patients from anywhere using a browser-enabled device. The system will be evaluated during trials to be conducted in collaboration with a local hospital in a phased manner.

  5. TU-AB-BRC-12: Optimized Parallel Monte Carlo Dose Calculations for Secondary MU Checks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, S; Nazareth, D; Bellor, M

    Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high performance computing cluster accessible to our clinic. MATLAB and PYTHON scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools which indicated the behavior of the constituent routines in the code, e.g. the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compilation configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8 - 10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10 - 15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.

  6. HOPE: Just-in-time Python compiler for astrophysical computations

    NASA Astrophysics Data System (ADS)

    Akeret, Joel; Gamper, Lukas; Amara, Adam; Refregier, Alexandre

    2014-11-01

    HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of compiled implementation.
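
    A minimal sketch of the advertised workflow follows, assuming the package exposes its JIT as the hope.jit decorator; on the first call the function is translated to C++, compiled, and cached.

    ```python
    # Minimal sketch, assuming the package exposes a `hope.jit` decorator.
    # On first call, HOPE translates the function body to C++, compiles it,
    # and caches the binary; subsequent calls dispatch to the compiled code.
    import numpy as np
    import hope  # assumption: pip package `hope` providing `hope.jit`

    @hope.jit
    def quadratic_sum(x, y):
        return x * x + 2.0 * x * y + y * y  # pure numerical expression

    a = np.random.rand(1000)
    b = np.random.rand(1000)
    print(quadratic_sum(a, b)[:3])
    ```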

  7. Optimization technique of wavefront coding system based on ZEMAX externally compiled programs

    NASA Astrophysics Data System (ADS)

    Han, Libo; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2016-10-01

    Wavefront coding is a means of athermalization for infrared imaging systems, and the design of the phase plate is the key to system performance. This paper applies ZEMAX externally compiled programs to the optimization of the phase mask within the normal optical design process: the evaluation function of the wavefront coding system is defined based on the consistency of the modulation transfer function (MTF), and the speed of optimization is improved by introducing mathematical software. The user writes an external program that computes the evaluation function, exploiting the powerful computing features of the mathematical software, to find the optimal parameters of the phase mask, and accelerates convergence through a genetic algorithm (GA); a dynamic data exchange (DDE) interface between ZEMAX and the mathematical software realizes high-speed data exchange. The optimization of a rotationally symmetric phase mask and of a cubic phase mask has been completed by this method: the depth of focus increases nearly 3 times with the rotationally symmetric phase mask inserted, while the system with the cubic phase mask gains up to 10 times; the inconsistency of the MTF decreases markedly, and the optimized systems operate over a temperature range of about -40°C to 60°C. Results show that, thanks to its externally compiled functions and DDE, this optimization method makes it more convenient to define unconventional optimization goals and faster to optimize optical systems with special properties, and it holds particular significance for the optimization of unconventional optical systems.
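
    The sketch below gives the shape of the optimization loop described here, with a stand-in merit function in place of the ZEMAX/DDE evaluation; evaluate_mtf_consistency and all parameters are hypothetical placeholders.

    ```python
    # Hedged sketch of the GA-driven search described above. In the real
    # system, evaluate_mtf_consistency would drive ZEMAX via DDE; here it is
    # a hypothetical stand-in merit function (lower is better).
    import random

    def evaluate_mtf_consistency(alpha):
        # placeholder merit; pretend the optimal mask strength is alpha = 42
        return abs(alpha - 42.0)

    def ga(pop_size=20, gens=50, lo=0.0, hi=100.0):
        pop = [random.uniform(lo, hi) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=evaluate_mtf_consistency)       # rank by merit
            parents = pop[: pop_size // 2]               # selection
            children = [
                (random.choice(parents) + random.choice(parents)) / 2  # crossover
                + random.gauss(0, (hi - lo) * 0.01)                    # mutation
                for _ in range(pop_size - len(parents))
            ]
            pop = parents + children
        return min(pop, key=evaluate_mtf_consistency)

    print(ga())   # converges near the assumed optimum of 42
    ```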

  8. Development of a Secure Mobile GPS Tracking and Management System

    ERIC Educational Resources Information Center

    Liu, Anyi

    2012-01-01

    With increasing demand of mobile devices and cloud computing, it becomes increasingly important to develop efficient mobile application and its secured backend, such as web applications and virtualization environment. This dissertation reports a systematic study of mobile application development and the security issues of its related backend. …

  9. GAMBIT: the global and modular beyond-the-standard-model inference tool. Addendum for GAMBIT 1.1: Mathematica backends, SUSYHD interface and updated likelihoods

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian

    2018-02-01

    In Ref. (GAMBIT Collaboration: Athron et. al., Eur. Phys. J. C. arXiv:1705.07908, 2017) we introduced the global-fitting framework GAMBIT. In this addendum, we describe a new minor version increment of this package. GAMBIT 1.1 includes full support for Mathematica backends, which we describe in some detail here. As an example, we backend SUSYHD (Vega and Villadoro, JHEP 07:159, 2015), which calculates the mass of the Higgs boson in the MSSM from effective field theory. We also describe updated likelihoods in PrecisionBit and DarkBit, and updated decay data included in DecayBit.

  10. Source-Constrained Recall: Front-End and Back-End Control of Retrieval Quality

    ERIC Educational Resources Information Center

    Halamish, Vered; Goldsmith, Morris; Jacoby, Larry L.

    2012-01-01

    Research on the strategic regulation of memory accuracy has focused primarily on monitoring and control processes used to edit out incorrect information after it is retrieved (back-end control). Recent studies, however, suggest that rememberers also enhance accuracy by preventing the retrieval of incorrect information in the first place (front-end…

  11. Automatic Compilation from High-Level Biologically-Oriented Programming Language to Genetic Regulatory Networks

    PubMed Central

    Beal, Jacob; Lu, Ting; Weiss, Ron

    2011-01-01

    Background: The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. Methodology/Principal Findings: To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high level specification is compiled, using a regulatory motif based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes (~50%) and latency of the optimized engineered gene networks. Conclusions/Significance: Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems. PMID:21850228

  12. Automatic compilation from high-level biologically-oriented programming language to genetic regulatory networks.

    PubMed

    Beal, Jacob; Lu, Ting; Weiss, Ron

    2011-01-01

    The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry. To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high level specification is compiled, using a regulatory motif based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes (~ 50%) and latency of the optimized engineered gene networks. Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems.

  13. 40 CFR Table 8 to Subpart U of... - Summary of Compliance Alternative Requirements for the Back-End Process Provisions

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Requirements for the Back-End Process Provisions 8 Table 8 to Subpart U of Part 63 Protection of Environment...: Group I Polymers and Resins Pt. 63, Subpt. U, Table 8 Table 8 to Subpart U of Part 63—Summary of... be monitored Requirements Compliance Using Stripping Technology, Demonstrated through Periodic...

  14. 40 CFR Table 8 to Subpart U of... - Summary of Compliance Alternative Requirements for the Back-End Process Provisions

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Requirements for the Back-End Process Provisions 8 Table 8 to Subpart U of Part 63 Protection of Environment...: Group I Polymers and Resins Pt. 63, Subpt. U, Table 8 Table 8 to Subpart U of Part 63—Summary of... be monitored Requirements Compliance Using Stripping Technology, Demonstrated through Periodic...

  15. 40 CFR Table 8 to Subpart U of... - Summary of Compliance Alternative Requirements for the Back-End Process Provisions

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Requirements for the Back-End Process Provisions 8 Table 8 to Subpart U of Part 63 Protection of Environment...: Group I Polymers and Resins Pt. 63, Subpt. U, Table 8 Table 8 to Subpart U of Part 63—Summary of... be monitored Requirements Compliance Using Stripping Technology, Demonstrated through Periodic...

  16. 40 CFR Table 8 to Subpart U of... - Summary of Compliance Alternative Requirements for the Back-End Process Provisions

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Requirements for the Back-End Process Provisions 8 Table 8 to Subpart U of Part 63 Protection of Environment...: Group I Polymers and Resins Pt. 63, Subpt. U, Table 8 Table 8 to Subpart U of Part 63—Summary of... be monitored Requirements Compliance Using Stripping Technology, Demonstrated through Periodic...

  17. 40 CFR Table 8 to Subpart U of... - Summary of Compliance Alternative Requirements for the Back-End Process Provisions

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Requirements for the Back-End Process Provisions 8 Table 8 to Subpart U of Part 63 Protection of Environment...: Group I Polymers and Resins Pt. 63, Subpt. U, Table 8 Table 8 to Subpart U of Part 63—Summary of... be monitored Requirements Compliance Using Stripping Technology, Demonstrated through Periodic...

  18. Back-end and interface implementation of the STS-XYTER2 prototype ASIC for the CBM experiment

    NASA Astrophysics Data System (ADS)

    Kasinski, K.; Szczygiel, R.; Zabolotny, W.

    2016-11-01

    Each front-end readout ASIC for High-Energy Physics experiments requires a robust and effective hit data streaming and control mechanism. The new STS-XYTER2 full-size prototype chip for the Silicon Tracking System and Muon Chamber detectors in the Compressed Baryonic Matter experiment at the Facility for Antiproton and Ion Research (FAIR, Germany) is a 128-channel time- and amplitude-measuring solution for silicon microstrip and gas detectors. It operates at a 250 kHit/s/channel hit rate, each hit producing 27 bits of information (5-bit amplitude, 14-bit timestamp, position and diagnostics data). The chip back-end implements fast front-end channel read-out, timestamp-wise hit sorting, and data streaming via a scalable interface implementing a dedicated protocol (STS-HCTSP) for chip control and hit transfer, with data bandwidth from 9.7 MHit/s up to 47 MHit/s. It also includes multiple options for link diagnostics, failure detection, and throttling features. The back-end is designed to operate with the data acquisition architecture based on the CERN GBTx transceivers. This paper presents the details of the back-end and interface design and its implementation in the UMC 180 nm CMOS process.

  19. User interfaces for computational science: A domain specific language for OOMMF embedded in Python

    NASA Astrophysics Data System (ADS)

    Beg, Marijan; Pepper, Ryan A.; Fangohr, Hans

    2017-05-01

    Computer simulations are used widely across the engineering and science disciplines, including in the research and development of magnetic devices using computational micromagnetics. In this work, we identify and review different approaches to configuring simulation runs: (i) the re-compilation of source code, (ii) the use of configuration files, (iii) the graphical user interface, and (iv) embedding the simulation specification in an existing programming language to express the computational problem. We identify the advantages and disadvantages of different approaches and discuss their implications on effectiveness and reproducibility of computational studies and results. Following on from this, we design and describe a domain specific language for micromagnetics that is embedded in the Python language, and allows users to define the micromagnetic simulations they want to carry out in a flexible way. We have implemented this micromagnetic simulation description language together with a computational backend that executes the simulation task using the Object Oriented MicroMagnetic Framework (OOMMF). We illustrate the use of this Python interface for OOMMF by solving the micromagnetic standard problem 4. All the code is publicly available and is open source.
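
    The hedged sketch below conveys the embedded-DSL idea for standard problem 4; the module and class names follow the authors' publicly released packages (oommfc, discretisedfield, micromagneticmodel) but are assumptions here and vary across versions, so treat this as an illustration rather than exact syntax.

    ```python
    # Hedged sketch: a micromagnetic problem specified in embedded Python.
    # Module/class names are assumptions based on the authors' packages and
    # differ between versions; illustration only, not exact syntax.
    import oommfc as oc
    import discretisedfield as df
    import micromagneticmodel as mm

    # geometry of the thin permalloy strip in micromagnetic standard problem 4
    mesh = df.Mesh(p1=(0, 0, 0), p2=(500e-9, 125e-9, 3e-9),
                   cell=(5e-9, 5e-9, 3e-9))

    system = mm.System(name='stdprob4')
    system.energy = mm.Exchange(A=1.3e-11) + mm.Demag()       # energy terms
    system.m = df.Field(mesh, dim=3, value=(1, 0.25, 0.1), norm=8e5)

    oc.MinDriver().drive(system)   # hand the specification to the OOMMF backend
    ```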

  20. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    PubMed

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theory, and MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with the ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on a single CPU is potentially as good as that of an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 show trends similar to the results for the GAUSSIAN 98 package.

  1. Optimizing Maintenance of Constraint-Based Database Caches

    NASA Astrophysics Data System (ADS)

    Klein, Joachim; Braun, Susanne

    Caching data reduces user-perceived latency and often enhances availability in case of server crashes or network failures. DB caching aims at local processing of declarative queries in a DBMS-managed cache close to the application. Query evaluation must produce the same results as if it were done at the remote database backend, which implies that all data records needed to process such a query must be present and controlled by the cache, i.e., the cache must achieve “predicate-specific” loading and unloading of such record sets. Hence, cache maintenance must be based on cache constraints such that “predicate completeness” of the caching units currently present can be guaranteed at any point in time. We explore how cache groups can be maintained to provide the data currently needed. Moreover, we design and optimize loading and unloading algorithms for sets of records that keep the caching units complete, before we empirically identify the costs involved in cache maintenance.
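
    As a conceptual sketch (not the paper's algorithms), the Python fragment below shows predicate completeness for a single equality predicate: a value is answerable locally only after all backend records matching it have been loaded.

    ```python
    # Conceptual sketch of predicate completeness in a constraint-based cache:
    # when a value of a filling column is cached, *all* backend records
    # matching that value are loaded, so equality predicates on it can be
    # evaluated locally with the same result as at the backend.
    backend = [
        {"cust": 1, "item": "a"}, {"cust": 1, "item": "b"}, {"cust": 2, "item": "c"},
    ]

    class CacheGroup:
        def __init__(self):
            self.complete_values = set()   # cust values that are predicate-complete
            self.records = []

        def load(self, cust):              # unit of loading: a complete value set
            if cust not in self.complete_values:
                self.records += [r for r in backend if r["cust"] == cust]
                self.complete_values.add(cust)

        def query(self, cust):
            if cust in self.complete_values:   # safe to evaluate locally
                return [r for r in self.records if r["cust"] == cust]
            return None                        # incomplete: must go to the backend

    c = CacheGroup()
    c.load(1)
    print(c.query(1), c.query(2))   # local answer for 1, backend round-trip for 2
    ```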

  2. Compiler Optimization Pass Visualization: The Procedural Abstraction Case

    ERIC Educational Resources Information Center

    Schaeckeler, Stefan; Shang, Weijia; Davis, Ruth

    2009-01-01

    There is an active research community concentrating on visualizations of algorithms taught in CS1 and CS2 courses. These visualizations can help students to create concrete visual images of the algorithms and their underlying concepts. Not only "fundamental algorithms" can be visualized, but also algorithms used in compilers. Visualizations that…

  3. STEM Workforce Pipeline

    DTIC Science & Technology

    2013-07-30

    more about STEM. From museums, to gardens, to planetariums and more, Places to Go mobilizes people to explore the STEM resources offered by their...Works website was developed utilizing a phased approach. This approach allowed for informed, periodic updates to the structure, design, and backend ...our web development team, throughout this phase. A significant amount of backend development work on the website, as well as design work was completed

  4. Robotic Sensitive-Site Assessment

    DTIC Science & Technology

    2015-09-04

    annotations. The SOA component is the backend infrastructure that receives and stores robot-generated and human-input data and serves these data to several...Architecture Server (heading level 2) The SOA server provides the backend infrastructure to receive data from robot situational awareness payloads, to archive...incapacitation or even death. The proper use of PPE is critical to avoiding exposure. However, wearing PPE limits mobility and field of vision, and

  5. A Note on Compiling Fortran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busby, L. E.

    Fortran modules tend to serialize compilation of large Fortran projects, by introducing dependencies among the source files. If file A depends on file B (A uses a module defined by B), you must finish compiling B before you can begin compiling A. Some Fortran compilers (Intel ifort, GNU gfortran and IBM xlf, at least) offer an option to “verify syntax”, with the side effect of also producing any associated Fortran module files. As it happens, this option usually runs much faster than the object code generation and optimization phases. For some projects on some machines, it can be advantageous to compile in two passes: The first pass generates the module files, quickly; the second pass produces the object files, in parallel. We achieve a 3.8× speedup in the case study below.
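
    A minimal driver sketch of the two-pass scheme follows, assuming a compiler whose syntax-check mode emits module files (as gfortran's -fsyntax-only does); the file list is hypothetical.

    ```python
    # Two-pass Fortran build sketch. Pass 1 runs the fast syntax-only mode in
    # dependency order to produce .mod files; pass 2 does the expensive
    # object-code generation for all files in parallel.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    sources = ["b.f90", "a.f90"]   # hypothetical: b defines a module used by a

    # pass 1: serial and fast, generates module files as a side effect
    for src in sources:
        subprocess.run(["gfortran", "-fsyntax-only", src], check=True)

    # pass 2: full optimization, embarrassingly parallel since .mod files exist
    def compile_obj(src):
        subprocess.run(["gfortran", "-c", "-O2", src], check=True)

    with ThreadPoolExecutor() as pool:
        list(pool.map(compile_obj, sources))
    ```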

  6. ProjectQ Software Framework

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Haener, Thomas; Troyer, Matthias

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
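
    As a small, hedged illustration of that chain using ProjectQ's documented Python API, the following builds and measures a Bell pair; the default MainEngine compiles the gates for its built-in simulator backend.

    ```python
    # Minimal sketch using ProjectQ's high-level API: the MainEngine compiles
    # the gate sequence down to its backend (the simulator by default).
    from projectq import MainEngine
    from projectq.ops import H, CNOT, Measure

    eng = MainEngine()                 # default backend: high-performance simulator
    q0 = eng.allocate_qubit()
    q1 = eng.allocate_qubit()

    H | q0                             # put q0 into superposition
    CNOT | (q0, q1)                    # entangle the pair
    Measure | q0
    Measure | q1
    eng.flush()                        # push the full circuit through the compiler

    print(int(q0), int(q1))            # correlated outcomes: 00 or 11
    ```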

  7. OpenSimulator Interoperability with DRDC Simulation Tools: Compatibility Study

    DTIC Science & Technology

    2014-09-01

    into two components: (1) backend data services consisting of user accounts, login service, assets, and inventory; and (2) the simulator server which...components are combined into a single OpenSimulator process. In grid mode, the two components are separated, placing the backend services into a ROBUST... mobile devices. Potential points of compatibility between Unity and OpenSimulator include: a Unity-based desktop computer OpenSimulator viewer; a

  8. A domain-specific compiler for a parallel multiresolution adaptive numerical simulation environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram

    This paper describes the design and implementation of a layered domain-specific compiler to support MADNESS---Multiresolution ADaptive Numerical Environment for Scientific Simulation. MADNESS is a high-level software environment for the solution of integral and differential equations in many dimensions, using adaptive and fast harmonic analysis methods with guaranteed precision. MADNESS uses k-d trees to represent spatial functions and implements operators like addition, multiplication, differentiation, and integration on the numerical representation of functions. The MADNESS runtime system provides global namespace support and a task-based execution model including futures. MADNESS is currently deployed on massively parallel supercomputers and has enabled many science advances. Due to the highly irregular and statically unpredictable structure of the k-d trees representing the spatial functions encountered in MADNESS applications, only purely runtime approaches to optimization have previously been implemented in the MADNESS framework. This paper describes a layered domain-specific compiler developed to address some performance bottlenecks in MADNESS. The newly developed static compile-time optimizations, in conjunction with the MADNESS runtime support, enable significant performance improvement for the MADNESS framework.

  9. Livermore Compiler Analysis Loop Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hornung, R. D.

    2013-03-01

    LCALS is designed to evaluate compiler optimizations and performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate floating-point performance of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assessing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of 24 loop kernels from the older Livermore Loop suites, plus various others representative of loops found in current production application codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, which loops are run, and their lengths. It generates timing statistics for analysis and comparison of variants of individual loops. Also, it is easy to add loops to the suite as desired.

  10. Supporting openEHR Java desktop application developers.

    PubMed

    Kashfi, Hajar; Torgersson, Olof

    2011-01-01

    The openEHR community suggests that an appropriate approach for creating a graphical user interface for an openEHR-based application is to generate forms from the underlying archetypes and templates. However, current generation techniques are not mature enough to be able to produce high quality interfaces with good usability. Therefore, developing efficient ways to combine manually designed and developed interfaces to openEHR backends is an interesting alternative. In this study, a framework for binding a pre-designed graphical user interface to an openEHR-based backend is proposed. The proposed framework contributes to the set of options available for developers. In particular we believe that the approach of combining user interface components with an openEHR backend in the proposed way might be useful in situations where the quality of the user interface is essential and for creating small scale and experimental systems.

  11. Tag ID Subdivision Scheme for Efficient Authentication and Security-Enhancement of RFID System in USN

    NASA Astrophysics Data System (ADS)

    Lee, Kijeong; Park, Byungjoo; Park, Gil-Cheol

    Radio frequency identification (RFID) is a generic term used to describe a system that transmits the identity (in the form of a unique serial number) of an object or person wirelessly, using radio waves. However, there are security threats in the RFID system related to its technical components. For example, illegal RFID readers can read the IDs of most RFID tags, a security threat that needs in-depth attention. Previous studies offer some ideas on how to minimize these security threats, such as security protocols between the tag, the reader, and the back-end DB. In this research, the team proposes an RFID Tag ID Subdivision Scheme to authenticate only permitted tags in a USN (Ubiquitous Sensor Network). Using the proposed scheme, the back-end DB authenticates selected tags only, minimizing security threats such as eavesdropping and decreasing traffic in the back-end DB.
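
    The fragment below is a conceptual illustration of such an ID subdivision, with hypothetical field widths: the group prefix acts as a cheap filter so the back-end DB only authenticates tags from permitted groups.

    ```python
    # Conceptual illustration of tag-ID subdivision (not the paper's exact
    # scheme): the ID is split into a group prefix and a serial part, and the
    # back-end DB considers only tags from permitted groups. Field widths are
    # hypothetical.
    GROUP_BITS, SERIAL_BITS = 8, 24

    def split_tag_id(tag_id):
        return tag_id >> SERIAL_BITS, tag_id & ((1 << SERIAL_BITS) - 1)

    permitted_groups = {0x2A}          # groups this back-end DB will consider

    def authenticate(tag_id, known_serials):
        group, serial = split_tag_id(tag_id)
        if group not in permitted_groups:   # cheap filter cuts DB traffic
            return False
        return serial in known_serials      # full check only for permitted groups

    print(authenticate((0x2A << SERIAL_BITS) | 123, {123}))   # True
    ```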

  12. Metal stack optimization for low-power and high-density for N7-N5

    NASA Astrophysics Data System (ADS)

    Raghavan, P.; Firouzi, F.; Matti, L.; Debacker, P.; Baert, R.; Sherazi, S. M. Y.; Trivkovic, D.; Gerousis, V.; Dusa, M.; Ryckaert, J.; Tokei, Z.; Verkest, D.; McIntyre, G.; Ronse, K.

    2016-03-01

    One of the key challenges when scaling logic down to N7 and N5 is the requirement of self-aligned multiple patterning for the metal stack. This comes with a large backend cost, and therefore careful stack optimization is required. The various layers in the stack serve different purposes, and therefore the choice of their pitch and the number of layers is critical. Furthermore, at the ultra-scaled dimensions of N7 and N5, the number of patterning options is also much larger, ranging from multiple LE and EUV to SADP/SAQP, and the right choice among them is needed as well. Patterning techniques that use a full grating of wires, like SADP/SAQP, introduce a high level of metal dummies into the design. This implies a large capacitance penalty, and therefore large performance and power penalties. This is often mitigated with extra masking strategies. This paper discusses a holistic view of metal stack optimization, from the standard-cell level all the way to routing, and the corresponding trade-offs that exist in this space.

  13. An Optimizing Compiler for Petascale I/O on Leadership-Class Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandemir, Mahmut Taylan; Choudary, Alok; Thakur, Rajeev

    In high-performance computing (HPC), parallel I/O architectures usually have very complex hierarchies, with multiple layers that collectively constitute an I/O stack: high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our DOE project explored automated instrumentation and compiler support for I/O-intensive applications. The project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology targeting I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions. Two new sections in this report compared to the previous report are IOGenie and SSD/NVM-specific optimizations.

  14. The MeqTrees software system and its use for third-generation calibration of radio interferometers

    NASA Astrophysics Data System (ADS)

    Noordam, J. E.; Smirnov, O. M.

    2010-12-01

    Context. The formulation of the radio interferometer measurement equation (RIME) for a generic radio telescope by Hamaker et al. has provided us with an elegant mathematical apparatus for better understanding, simulation and calibration of existing and future instruments. The calibration of the new radio telescopes (LOFAR, SKA) would be unthinkable without the RIME formalism, and new software to exploit it. Aims: The MeqTrees software system is designed to implement numerical models, and to solve for arbitrary subsets of their parameters. It may be applied to many problems, but was originally geared towards implementing Measurement Equations in radio astronomy for the purposes of simulation and calibration. The technical goal of MeqTrees is to provide a tool for rapid implementation of such models, while offering performance comparable to hand-written code. We are also pursuing the wider goal of increasing the rate of evolution of radio astronomical software, by offering a tool that facilitates rapid experimentation, and exchange of ideas (and scripts). Methods: MeqTrees is implemented as a Python-based front-end called the meqbrowser, and an efficient (C++-based) computational back-end called the meqserver. Numerical models are defined on the front-end via a Python-based Tree Definition Language (TDL), then rapidly executed on the back-end. The use of TDL facilitates an extremely short turn-around time (hours rather than weeks or months) for experimentation with new ideas. This is also helped by unprecedented visualization capabilities for all final and intermediate results. A flexible data model and a number of important optimizations in the back-end ensures that the numerical performance is comparable to that of hand-written code. Results: MeqTrees is already widely used as the simulation tool for new instruments (LOFAR, SKA) and technologies (focal plane arrays). It has demonstrated that it can achieve a noise-limited dynamic range in excess of a million, on WSRT data. It is the only package that is specifically designed to handle what we propose to call third-generation calibration (3GC), which is needed for the new generation of giant radio telescopes, but can also improve the calibration of existing instruments.

  15. Optimizing python-based ROOT I/O with PyPy's tracing just-in-time compiler

    NASA Astrophysics Data System (ADS)

    Lavrijsen, Wim T. L. P.

    2012-12-01

    The Python programming language allows objects and classes to respond dynamically to the execution environment. Most of this, however, is made possible through language hooks which, by definition, cannot be optimized and thus tend to be slow. The PyPy implementation of Python includes a tracing just-in-time compiler (JIT), which allows similar dynamic responses but at the interpreter level rather than the application level. Therefore, it is possible to fully remove the hooks, leaving only the dynamic response, in the optimization stage for hot loops, if the types of interest are opened up to the JIT. A general opening up of types to the JIT, based on reflection information, has already been developed (cppyy). The work described in this paper takes it one step further by customizing access to ROOT I/O to the JIT, allowing for fully automatic optimizations.
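
    A brief sketch of that mechanism using cppyy's public API: a C++ class is defined and bound from reflection information, leaving a hot loop that a tracing JIT such as PyPy's can optimize through.

    ```python
    # Sketch of the cppyy mechanism: C++ types are bound from reflection
    # information, so the bindings are transparent to a tracing JIT.
    import cppyy

    cppyy.cppdef("""
    class Hit {
    public:
        Hit(double e) : fEnergy(e) {}
        double energy() const { return fEnergy; }
    private:
        double fEnergy;
    };
    """)

    from cppyy.gbl import Hit

    total = 0.0
    for i in range(1000):          # a "hot loop" the tracing JIT can optimize
        total += Hit(0.5 * i).energy()
    print(total)
    ```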

  16. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott

    2012-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the GPU accelerator compiler directives. We have implemented the GPU acceleration on a Core i7 gaming PC with an NVIDIA GTX 580 GPU. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. Optimization strategies and comparisons between DIRAC and the gaming PC will be presented. We will also discuss progress on optimizing the comprehensive three-dimensional general-geometry GEM code.

  17. Information Collection using Handheld Devices in Unreliable Networking Environments

    DTIC Science & Technology

    2014-06-01

    different types of mobile devices that connect wirelessly to a database 8 server. The actual backend database is not important to the mobile clients...Google’s infrastructure and local servers with MySQL and PostgreSQL on the backend (ODK 2014b). (2) Google Fusion Tables are used to do basic link...how we conduct business. Our requirements to share information do not change simply because there is little or no existing infrastructure in our

  18. Catalytic Ignition and Upstream Reaction Propagation in Monolith Reactors

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Dietrich, Daniel L.; Miller, Fletcher J.; T'ien, James S.

    2007-01-01

    Using numerical simulations, this work demonstrates a concept called back-end ignition for lighting-off and pre-heating a catalytic monolith in a power generation system. In this concept, a downstream heat source (e.g. a flame) or resistive heating in the downstream portion of the monolith initiates a localized catalytic reaction which subsequently propagates upstream and heats the entire monolith. The simulations used a transient numerical model of a single catalytic channel which characterizes the behavior of the entire monolith. The model treats both the gas and solid phases and includes detailed homogeneous and heterogeneous reactions. An important parameter in the model for back-end ignition is upstream heat conduction along the solid. The simulations used both dry and wet CO chemistry as a model fuel for the proof-of-concept calculations; the presence of water vapor can trigger homogenous reactions, provided that gas-phase temperatures are adequately high and there is sufficient fuel remaining after surface reactions. With sufficiently high inlet equivalence ratio, back-end ignition occurs using the thermophysical properties of both a ceramic and metal monolith (coated with platinum in both cases), with the heat-up times significantly faster for the metal monolith. For lower equivalence ratios, back-end ignition occurs without upstream propagation. Once light-off and propagation occur, the inlet equivalence ratio could be reduced significantly while still maintaining an ignited monolith as demonstrated by calculations using complete monolith heating.

  19. VEGAS: VErsatile GBT Astronomical Spectrometer

    NASA Astrophysics Data System (ADS)

    Bussa, Srikanth; VEGAS Development Team

    2012-01-01

    The National Science Foundation Advanced Technologies and Instrumentation (NSF-ATI) program is funding a new spectrometer backend for the Green Bank Telescope (GBT). This spectrometer is being built by the CICADA collaboration, a collaboration between the National Radio Astronomy Observatory (NRAO) and the Center for Astronomy Signal Processing and Electronics Research (CASPER) at the University of California, Berkeley. The backend is named the VErsatile GBT Astronomical Spectrometer (VEGAS) and will replace the capabilities of the existing spectrometers. This backend supports data processing from focal plane array systems. The spectrometer will be capable of processing up to 1.25 GHz bandwidth from 8 dual-polarized beams or a bandwidth of up to 10 GHz from a dual-polarized beam. The spectrometer will use 8-bit analog-to-digital converters (ADCs), which give a better dynamic range than existing GBT spectrometers. There will be 8 tunable digital sub-bands within the 1.25 GHz bandwidth, which will enhance the capability of simultaneous observation of multiple spectral transitions. The maximum spectral dump rate to disk will be about one dump every 0.5 msec. The vastly enhanced backend capabilities will support several science projects with the GBT. The projects include mapping the temperature and density structure of molecular clouds; searches for organic molecules in the interstellar medium; determination of the fundamental constants of our evolving Universe; red-shifted spectral features from galaxies across cosmic time; and a survey for pulsars in the extreme gravitational environment of the Galactic Center.

  20. SDAI: a key piece of software to manage the new wideband backend at Robledo

    NASA Astrophysics Data System (ADS)

    Rizzo, J. R.; Gutiérrez Bustos, M.; Kuiper, T. B. H.; Cernicharo, J.; Sotuela, I.; Pedreira, A.

    2012-09-01

    A joint collaborative project was recently developed to provide the Madrid Deep Space Communications Complex with a state-of-the-art wideband backend. This new backend provides from 100 MHz to 6 GHz of instantaneous bandwidth, and spectral resolutions from 6 to 200 kHz. The backend includes a new intermediate-frequency processor, as well as an FPGA-based FFT spectrometer, which manages thousands of spectroscopic channels in real time. All this equipment needs to be controlled and operated by common software, which must synchronize activities among the affected devices and with the observing program. The final output should be a calibrated spectrum, readable by standard radio astronomical tools for further processing. The software developed to this end is named the "Spectroscopic Data Acquisition Interface" (SDAI). SDAI is written in Python 2.5, using PyQt4 for the user interface. Through an Ethernet socket connection, SDAI receives astronomical information (source, frequencies, Doppler correction, etc.) and the antenna status from the observing program. It then synchronizes the observations at the required frequency by tuning the synthesizers through their USB ports; finally, SDAI controls the FFT spectrometers through UDP commands sent over sockets. Data are transmitted from the FFT spectrometers over TCP sockets and written as standard FITS files. In this paper we describe the modules built, depict a typical observing session, and show some astronomical results using SDAI.
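
    As a rough illustration of the socket pattern the abstract describes (control information in over TCP, spectrometer commands out over UDP, data back over TCP, results to FITS), here is a minimal sketch; host names, ports, and the command format are invented for the example.

      # Sketch of SDAI-style plumbing (endpoints and commands are made up).
      import socket
      import numpy as np
      from astropy.io import fits

      # Receive source/frequency/Doppler info from the observing program.
      ctrl = socket.create_connection(("observing-program.example", 5000))
      header = ctrl.recv(4096).decode()   # e.g. "SOURCE=...;FREQ=...;VDOP=..."

      # Tune and start an FFT spectrometer with a UDP command.
      udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      udp.sendto(b"START integration=1.0", ("fftspec.example", 6000))

      # Collect one 8192-channel float32 spectrum over TCP.
      data_sock = socket.create_connection(("fftspec.example", 6001))
      raw = bytearray()
      while len(raw) < 8192 * 4:
          chunk = data_sock.recv(65536)
          if not chunk:
              break
          raw.extend(chunk)

      # Write the spectrum as a standard FITS file.
      spectrum = np.frombuffer(bytes(raw), dtype="<f4")
      fits.PrimaryHDU(spectrum).writeto("scan0001.fits", overwrite=True)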

  1. Software Issues at the User Interface

    DTIC Science & Technology

    1991-05-01

    successful integration of parallel computers into mainstream scientific computing. Clearly a compiler is the most important software tool available to a...Computer Science University of Colorado Boulder, CO 80309 ABSTRACT We review software issues that are critical to the successful integration of parallel...The development of an optimizing compiler of this quality, addressing communication instructions as well as computational instructions is a major

  2. A New Data Acquisition Portal for the Sacramento River Settlement Contractors

    NASA Astrophysics Data System (ADS)

    Narlesky, P. E., C. A.; Williams, P. E., A. M.

    2017-12-01

    In 1964, the United States Bureau of Reclamation (Reclamation) executed settlement contracts with the Sacramento River Settlement Contractors (SRSC), entities which hold water rights along the Sacramento River with area of origin protection or that are senior to Reclamation's water rights for Shasta Reservoir. Shasta is the cornerstone of the federal Central Valley Project (CVP), one of the nation's largest multi-purpose water conservation programs. In order to optimize CVP operations for multiple beneficial uses including water supply, fisheries, water quality, and waterfowl habitat, the SRSC voluntarily agreed to adaptively manage diversions throughout the year in close coordination with Reclamation. MBK Engineers assists the SRSC throughout this process by collecting, organizing, compiling, and distributing diversion data to Reclamation and others involved in operational decisions related to Shasta Reservoir and the CVP. To improve and expand participation in diversions reporting, we have developed the SRSC Web Portal, which launches a data-entry dashboard for members of the SRSC to facilitate recording and transmittal of both predicted and observed monthly and daily flow diversion data. This cloud-hosted system leverages a combination of JavaScript interactive visualization libraries and a database-backed Python web framework to present streamlined data-entry forms and valuable SRSC program summary illustrations. SRSC program totals, which can now be aggregated through queries to the web-app's database backend, are used by Reclamation, SRSC, fish agencies, and others to inform operational decisions. By submitting diversion schedules and tracking actual diversions through the portal, contractors will also be directly contributing to the development of a richer and more consistently formatted historical record for demand hydrology in the Sacramento River Watershed; this may be useful in future water supply studies. Adoption of this technology will foster an increased appreciation for the historical record of individual and combined Sacramento River diversions relative to the overall system.
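
    The abstract does not name the web framework or schema; the sketch below shows the general shape of such a portal backend, assuming Flask and SQLite purely for illustration: one endpoint records a contractor's diversion report, and another aggregates program totals with a query.

      # Hypothetical diversion-reporting backend (framework and schema assumed).
      import sqlite3
      from flask import Flask, request, jsonify

      app = Flask(__name__)
      DB = "srsc.db"

      def db():
          conn = sqlite3.connect(DB)
          conn.execute("""CREATE TABLE IF NOT EXISTS diversions
                          (contractor TEXT, date TEXT, cfs REAL)""")
          return conn

      @app.route("/diversions", methods=["POST"])
      def record_diversion():
          # Contractors submit predicted or observed daily diversions here.
          row = request.get_json()
          with db() as conn:
              conn.execute("INSERT INTO diversions VALUES (?, ?, ?)",
                           (row["contractor"], row["date"], row["cfs"]))
          return jsonify(status="ok")

      @app.route("/totals/<date>")
      def program_total(date):
          # Program totals are aggregated by a query to the backend database.
          with db() as conn:
              (total,) = conn.execute(
                  "SELECT COALESCE(SUM(cfs), 0) FROM diversions WHERE date = ?",
                  (date,)).fetchone()
          return jsonify(date=date, total_cfs=total)

      if __name__ == "__main__":
          app.run()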

  3. Design and Implementation of A Backend Multiple-Processor Relational Data Base Computer System.

    DTIC Science & Technology

    1981-12-01

    propagated to other parts of the data base. Cost. As mentioned earlier, a primary motivation for the backend DBMS work is the development of an...uniquely identify the n-tuples of the relation is called the primary key. For example, in Figure 3, the primary key is NUMBER. A primary key is said to...identifying the tuple. For example, in Figure 3, (NUMBER,TITLE) would not be a nonredundant primary key for COURSE. A relation can contain more than one

  4. VLBI Digital-Backend Intercomparison Test Report

    NASA Technical Reports Server (NTRS)

    Whitney, Alan; Beaudoin, Christopher; Cappallo, Roger; Niell, Arthur; Petrachenko, Bill; Ruszczyk, Chester A.; Titus, Mike

    2013-01-01

    Issues related to digital-backend (DBE) systems can be difficult to evaluate in either local tests or actual VLBI experiments. The 2nd DBE intercomparison workshop at Haystack Observatory on 25-26 October 2012 provided a forum to explicitly address validation and interoperability issues among independent global developers of DBE equipment. This special report discusses the workshop. It identifies DBE systems that were tested at the workshop, describes the test objectives and procedures, and reports and discusses the results of the testing.

  5. Robust Speech Processing & Recognition: Speaker ID, Language ID, Speech Recognition/Keyword Spotting, Diarization/Co-Channel/Environmental Characterization, Speaker State Assessment

    DTIC Science & Technology

    2015-10-01

    Scoring, Gaussian Backend, etc.) as shown in Fig. 39. The methods in this domain also emphasized the ability to perform data purification for both...investigation using the same infrastructure was undertaken to explore Lombard effect “flavor” detection for improved speaker ID...The presence of...dimension selection and compared to a common N-gram frequency-based selection. 2.1.2: Exploration on NN/DBN backend: Since Deep Neural Networks (DNN) have

  6. Asymmetric Multilevel Outphasing (AMO): A New Architecture for All-Silicon mm-Wave Transmitter ICs

    DTIC Science & Technology

    2015-06-12

    power-amplifiers for mobile basestation infrastructure and handsets. NanoSemi Inc. designs linearization solutions for analog front-ends such as...toward flexible, multi-standard radio chips, increases the need for high-precision, high-throughput and energy-efficient backend processing. The desire...peak PAE is affected by less than 1% (46 mW/(46 mW + 1.8 W/0.4)) by this 64-QAM capable AMO SCS backend.

  7. An Analysis Platform for Mobile Ad Hoc Network (MANET) Scenario Execution Log Data

    DTIC Science & Technology

    2016-01-01

    these technologies. 4.1 Backend Technologies • Java 1.8 • mysql-connector-java-5.0.8.jar • Tomcat • VirtualBox • Kali MANET Virtual Machine 4.2...Frontend Technologies • LAMPP 4.3 Database • MySQL Server 5. Database The SEDAP database settings and structure are described in this section...contains all the backend Java functionality including the web services, should be placed in the webapps directory inside the Tomcat installation

  8. Power-Aware Compiler Controllable Chip Multiprocessor

    NASA Astrophysics Data System (ADS)

    Shikano, Hiroaki; Shirako, Jun; Wada, Yasutaka; Kimura, Keiji; Kasahara, Hironori

    A power-aware compiler controllable chip multiprocessor (CMP) is presented and its performance and power consumption are evaluated with the optimally scheduled advanced multiprocessor (OSCAR) parallelizing compiler. The CMP is equipped with power control registers that change clock frequency and power supply voltage to functional units including processor cores, memories, and an interconnection network. The OSCAR compiler carries out coarse-grain task parallelization of programs and reduces power consumption using architectural power control support and the compiler's power saving scheme. The performance evaluation shows that MPEG-2 encoding on the proposed CMP with four CPUs results in 82.6% power reduction in real-time execution mode with a deadline constraint on its sequential execution time. Furthermore, MP3 encoding on a heterogeneous CMP with four CPUs and four accelerators results in 53.9% power reduction at 21.1-fold speed-up in performance against its sequential execution in the fastest execution mode.

  9. Visualization for Hyper-Heuristics. Front-End Graphical User Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroenung, Lauren

    Modern society is faced with ever more complex problems, many of which can be formulated as generate-and-test optimization problems. General-purpose optimization algorithms are not well suited for real-world scenarios where many instances of the same problem class need to be repeatedly and efficiently solved because they are not targeted to a particular scenario. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario. While such automated design has great advantages, it can often be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues of usability by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics to support practitioners, as well as scientific visualization of the produced automated designs. My contributions to this project are exhibited in the user-facing portion of the developed system and the detailed scientific visualizations created from back-end data.

  10. Visualization for Hyper-Heuristics: Back-End Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Luke

    Modern society is faced with increasingly complex problems, many of which can be formulated as generate-and-test optimization problems. Yet, general-purpose optimization algorithms may sometimes require too much computational time. In these instances, hyper-heuristics may be used. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario, finding the solution significantly faster than its predecessor. However, it may be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics and an easy-to-understand scientific visualization for the produced solutions. To support the development of this GUI, my portion of the research involved developing algorithms that would allow for parsing of the data produced by the hyper-heuristics. This data would then be sent to the front-end, where it would be displayed to the end user.

  11. On algorithmic optimization of histogramming functions for GEM systems

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał D.; Czarski, Tomasz; Kolasinski, Piotr; Poźniak, Krzysztof T.; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech

    2015-09-01

    This article concerns optimization methods of data analysis for the X-ray GEM detector system. The offline analysis of collected samples was optimized for MATLAB computations. Functions compiled in the C language were used via the MEX library. Significant speedups were obtained both for the ordering preprocessing and for the histogramming of samples. The techniques used and the results obtained are presented.
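
    The reported speedup comes from moving interpreted per-sample loops into compiled (MEX/C) functions; the NumPy sketch below illustrates the same idea in Python terms, comparing a loop-based histogrammer against a vectorized one whose inner loop runs in compiled code (the data layout is invented, not the GEM system's).

      # Loop vs. compiled-kernel histogramming (illustrative data).
      import math
      import numpy as np

      rng = np.random.default_rng(0)
      channels = rng.integers(0, 256, size=100_000)     # fake channel IDs
      values = rng.normal(100.0, 15.0, size=100_000)    # fake pulse heights

      def histogram_loop(channels, values, nbins=256, lo=0.0, hi=200.0):
          # Interpreted per-sample loop: the unoptimized baseline.
          hist = np.zeros((256, nbins))
          width = (hi - lo) / nbins
          for c, v in zip(channels, values):
              b = math.floor((v - lo) / width)
              if 0 <= b < nbins:
                  hist[c, b] += 1
          return hist

      def histogram_fast(channels, values, nbins=256, lo=0.0, hi=200.0):
          # One vectorized call; the inner loop runs in compiled code,
          # much like the MEX functions in the paper.
          hist, _, _ = np.histogram2d(
              channels, values,
              bins=[np.arange(257), np.linspace(lo, hi, nbins + 1)])
          return hist

      assert np.allclose(histogram_loop(channels, values),
                         histogram_fast(channels, values))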

  12. Ada Compiler Validation Summary Report. Certificate Number: 920918S1. 11274 U.S. Navy Ada/M, Version 4.5 (/NO OPTIMIZE) VAX 8550/8600/8650 (Cluster) = Enhanced Processor (EP) AN/UYK-44 (Bare Board)

    DTIC Science & Technology

    1992-10-27

    Institute of Standards and Technology, Gaithersburg, MD, USA...Standard [Ada83] using the current Ada Compiler Validation Capability (ACVC). This Validation Summary Report (VSR) gives an account of the testing of...Compiler Options F-59 LINKER OPTIONS The linker options of this Ada implementation, as described in this

  13. BBIS: Beacon Bus Information System

    NASA Astrophysics Data System (ADS)

    Kasim, Shahreen; Hafit, Hanayanti; Pei Juin, Kong; Afizah Afif, Zehan; Hashim, Rathiah; Ruslai, Husni; Jahidin, Kamaruzzaman; Syafwan Arshad, Mohammad

    2016-11-01

    A lack of bus information (for example, bus timetables and bus status) and messy advertisements on the bulletin board at the bus stop have a negative impact on tourists. Therefore, a bulletin board with real-time bus information updates provides all the information needed, so that passengers can save time searching for bus information. Supported on Android and iOS, the Beacon Bus Information System (BBIS) provides bus information for the Batu Pahat and Kluang area. BBIS is a system that implements physical web technology and interaction on demand. It is built on Backend-as-a-Service, a cloud solution, with the Firebase non-relational database as its data persistence backend, and syncs with user clients in real time. People walk through the bus stop with a smart device and do not require any application. Bluetooth beacons are used to achieve the smart device's best data-sharing performance. IntelliJ IDEA 15 is one of the tools used to develop the BBIS system. A multi-language integrated development environment (IDE) supporting both the front end and the backend helped to speed up the integration process.

  14. Integrating RFID technique to design mobile handheld inventory management system

    NASA Astrophysics Data System (ADS)

    Huang, Yo-Ping; Yen, Wei; Chen, Shih-Chung

    2008-04-01

    An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader. The system identifies electronic tags on the properties and checks the property information in the back-end database server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end inventory database and assigns different levels of access privilege according to various user categories. In the back-end database server, to prevent improper or illegal accesses, the server not only stores the inventory database and user privilege information, but also keeps track of the user activities in the server including the login and logout time and location, the records of database accessing, and every modification of the tables. Some experimental results are presented to verify the applicability of the integrated RFID-based mobile handheld inventory management system.

  15. Medical Optimization Network for Space Telemedicine Resources

    NASA Technical Reports Server (NTRS)

    Rubin, D.; Shah, R. V.; Kerstman, E. L.; Reyes, D.; Mulcahy, R.; Antonsen, E.

    2017-01-01

    INTRODUCTION: Long-duration missions beyond low Earth orbit introduce new constraints to the space medical system. Beyond the traditional limitations in mass, power, and volume, consideration must be given to other factors such as the inability to evacuate to Earth, communication delays, and limitations in clinical skillsets. As NASA develops the medical system for an exploration mission, it must have an ability to evaluate the trade space of what resources will be most important. The Medical Optimization Network for Space Telemedicine Resources (MONSTR) was developed over the past year for this reason, and is now a system for managing data pertaining to medical resources and their relative importance when addressing medical conditions. METHODS: The MONSTR web application with a Microsoft SQL database backend was developed and made accessible to Tableau v9.3 for analysis and visualization. The database was initially populated with a list of medical conditions of concern for an exploration mission taken from the Integrated Medical Model (IMM), a probabilistic model designed to quantify in-flight medical risk. A team of physicians working within the Exploration Medical Capability Element of NASA's Human Research Program compiled a list of diagnostic and treatment medical resources required to address best- and worst-case scenarios of each medical condition using a terrestrial standard of care and entered this data into the system. This list included both tangible resources (e.g. medical equipment, medications) and intangible resources (e.g. clinical skills required to perform a procedure). The physician team then assigned criticality values to each instance of a resource, representing the importance of that resource to diagnosing or treating its associated condition(s). Medical condition probabilities of occurrence during a Mars mission were pulled from the IMM and imported into the MONSTR database for use within a resource criticality-weighting algorithm. DISCUSSION: The MONSTR tool is a novel approach to assessing the relative value of individual resources needed for the diagnosis and treatment of medical conditions. Future work will add resources for prevention and long-term care of these conditions. Once data collection is complete, MONSTR will provide the operational and research communities at NASA with information to support informed decisions regarding areas of research investment, future crew training, and medical supplies manifested as part of any exploration medical system.
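
    The criticality-weighting algorithm itself is not spelled out in the abstract; the sketch below shows one plausible form, weighting each resource's criticality by the probability of the conditions it addresses (all names and numbers are illustrative, not MONSTR's actual data).

      # Hypothetical probability-weighted resource ranking.
      p_condition = {                      # P(condition) during the mission
          "kidney stone": 0.02,
          "dental abscess": 0.05,
      }
      criticality = {                      # criticality per (resource, condition)
          ("ultrasound", "kidney stone"): 0.9,
          ("analgesics", "kidney stone"): 0.7,
          ("analgesics", "dental abscess"): 0.8,
          ("dental kit", "dental abscess"): 0.9,
      }

      def resource_weight(resource):
          """Sum of P(condition) * criticality over conditions the resource serves."""
          return sum(p_condition[cond] * crit
                     for (res, cond), crit in criticality.items()
                     if res == resource)

      for res in sorted({r for r, _ in criticality},
                        key=resource_weight, reverse=True):
          print(f"{res:12s} {resource_weight(res):.3f}")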

  16. OPeNDAP Server4: Building a High-Performance Server for the DAP by Leveraging Existing Software

    NASA Astrophysics Data System (ADS)

    Potter, N.; West, P.; Gallagher, J.; Garcia, J.; Fox, P.

    2006-12-01

    OPeNDAP has been working in conjunction with NCAR/ESSL/HAO to develop a modular, high-performance data server that will be the successor to the current OPeNDAP data server. The new server, called Server4, is really two servers: a 'Back-End' data server, which reads information from various types of data sources and packages the results in DAP objects; and a 'Front-End', which receives client DAP requests and then decides how to use features of the Back-End data server to build the correct responses. This architecture can be configured in several interesting ways: the Front- and Back-End components can be run on either the same or different machines, depending on security and performance needs; new Front-End software can be written to support other network data access protocols; and local applications can interact directly with the Back-End data server. This new server's Back-End component will use the server infrastructure developed by HAO for the Earth System Grid II project. Extensions needed to use it as part of the new OPeNDAP server were minimal. The HAO server was modified so that it loads 'data handlers' at run time. Each data handler module only needs to satisfy a simple interface, which both enables the existing data handlers written for the old OPeNDAP server to be used directly and simplifies writing new handlers from scratch. The Back-End server leverages high-performance features developed for the ESG II project, so applications that interact with it directly can read large volumes of data efficiently. The Front-End module of Server4 uses the Java Servlet system in place of the Common Gateway Interface (CGI) used in the past. New Front-End modules can be written to support different network data access protocols, so that the same server will ultimately be able to support more than the DAP/2.0 protocol. As an example, we will discuss a SOAP interface that is currently in development. In addition to support for DAP/2.0 and prototypical support for a SOAP interface, the new server includes support for the THREDDS cataloging protocol. THREDDS is tightly integrated into the Front-End of Server4. The Server4 Front-End can make full use of advanced THREDDS features such as attribute specification and inheritance, and custom catalogs which segue into automatically generated catalogs, while providing a default behavior which requires almost no catalog configuration.
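
    The run-time 'data handler' mechanism lends itself to a small sketch; Server4 implements it in its own languages, so the Python below only illustrates the plugin pattern under stated assumptions (module and class names are invented).

      # Plugin-style data handlers behind a minimal common interface.
      import importlib

      class Handler:
          """Interface every data-handler module must satisfy."""
          def can_handle(self, path): raise NotImplementedError
          def build_response(self, path): raise NotImplementedError

      class CsvHandler(Handler):           # example handler defined in-file
          def can_handle(self, path): return path.endswith(".csv")
          def build_response(self, path): return b"DAP-object-for-" + path.encode()

      def load_handlers(module_names):
          # Handler modules are discovered and loaded at run time.
          loaded = []
          for name in module_names:
              try:
                  loaded.append(importlib.import_module(name).Handler())
              except ImportError:
                  pass                     # handler not installed; skip it
          return loaded

      handlers = load_handlers(["netcdf_handler"]) + [CsvHandler()]

      def respond(path):
          for h in handlers:
              if h.can_handle(path):
                  return h.build_response(path)
          raise ValueError("no handler for " + path)

      print(respond("data/sst.csv"))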

  17. Interactive Voice/Web Response System in clinical research

    PubMed Central

    Ruikar, Vrishabhsagar

    2016-01-01

    Emerging technologies in the computer and telecommunication industries have eased access to computers through the telephone. An Interactive Voice/Web Response System (IxRS) is one of the user-friendly systems for end users, with complex and tailored programs at its backend. The backend programs are specially tailored for easy understanding by users. The clinical research industry has experienced a revolution in data capture methodologies over time. Over the past couple of decades, different systems have evolved toward emerging modern technologies and tools, for example, Electronic Data Capture, IxRS, electronic patient-reported outcomes, etc. PMID:26952178

  18. Interactive Voice/Web Response System in clinical research.

    PubMed

    Ruikar, Vrishabhsagar

    2016-01-01

    Emerging technologies in the computer and telecommunication industries have eased access to computers through the telephone. An Interactive Voice/Web Response System (IxRS) is one of the user-friendly systems for end users, with complex and tailored programs at its backend. The backend programs are specially tailored for easy understanding by users. The clinical research industry has experienced a revolution in data capture methodologies over time. Over the past couple of decades, different systems have evolved toward emerging modern technologies and tools, for example, Electronic Data Capture, IxRS, electronic patient-reported outcomes, etc.

  19. High-Speed, Low-Cost Workstation for Computation-Intensive Statistics. Phase 1

    DTIC Science & Technology

    1990-06-20

    routine implementation and performance. The two compiled versions given in the table were coded in an attempt to obtain an optimized compiled version...level statistics and linear algebra routines (BSAS and BLAS) that have been prototyped in this study. For each routine, both the C code (Turbo C...High-performance and low-cost

  20. Automated Work Package: Conceptual Design and Data Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al Rashdan, Ahmad; Oxstrand, Johanna; Agarwal, Vivek

    The automated work package (AWP) is one of the U.S. Department of Energy's (DOE) Light Water Reactor Sustainability Program efforts to enhance the safety and economics of the nuclear power industry. An AWP is an adaptive and interactive work package that intelligently drives the work process according to the plant condition, resource status, and user progress. The AWP aims to automate several manual tasks of the work process to enhance human performance and reduce human errors. Electronic work packages (eWPs), studied by the Electric Power Research Institute (EPRI), are work packages that rely to various extents on electronic data processing and presentation. AWPs are the future of eWPs. They are envisioned to incorporate the advanced technologies of the future, and thus address the unresolved deficiencies associated with eWPs in a nuclear power plant. In order to define the AWP, it is necessary to develop an ideal envisioned scenario of the future work process without any current technology restriction. The approach followed to develop this scenario is specific to every stage of the work process execution. The scenario development resulted in fifty advanced functionalities that can be part of the AWP. To rank the importance of these functionalities, a survey was conducted involving several U.S. nuclear utilities. The survey aimed at determining the current need of the nuclear industry with respect to the current work process, i.e., what the industry is satisfied with, and where the industry envisions potential for improvement. The survey evaluated the most promising functionalities resulting from the scenario development. The results demonstrated a significant desire to adopt the majority of these functionalities. The results of the survey are expected to drive the Idaho National Laboratory (INL) AWP research and development (R&D). In order to facilitate this mission, a prototype AWP is needed. Since the vast majority of earlier efforts focused on the frontend aspects of the AWP, the backend data architecture was researched and developed in this effort. The backend design involved data architecture aspects. It was realized through this effort that the key aspects of this design are hierarchy, data configuration and live information, data templates and instances, the flow of work package execution, the introduction of properties, and the means to interface the backend to the frontend. After the backend design was developed, a data structure was built to reflect the developed data architecture. The data structure was developed to accommodate the fifty functionalities identified by the envisioned scenario development. The data structure was evaluated by incorporating an example work order from the nuclear power industry. The implementation resulted in several optimization iterations of the data structure. In addition, the rearrangement of the work order information to fit the data structure highlighted several possibilities for improvement in the current work order design, and significantly reduced the size of the work order.

  1. Research interface on a programmable ultrasound scanner.

    PubMed

    Shamdasani, Vijay; Bae, Unmin; Sikdar, Siddhartha; Yoo, Yang Mo; Karadayi, Kerem; Managuli, Ravi; Kim, Yongmin

    2008-07-01

    Commercial ultrasound machines in the past did not provide ultrasound researchers with access to raw ultrasound data. Lack of this ability has impeded evaluation and clinical testing of novel ultrasound algorithms and applications. Recently, we developed a flexible ultrasound back-end where all the processing for the conventional ultrasound modes, such as B, M, color flow and spectral Doppler, was performed in software. The back-end has been incorporated into a commercial ultrasound machine, the Hitachi HiVision 5500. The goal of this work is to develop an ultrasound research interface on the back-end for acquiring raw ultrasound data from the machine. The research interface has been designed as a software module on the ultrasound back-end. To increase the amount of raw ultrasound data that can be spooled in the limited memory available on the back-end, we have developed a method that can losslessly compress the ultrasound data in real time. The raw ultrasound data could be obtained in any conventional ultrasound mode, including duplex and triplex modes. Furthermore, use of the research interface does not decrease the frame rate or otherwise affect the clinical usability of the machine. The lossless compression of the ultrasound data in real time can increase the amount of data spooled by approximately 2.3 times, thus allowing more than 6 s of raw ultrasound data to be acquired in all the modes. The interface has been used not only for early testing of new ideas with in vitro data from phantoms, but also for acquiring in vivo data for fine-tuning ultrasound applications and conducting clinical studies. We present several examples of how newer ultrasound applications, such as elastography, vibration imaging and 3D imaging, have benefited from this research interface. Since the research interface is entirely implemented in software, it can be deployed on existing HiVision 5500 ultrasound machines and may be easily upgraded in the future. The developed research interface can aid researchers in the rapid testing and clinical evaluation of new ultrasound algorithms and applications. Additionally, we believe that our approach would be applicable to designing research interfaces on other ultrasound machines.
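
    The machine's real-time codec is not described in the abstract; the sketch below only illustrates the bookkeeping involved, compressing a frame losslessly (zlib here, purely as a stand-in) and translating the achieved ratio into extra seconds of spooled data.

      # Lossless spooling arithmetic with a stand-in codec.
      import zlib
      import numpy as np

      rng = np.random.default_rng(1)
      # Fake, correlated RF-like frame so lossless compression gains something.
      frame = np.cumsum(rng.integers(-8, 8, size=(512, 2048)),
                        axis=1).astype(np.int16)

      raw = frame.tobytes()
      packed = zlib.compress(raw, 1)       # low compression level = fast
      ratio = len(raw) / len(packed)

      # Verify the round trip is truly lossless.
      restored = np.frombuffer(zlib.decompress(packed),
                               dtype=np.int16).reshape(frame.shape)
      assert np.array_equal(restored, frame)

      buffer_seconds = 2.6                 # hypothetical uncompressed budget
      print(f"ratio {ratio:.2f}x -> about {buffer_seconds * ratio:.1f} s spooled")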

  2. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  3. The software-defined fast post-processing for GEM soft x-ray diagnostics in the Tungsten Environment in Steady-state Tokamak thermal fusion reactor

    NASA Astrophysics Data System (ADS)

    Krawczyk, Rafał Dominik; Czarski, Tomasz; Linczuk, Paweł; Wojeński, Andrzej; Kolasiński, Piotr; Gąska, Michał; Chernyshova, Maryna; Mazon, Didier; Jardin, Axel; Malard, Philippe; Poźniak, Krzysztof; Kasprowicz, Grzegorz; Zabołotny, Wojciech; Kowalska-Strzeciwilk, Ewa; Malinowski, Karol

    2018-06-01

    This article presents the novel software-defined, server-based solutions that were introduced in the fast, real-time computation systems for soft X-ray diagnostics for the WEST (Tungsten Environment in Steady-state Tokamak) reactor in Cadarache, France. The objective of the research was to provide fast processing of data at high throughput and with low latencies for investigating the interplay between particle transport and magnetohydrodynamic activity. The long-term objective is to implement in the future a fast feedback signal in the reactor control mechanisms to sustain the fusion reaction. The implemented electronic measurement device is anticipated to be deployed in WEST. A standalone software-defined computation engine was designed to handle data collected at high rates in the server back-end of the system. Signals are obtained from the front-end field-programmable gate array mezzanine cards, which acquire and perform a selection from the gas electron multiplier detector. A fast, custom library for plasma diagnostics was written in C++. It originated from reference offline MATLAB implementations, which were redesigned for runtime analysis during the experiment in the novel online modes of operation. The implementation allowed benchmarking, evaluation, and optimization of plasma processing algorithms, with the possibility of checking consistency against the reference computations written in MATLAB. The back-end software and hardware architecture are presented with the data evaluation mechanisms. The online modes of operation for WEST are discussed. The results concerning the performance of the processing and the introduced functionality are presented.

  4. Development and Validation of High Performance Unshrouded Centrifugal Impeller

    NASA Technical Reports Server (NTRS)

    Chen, Wei-Chung; Williams, M.; Paris, John K.; Prueger, G. H.; Williams, Robert; Turner, James E. (Technical Monitor)

    2001-01-01

    The feasibility of using a two-stage unshrouded impeller turbopump to replace the current three-stage reusable launch vehicle engine shrouded impeller hydrogen pump has been evaluated from the standpoint of turbopump weight reduction and overall payload improvement. These advantages are a by-product of the higher tip speeds that an unshrouded impeller can sustain. The issues associated with the effect of unshrouded impeller tip clearance on pump efficiency and head have been evaluated with one-dimensional tools and full three-dimensional rotordynamic fluid reaction forces and coefficients have been established through time dependent computational fluid dynamics (CFD) simulation of the whole 360 degree impeller with different rotor eccentricities and whirling ratios. Unlike the shrouded impeller, the unshrouded impeller forces are evaluated as the sum of the pressure forces on the blade and the pressure forces on the hub using the CFD results. The turbopump axial thrust control has been optimized by adjusting the first stage impeller backend wear ring seal diameter and diverting the second stage backend balance piston flow to the proper location. The structural integrity associated with the high tip speed has been checked by analyzing a 3D-Finite Element Model at maximum design conditions (6% higher than the design speed). This impeller was fabricated and tested in the NASA Marshall Space Flight Center water-test rig. The experimental data will be compared with the analytical predictions and presented in another paper. The experimental data provides validation data for the numerical design and analysis methodology. The validated numerical methodology can be used to help design different unshrouded impeller configurations.

  5. Compiler-Driven Performance Optimization and Tuning for Multicore Architectures

    DTIC Science & Technology

    2015-04-10

    develop a powerful system for auto-tuning of library routines and compute-intensive kernels, driven by the Pluto system for multicores that we are developing. The work here is motivated by recent advances in two major areas of...automatic C-to-CUDA code generator using a polyhedral compiler transformation framework. We have used and adapted PLUTO (our state-of-the-art tool

  6. A ROSE-based OpenMP 3.0 Research Compiler Supporting Multiple Runtime Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D; Panas, T

    2010-01-25

    OpenMP is a popular and evolving programming model for shared-memory platforms. It relies on compilers for optimal performance and to target modern hardware architectures. A variety of extensible and robust research compilers are key to OpenMP's sustainable success in the future. In this paper, we present our efforts to build an OpenMP 3.0 research compiler for C, C++, and Fortran using the ROSE source-to-source compiler framework. Our goal is to support OpenMP research for ourselves and others. We have extended ROSE's internal representation to handle all of the OpenMP 3.0 constructs and facilitate their manipulation. Since OpenMP research is often complicated by the tight coupling of the compiler translations and the runtime system, we present a set of rules to define a common OpenMP runtime library (XOMP) on top of multiple runtime libraries. These rules additionally define how to build a set of translations targeting XOMP. Our work demonstrates how to reuse OpenMP translations across different runtime libraries. This work simplifies OpenMP research by decoupling the problematic dependence between the compiler translations and the runtime libraries. We present an evaluation of our work by demonstrating an analysis tool for OpenMP correctness. We also show how XOMP can be defined using both GOMP and Omni and present comparative performance results against other OpenMP compilers.

  7. A UNIX-based real-time data acquisition system for microprobe analysis using an advanced X11 window toolkit

    NASA Astrophysics Data System (ADS)

    Kramer, J. L. A. M.; Ullings, A. H.; Vis, R. D.

    1993-05-01

    A real-time data acquisition system for microprobe analysis has been developed at the Free University of Amsterdam. The system is composed of two parts: a front-end real-time and a back-end monitoring system. The front-end consists of a VMEbus based system which reads out a CAMAC crate. The back-end is implemented on a Sun work station running the UNIX operating system. This separation allows the integration of a minimal, and consequently very fast, real-time executive within the sophisticated possibilities of advanced UNIX work stations.

  8. JANUS: A Compilation System for Balancing Parallelism and Performance in OpenVX

    NASA Astrophysics Data System (ADS)

    Omidian, Hossein; Lemieux, Guy G. F.

    2018-04-01

    Embedded systems typically do not have enough on-chip memory for an entire image buffer. Programming systems like OpenCV operate on entire image frames at each step, making them use excessive memory bandwidth and power. In contrast, the paradigm used by OpenVX is much more efficient: it uses image tiling, and the compilation system is allowed to analyze and optimize the operation sequence, specified as a compute graph, before doing any pixel processing. In this work, we are building a compilation system for OpenVX that can analyze and optimize the compute graph to take advantage of parallel resources in many-core systems or FPGAs. Using a database of prewritten OpenVX kernels, it automatically adjusts the image tile size and uses kernel duplication and coalescing to meet a defined area (resource) target, or to meet a specified throughput target. This allows a single compute graph to target implementations with a wide range of performance needs or capabilities, e.g. from handheld to datacenter, that use minimal resources and power to reach the performance target.

  9. The Optimization of Automatically Generated Compilers.

    DTIC Science & Technology

    1987-01-01

    than their procedural counterparts, and are also easier to analyze for storage optimizations; (2) AGs can be algorithmically checked to be non-circular...Providing algorithms to move the storage for many attributes from the structure tree into global stacks and variables...Creating AEs which build and...Partitioning algorithm

  10. HAL/S-FC and HAL/S-360 compiler system program description

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The compiler is a large multi-phase design and can be broken into four phases: Phase 1 inputs the source language and does a syntactic and semantic analysis, generating the source listing, a file of instructions in an internal format (HALMAT), and a collection of tables to be used in subsequent phases. Phase 1.5 massages the code produced by Phase 1, performing machine-independent optimization. Phase 2 inputs the HALMAT produced by Phase 1 and outputs machine language object modules in a form suitable for the OS-360 or FCOS linkage editor. Phase 3 produces the SDF tables. The four phases described are written in XPL, a language specifically designed for compiler implementation. In addition to the compiler, there is a large library containing all the routines that can be explicitly called by the source language programmer plus a large collection of routines for implementing various facilities of the language.

  11. Tectonic evaluation of the Nubian shield of Northeastern Sudan using thematic mapper imagery

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Bechtel is nearing completion of a one-year program that uses digitally enhanced LANDSAT Thematic Mapper (TM) data to compile the first comprehensive regional tectonic map of the Proterozoic Nubian Shield exposed in the northern Red Sea Hills of northeastern Sudan. The status of the significant objectives of this study is given. Pertinent published and unpublished geologic literature and maps of the northern Red Sea Hills were reviewed to establish the geologic framework of the region. Thematic Mapper imagery was processed for optimal base-map enhancements. Photo mosaics of enhanced images to serve as base maps for compilation of geologic information were completed. Interpretation of TM imagery to define and delineate structural and lithologic provinces was completed. Geologic information (petrologic and radiometric data) from the literature review was compiled onto base-map overlays. Evaluation of the tectonic evolution of the Nubian Shield based on the image interpretation and the compiled tectonic maps is continuing.

  12. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.

    PubMed

    Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.

  13. Towards Implementation of a Generalized Architecture for High-Level Quantum Programming Language

    NASA Astrophysics Data System (ADS)

    Ameen, El-Mahdy M.; Ali, Hesham A.; Salem, Mofreh M.; Badawy, Mahmoud

    2017-08-01

    This paper investigates a novel architecture for the problem of quantum computer programming. A generalized architecture for a high-level quantum programming language has been proposed. Therefore, the evolution from complicated quantum-based programming to high-level, quantum-independent programming will be achieved. The proposed architecture receives high-level source code and automatically transforms it into the equivalent quantum representation. This architecture involves two layers: the programmer layer and the compilation layer. These layers have been implemented as three main stages: the pre-classification, classification, and post-classification stages, respectively. The basic building block of each stage has been divided into subsequent phases, and each phase has been implemented to perform the required transformations from one representation to another. A verification process using a case study investigated the ability of the compiler to perform all transformation processes. Experimental results showed that the proposed compiler achieves a correspondence correlation coefficient of about R ≈ 1 between outputs and targets. A clear gain was also obtained in the time consumed by the optimization process compared to other techniques: in online optimization, the time consumed increases exponentially with the accuracy required, whereas in the proposed offline optimization it increases only gradually.

  14. Scalable Node Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drotar, Alexander P.; Quinn, Erin E.; Sutherland, Landon D.

    2012-07-30

    Project description is: (1) build a high-performance computer; and (2) create a tool to monitor node applications in the Component Based Tool Framework (CBTF) using code from the Lightweight Data Metric Service (LDMS). The importance of this project is that: (1) there is a need for a scalable, parallel tool to monitor nodes on clusters; and (2) new LDMS plugins need to be easily added to the tool. CBTF stands for Component Based Tool Framework. It's scalable and adjusts to different topologies automatically. It uses the MRNet (Multicast/Reduction Network) mechanism for information transport. CBTF is flexible and general enough to be used for any tool that needs to do a task on many nodes. Its components are reusable and 'EASILY' added to a new tool. There are three levels of CBTF: (1) the frontend node, which interacts with users; (2) filter nodes, which filter or concatenate information from backend nodes; and (3) backend nodes, where the actual work of the tool is done. LDMS stands for Lightweight Data Metric Services. It's a tool used for monitoring nodes. Ltool is the name of the tool we derived from LDMS. It's dynamically linked and includes the following components: Vmstat, Meminfo, Procinterrupts, and more. It works as follows: the Ltool command is run on the frontend node; Ltool collects information from the backend nodes; backend nodes send information to the filter nodes; and filter nodes concatenate information and send it to a database on the frontend node. Ltool is a useful tool when it comes to monitoring nodes on a cluster because the overhead involved with running the tool is not particularly high and it will automatically scale to any size cluster.

  15. Development of public science archive system of Subaru Telescope

    NASA Astrophysics Data System (ADS)

    Baba, Hajime; Yasuda, Naoki; Ichikawa, Shin-Ichi; Yagi, Masafumi; Iwamoto, Nobuyuki; Takata, Tadafumi; Horaguchi, Toshihiro; Taga, Masatochi; Watanabe, Masaru; Okumura, Shin-Ichiro; Ozawa, Tomohiko; Yamamoto, Naotaka; Hamabe, Masaru

    2002-09-01

    We have developed a public science archive system, the Subaru-Mitaka-Okayama-Kiso Archive system (SMOKA), as a successor to the Mitaka-Okayama-Kiso Archive (MOKA) system. SMOKA provides access to the public data of the Subaru Telescope, the 188 cm telescope at Okayama Astrophysical Observatory, and the 105 cm Schmidt telescope at Kiso Observatory of the University of Tokyo. Since 1997, we have worked to compile a dictionary of FITS header keywords. The completion of the dictionary enabled us to construct a unified public archive of the data obtained with various instruments at the telescopes. SMOKA has two kinds of user interfaces: Simple Search and Advanced Search. Novices can search data by simply selecting the name of the target with the Simple Search interface. Experts would prefer to set detailed constraints on the query, using the Advanced Search interface. In order to improve the efficiency of searching, several new features are implemented, such as archive status plots, calibration data search, an annotation system, and an improved Quick Look Image browsing system. We can efficiently develop and operate SMOKA by adopting a three-tier model for the system. Java servlets and Java Server Pages (JSP) are useful to separate the front-end presentation from the middle and back-end tiers.

  16. Database-driven web interface automating gyrokinetic simulations for validation

    NASA Astrophysics Data System (ADS)

    Ernst, D. R.

    2010-11-01

    We are developing a web interface to connect plasma microturbulence simulation codes with experimental data. The website automates the preparation of gyrokinetic simulations utilizing plasma profile and magnetic equilibrium data from TRANSP analysis of experiments, read from MDSplus over the internet. This database-driven tool saves user sessions, allowing searches of previous simulations, which can be restored to repeat the same analysis for a new discharge. The website includes a multi-tab, multi-frame, publication-quality Java plotter, Webgraph, developed as part of this project. Input files can be uploaded as templates and edited with context-sensitive help. The website creates inputs for GS2 and GYRO using a well-tested and verified back-end, in use for several years for the GS2 code [D. R. Ernst et al., Phys. Plasmas 11(5) 2637 (2004)]. A centralized website has the advantage that users receive bug fixes instantaneously, while avoiding the duplicated effort of local compilations. Possible extensions to the database to manage run outputs, toward prototyping for the Fusion Simulation Project, are envisioned. Much of the web development utilized support from the DoE National Undergraduate Fellowship program [e.g., A. Suarez and D. R. Ernst, http://meetings.aps.org/link/BAPS.2005.DPP.GP1.57].
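
    A fetch of the kind the site automates can be sketched with the MDSplus Python API; the server, tree, and node paths below are placeholders, not the site's actual configuration.

      # Illustrative MDSplus read of TRANSP output for one discharge.
      from MDSplus import Connection

      conn = Connection("mds.example.org")      # remote MDSplus server
      conn.openTree("transp", 123456)           # TRANSP run for a discharge

      te = conn.get("\\TRANSP::TOP.OUTPUTS:TE").data()            # T_e profile
      time = conn.get("dim_of(\\TRANSP::TOP.OUTPUTS:TE)").data()  # time base

      # Profiles like these would be written into GS2/GYRO input templates.
      print(te.shape, time.shape)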

  17. Pythran: enabling static optimization of scientific Python programs

    NASA Astrophysics Data System (ADS)

    Guelton, Serge; Brunet, Pierrick; Amini, Mehdi; Merlini, Adrien; Corbillon, Xavier; Raynaud, Alan

    2015-01-01

    Pythran is an open source static compiler that turns modules written in a subset of Python language into native ones. Assuming that scientific modules do not rely much on the dynamic features of the language, it trades them for powerful, possibly inter-procedural, optimizations. These optimizations include detection of pure functions, temporary allocation removal, constant folding, Numpy ufunc fusion and parallelization, explicit thread-level parallelism through OpenMP annotations, false variable polymorphism pruning, and automatic vector instruction generation such as AVX or SSE. In addition to these compilation steps, Pythran provides a C++ runtime library that leverages the C++ STL to provide generic containers, and the Numeric Template Toolbox for Numpy support. It takes advantage of modern C++11 features such as variadic templates, type inference, move semantics and perfect forwarding, as well as classical idioms such as expression templates. Unlike the Cython approach, Pythran input code remains compatible with the Python interpreter. Output code is generally as efficient as the annotated Cython equivalent, if not more, but without the backward compatibility loss.
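
    A minimal Pythran module looks like ordinary Python plus an export comment; the function below is a toy example, compiled with the pythran command-line tool yet still runnable under the plain interpreter.

      #pythran export dprod(float64[:], float64[:])
      # Toy module: `pythran dprod.py` builds a native extension; the same
      # file still runs unchanged under CPython.
      import numpy as np

      def dprod(u, v):
          # Pythran can fuse this ufunc expression and vectorize it.
          return np.sum(u * v)

      if __name__ == "__main__":
          a = np.ones(1000)
          print(dprod(a, a))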

  18. Computer Language For Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.; Lucas, Stephen H.

    1991-01-01

    SOL is a computer language geared to the solution of design problems. It includes the mathematical modeling and logical capabilities of a computer language like FORTRAN and also includes the additional power of nonlinear mathematical programming methods at the language level. The SOL compiler takes SOL-language statements and generates equivalent FORTRAN code and system calls. It provides syntactic and semantic checking for recovery from errors and provides detailed reports containing cross-references to show where each variable is used. SOL is implemented on VAX/VMS computer systems and requires the VAX FORTRAN compiler to produce executable programs.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Lawrence

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files for the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, being implied by the run currently being analyzed and the environment, relieving developers from implementing such details.

  20. The JANA calibrations and conditions database API

    NASA Astrophysics Data System (ADS)

    Lawrence, David

    2010-04-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files for the backend. A Web Service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, being implied by the run currently being analyzed and the environment, relieving developers from implementing such details.

  1. Finite element analysis of pectus carinatum surgical correction via a minimally invasive approach.

    PubMed

    Neves, Sara C; Pinho, A C M; Fonseca, Jaime C; Rodrigues, Nuno F; Henriques-Coelho, Tiago; Correia-Pinto, Jorge; Vilaça, João L

    2015-01-01

    Pectus carinatum (PC) is a chest deformity caused by a disproportionate growth of the costal cartilages compared to the bony thoracic skeleton, pushing the sternum forward, which leads to its protrusion. There has been a growing interest in using the 'reversed Nuss' technique as a minimally invasive procedure for PC surgical correction. A corrective bar is introduced between the skin and the thoracic cage and positioned on top of the sternum's highest protrusion area for continuous pressure. Then, it is fixed to the ribs and kept implanted for about 2-3 years. The purpose of this work was to (a) assess the stress distribution on the thoracic cage that arises from the procedure, and (b) investigate the impact of different positioning of the corrective bar along the sternum. The higher stresses were generated on the 4th, 5th and 6th ribs' backend, supporting the hypothesis of pectus deformity correction-induced scoliosis. The different bar positions originated different stresses on the ribs' backend. The bar position that led to the lowest stresses generated on the ribs' backend was the one that also led to the smallest sternum displacement. However, this may be preferred, as the risk of induced scoliosis is lowered.

  2. Designing and Developing a NASA Research Projects Knowledge Base and Implementing Knowledge Management and Discovery Techniques

    NASA Astrophysics Data System (ADS)

    Dabiru, L.; O'Hara, C. G.; Shaw, D.; Katragadda, S.; Anderson, D.; Kim, S.; Shrestha, B.; Aanstoos, J.; Frisbie, T.; Policelli, F.; Keblawi, N.

    2006-12-01

    The Research Project Knowledge Base (RPKB) is currently being designed and will be implemented in a manner that is fully compatible and interoperable with the enterprise architecture tools developed to support NASA's Applied Sciences Program. Through user needs assessment and collaboration with Stennis Space Center, Goddard Space Flight Center, and NASA DEVELOP staff, insight into information needs for the RPKB was gathered from across NASA scientific communities of practice. To enable efficient, consistent, standard, structured, and managed data entry and research-results compilation, a prototype RPKB has been designed and fully integrated with the existing NASA Earth Science Systems Components database. The RPKB will compile research project and keyword information of relevance to the six major science focus areas, the 12 national applications, and the Global Change Master Directory (GCMD). The RPKB will include information about projects awarded from NASA research solicitations, project investigator information, research publications, NASA data products employed, and model or decision support tools used or developed, as well as new data product information. The RPKB will be developed in a multi-tier architecture that will include a SQL Server relational database backend, middleware, and front-end client interfaces for data entry. The purpose of this project is to intelligently harvest the results of research sponsored by the NASA Applied Sciences Program and related research program results. We present various approaches for a wide spectrum of knowledge discovery of research results, publications, projects, etc. from the NASA Systems Components database and global information systems, and show how this is implemented in a SQL Server database. The application of knowledge discovery is useful for intelligent query answering and multiple-layered database construction. Using advanced EA tools such as the Earth Science Architecture Tool (ESAT), RPKB will enable NASA and partner agencies to efficiently identify significant results, and principal investigators to formulate experiment directions for new proposals.

  3. START: a system for flexible analysis of hundreds of genomic signal tracks in few lines of SQL-like queries.

    PubMed

    Zhu, Xinjie; Zhang, Qiang; Ho, Eric Dun; Yu, Ken Hung-On; Liu, Chris; Huang, Tim H; Cheng, Alfred Sze-Lok; Kao, Ben; Lo, Eric; Yip, Kevin Y

    2017-09-22

    A genomic signal track is a set of genomic intervals associated with values of various types, such as measurements from high-throughput experiments. Analysis of signal tracks requires complex computational methods, which often make the analysts focus too much on the detailed computational steps rather than on their biological questions. Here we propose Signal Track Query Language (STQL) for simple analysis of signal tracks. It is a Structured Query Language (SQL)-like declarative language, which means one only specifies what computations need to be done but not how these computations are to be carried out. STQL provides a rich set of constructs for manipulating genomic intervals and their values. To run STQL queries, we have developed the Signal Track Analytical Research Tool (START, http://yiplab.cse.cuhk.edu.hk/start/ ), a system that includes a Web-based user interface and a back-end execution system. The user interface helps users select data from our database of around 10,000 commonly-used public signal tracks, manage their own tracks, and construct, store and share STQL queries. The back-end system automatically translates STQL queries into optimized low-level programs and runs them on a computer cluster in parallel. We use STQL to perform 14 representative analytical tasks. By repeating these analyses using bedtools, Galaxy and custom Python scripts, we show that the STQL solution is usually the simplest, and the parallel execution achieves significant speed-up with large data files. Finally, we describe how a biologist with minimal formal training in computer programming self-learned STQL to analyze DNA methylation data we produced from 60 pairs of hepatocellular carcinoma (HCC) samples. Overall, STQL and START provide a generic way for analyzing a large number of genomic signal tracks in parallel easily.
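
    The core operations STQL provides declarative constructs for are manipulations of genomic intervals, such as overlap joins between tracks. A plain-Python sketch of that computation (illustrative only: this is not STQL syntax, and it assumes the intervals within each track are sorted and non-overlapping):

        def overlap_join(a, b):
            """Return pairs of overlapping intervals from two signal tracks on
            one chromosome; each track is a sorted list of (start, end, value)
            tuples with non-overlapping intervals."""
            out, j = [], 0
            for s1, e1, v1 in a:
                # Skip b-intervals that end before this a-interval starts.
                while j < len(b) and b[j][1] <= s1:
                    j += 1
                # Collect b-intervals starting before this a-interval ends.
                k = j
                while k < len(b) and b[k][0] < e1:
                    out.append(((s1, e1, v1), b[k]))
                    k += 1
            return out

        # e.g. overlap_join([(0, 10, 1.0)], [(5, 15, 2.0)]) -> one overlapping pair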

  4. KMCLib: A general framework for lattice kinetic Monte Carlo (KMC) simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2014-09-01

    KMCLib is a general framework for lattice kinetic Monte Carlo (KMC) simulations. The program can handle simulations of the diffusion and reaction of millions of particles in one, two, or three dimensions, and is designed to be easily extended and customized by the user to allow for the development of complex custom KMC models for specific systems without having to modify the core functionality of the program. Analysis modules and on-the-fly elementary step diffusion rate calculations can be implemented as plugins following a well-defined API. The plugin modules are loosely coupled to the core KMCLib program via the Python scripting language. KMCLib is written as a Python module with a backend C++ library. After initial compilation of the backend library, KMCLib is used as a Python module; input to the program is given as a Python script executed using a standard Python interpreter. We give a detailed description of the features and implementation of the code and demonstrate its scaling behavior and parallel performance with a simple one-dimensional A-B-C lattice KMC model and a more complex three-dimensional lattice KMC model of oxygen-vacancy diffusion in a fluorite structured metal oxide. KMCLib can keep track of individual particle movements and includes tools for mean square displacement analysis, and is therefore particularly well suited for studying diffusion processes at surfaces and in solids.
    Catalogue identifier: AESZ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AESZ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 49 064
    No. of bytes in distributed program, including test data, etc.: 1 575 172
    Distribution format: tar.gz
    Programming language: Python and C++.
    Computer: Any computer that can run a C++ compiler and a Python interpreter.
    Operating system: Tested on Ubuntu 12.04 LTS, CentOS release 5.9, Mac OS X 10.5.8 and Mac OS X 10.8.2, but should run on any system with a C++ compiler, MPI and a Python interpreter.
    Has the code been vectorized or parallelized?: Yes. From one to hundreds of processors depending on the type of input and simulation.
    RAM: From a few megabytes to several gigabytes depending on input parameters and the size of the system to simulate.
    Classification: 4.13, 16.13.
    External routines: KMCLib uses an external Mersenne Twister pseudo random number generator that is included in the code. A Python 2.7 interpreter and a standard C++ runtime library are needed to run the serial version of the code. For running the parallel version an MPI implementation is needed, such as e.g. MPICH from http://www.mpich.org or Open-MPI from http://www.open-mpi.org. SWIG (obtainable from http://www.swig.org/) and CMake (obtainable from http://www.cmake.org/) are needed for building the backend module, Sphinx (obtainable from http://sphinx-doc.org) for building the documentation and CPPUNIT (obtainable from http://sourceforge.net/projects/cppunit/) for building the C++ unit tests.
    Nature of problem: Atomic scale simulation of slowly evolving dynamics is a great challenge in many areas of computational materials science and catalysis. When the rare-events dynamics of interest is orders of magnitude slower than the typical atomic vibrational frequencies, a straightforward propagation of the equations of motion for the particles in the simulation cannot reach time scales of relevance for modeling the slow dynamics.
    Solution method: KMCLib provides an implementation of the kinetic Monte Carlo (KMC) method that solves the slow dynamics problem by utilizing the separation of time scales between fast vibrational motion and the slowly evolving rare-events dynamics. Only the latter is treated explicitly, and the system is simulated as jumping between fully equilibrated local energy minima on the slow-dynamics potential energy surface.
    Restrictions: KMCLib implements the lattice KMC method and is as such restricted to geometries that can be expressed on a grid in space.
    Unusual features: KMCLib has been designed to be easily customized, to allow for user-defined functionality and integration with other codes. The user can define her own on-the-fly rate calculator via a Python API, so that site-specific elementary process rates, or rates depending on long-range interactions or complex geometrical features, can easily be included. KMCLib also allows for on-the-fly analysis with user-defined analysis modules. KMCLib can keep track of individual particle movements and includes tools for mean square displacement analysis, and is therefore particularly well suited for studying diffusion processes at surfaces and in solids.
    Additional comments: The full documentation of the program is distributed with the code and can also be found at http://www.github.com/leetmaa/KMCLib/manual
    Running time: From a few seconds to several days depending on the type of simulation and input parameters.
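
    For orientation, a library-independent sketch of the rejection-free KMC selection step that frameworks of this kind implement (generic algorithm only, not KMCLib's actual Python API; the process ids and rates are illustrative):

        import math, random

        def kmc_step(rates, t):
            """One rejection-free KMC step: choose a process with probability
            proportional to its rate, then advance time by an exponentially
            distributed waiting time. `rates` maps process id -> rate (1/s)."""
            total = sum(rates.values())
            r = random.random() * total
            acc = 0.0
            for proc, rate in rates.items():
                acc += rate
                if r < acc:
                    chosen = proc
                    break
            dt = -math.log(1.0 - random.random()) / total  # avoids log(0)
            return chosen, t + dt

        # e.g. process, t = kmc_step({"hop_left": 1e6, "hop_right": 1e6}, 0.0)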

  5. The preliminary SOL (Sizing and Optimization Language) reference manual

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1989-01-01

    The Sizing and Optimization Language (SOL), a high-level, special-purpose computer language, has been developed to expedite the application of numerical optimization to design problems and to make the process less error-prone. This document is a reference manual for those wishing to write SOL programs. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler and runtime library routines. An overview of SOL appears in NASA TM 100565.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bales, Benjamin B; Barrett, Richard F

    In almost all modern scientific applications, developers achieve the greatest performance gains by tuning algorithms, communication systems, and memory access patterns, while leaving low level instruction optimizations to the compiler. Given the increasingly varied and complicated x86 architectures, the value of these optimizations is unclear, and, due to time and complexity constraints, it is difficult for many programmers to experiment with them. In this report we explore the potential gains of these 'last mile' optimization efforts on an AMD Barcelona processor, providing readers with relevant information so that they can decide whether investment in the presented optimizations is worthwhile.

  7. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hause, Benjamin; Parker, Scott; Chen, Yang

    2013-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the OpenACC compiler directives and CUDA Fortran. A mixed implementation of both OpenACC and CUDA is demonstrated. CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third-generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10x or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speed-ups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed. Optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three-dimensional general-geometry GEM code.

  8. Combining constraint satisfaction and local improvement algorithms to construct anaesthetists' rotas

    NASA Technical Reports Server (NTRS)

    Smith, Barbara M.; Bennett, Sean

    1992-01-01

    A system is described which was built to compile weekly rotas for the anaesthetists in a large hospital. The rota compilation problem is an optimization problem (the number of tasks which cannot be assigned to an anaesthetist must be minimized) and was formulated as a constraint satisfaction problem (CSP). The forward checking algorithm is used to find a feasible rota, but because of the size of the problem, it cannot find an optimal (or even a good enough) solution in an acceptable time. Instead, an algorithm was devised which makes local improvements to a feasible solution. The algorithm makes use of the constraints as expressed in the CSP to ensure that feasibility is maintained, and produces very good rotas which are being used by the hospital involved in the project. It is argued that formulation as a constraint satisfaction problem may be a good approach to solving discrete optimization problems, even if the resulting CSP is too large to be solved exactly in an acceptable time. A CSP algorithm may be able to produce a feasible solution which can then be improved, giving a good, if not provably optimal, solution.
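
    A generic sketch, in Python, of the "find a feasible solution, then make constraint-preserving local improvements" loop the paper describes (the helper callables are hypothetical placeholders, not the authors' implementation):

        def improve(rota, cost, neighbours, is_feasible):
            """Greedy local improvement: accept any neighbouring rota that
            remains feasible and lowers the number of unassigned tasks.
            `neighbours` yields candidate rotas from small local changes."""
            best, best_cost = rota, cost(rota)
            improved = True
            while improved:
                improved = False
                for cand in neighbours(best):
                    if is_feasible(cand) and cost(cand) < best_cost:
                        best, best_cost = cand, cost(cand)
                        improved = True
                        break  # restart the scan from the improved rota
            return best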

  9. Further developments in generating type-safe messaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neswold, R.; King, C.; /Fermilab

    2011-11-01

    At ICALEPCS 09, we introduced a source code generator that allows processes to communicate safely using data types native to each host language. In this paper, we discuss further development that has occurred since the conference in Kobe, Japan, including the addition of three more client languages, an optimization in network packet size and the addition of a new protocol data type. The protocol compiler is continuing to prove itself as an easy and robust way to get applications written in different languages hosted on different computer architectures to communicate. We have two active Erlang projects that are using the protocol compiler to access ACNET data at high data rates. We also used the protocol compiler output to deliver ACNET data to an iPhone/iPad application. Since it takes an average of two weeks to support a new language, we're willing to expand the protocol compiler to support new languages that our community uses.

  10. Web-Accessible Scientific Workflow System for Performance Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roelof Versteeg; Roelof Versteeg; Trevor Rowe

    2006-03-01

    We describe the design and implementation of a web-accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic JavaScript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back-end using web services. This environment allows for reproducible, transparent result generation by a diverse user base. It has been implemented for several monitoring systems with different degrees of complexity.

  11. 15 pixels digital autocorrelation spectrometer system

    NASA Astrophysics Data System (ADS)

    Lee, Changhoon; Kim, Hyo-Ryung; Kim, Kwang-Dong; Chung, Mun-Hee; Timoc, C.

    2006-06-01

    This paper describes the system configuration and some performance test results of the 15-pixel digital autocorrelation spectrometer to be used at the Taeduk Radio Astronomy Observatory (TRAO) of Korea. The autocorrelation spectrometer is enclosed in a 3-slot VXI module and is controlled via a USB port by a backend PC. The spectrometer system consists of a 4 band-pass filter unit, the digitizer, the 512-lag correlator, the clock distribution unit, and the USB controller. We also describe the frequency accuracy and the root-mean-square noise characteristics of the spectrometer. After a calibration procedure, this spectrometer can be used as the back-end system at TRAO for the 3x5 focal plane array receivers.

  12. Multi-baseline bootstrapping at the Navy precision optical interferometer

    NASA Astrophysics Data System (ADS)

    Armstrong, J. T.; Schmitt, H. R.; Mozurkewich, D.; Jorgensen, A. M.; Muterspaugh, M. W.; Baines, E. K.; Benson, J. A.; Zavala, Robert T.; Hutter, D. J.

    2014-07-01

    The Navy Precision Optical Interferometer (NPOI) was designed from the beginning to support baseline bootstrapping with equally-spaced array elements. The motivation was the desire to image the surfaces of resolved stars with the maximum resolution possible with a six-element array. Bootstrapping two baselines together to track fringes on a third baseline has been used at the NPOI for many years, but the capabilities of the fringe tracking software did not permit us to bootstrap three or more baselines together. Recently, both a new backend (VISION; Tennessee State Univ.) and new hardware and firmware (AZ Embedded Systems and New Mexico Tech, respectively) for the current hybrid backend have made multi-baseline bootstrapping possible.

  13. Server-Controlled Identity-Based Authenticated Key Exchange

    NASA Astrophysics Data System (ADS)

    Guo, Hua; Mu, Yi; Zhang, Xiyong; Li, Zhoujun

    We present a threshold identity-based authenticated key exchange protocol that can be applied to an authenticated server-controlled gateway-user key exchange. The objective is to allow a user and a gateway to establish a shared session key with the permission of the back-end servers, while the back-end servers cannot obtain any information about the established session key. Our protocol has potential applications in strong access control of confidential resources. In particular, our protocol possesses semantic security and demonstrates several highly desirable security properties such as key privacy and transparency. We prove the security of the protocol based on the Bilinear Diffie-Hellman assumption in the random oracle model.

  14. The High Time Resolution Universe

    NASA Astrophysics Data System (ADS)

    Bailes, Matthew; Possenti, Andrea; Johnston, Simon; Kramer, Michael; Burgay, Marta; Bhat, Ramesh; Keith, Michael; Burke-Spolaor, Sarah; van Straten, Willem; Stappers, Benjamin; Bates, Samuel

    2008-04-01

    The Parkes multibeam surveys heralded a new era in pulsar surveys, more than doubling the number of pulsars known. However, at high time resolution, they were severely limited by the analogue backend system, which limited the volume of sky they could effectively survey to just the local 2-3 kpc. Here we propose to use a new digital backend coupled with Australia's most powerful (16 Tflop) supercomputing cluster to conduct three ambitious surveys for millisecond and relativistic pulsars with the Parkes telescope. We hope to discover over 200 new millisecond and relativistic pulsars that will define the recycled pulsar period distribution, supply pulsars for the timing array and aid in our understanding of binary evolution.

  15. Compiling for Application Specific Computational Acceleration in Reconfigurable Architectures Final Report CRADA No. TSB-2033-01

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Supinski, B.; Caliga, D.

    2017-09-28

    The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array ("FPGA") based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures a cost-effective solution for large-scale scientific computing.

  16. Lean and Efficient Software: Whole-Program Optimization of Executables

    DTIC Science & Technology

    2015-09-30

    libraries. Many levels of library interfaces, where some libraries are dynamically linked and some are provided in binary form only, significantly limit ... software at build time. The opportunity: Our objective in this project is to substantially improve the performance, size, and robustness of binary ... executables by using static and dynamic binary program analysis techniques to perform whole-program optimization directly on compiled programs

  17. Explicit time integration of finite element models on a vectorized, concurrent computer with shared memory

    NASA Technical Reports Server (NTRS)

    Gilbertsen, Noreen D.; Belytschko, Ted

    1990-01-01

    The implementation of a nonlinear explicit program on a vectorized, concurrent computer with shared memory is described and studied. The conflict between vectorization and concurrency is described and some guidelines are given for optimal block sizes. Several example problems are summarized to illustrate the types of speed-ups which can be achieved by reprogramming as compared to compiler optimization.

  18. SETIBURST: A Robotic, Commensal, Realtime Multi-science Backend for the Arecibo Telescope

    NASA Astrophysics Data System (ADS)

    Chennamangalam, Jayanth; MacMahon, David; Cobb, Jeff; Karastergiou, Aris; Siemion, Andrew P. V.; Rajwade, Kaustubh; Armour, Wes; Gajjar, Vishal; Lorimer, Duncan R.; McLaughlin, Maura A.; Werthimer, Dan; Williams, Christopher

    2017-02-01

    Radio astronomy has traditionally depended on observatories allocating time to observers for exclusive use of their telescopes. The disadvantage of this scheme is that the data thus collected is rarely used for other astronomy applications, and in many cases, is unsuitable. For example, properly calibrated pulsar search data can, with some reduction, be used for spectral line surveys. A backend that supports plugging in multiple applications to a telescope to perform commensal data analysis will vastly increase the science throughput of the facility. In this paper, we present “SETIBURST,” a robotic, commensal, realtime multi-science backend for the 305 m Arecibo Telescope. The system uses the 1.4 GHz, seven-beam Arecibo L-band Feed Array (ALFA) receiver whenever it is operated. SETIBURST currently supports two applications: SERENDIP VI, a SETI spectrometer that is conducting a search for signs of technological life, and ALFABURST, a fast transient search system that is conducting a survey of fast radio bursts (FRBs). Based on the FRB event rate and the expected usage of ALFA, we expect 0-5 FRB detections over the coming year. SETIBURST also provides the option of plugging in more applications. We outline the motivation for our instrumentation scheme and the scientific motivation of the two surveys, along with their descriptions and related discussions.

  19. Optimization strategies for molecular dynamics programs on Cray computers and scalar work stations

    NASA Astrophysics Data System (ADS)

    Unekis, Michael J.; Rice, Betsy M.

    1994-12-01

    We present results of timing runs and different optimization strategies for a prototype molecular dynamics program that simulates shock waves in a two-dimensional (2-D) model of a reactive energetic solid. The performance of the program may be improved substantially by simple changes to the Fortran or by employing various vendor-supplied compiler optimizations. The optimum strategy varies among the machines used and will vary depending upon the details of the program. The effect of various compiler options and vendor-supplied subroutine calls is demonstrated. Comparison is made between two scalar workstations (IBM RS/6000 Model 370 and Model 530) and several Cray supercomputers (X-MP/48, Y-MP8/128, and C-90/16256). We find that for a scientific application program dominated by sequential, scalar statements, a relatively inexpensive high-end workstation such as the IBM RS/6000 RISC series will outperform single processor performance of the Cray X-MP/48 and perform competitively with single processor performance of the Y-MP8/128 and C-90/16256.

  20. Extending R packages to support 64-bit compiled code: An illustration with spam64 and GIMMS NDVI3g data

    NASA Astrophysics Data System (ADS)

    Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard

    2017-07-01

    Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. On the R side, users do not need to change existing code and may not even notice the extension; on the other hand, interfacing 64-bit compiled code efficiently is challenging. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.

  1. Location based chat application for iPhone

    NASA Astrophysics Data System (ADS)

    Rana, Pradeep

    With the increasing use of mobile devices everywhere in the world, there is a lack of social interaction between people. The objective of this thesis project is to create a location based chat application, which will help users to interact with other people around them. It will provide an opportunity to meet people when someone visits a new place. The app will use GPS coordinates of the user and will show him a list of other users based on his location. The user can then choose any of the other users from the list and start chatting with them. This app will consist of a frontend and backend. The frontend will be an iOS application and the backend will be a PHP/MYSQL server.
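
    The thesis backend is PHP/MySQL; purely as an illustration, here is a Python sketch of the nearby-user filtering such a backend would perform on GPS coordinates:

        import math

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance in kilometres between two GPS points."""
            R = 6371.0  # mean Earth radius, km
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = (math.sin(dphi / 2) ** 2
                 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
            return 2 * R * math.asin(math.sqrt(a))

        def nearby_users(me, users, radius_km=5.0):
            # me: (lat, lon); users: list of (user_id, lat, lon)
            return [u for u in users
                    if haversine_km(me[0], me[1], u[1], u[2]) <= radius_km]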

  2. An Optimizing Compiler for Petascale I/O on Leadership Class Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Kandemir, Mahmut

    In high-performance computing systems, parallel I/O architectures usually have very complex hierarchies with multiple layers that collectively constitute an I/O stack, including high-level I/O libraries such as PnetCDF and HDF5, I/O middleware such as MPI-IO, and parallel file systems such as PVFS and Lustre. Our project explored automated instrumentation and compiler support for I/O intensive applications. Our project made significant progress towards understanding the complex I/O hierarchies of high-performance storage systems (including storage caches, HDDs, and SSDs), and designing and implementing state-of-the-art compiler/runtime system technology that targets I/O-intensive HPC applications on leadership-class machines. This final report summarizes the major achievements of the project and also points out promising future directions.

  3. Compiling Planning into Quantum Optimization Problems: A Comparative Study

    DTIC Science & Technology

    2015-06-07

    and Sipser, M. 2000. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106. Fikes, R. E., and Nilsson, N. J. 1972. STRIPS: A new ... become available: quantum annealing. Quantum annealing is one of the most accessible quantum algorithms for a computer science audience not versed ... in quantum computing because of its close ties to classical optimization algorithms such as simulated annealing. While large-scale universal quantum

  4. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  5. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Grout, Ray W

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero and multidimensional combustion solvers, which can then be effectively used on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively lower clock speeds compared to traditional processors (e.g. Intel Sandybridge/Ivybridge) used in current supercomputers. This style of processor can be productively used for chemistry integrators that form a costly part of computational combustion codes, in spite of their relatively lower clock speeds. Performance commensurate with traditional processors is achieved here through the combination of careful memory layout, exposing multiple levels of fine grain parallelism, and extensive use of vendor-supported libraries (Cilk Plus and Math Kernel Libraries). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a factor of ~ 3 speed-up using the Intel 2013 compiler and ~ 1.5 using the Intel 2017 compiler for large chemical mechanisms compared to the unoptimized version on the Intel Xeon Phi. The strategies, especially with respect to memory usage and vectorization, should also be beneficial for general purpose computational fluid dynamics codes.

  6. LIBVERSIONINGCOMPILER: An easy-to-use library for dynamic generation and invocation of multiple code versions

    NASA Astrophysics Data System (ADS)

    Cherubin, S.; Agosta, G.

    2018-01-01

    We present LIBVERSIONINGCOMPILER, a C++ library designed to support the dynamic generation of multiple versions of the same compute kernel in an HPC scenario. It can be used to provide continuous optimization, code specialization based on the input data or on workload changes, or otherwise to dynamically adjust the application, without the burden of a full dynamic compiler. The library supports multiple underlying compilers but specifically targets the LLVM framework. We also provide examples of use, showing the overhead of the library, and providing guidelines for its efficient use.
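
    LIBVERSIONINGCOMPILER itself is a C++ library; the following Python sketch only illustrates the underlying idea of generating and invoking multiple versions of one kernel at run time (it assumes a C compiler is available as cc on the PATH):

        import ctypes, os, subprocess, tempfile

        KERNEL_C = "double scale(double x) { return 2.0 * x; }"

        def build_version(opt_flag):
            """Compile the kernel into a shared object with the given
            optimization flag and load the resulting function."""
            d = tempfile.mkdtemp()
            src, lib = os.path.join(d, "k.c"), os.path.join(d, "k.so")
            with open(src, "w") as f:
                f.write(KERNEL_C)
            subprocess.check_call(["cc", opt_flag, "-fPIC", "-shared", src, "-o", lib])
            fn = ctypes.CDLL(lib).scale
            fn.restype, fn.argtypes = ctypes.c_double, [ctypes.c_double]
            return fn

        # Two coexisting versions of the same kernel, selected at run time:
        # fast, debug = build_version("-O3"), build_version("-O0")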

  7. Ada Compiler Validation Summary Report. Certificate Number: 920918S1. 11273 U.S. Navy, Ada/M, Version 4.5 /OPTIMIZE) VAX 8550/8600/8650 (Cluster) = VHSIC Processor Module (VPM) AN/AYK-14 (Bare Board)

    DTIC Science & Technology

    1992-10-27

    Module (VPM) AN/AYK-14 (Bare Board) (target), 920918S1.11273. AUTHOR(S): National Institute of Standards and Technology, Gaithersburg, MD, USA. ... Validation Procedures [Pro90] against the Ada Standard [Ada83] using the current Ada Compiler Validation Capability (ACVC). This Validation Summary Report (VSR) ...

  8. Supercomputer optimizations for stochastic optimal control applications

    NASA Technical Reports Server (NTRS)

    Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang

    1991-01-01

    Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.

  9. Read buffer optimizations to support compiler-assisted multiple instruction retry

    NASA Technical Reports Server (NTRS)

    Alewine, N. J.; Fuchs, W. K.; Hwu, W. M.

    1993-01-01

    Multiple instruction retry is a recovery mechanism for transient processor faults. We previously developed a compiler-assisted approach to multiple instruction retry in which a read buffer of size 2N (where N represents the maximum instruction rollback distance) was used to resolve some data hazards while the compiler resolved the remaining hazards. The compiler-assisted scheme was shown to reduce the performance overhead and/or hardware complexity normally associated with hardware-only retry schemes. This paper examines the size and design of the read buffer. We establish a practical lower bound and average size requirement for the read buffer by modifying the scheme to save only the data required for rollback. The study measures the effect on the performance of a DECstation 3100 running ten application programs using six read buffer configurations with varying read buffer sizes. Two alternative configurations are shown to be the most efficient, and which is preferable depends on whether split-cycle saves are assumed. Up to a 55 percent read buffer size reduction is achievable, with an average reduction of 39 percent, given the most efficient read buffer configuration and a variety of applications.

  10. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement based performance of these parallelized benchmarks from four perspectives: efficacy of parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized version of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.

  11. Adaptive Environment for Supercompiling with Optimized Parallelism (AESOP)

    DTIC Science & Technology

    2011-09-01

    DATES COVERED (From - To): 09 March 2009 – 31 July 2011 (Final, September 2011). TITLE AND SUBTITLE: Adaptive Environment for Supercompiling with Optimized Parallelism (AESOP). Contents include: system characterization loop; integration points for AESOP; LLVM and the AESOP compiler.

  12. Compilation of International Regulatory Guidance Documents for Neuropathology Assessment during Nonclinical Toxicity Studies

    EPA Science Inventory

    Neuropathology analysis as an endpoint during nonclinical efficacy and toxicity studies is a challenging prospect that requires trained personnel and particular equipment to achieve optimal results. Accordingly, many regulatory agencies have produced explicit guidelines for desig...

  13. SETIBURST: A Robotic, Commensal, Realtime Multi-science Backend for the Arecibo Telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chennamangalam, Jayanth; Karastergiou, Aris; Williams, Christopher

    Radio astronomy has traditionally depended on observatories allocating time to observers for exclusive use of their telescopes. The disadvantage of this scheme is that the data thus collected is rarely used for other astronomy applications, and in many cases, is unsuitable. For example, properly calibrated pulsar search data can, with some reduction, be used for spectral line surveys. A backend that supports plugging in multiple applications to a telescope to perform commensal data analysis will vastly increase the science throughput of the facility. In this paper, we present “SETIBURST,” a robotic, commensal, realtime multi-science backend for the 305 m Arecibo Telescope. The system uses the 1.4 GHz, seven-beam Arecibo L-band Feed Array (ALFA) receiver whenever it is operated. SETIBURST currently supports two applications: SERENDIP VI, a SETI spectrometer that is conducting a search for signs of technological life, and ALFABURST, a fast transient search system that is conducting a survey of fast radio bursts (FRBs). Based on the FRB event rate and the expected usage of ALFA, we expect 0–5 FRB detections over the coming year. SETIBURST also provides the option of plugging in more applications. We outline the motivation for our instrumentation scheme and the scientific motivation of the two surveys, along with their descriptions and related discussions.

  14. Team ViGIR

    DTIC Science & Technology

    2015-10-01

    ... to improving the capabilities of humanitarian rescue robotics. SUBJECT TERMS: Robotics, Mobility, Platform Dexterity, Supervised Autonomy. Contents include: Planning Backend; Build and Test Infrastructure.

  15. FleCSI: Connection to Legion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergen, Benjamin Karl

    2016-08-03

    These are slides which are part of the ASC L2 Milestone Review. The following topics are covered: Legion Backend, Distributed-Memory Partitioning, Sparse Data Representations, and MPI-Legion Interoperability.

  16. Timing characterization and analysis of the Linux-based, closed loop control computer for the Subaru Telescope laser guide star adaptive optics system

    NASA Astrophysics Data System (ADS)

    Dinkins, Matthew; Colley, Stephen

    2008-07-01

    Hardware and software specialized for real time control reduce the timing jitter of executables when compared to off-the-shelf hardware and software. However, these specialized environments are costly in both money and development time. While conventional systems have a cost advantage, the jitter in these systems is much larger and potentially problematic. This study analyzes the timing characteristics of a standard Dell server running a fully featured Linux operating system to determine if such a system would be capable of meeting the timing requirements for closed loop operations. Investigations are performed on the effectiveness of tools designed to bring off-the-shelf system performance closer to that of specialized real time systems. The GNU Compiler Collection (gcc) is compared to the Intel C Compiler (icc), compiler optimizations are investigated, and real-time extensions to Linux are evaluated.

  17. Interferometric direction finding with a metamaterial detector

    NASA Astrophysics Data System (ADS)

    Venkatesh, Suresh; Shrekenhamer, David; Xu, Wangren; Sonkusale, Sameer; Padilla, Willie; Schurig, David

    2013-12-01

    We present measurements and analysis demonstrating useful direction finding of sources in the S band (2-4 GHz) using a metamaterial detector. An augmented metamaterial absorber that supports magnitude and phase measurement of the incident electric field, within each unit cell, is described. The metamaterial is implemented in a commercial printed circuit board process with off-board back-end electronics. We also discuss on-board back-end implementation strategies. Direction finding performance is analyzed for the fabricated metamaterial detector using simulated data and the standard algorithm, MUltiple SIgnal Classification (MUSIC). The performance of this complete system is characterized by its angular resolution as a function of radiation density at the detector. Sources with power outputs typical of mobile communication devices can be resolved at kilometer distances with sub-degree resolution and high frame rates.

  18. Developments of FPGA-based digital back-ends for low frequency antenna arrays at Medicina radio telescopes

    NASA Astrophysics Data System (ADS)

    Naldi, G.; Bartolini, M.; Mattana, A.; Pupillo, G.; Hickish, J.; Foster, G.; Bianchi, G.; Lingua, A.; Monari, J.; Montebugnoli, S.; Perini, F.; Rusticelli, S.; Schiaffino, M.; Virone, G.; Zarb Adami, K.

    In radio astronomy, Field Programmable Gate Array (FPGA) technology is widely used for the implementation of digital signal processing techniques applied to antenna arrays. This is mainly due to the good trade-off among computing resources, power consumption, and cost offered by FPGA chips compared to other technologies like ASICs, GPUs, and CPUs. In recent years several digital backend systems based on such devices have been developed at the Medicina radio astronomical station (INAF-IRA, Bologna, Italy). Instruments like an FX correlator, a direct imager, a beamformer, and a multi-beam system have been successfully designed and realized on CASPER (Collaboration for Astronomy Signal Processing and Electronics Research, https://casper.berkeley.edu) processing boards. In this paper we present the experience gained in this kind of application.

  19. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  20. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates - as reported by a cache simulation tool, and confirmed by hardware counters - only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.
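
    The stride rule of thumb is visible even from Python: sweeping a C-ordered array row by row is unit-stride, while sweeping it column by column jumps a full row between consecutive elements. A small NumPy sketch (the size of the gap, as the paper stresses, is machine-dependent):

        import time
        import numpy as np

        a = np.zeros((2048, 2048))  # C order: rows are contiguous in memory

        def sweep(row_major):
            t0 = time.perf_counter()
            acc = 0.0
            for i in range(a.shape[0]):
                # Row slices are unit-stride; column slices stride by a
                # full row (2048 * 8 bytes) between elements.
                acc += a[i, :].sum() if row_major else a[:, i].sum()
            return time.perf_counter() - t0

        # Typically sweep(True) < sweep(False), but by how much varies by machine.
        print(sweep(True), sweep(False))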

  1. On the Efficacy of Source Code Optimizations for Cache-Based Systems

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Saphir, William C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Obtaining high performance without machine-specific tuning is an important goal of scientific application programmers. Since most scientific processing is done on commodity microprocessors with hierarchical memory systems, this goal of "portable performance" can be achieved if a common set of optimization principles is effective for all such systems. It is widely believed, or at least hoped, that portable performance can be realized. The rule of thumb for optimization on hierarchical memory systems is to maximize temporal and spatial locality of memory references by reusing data and minimizing memory access stride. We investigate the effects of a number of optimizations on the performance of three related kernels taken from a computational fluid dynamics application. Timing the kernels on a range of processors, we observe an inconsistent and often counterintuitive impact of the optimizations on performance. In particular, code variations that have a positive impact on one architecture can have a negative impact on another, and variations expected to be unimportant can produce large effects. Moreover, we find that cache miss rates, as reported by a cache simulation tool and confirmed by hardware counters, only partially explain the results. By contrast, the compiler-generated assembly code provides more insight by revealing the importance of processor-specific instructions and of compiler maturity, both of which strongly, and sometimes unexpectedly, influence performance. We conclude that it is difficult to obtain performance portability on modern cache-based computers, and comment on the implications of this result.

  2. Modular Expression Language for Ordinary Differential Equation Editing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blake, Robert C.

    MELODEE is a system for describing systems of initial-value-problem ordinary differential equations, together with a compiler for the language that produces optimized code to integrate the differential equations. Features include rational polynomial approximation for expensive functions and automatic differentiation for symbolic Jacobians.
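
    As an illustration of the automatic-differentiation idea (generic forward-mode AD, not MELODEE's implementation), a small Python sketch that builds a Jacobian column by column:

        class Dual:
            """Forward-mode AD number: a value and a derivative part."""
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.dot + o.dot)
            __radd__ = __add__
            def __mul__(self, o):  # product rule
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
            __rmul__ = __mul__

        def jacobian(f, x):
            """Jacobian of f: R^n -> R^m, one column per seeded input."""
            cols = []
            for i in range(len(x)):
                seeded = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(x)]
                cols.append([out.dot for out in f(seeded)])
            m = len(cols[0])
            return [[cols[j][i] for j in range(len(x))] for i in range(m)]

        # e.g. jacobian(lambda v: [v[0] * v[1], v[0] + 3.0 * v[1]], [2.0, 5.0])
        # -> [[5.0, 2.0], [1.0, 3.0]]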

  3. Haystack Observatory Technology Development Center

    NASA Technical Reports Server (NTRS)

    Beaudoin, Chris; Corey, Brian; Niell, Arthur; Cappallo, Roger; Whitney, Alan

    2013-01-01

    Technology development at MIT Haystack Observatory was focused on four areas in 2012: VGOS developments at GGAO; digital backend developments and a workshop; RFI compatibility at VLBI stations; and Mark 6 VLBI data system development.

  4. Magnetocaloric Materials and the Optimization of Cooling Power Density

    NASA Technical Reports Server (NTRS)

    Wikus, Patrick; Canavan, Edgar; Heine, Sarah Trowbridge; Matsumoto, Koichi; Numazawa, Takenori

    2014-01-01

    The magnetocaloric effect is the thermal response of a material to an external magnetic field. This manuscript focuses on the physics and the properties of materials which are commonly used for magnetic refrigeration at cryogenic temperatures. After a brief overview of the magnetocaloric effect and associated thermodynamics, typical requirements on refrigerants are discussed from a standpoint of cooling power density optimization. Finally, a compilation of the most important properties of several common magnetocaloric materials is presented.

  5. Scout: high-performance heterogeneous computing made simple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jablin, James; Mc Cormick, Patrick; Herlihy, Maurice

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  6. Ada Compiler Validation Summary Report. Certificate Number: 920918S1. 11272, U.S. Navy Ada/M, Version 4.5 (/OPTIMIZE) VAX 8550/8600/8650 (Cluster) Enhanced Processor (EP) AN/UYK-44 (Bare Board)

    DTIC Science & Technology

    1992-09-01

    National Institute of Standards and Technology, Gaithersburg, MD, USA. ... the current Ada Compiler Validation Capability (ACVC). This Validation Summary Report (VSR) gives an account of the testing of this Ada implementation. ...

  7. Ada Compiler Validation Summary Report: Certificate Number: 910626S1. 11174 U.S. Navy, Ada/M, Version 4.0 (/Optimize), VAX 8550, Running VAX/VMS version 5.3 (Host) to AN/UYK-44 (EMR) (Bare Board) (Target).

    DTIC Science & Technology

    1991-07-30

    National Institute of Standards and Technology, Gaithersburg, MD, USA. ... Ada Compiler Validation Capability (ACVC). This Validation Summary Report (VSR) gives an account of the testing of this Ada implementation. ...

  8. The ATLAS Software Installation System v2: a highly available system to install and validate Grid and Cloud sites via Panda

    NASA Astrophysics Data System (ADS)

    De Salvo, A.; Kataoka, M.; Sanchez Pineda, A.; Smirnov, Y.

    2015-12-01

    The ATLAS Installation System v2 is the evolution of the original system, used since 2003. The original tool has been completely re-designed in terms of database backend and components, adding support for submission to multiple backends, including the original Workload Management System (WMS) and the new PanDA modules. The database engine has been changed from plain MySQL to Galera/Percona and the table structure has been optimized to allow a full High-Availability (HA) solution over a Wide Area Network. The servlets, running on each frontend, have also been decoupled from local settings to allow easy scalability of the system, including the possibility of an HA system spanning multiple sites. The clients can also be run in multiple copies and in different geographical locations, and take care of sending the installation and validation jobs to the target Grid or Cloud sites. Moreover, the Installation Database is used as a source of parameters by the automatic agents running in CVMFS, in order to install the software and distribute it to the sites. The system has been in production for ATLAS since 2013, with the INFN Roma Tier 2 and the CERN Agile Infrastructure as its main HA sites. The Light Job Submission Framework for Installation (LJSFi) v2 engine interfaces directly with PanDA for job management, the ATLAS Grid Information System (AGIS) for the site parameter configurations, and CVMFS for both core components and the installation of the software itself. LJSFi2 is also able to use other plugins, and is essentially Virtual Organization (VO) agnostic, so it can be directly used and extended to cope with the requirements of any Grid- or Cloud-enabled VO. In this work we present the architecture, performance, status, and possible evolutions of the system for the LHC Run2 and beyond.

  9. 40 CFR 63.499 - Back-end process provisions-reporting.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... design (i.e., steam-assisted, air-assisted, or non-assisted); all visible emission readings, heat content... specify appropriate reporting and recordkeeping requirements as part of the review of the Precompliance...

  10. 40 CFR 63.499 - Back-end process provisions-reporting.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... design (i.e., steam-assisted, air-assisted, or non-assisted); all visible emission readings, heat content... specify appropriate reporting and recordkeeping requirements as part of the review of the Precompliance...

  11. 40 CFR 63.499 - Back-end process provisions-reporting.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... design (i.e., steam-assisted, air-assisted, or non-assisted); all visible emission readings, heat content... specify appropriate reporting and recordkeeping requirements as part of the review of the Precompliance...

  12. 40 CFR 63.499 - Back-end process provisions-reporting.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... design (i.e., steam-assisted, air-assisted, or non-assisted); all visible emission readings, heat content... specify appropriate reporting and recordkeeping requirements as part of the review of the Precompliance...

  13. Putting FLEXPART to REST: The Provision of Atmospheric Transport Modeling Services

    NASA Astrophysics Data System (ADS)

    Morton, Don; Arnold, Dèlia

    2015-04-01

    We are developing a RESTful set of modeling services for the FLEXPART modeling system. FLEXPART (FLEXible PARTicle dispersion model) is a Lagrangian transport and dispersion model used by a growing international community. It has been used to simulate and forecast the atmospheric transport of wildfire smoke, volcanic ash and radionuclides, and may be run in backwards mode to provide information for the determination of emission sources such as nuclear emissions and greenhouse gases. This open source software is distributed in source code form and has several compiler and library dependencies that users need to address. Although well-documented, getting it compiled, set up, running, and post-processed is often tedious, making it difficult for the inexperienced or casual user. Well-designed modeling services lower the entry barrier for scientists to perform simulations, allowing them to create and execute their models from a variety of devices and programming environments. This world of Service Oriented Architectures (SOA) has progressed to a REpresentational State Transfer (REST) paradigm, in which the pervasive and mature HTTP environment is used as a foundation for providing access to model services. With such an approach, sound software engineering practices are adhered to in order to deploy service modules exhibiting very loose coupling with the clients. In short, services are accessed and controlled through the formation of properly-constructed Uniform Resource Identifiers (URIs), processed in an HTTP environment. In this way, any client or combination of clients - whether a bash script, Python program, web GUI, or even the Unix command line - that can interact with an HTTP server can run the modeling environment. This loose coupling allows for the deployment of a variety of front ends, all accessing a common modeling backend system. Furthermore, it is generally accepted in the cloud computing community that RESTful approaches constitute a sound approach towards successful deployment of services. Through the design of a RESTful, cloud-based modeling system, we provide ubiquitous access to FLEXPART that allows scientists to focus on modeling processes instead of tedious computational details. In this work, we describe the modeling services environment and provide examples of access via command-line, Python, and web GUI interfaces.
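
    As an illustration of this loose coupling, the sketch below drives such a service from Python with the requests package. The host name, resource paths, and JSON fields are hypothetical placeholders, not the actual FLEXPART service API.

        # Minimal sketch of a RESTful model-service client (hypothetical endpoints).
        import requests

        BASE = "http://flexpart.example.org/api/v1"   # hypothetical service root

        # Create a simulation resource by POSTing its configuration.
        cfg = {"start": "2015-04-01T00:00", "hours": 48, "mode": "backward"}
        sim = requests.post(f"{BASE}/simulations", json=cfg).json()

        # Poll the resource URI until the run completes, then fetch the output.
        status = requests.get(f"{BASE}/simulations/{sim['id']}").json()
        if status["state"] == "done":
            output = requests.get(f"{BASE}/simulations/{sim['id']}/output")
            with open("plume.nc", "wb") as f:
                f.write(output.content)

    The same URIs could equally be exercised from curl on the command line, which is what makes the approach client-agnostic.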

  14. 40 CFR 63.499 - Back-end process provisions-reporting.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... heater. (i) The flare design (i.e., steam-assisted, air-assisted, or non-assisted); all visible emission... Administrator will specify appropriate reporting and recordkeeping requirements as part of the review of the...

  15. Modeling and simulation of Cu diffusion and drift in porous CMOS backend dielectrics

    NASA Astrophysics Data System (ADS)

    Ali, R.; Fan, Y.; King, S.; Orlowski, M.

    2018-06-01

    With the advent of porous dielectrics, Cu drift-diffusion reliability issues in CMOS backend have only been exacerbated. In this regard, a modeling and simulation study of Cu atom/ion drift-diffusion in porous dielectrics is presented to assess the backend reliability and to explore conditions for a reliable Resistive Random Access Memory (RRAM) operation. The numerical computation, using elementary jump frequencies for a random walk in 2D and 3D, is based on an extended adjacency tensor concept. It is shown that Cu diffusion and drift transport are affected as much by the level of porosity as by the pore morphology. Allowance is made for different rates of Cu dissolution into the dielectric and for Cu absorption and transport at and on the inner walls of the pores. Most of the complex phenomena of the drift-diffusion transport in porous media can be understood in terms of local lateral and vertical gradients and the degree of their perturbation caused by the presence of pores in the transport domain. The impact of pore morphology, related to the concept of tortuosity, is discussed in terms of "channeling" and "trapping" effects. The simulations are calibrated to experimental results of porous SiCOH layers of 25 nm thickness, sandwiched between Cu and Pt(W) electrodes, with experimental porosity levels of 0%, 8%, 12%, and 25%. We find that porous SiCOH is more immune to Cu+ drift at 300 K than non-porous SiCOH.
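
    The paper's adjacency-tensor formulation is beyond a short example, but the underlying mechanism is an elementary-jump random walk on a lattice with pores. The following toy sketch (lattice size, porosity, and blocking rule are illustrative assumptions, not the calibrated model) shows how pore sites perturb the walk:

        # Toy 2D random walk of a Cu ion on a lattice with randomly placed pore
        # sites that block jumps; a crude stand-in for the paper's model.
        import random

        N, STEPS, POROSITY = 50, 10_000, 0.12     # lattice size, jumps, pore fraction
        pores = {(random.randrange(N), random.randrange(N))
                 for _ in range(int(POROSITY * N * N))}

        x, y = N // 2, 0                          # start at the top (Cu electrode)
        for _ in range(STEPS):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            nx, ny = (x + dx) % N, min(max(y + dy, 0), N - 1)
            if (nx, ny) not in pores:             # a pore wall rejects the jump
                x, y = nx, ny

        print(f"final penetration depth: {y} lattice sites")

    Letting blocked jumps trap the walker instead, or biasing the vertical moves, would correspond to the paper's "trapping" and field-driven drift effects respectively.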

  16. ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amarasinghe, Saman

    This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first-class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open-source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy-to-use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
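
    OpenTuner is open source, and its documented usage pattern is to subclass MeasurementInterface. The sketch below follows the structure of the project's tutorial examples; the tuned command and the single flag parameter are illustrative choices, not taken from this report.

        # Sketch of an OpenTuner autotuner in the style of the tutorial examples.
        import opentuner
        from opentuner import ConfigurationManipulator, IntegerParameter
        from opentuner import MeasurementInterface, Result

        class FlagsTuner(MeasurementInterface):
            def manipulator(self):
                # The search space: here just the -O optimization level.
                m = ConfigurationManipulator()
                m.add_parameter(IntegerParameter('opt_level', 0, 3))
                return m

            def run(self, desired_result, input, limit):
                cfg = desired_result.configuration.data
                cmd = 'gcc -O{opt_level} kernel.c -o kernel && ./kernel'.format(**cfg)
                result = self.call_program(cmd)   # OpenTuner times the command
                return Result(time=result['time'])

        if __name__ == '__main__':
            FlagsTuner.main(opentuner.default_argparser().parse_args())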

  17. SLEEC: Semantics-Rich Libraries for Effective Exascale Computation. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulkarni, Milind

    SLEEC (Semantics-rich Libraries for Effective Exascale Computation) was a project funded by the Department of Energy X-Stack Program, award number DE-SC0008629. The initial project period was September 2012–August 2015. The project was renewed for an additional year, expiring August 2016. Finally, the project received a no-cost extension, leading to a final expiry date of August 2017. Modern applications, especially those intended to run at exascale, are not written from scratch. Instead, they are built by stitching together various carefully-written, hand-tuned libraries. Correctly composing these libraries is difficult, and traditional compilers are unable to effectively analyze and transform across abstraction layers. Domain-specific compilers integrate semantic knowledge into compilers, allowing them to transform applications that use particular domain-specific languages, or domain libraries. But they do not help when new domains are developed, or when applications span multiple domains. SLEEC aims to fix these problems by building generic compiler and runtime infrastructures that are semantics-aware but not domain-specific. By performing optimizations related to the semantics of a domain library, the same infrastructure can be made generic and apply across multiple domains.
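
    A minimal way to picture semantics-aware but domain-generic optimization: the library, not the compiler, supplies algebraic facts about its operations, and a generic rewriter exploits them. All names in this toy sketch are hypothetical.

        # A library declares that certain pairs of its calls are mutually inverse;
        # a generic pass cancels adjacent inverse pairs, whatever the domain.
        INVERSES = {('transpose', 'transpose'), ('encode', 'decode')}

        def optimize(call_seq):
            out = []
            for call in call_seq:
                if out and (out[-1], call) in INVERSES:
                    out.pop()          # f(g(x)) == x, so both calls disappear
                else:
                    out.append(call)
            return out

        print(optimize(['transpose', 'transpose', 'scale']))   # -> ['scale']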

  18. BrainIACS: a system for web-based medical image processing

    NASA Astrophysics Data System (ADS)

    Kishore, Bhaskar; Bazin, Pierre-Louis; Pham, Dzung L.

    2009-02-01

    We describe BrainIACS, a web-based medical image processing system that enables algorithm developers to quickly create extensible user interfaces for their algorithms. Designed to address the challenges faced by algorithm developers in providing user-friendly graphical interfaces, BrainIACS is completely implemented using freely available, open-source software. The system, which is based on a client-server architecture, utilizes an AJAX front-end written using the Google Web Toolkit (GWT) and Java Servlets running on Apache Tomcat as its back-end. To enable developers to quickly and simply create user interfaces for configuring their algorithms, the interfaces are described using XML and are parsed by our system to create the corresponding user interface elements. Most of the commonly found elements, such as check boxes, drop-down lists, input boxes, radio buttons, tab panels and group boxes, are supported. Some elements, such as the input box, support input validation. Changes to the user interface, such as addition and deletion of elements, are performed by editing the XML file or by using the system's user interface creator. In addition to user interface generation, the system also provides its own interfaces for data transfer, previewing of input and output files, and algorithm queuing. As the system is programmed in Java (and finally JavaScript after compilation of the front-end code), it is platform independent, with the only requirements being that a Servlet implementation be available and that the processing algorithms can execute on the server platform.
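
    The core mechanism is declarative: a small XML description is parsed and mapped to widgets. The schema below is a hypothetical stand-in for BrainIACS's actual format, sketched with Python's standard-library XML parser.

        # Parse a declarative UI description and "create" the described widgets.
        import xml.etree.ElementTree as ET

        SPEC = """
        <algorithm name="skullstrip">
          <checkbox id="smooth" label="Apply smoothing" default="true"/>
          <inputbox id="iterations" label="Iterations" type="int" min="1" max="100"/>
        </algorithm>
        """

        for elem in ET.fromstring(SPEC):
            print(f"create {elem.tag!r} widget with attributes {elem.attrib}")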

  19. Tunable MOEMS Fabry-Perot interferometer for miniaturized spectral sensing in near-infrared

    NASA Astrophysics Data System (ADS)

    Rissanen, A.; Mannila, R.; Tuohiniemi, M.; Akujärvi, A.; Antila, J.

    2014-03-01

    This paper presents a novel MOEMS Fabry-Perot interferometer (FPI) process platform for the range of 800 - 1050 nm. Simulation results, including design and optimization of device properties in terms of transmission peak width, tuning range and electrical properties, are discussed. The process flow for device fabrication is presented, with overall process integration and backend dicing steps resulting in a successful fabrication yield. The mirrors of the FPI consist of LPCVD (low-pressure chemical vapor deposition) polySi-SiN λ/4 thin-film Bragg reflectors, with the air gap formed by sacrificial SiO2 etching in HF vapor. The silicon substrate below the optical aperture is removed by inductively coupled plasma (ICP) etching to ensure transmission in the visible - near-infrared (NIR), which is below the silicon transmission range. The characterized optical properties of the chips are compared to the simulated values. The achieved optical aperture diameter enables utilization of the chips in both imaging and single-point spectral sensors.

  20. CamBAfx: Workflow Design, Implementation and Application for Neuroimaging

    PubMed Central

    Ooi, Cinly; Bullmore, Edward T.; Wink, Alle-Meije; Sendur, Levent; Barnes, Anna; Achard, Sophie; Aspden, John; Abbott, Sanja; Yue, Shigang; Kitzbichler, Manfred; Meunier, David; Maxim, Voichita; Salvador, Raymond; Henty, Julian; Tait, Roger; Subramaniam, Naresh; Suckling, John

    2009-01-01

    CamBAfx is a workflow application designed for both researchers who use workflows to process data (consumers) and those who design them (designers). It provides a front-end (user interface) optimized for data processing and designed in a way familiar to consumers. The back-end uses a pipeline model to represent workflows, since this is a common and useful metaphor used by designers and is easy to manipulate compared to other representations like programming scripts. As an Eclipse Rich Client Platform application, CamBAfx's pipelines and functions can be bundled with the software or downloaded post-installation. The user interface contains all the workflow facilities expected by consumers. Using the Eclipse Extension Mechanism, designers are encouraged to customize CamBAfx for their own pipelines. CamBAfx wraps a workflow facility around neuroinformatics software without modification. CamBAfx's design, licensing and Eclipse Branding Mechanism allow it to be used as the user interface for other software, facilitating exchange of innovative computational tools between originating labs. PMID:19826470

  1. 40 CFR 63.494 - Back-end process provisions-residual organic HAP limitations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... technology or control or recovery devices. (1) For styrene butadiene rubber produced by the emulsion process... rubber produced by any process other than a solution or emulsion process, polybutadiene rubber produced...

  2. Corn response to nitrogen management under fully-irrigated vs. water-stressed conditions

    USDA-ARS?s Scientific Manuscript database

    Characterizing corn grain yield response to nitrogen (N) fertilizer rate is critical for maximizing profits, optimizing N use efficiency and minimizing environmental impacts. Although a large data base of yield response to N has been compiled for highly productive soils in the upper Midwest U.S., f...

  3. Becoming Little Scientists: Technologically-Enhanced Project-Based Language Learning

    ERIC Educational Resources Information Center

    Dooly, Melinda; Sadler, Randall

    2016-01-01

    This article outlines research into innovative language teaching practices that make optimal use of technology and Computer-Mediated Communication (CMC) for an integrated approach to Project-Based Learning. It is based on data compiled during a 10- week language project that employed videoconferencing and "machinima" (short video clips…

  4. YAPPA: a Compiler-Based Parallelization Framework for Irregular Applications on MPSoCs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovergine, Silvia; Tumeo, Antonino; Villa, Oreste

    Modern embedded systems include hundreds of cores. Because of the difficulty in providing a fast, coherent memory architecture, these systems usually rely on non-coherent, non-uniform memory architectures with private memories for each core. However, programming these systems poses significant challenges. The developer must extract large amounts of parallelism, while orchestrating communication among cores to optimize application performance. These issues become even more significant with irregular applications, which present data sets that are difficult to partition, unpredictable memory accesses, unbalanced control flow and fine-grained communication. Hand-optimizing every single aspect is hard and time-consuming, and it often does not lead to the expected performance. There is a growing gap between such complex and highly-parallel architectures and the high-level languages used to describe the specification, which were designed for simpler systems and do not consider these new issues. In this paper we introduce YAPPA (Yet Another Parallel Programming Approach), an LLVM-based compilation framework for the automatic parallelization of irregular applications on modern MPSoCs. We start by considering an efficient parallel programming approach for irregular applications on distributed memory systems. We then propose a set of transformations that can reduce the development and optimization effort. The results of our initial prototype confirm the correctness of the proposed approach.

  5. On program restructuring, scheduling, and communication for parallel processor systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polychronopoulos, Constantine D.

    1986-08-01

    This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed focusing on a single goal: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into parallel form and conduct experiments. Two new program restructuring techniques are presented, loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm. The performance of this algorithm is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented.
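
    Of the restructuring techniques named above, loop coalescing is simple enough to show in miniature: a loop nest is flattened into a single loop whose iterations can be handed out one at a time, with the original indices recovered arithmetically. A sketch of the idea, not the dissertation's Fortran formulation:

        # Loop coalescing: flatten a 2-deep nest into one iteration space.
        N, M = 4, 3

        def body(i, j):
            print(f"iteration ({i}, {j})")

        # Original nest:
        #   for i in range(N):
        #       for j in range(M):
        #           body(i, j)

        # Coalesced form -- a single flat loop of N*M iterations, which a
        # scheduler can partition among processors without nesting concerns:
        for k in range(N * M):
            i, j = divmod(k, M)
            body(i, j)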

  6. Domain Specific Language Support for Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadayappan, Ponnuswamy

    Domain-Specific Languages (DSLs) offer an attractive path to Exascale software since they provide expressive power through appropriate abstractions and enable domain-specific optimizations. But the advantages of a DSL compete with the difficulties of implementing a DSL, even for a narrowly defined domain. The DTEC project addresses how a variety of DSLs can be easily implemented to leverage existing compiler analysis and transformation capabilities within the ROSE open source compiler as part of a research program focusing on Exascale challenges. The OSU contributions to the DTEC project are in the area of code generation from high-level DSL descriptions, as well as verification of the automatically-generated code.

  7. Wide-bandwidth high-resolution search for extraterrestrial intelligence

    NASA Technical Reports Server (NTRS)

    Horowitz, Paul

    1993-01-01

    Research accomplished during the third 6-month period is summarized. Research covered the following: dual-horn antenna performance; high electron mobility transistor (HEMT) low-noise amplifiers; downconverters; fast Fourier transform (FFT) array; and backend 'feature recognizer' array.

  8. Development of the Subaru-Mitaka-Okayama-Kiso Archive System

    NASA Astrophysics Data System (ADS)

    Baba, Hajime; Yasuda, Naoki; Ichikawa, Shin-Ichi; Yagi, Masafumi; Iwamoto, Nobuyuki; Takata, Tadafumi; Horaguchi, Toshihiro; Taga, Masatoshi; Watanabe, Masaru; Ozawa, Tomohiko; Hamabe, Masaru

    We have developed the Subaru-Mitaka-Okayama-Kiso-Archive (SMOKA) public science archive system, which provides access to the data of the Subaru Telescope, the 188 cm telescope at Okayama Astrophysical Observatory, and the 105 cm Schmidt telescope at Kiso Observatory/University of Tokyo. SMOKA is the successor of the MOKA3 system. The user can browse the Quick-Look Images, Header Information (HDI) and the ASCII Table Extension (ATE) of each frame from the search result table. A request for data can be submitted in a simple manner. The system is developed with Java Servlets for the back-end, and Java Server Pages (JSP) for content display. The advantage of JSPs is the separation of the front-end presentation from the middle- and back-end tiers, which led to efficient development of the system. The SMOKA homepage is publicly available online.

  9. Transactional interactive multimedia banner

    NASA Astrophysics Data System (ADS)

    Shae, Zon-Yin; Wang, Xiping; von Kaenel, Juerg

    2000-05-01

    Advertising in TV broadcasting has shown that multimedia is a very effective means to present merchandise and attract shoppers. This has been applied to the Web by including animated multimedia banner ads on web pages. However, the issues of coupling interactive browsing, shopping, and secure transactions, e.g. from inside a multimedia banner, have only recently started to be explored. Currently there is an explosively growing number of back-end services available (e.g., business-to-business (B2B) commerce, business-to-consumer (B2C) commerce, and infomercial services) on the Internet. These services are mostly accessible through static HTML web pages at a few specific web portals. In this paper, we investigate the feasibility of using interactive multimedia banners as a pervasive access point for B2C, B2B, and infomercial services. We present a system architecture that involves a layer of middleware agents functioning as the bridge between the interactive multimedia banners and back-end services.

  10. FPGA-based gating and logic for multichannel single photon counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pooser, Raphael C; Earl, Dennis Duncan; Evans, Philip G

    2012-01-01

    We present results characterizing multichannel InGaAs single photon detectors utilizing gated passive quenching circuits (GPQC), self-differencing techniques, and field programmable gate array (FPGA)-based logic for both diode gating and coincidence counting. Utilizing FPGAs for the diode gating frontend and the logic counting backend has the advantage of low cost compared to custom-built logic circuits and current off-the-shelf detector technology. Further, FPGA logic counters have been shown to work well in quantum key distribution (QKD) test beds. Our setup combines multiple independent detector channels in a reconfigurable manner via an FPGA backend and post-processing in order to perform coincidence measurements between any two or more detector channels simultaneously. Using this method, states from a multi-photon polarization entangled source are detected and characterized via coincidence counting on the FPGA. Photon detection events are also processed by the quantum information toolkit for application testing (QITKAT).
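
    Per clock cycle, the coincidence-counting backend reduces to an AND across the selected channels' detection bits. A software model of that logic follows, with random stand-in data since the detector interface is hardware-specific.

        # Count cycles in which two chosen channels fire in the same gate window.
        import random

        CYCLES, CHANNELS, P_CLICK = 100_000, 4, 0.01
        coincidences = 0
        for _ in range(CYCLES):
            hits = [random.random() < P_CLICK for _ in range(CHANNELS)]  # fake clicks
            if hits[0] and hits[1]:        # AND of the selected channel pair
                coincidences += 1

        print(f"channel 0&1 coincidences: {coincidences}")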

  11. Perl Extension to the Bproc Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grunau, Daryl W.

    2004-06-07

    The Beowulf Distributed Process Space (Bproc) software stack is comprised of UNIX/Linux kernel modifications and a support library by which a cluster of machines, each running its own private kernel, can present itself as a unified process space to the user. A Bproc cluster contains a single front-end machine and many back-end nodes which receive and run processes given to them by the front-end. Any process which is migrated to a back-end node is also visible as a ghost process on the front-end, and may be controlled there using traditional UNIX semantics (e.g. ps(1), kill(1), etc.). This software is a Perl extension to the Bproc library which enables the Perl programmer to make direct calls to functions within the Bproc library. See http://www.clustermatic.org, http://bproc.sourceforge.net, and http://www.perl.org.

  12. The versatile GBT astronomical spectrometer (VEGAS): Current status and future plans

    NASA Astrophysics Data System (ADS)

    Prestage, Richard M.; Bloss, Marty; Brandt, Joe; Chen, Hong; Creager, Ray; Demorest, Paul; Ford, John; Jones, Glenn; Kepley, Amanda; Kobelski, Adam; Marganian, Paul; Mello, Melinda; McMahon, David; McCullough, Randy; Ray, Jason; Roshi, D. Anish; Werthimer, Dan; Whitehead, Mark

    2015-07-01

    VEGAS, a multi-beam spectrometer, was built for the Green Bank Telescope (GBT) through a partnership between the National Radio Astronomy Observatory (NRAO) and the University of California at Berkeley. VEGAS is based on a Field Programmable Gate Array (FPGA) frontend and a heterogeneous computing backend comprised of Graphical Processing Units (GPUs) and CPUs. This system provides the processing power to analyze up to 8 dual-polarization or 16 single-polarization inputs at bandwidths of up to 1.25 GHz per input. VEGAS was released for "shared-risk" observing in March 2014 and became the default GBT spectral line backend in August 2014. Some of the early VEGAS observations include the Radio Ammonia Mid-Plane Survey, mapping of HCN/HCO+ in nearby galaxies, and a variety of radio-recombination line and pulsar projects. We present some of the latest VEGAS science highlights.

  13. Model-independent partial wave analysis using a massively-parallel fitting framework

    NASA Astrophysics Data System (ADS)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D+ → h+h+h−. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h+h−) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
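
    The interpolation step can be sketched independently of GooFit: magnitudes and phases are fixed at a few control points, and cubic splines supply the amplitude in between. The knot values below are invented for illustration.

        # Model-independent S-wave: spline-interpolated magnitude and phase.
        import numpy as np
        from scipy.interpolate import CubicSpline

        m2_knots  = np.array([0.4, 0.8, 1.2, 1.6, 2.0])   # m^2(h+h-) control points
        mag_knots = np.array([1.0, 1.8, 2.5, 1.4, 0.6])   # fitted magnitudes
        phi_knots = np.array([0.1, 0.9, 1.7, 2.2, 2.6])   # fitted phases (rad)

        mag = CubicSpline(m2_knots, mag_knots)
        phi = CubicSpline(m2_knots, phi_knots)

        def amplitude(m2):
            """Complex S-wave amplitude A(m^2) between the anchored knots."""
            return mag(m2) * np.exp(1j * phi(m2))

        print(amplitude(1.0))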

  14. Effective Vectorization with OpenMP 4.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Joseph N.; Hernandez, Oscar R.; Lopez, Matthew Graham

    This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how these are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple processors capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher-level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using the SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C/C++ and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.
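
    OpenMP SIMD directives belong in C/C++ or Fortran, but the "single operation on large groups of data" idea can also be seen from Python through NumPy, whose compiled kernels are themselves SIMD-vectorized. This is an analogy, not the paper's OpenMP mechanism.

        # Scalar loop vs. one data-parallel operation over a whole array.
        import numpy as np

        a = np.arange(1_000_000, dtype=np.float32)
        b = np.arange(1_000_000, dtype=np.float32)

        # Scalar form (one multiply per iteration):
        #   c = [a[i] * b[i] for i in range(len(a))]

        c = a * b     # vectorized: the multiply is applied to blocks of data
        print(c[:3])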

  15. Qualification and Reliability for MEMS and IC Packages

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2004-01-01

    Advanced IC electronic packages are moving toward miniaturization through two different approaches, front-end and back-end processes, each with its own challenges. Moving more of the back-end process to the front end, e.g. microelectromechanical systems (MEMS) Wafer Level Packaging (WLP), enables reduced size and cost. Use of direct flip chip die is the most efficient approach if and when the issues of known good die and board/assembly are resolved. Wafer level packaging solves the issue of known good die by enabling package test, but it has its own limitations, e.g., the I/O limitation, additional cost, and reliability. From the back-end approach, system-in-a-package (SIAP/SIP) development is a response to an increasing demand for package and die integration of different functions into one unit to reduce size and cost and improve functionality. MEMS add another challenging dimension to electronic packaging since they include moving mechanical elements. Conventional qualification and reliability approaches need to be modified and expanded in most cases in order to detect new, unknown failures. This paper reviews four standards, already released or being developed, that specifically address the issues of qualification and reliability of assembled packages. Exposures to thermal cycles, monotonic bend testing, mechanical shock and drop are covered in these specifications. Finally, mechanical and thermal cycle qualification data generated for a MEMS accelerometer are presented. The MEMS device was an element of an inertial measurement unit (IMU) qualified for the NASA Mars Exploration Rovers (MERs), Spirit and Opportunity, which are currently roving the Martian surface.

  16. Integrated circuits for volumetric ultrasound imaging with 2-D CMUT arrays.

    PubMed

    Bhuyan, Anshuman; Choe, Jung Woo; Lee, Byung Chul; Wygant, Ira O; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T

    2013-12-01

    Real-time volumetric ultrasound imaging systems require transmit and receive circuitry to generate ultrasound beams and process received echo signals. The complexity of building such a system is high because the front-end electronics need to be very close to the transducer. A large number of elements also need to be interfaced to the back-end system, and image processing of a large dataset could affect the imaging volume rate. In this work, we present a 3-D imaging system using capacitive micromachined ultrasonic transducer (CMUT) technology that addresses many of the challenges in building such a system. We demonstrate two approaches to integrating the transducer and the front-end electronics. The transducer is a 5-MHz CMUT array with an 8 mm × 8 mm aperture size. The aperture consists of 1024 elements (32 × 32) with an element pitch of 250 μm. An integrated circuit (IC) consists of a transmit beamformer and receive circuitry to improve the noise performance of the overall system. The assembly was interfaced with an FPGA and a back-end system (comprising a data acquisition system and a PC). The FPGA provided the digital I/O signals for the IC, and the back-end system was used to process the received RF echo data (from the IC) and reconstruct the volume image using a phased array imaging approach. Imaging experiments were performed using wire and spring targets, a ventricle model and a human prostate. Real-time volumetric images were captured at 5 volumes per second and are presented in this paper.

  17. Prospects for high-precision pulsar timing with the new Effelsberg PSRIX backend

    NASA Astrophysics Data System (ADS)

    Lazarus, P.; Karuppusamy, R.; Graikou, E.; Caballero, R. N.; Champion, D. J.; Lee, K. J.; Verbiest, J. P. W.; Kramer, M.

    2016-05-01

    The PSRIX backend has been the primary pulsar timing instrument of the Effelsberg 100 m radio telescope since early 2011. This new ROACH-based system enables bandwidths up to 500 MHz to be recorded, significantly more than was possible with its predecessor, the Effelsberg-Berkeley Pulsar Processor (EBPP). We review the first four years of PSRIX timing data for 33 pulsars collected as part of the monthly European Pulsar Timing Array (EPTA) observations. We describe the automated data analysis pipeline, COASTGUARD, that we developed to reduce these observations. We also introduce TOASTER, the EPTA timing database, used to store timing results, processing information and observation metadata. Using these new tools, we measure the phase-averaged flux densities at 1.4 GHz of all 33 pulsars. For seven of these pulsars, our flux density measurements are the first values ever reported. For the other 26 pulsars, we compare our flux density measurements with previously published values. By comparing PSRIX data with EBPP data, we find an improvement of ˜2-5 times in signal-to-noise ratio, which translates to an increase of ˜2-5 times in pulse time-of-arrival (TOA) precision. We show that such an improvement in TOA precision will improve the sensitivity to the stochastic gravitational wave background. Finally, we showcase the flexibility of the new PSRIX backend by observing several millisecond-period pulsars (MSPs) at 5 and 9 GHz. Motivated by our detections, we discuss the potential for complementing existing pulsar timing array data sets with MSP monitoring campaigns at these higher frequencies.

  18. Ultra-Wideband Optical Modulation Spectrometer (OMS) Development

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan (Technical Monitor); Tolls, Volker

    2004-01-01

    The optical modulation spectrometer (OMS) is a novel, highly efficient, low-mass backend for heterodyne receiver systems. Current and future heterodyne receiver systems operating at frequencies up to a few THz require broadband spectrometer backends to achieve spectral resolutions of R approximately 10(exp 5) to 10(exp 6) to carry out many important astronomical investigations. Among these are observations of broad emission and absorption lines from extra-galactic objects at high redshifts, spectral line surveys, and observations of planetary atmospheres. Many of these lines are pressure or velocity broadened, with either large half-widths or line wings extending over several GHz. Current backend systems can cover the needed bandwidth only by combining the output of several spectrometers, each with typically up to 1 GHz bandwidth, or by combining several frequency-shifted spectra taken with a single spectrometer. An ultra-wideband optical modulation spectrometer with 10 - 40 GHz bandwidth will enable broadband observations without the limitations and disadvantages of hybrid spectrometers. Spectrometers like the OMS will be important for both ground-based observatories and future space missions like the Single Aperture Far-Infrared Telescope (SAFIR), which might carry IR/submm array heterodyne receiver systems requiring a spectrometer for each array pixel. Small size, low mass and small power consumption are extremely important for space missions. This report summarizes the specifications developed for the OMS and lists already-identified commercial parts. The report starts with a review of the principle of operation, then describes the most important components and their specifications, which were derived from theory, and finishes with a conclusion and outlook.

  19. Parallel Performance of a Combustion Chemistry Simulation

    DOE PAGES

    Skinner, Gregg; Eigenmann, Rudolf

    1995-01-01

    We used a description of a combustion simulation's mathematical and computational methods to develop a version for parallel execution. The result was a reasonable performance improvement on small numbers of processors. We applied several important programming techniques, which we describe, in optimizing the application. This work has implications for programming languages, compiler design, and software engineering.

  20. A Selective Encryption Algorithm Based on AES for Medical Information.

    PubMed

    Oh, Ju-Young; Yang, Dong-Il; Chon, Ki-Hwan

    2010-03-01

    The transmission of medical information is currently a daily routine. Medical information needs efficient, robust and secure encryption modes, but cryptography is primarily a computationally intensive process. Towards this direction, we design a selective encryption scheme for critical data transmission. We expand the advanced encryption standard (AES)-Rijndael with five criteria: the first is the compression of plain data, the second is the variable size of the block, the third is the selectable round, the fourth is the optimization of software implementation and the fifth is the selective function of the whole routine. We have tested our selective encryption scheme in C++, compiled with Code::Blocks using a MinGW GCC compiler. The experimental results showed that our selective encryption scheme achieves a faster execution speed of encryption/decryption. In future work, we intend to use resource optimization to enhance the round operations, such as SubByte/InvSubByte, by exploiting similarities between encryption and decryption. As encryption schemes become more widely used, the concept of hardware and software co-design is also a growing new area of interest.
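
    The selective idea itself is easy to demonstrate: encrypt only the blocks flagged as critical and pass the rest through. The sketch below uses the third-party cryptography package with plain AES-CTR; the criticality flag and record layout are made-up placeholders, not the paper's expanded AES-Rijndael scheme.

        # Selective encryption: only critical blocks are run through AES.
        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        key, nonce = os.urandom(16), os.urandom(16)
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

        def process(block: bytes, critical: bool) -> bytes:
            return enc.update(block) if critical else block

        record = [(b"patient-id-0001 ", True), (b"weather: sunny  ", False)]
        wire = b"".join(process(block, crit) for block, crit in record)
        print(wire)

    A real scheme must also record which blocks were encrypted so the receiver can invert the selection.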

  1. A Selective Encryption Algorithm Based on AES for Medical Information

    PubMed Central

    Oh, Ju-Young; Chon, Ki-Hwan

    2010-01-01

    Objectives The transmission of medical information is currently a daily routine. Medical information needs efficient, robust and secure encryption modes, but cryptography is primarily a computationally intensive process. Towards this direction, we design a selective encryption scheme for critical data transmission. Methods We expand the advanced encryption standard (AES)-Rijndael with five criteria: the first is the compression of plain data, the second is the variable size of the block, the third is the selectable round, the fourth is the optimization of software implementation and the fifth is the selective function of the whole routine. We have tested our selective encryption scheme in C++, compiled with Code::Blocks using a MinGW GCC compiler. Results The experimental results showed that our selective encryption scheme achieves a faster execution speed of encryption/decryption. In future work, we intend to use resource optimization to enhance the round operations, such as SubByte/InvSubByte, by exploiting similarities between encryption and decryption. Conclusions As encryption schemes become more widely used, the concept of hardware and software co-design is also a growing new area of interest. PMID:21818420

  2. Radiation and Scattering Compact Antenna Laboratory (RASCAL) Capabilities Brochure

    DTIC Science & Technology

    2016-09-06

    Array measurements; integrated measurement of subsystems with digital backends (Radiation and Scattering Compact Antenna Laboratory). ...hardware gating to eliminate sources of error within the range itself. Processing is also available for multi-arm spiral antennas for the generation...

  3. Interactive Model-Centric Systems Engineering (IMCSE) Phase Two

    DTIC Science & Technology

    2015-02-28

    Backend implementation... Figure 10: Interactive Epoch-Era Analysis leverages humans-in-the-loop analysis and supporting infrastructure. ...preliminary supporting infrastructure. This will inform the transition strategies, additional case application and prototype user testing.

  4. The Sizing and Optimization Language, (SOL): Computer language for design problems

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language (SOL), a new high-level, special-purpose computer language, was developed to expedite the application of numerical optimization to design problems and to make the process less error-prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  5. 77 FR 72368 - Privacy Act of 1974; Notice of a New System of Records, Enterprise Wide Operations Data Store

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ...-end repository to manage various reporting, pooling, and risk management activities associated with... records is to serve as a central back-end repository to house loan origination and servicing, security...

  6. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  7. HERCULES: A Pattern Driven Code Transformation System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kartsaklis, Christos; Hernandez, Oscar R; Hsu, Chung-Hsing

    2012-01-01

    New parallel computers are emerging, but developing efficient scientific code for them remains difficult. A scientist must manage not only the science-domain complexity but also the performance-optimization complexity. HERCULES is a code transformation system designed to help the scientist to separate the two concerns, which improves code maintenance, and facilitates performance optimization. The system combines three technologies, code patterns, transformation scripts and compiler plugins, to provide the scientist with an environment to quickly implement code transformations that suit his needs. Unlike existing code optimization tools, HERCULES is unique in its focus on user-level accessibility. In this paper we discuss the design, implementation and an initial evaluation of HERCULES.

  8. Teacher Educators Developing Professional Roles: Frictions between Current and Optimal Practices

    ERIC Educational Resources Information Center

    Meeus, Wil; Cools, Wouter; Placklé, Inge

    2018-01-01

    This article reports on a study of the professional learning of Flemish teacher educators. In the first part, an exemplary survey was conducted in order to compile an inventory of the existing types of education initiatives for teacher educators in Flanders. An electronic survey was then conducted in order to identify the professional needs of…

  9. A comparison of two rough mill cutting models

    Treesearch

    Steven Ruddell; Henry Huber; Powsiri Klinkhachorn

    1990-01-01

    A comparison of lumber yield using the Automated Lumber Processing System (ALPS) Cutting Program and the Optimal Furniture Cutting Program (OFCP) was conducted on eight cutting bills. No.1 Common grade hard maple data files were compiled using a board database collected and used by the USDA Forest Service's Forest Products Laboratory to develop standard hardwood...

  10. The implementation of POSTGRES

    NASA Technical Reports Server (NTRS)

    Stonebraker, Michael; Rowe, Lawrence A.; Hirohama, Michael

    1990-01-01

    The design and implementation decisions made for the three-dimensional data manager POSTGRES are discussed. Attention is restricted to the DBMS backend functions. The POSTGRES data model and query language, the rules system, the storage system, the POSTGRES implementation, and the current status and performance are discussed.

  11. Back-end Science Model Integration for Ecological Risk Assessment

    EPA Science Inventory

    The U.S. Environmental Protection Agency (USEPA) relies on a number of ecological risk assessment models that have been developed over 30-plus years of regulating pesticide exposure and risks under Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Endangered Spe...

  12. Back-end Science Model Integration for Ecological Risk Assessment.

    EPA Science Inventory

    The U.S. Environmental Protection Agency (USEPA) relies on a number of ecological risk assessment models that have been developed over 30-plus years of regulating pesticide exposure and risks under Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Endangered Spe...

  13. Cache Locality Optimization for Recursive Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lifflander, Jonathan; Krishnamoorthy, Sriram

    We present an approach to optimize the cache locality for recursive programs by dynamically splicing, i.e., recursively interleaving, the execution of distinct function invocations. By utilizing data effect annotations, we identify concurrency and data reuse opportunities across function invocations and interleave them to reduce reuse distance. We present algorithms that efficiently track effects in recursive programs, detect interference and dependencies, and interleave execution of function invocations using user-level (non-kernel) lightweight threads. To enable multi-core execution, a program is parallelized using a nested fork/join programming model. Our cache optimization strategy is designed to work in the context of a random work stealing scheduler. We present an implementation using the MIT Cilk framework that demonstrates significant improvements in sequential and parallel performance, competitive with a state-of-the-art compile-time optimizer for loop programs and a domain-specific optimizer for stencil programs.
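
    A toy, single-threaded rendition of splicing (the real system tracks data effects and schedules with work stealing): two logically separate recursive passes are interleaved so each sub-range is consumed by both while still cache-resident.

        # Interleave two passes at the leaves of a recursive decomposition.
        def pass_a(chunk): return sum(x * 2 for x in chunk)
        def pass_b(chunk): return sum(x + 1 for x in chunk)

        def spliced(data, lo, hi, cutoff=1024):
            if hi - lo <= cutoff:
                c = data[lo:hi]              # loaded into cache once...
                return pass_a(c), pass_b(c)  # ...and reused immediately by both
            mid = (lo + hi) // 2
            a1, b1 = spliced(data, lo, mid, cutoff)
            a2, b2 = spliced(data, mid, hi, cutoff)
            return a1 + a2, b1 + b2

        print(spliced(list(range(10_000)), 0, 10_000))

    Run separately, each pass would stream the whole array and evict the other's data; splicing shortens the reuse distance to one leaf chunk.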

  14. System, apparatus and methods to implement high-speed network analyzers

    DOEpatents

    Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E

    2015-11-10

    Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such a dispatcher facility can also be used as a cache of policies, wherein if a policy is found, then packet manipulations associated with the policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.

  15. Analysis of fractionation in corn-to-ethanol plants

    NASA Astrophysics Data System (ADS)

    Nelson, Camille

    As the dry grind ethanol industry has grown, research and technology surrounding ethanol production and co-product value have increased, including the use of back-end oil extraction and front-end fractionation. Front-end fractionation is pre-fermentation separation of the corn kernel into 3 fractions: endosperm, bran, and germ. The endosperm fraction enters the existing ethanol plant, and a high-protein DDGS product remains after fermentation. High-value oil is extracted from the germ fraction, leaving corn germ meal and bran as co-products from the other two streams. These 3 co-products have a very different composition than traditional corn DDGS. Installing this technology allows ethanol plants to tap into more diverse markets and ultimately could increase profitability. An ethanol plant model was developed to evaluate both back-end oil extraction and front-end fractionation technology and predict the change in co-products based on the technology installed. The model runs in Microsoft Excel and requires inputs of whole corn composition (proximate analysis), amino acid content, and weight to predict the co-product quantity and quality. User inputs include saccharification and fermentation efficiencies, plant capacity, and plant process specifications including front-end fractionation and back-end oil extraction, if applicable. This model provides plants a way to assess and monitor variability in co-product composition due to the variation in whole corn composition. Additionally, the co-products predicted in this model are entered into the US Pork Center of Excellence National Swine Nutrition Guide feed formulation software. This allows the plant user and animal nutritionists to evaluate the value of new co-products in existing animal diets.

  16. An FPGA-Based High-Speed Error Resilient Data Aggregation and Control for High Energy Physics Experiment

    NASA Astrophysics Data System (ADS)

    Mandal, Swagata; Saini, Jogender; Zabołotny, Wojciech M.; Sau, Suman; Chakrabarti, Amlan; Chattopadhyay, Subhasis

    2017-03-01

    Due to the dramatic increase of data volume in modern high energy physics (HEP) experiments, a robust high-speed data acquisition (DAQ) system is very much needed to gather the data generated during different nuclear interactions. As the DAQ works in a harsh radiation environment, there is a fair chance of data corruption due to various energetic particles such as alpha and beta particles or neutrons. Hence, a major challenge in the development of DAQ in the HEP experiment is to establish an error-resilient communication system between front-end sensors or detectors and back-end data processing computing nodes. Here, we have implemented the DAQ using a field-programmable gate array (FPGA) due to some of its inherent advantages over the application-specific integrated circuit. A novel orthogonal concatenated code and cyclic redundancy check (CRC) have been used to mitigate the effects of data corruption in the user data. Scrubbing with a 32-bit CRC has been used against errors in the configuration memory of the FPGA. Data from front-end sensors reach the back-end processing nodes through multiple stages that may add an uncertain amount of delay to the different data packets. We have also proposed a novel memory management algorithm that helps to process the data at the back-end computing nodes, removing the added path delays. To the best of our knowledge, the proposed FPGA-based DAQ utilizing an optical link with channel coding and efficient memory management modules can be considered the first of its kind. Performance estimation of the implemented DAQ system is done based on resource utilization, bit error rate, efficiency, and robustness to radiation.
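
    The CRC protection described above can be sketched in a few lines: the sender appends a 32-bit CRC and the receiver recomputes it to detect bit flips. Framing here is simplified, and the paper's orthogonal concatenated code is omitted.

        # Append and verify a CRC-32 trailer on a data packet.
        import struct
        import zlib

        def frame(payload: bytes) -> bytes:
            return payload + struct.pack(">I", zlib.crc32(payload))

        def check(packet: bytes) -> bool:
            payload, crc = packet[:-4], struct.unpack(">I", packet[-4:])[0]
            return zlib.crc32(payload) == crc

        pkt = frame(b"front-end hit data")
        print(check(pkt))                              # True
        flipped = bytes([pkt[0] ^ 0x01]) + pkt[1:]
        print(check(flipped))                          # False: bit flip detected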

  17. OpenMP-accelerated SWAT simulation using Intel C and FORTRAN compilers: Development and benchmark

    NASA Astrophysics Data System (ADS)

    Ki, Seo Jin; Sugimura, Tak; Kim, Albert S.

    2015-02-01

    We developed a practical method to accelerate execution of the Soil and Water Assessment Tool (SWAT) using open (free) computational resources. The SWAT source code (rev 622) was recompiled using a non-commercial Intel FORTRAN compiler on the Ubuntu 12.04 LTS Linux platform, and newly named iOMP-SWAT in this study. The GNU utilities make, gprof, and diff were used to develop the iOMP-SWAT package, profile memory usage, and check the identicalness of parallel and serial simulations. Among 302 SWAT subroutines, the slowest routines were identified using GNU gprof, and later modified using the Open Multi-Processing (OpenMP) library in an 8-core shared-memory system. In addition, a C wrapping function was used to rapidly set large arrays to zero by cross-compiling with the original SWAT FORTRAN package. A universal speedup ratio of 2.3 was achieved using input data sets with a large number of hydrological response units. As we specifically focus on acceleration of a single SWAT run, the use of iOMP-SWAT for parameter calibrations will significantly improve the performance of SWAT optimization.

  18. Database Organisation in a Web-Enabled Free and Open-Source Software (foss) Environment for Spatio-Temporal Landslide Modelling

    NASA Astrophysics Data System (ADS)

    Das, I.; Oberai, K.; Sarathi Roy, P.

    2012-07-01

    Landslides exhibit themselves in different mass movement processes and are considered among the most complex natural hazards occurring on the earth's surface. Making landslide databases available online via the WWW (World Wide Web) promotes the spreading and reaching out of landslide information to all stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios with the help of available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free and Open Source Software (FOSS). FOSS reduces the cost of the project drastically, as proprietary software is very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway road corridor in the Indian Himalayas. All the geo-environmental datasets along with the landslide susceptibility map were served through a WebGIS client interface. The open-source University of Minnesota (UMN) MapServer was used as the GIS server software for developing the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front-end application, and PostgreSQL with the PostGIS extension serves as the backend for the web-enabled landslide spatio-temporal databases. This dynamic virtual visualization process through a web platform brings understanding of the landslides and the resulting damage closer to the affected people and the user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open source or proprietary GIS software.

  19. 40 CFR 63.495 - Back-end process provisions-procedures to determine compliance with residual organic HAP...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...

  20. 40 CFR 63.495 - Back-end process provisions-procedures to determine compliance with residual organic HAP...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...

  1. 40 CFR 63.495 - Back-end process provisions-procedures to determine compliance with residual organic HAP...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...

  2. 40 CFR 63.495 - Back-end process provisions-procedures to determine compliance with residual organic HAP...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...

  3. A Summary of the Naval Postgraduate School Research Program

    DTIC Science & Technology

    1989-08-30

    Contents include: Fundamental Theory for Automatically Combining Changes to Software Systems; Database-System Approach to... Software Engineering Environments (SEEs); Multilevel Database Security; Temporal... Database Management and Real-Time Database Computers; The Multi-lingual, Multi-Model, Multi-Backend Database

  4. Global EOS: exploring the 300-ms-latency region

    NASA Astrophysics Data System (ADS)

    Mascetti, L.; Jericho, D.; Hsu, C.-Y.

    2017-10-01

    EOS, the CERN open-source distributed disk storage system, provides a high-performance storage solution for HEP analysis and the back-end for various workflows. Recently EOS became the back-end of CERNBox, the cloud synchronisation service for CERN users. EOS can take advantage of wide-area distributed installations: for the last few years CERN EOS has used a common deployment across two computer centres (Geneva-Meyrin and Budapest-Wigner) about 1,000 km apart (∼20 ms latency) with about 200 PB of disk (JBOD). In late 2015, the CERN-IT Storage group and AARNET (Australia) set up a challenging R&D project: a single EOS instance between CERN and AARNET with more than 300 ms of latency (16,500 km apart). This paper reports on the successful deployment and operation of a distributed storage system between Europe (Geneva, Budapest), Australia (Melbourne) and, later, Asia (ASGC Taipei), allowing different types of data placement and data access across these four sites.

  5. New instrumentation for the 1.2m Southern Millimeter Wave Telescope (SMWT)

    NASA Astrophysics Data System (ADS)

    Vasquez, P.; Astudillo, P.; Rodriguez, R.; Monasterio, D.; Reyes, N.; Finger, R.; Mena, F. P.; Bronfman, L.

    2016-07-01

    Here we describe the status of the upgrade program being carried out to modernize the 1.2 m Southern Millimeter Wave Telescope. The telescope was built in the early 1980s to complete the first Galactic survey of molecular clouds in the CO(1-0) line. After fruitful operation at CTIO, the telescope was relocated to the Universidad de Chile's Cerro Calán Observatory. The new site has an altitude of 850 m and allows observations in the millimeter range throughout the year. The telescope was upgraded with a new building to house operations, a new control system, and new receiver and back-end technologies. The new front-end is a sideband-separating receiver based on a HEMT amplifier and sub-harmonic mixers; it is cooled with liquid nitrogen to reduce its noise temperature. The back-end is a digital spectrometer based on the Reconfigurable Open Architecture Computing Hardware (ROACH). The new spectrometer includes IF hybridization capabilities to avoid analog hybrids and therefore improve the sideband rejection ratio of the receiver.

  6. Scalable global grid catalogue for Run3 and beyond

    NASA Astrophysics Data System (ADS)

    Martinez Pedreira, M.; Grigoras, C.; ALICE Collaboration

    2017-10-01

    The AliEn (ALICE Environment) file catalogue is a global unique namespace providing a mapping between a UNIX-like logical name structure and the corresponding physical files distributed over 80 storage elements worldwide. Powerful search tools and hierarchical metadata information are integral parts of the system and are used by Grid jobs as well as local users to store and access all files on the Grid storage elements. The catalogue has been in production since 2005 and over the past 11 years has grown to more than 2 billion logical file names. The backend is a set of distributed relational databases, ensuring smooth growth and fast access. Due to the anticipated fast future growth, we are looking for ways to enhance performance and scalability by simplifying the catalogue schema while keeping the functionality intact. We investigated different backend solutions, such as distributed key-value stores, as a replacement for the relational database. This contribution covers the architectural changes in the system, together with the technology evaluation, benchmark results and conclusions.
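
    As a toy illustration of what such a catalogue provides (a logical UNIX-like namespace mapped to physical replicas, plus metadata search), consider the following sketch; it is not AliEn's actual schema, and all names and values are invented.

```python
# Toy model of a file catalogue: logical file names (LFNs) map to
# physical replicas plus metadata. Not AliEn's real schema.
catalogue = {
    "/alice/data/2016/run244918/file001.root": {
        "replicas": ["root://se1.example.org//f001", "root://se2.example.org//f001"],
        "meta": {"run": 244918, "size": 2_147_483_648},
    },
}

def find(prefix, **meta):
    """Return logical names under `prefix` whose metadata match `meta`."""
    return [
        lfn for lfn, entry in catalogue.items()
        if lfn.startswith(prefix)
        and all(entry["meta"].get(k) == v for k, v in meta.items())
    ]

print(find("/alice/data/2016", run=244918))
```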

  7. Equalizer design techniques for dispersive cables with application to the SPS wideband kicker

    NASA Astrophysics Data System (ADS)

    Platt, Jason; Hofle, Wolfgang; Pollock, Kristin; Fox, John

    2017-10-01

    A wide-band vertical instability feedback control system in development at CERN requires 1-1.5 GHz of bandwidth for the entire processing chain, from the beam pickups through the feedback signal digital processing to the back-end power amplifiers and kicker structures. Dispersive effects in cables, amplifiers, pickup and kicker elements can distort the time-domain signal as it proceeds through the processing system, and deviations from linear phase response reduce the allowable bandwidth of the closed-loop feedback system. We have developed an analog equalizer circuit that compensates for these dispersive effects. Here we present a design technique for constructing an analog equalizer that incorporates the effect of parasitic circuit elements, increasing the fidelity of the implemented design. Finally, we show results from the measurement of an assembled back-end equalizer that corrects for dispersive elements in the cables over a bandwidth of 10-1000 MHz.

  8. Taipower's radioactive waste management program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, B.C.C.

    1996-09-01

    Nuclear safety and radioactive waste management are the two major concerns of nuclear power in Taiwan. Recognizing that this is an issue imbued with political and socio-economic concerns, Taipower has established an integrated nuclear back-end management system and its associated financing mechanism. For LLW, the Orchid Island storage facility will play an important role in bridging the gap between on-site storage and final disposal of LLW. Also, on-site interim storage of spent fuel for 40 years or longer will provide Taipower with ample time and flexibility to adopt a suitable alternative of direct disposal or reprocessing. In other words, by exercising the interim storage option, Taipower will be in a comfortable position to safely and permanently dispose of radwaste without unduly forgoing the opportunity to adopt better technologies or alternatives. Furthermore, Taipower will spare no effort to communicate with the general public and make its nuclear back-end management activities accountable to them.

  9. A Flexible Monitoring Infrastructure for the Simulation Requests

    NASA Astrophysics Data System (ADS)

    Spinoso, V.; Missiato, M.

    2014-06-01

    Running and monitoring simulations usually involves several different aspects of the entire workflow: the configuration of the job, site issues, software deployment at the site, the file catalogue, and the transfers of the simulated data. In addition, the final product of the simulation is often the result of several sequential steps. This project tries a different approach to monitoring the simulation requests. All the necessary data are collected from the central services which handle the submission of the requests and the data management, and are stored by a backend into a NoSQL-based data cache; these data can be queried through a Web Service interface, which returns JSON responses and allows users, sites, and physics groups to easily create their own web frontends, aggregating only the needed information. As an example, it is shown how the CMS services (ReqMgr, DAS/DBS, PhEDEx) can be monitored using a central backend and multiple customized cross-language frontends.
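
    A minimal sketch of this backend/frontend split, using Flask for the JSON web service; the request identifiers and status fields are invented stand-ins for the data aggregated from the central services.

```python
# Sketch of the pattern: a backend keeps aggregated request status in a
# cache; a web service exposes it as JSON so that custom frontends can
# aggregate only what they need. Fields are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the NoSQL data cache populated from the central services.
CACHE = {
    "req-001": {"step": "GEN-SIM", "site": "T2_IT_Bari", "done": 0.42},
    "req-002": {"step": "RECO", "site": "T1_DE_KIT", "done": 0.97},
}

@app.route("/requests/<req_id>")
def request_status(req_id):
    doc = CACHE.get(req_id)
    return (jsonify(doc), 200) if doc else (jsonify(error="unknown"), 404)

if __name__ == "__main__":
    app.run(port=8080)
```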

  10. Development of Advanced Nuclide Separation and Recovery Methods using Ion-Exchange Techniques in Nuclear Backend

    NASA Astrophysics Data System (ADS)

    Miura, Hitoshi

    The development of compact separation and recovery methods using selective ion-exchange techniques is very important for reprocessing and for the treatment of high-level liquid wastes (HLLWs) in the nuclear back-end field. Selective nuclide separation techniques are effective for the volume reduction of wastes and the utilization of valuable nuclides, and are expected to support the construction of an advanced nuclear fuel cycle system and the rationalization of waste treatment. In order to accomplish selective nuclide separation, the design and synthesis of novel adsorbents are essential for the development of compact and precise separation processes. The present paper deals with the preparation of highly functional and selective hybrid microcapsules enclosing nano-adsorbents in alginate gel polymer matrices by sol-gel methods, their characterization, and the clarification of their selective adsorption properties by batch and column methods. The selective separation of Cs, Pd and Re in real HLLW was further accomplished by using the novel microcapsules, and an advanced nuclide separation system was proposed based on the combination of selective processes using microcapsules.

  11. Data Mining as a Service (DMaaS)

    NASA Astrophysics Data System (ADS)

    Tejedor, E.; Piparo, D.; Mascetti, L.; Moscicki, J.; Lamanna, M.; Mato, P.

    2016-10-01

    Data Mining as a Service (DMaaS) is a software and computing infrastructure that allows interactive mining of scientific data in the cloud. It allows users to run advanced data analyses by leveraging the widely adopted Jupyter notebook interface. Furthermore, the system makes it easier to share results and scientific code, access scientific software, produce tutorials and demonstrations as well as preserve the analyses of scientists. This paper describes how a first pilot of the DMaaS service is being deployed at CERN, starting from the notebook interface that has been fully integrated with the ROOT analysis framework, in order to provide all the tools for scientists to run their analyses. Additionally, we characterise the service backend, which combines a set of IT services such as user authentication, virtual computing infrastructure, mass storage, file synchronisation, development portals or batch systems. The added value acquired by the combination of the aforementioned categories of services is discussed, focusing on the opportunities offered by the CERNBox synchronisation service and its massive storage backend, EOS.

  12. Milestones of mathematical model for business process management related to cost estimate documentation in petroleum industry

    NASA Astrophysics Data System (ADS)

    Khamidullin, R. I.

    2018-05-01

    The paper is devoted to milestones of an optimal mathematical model for a business process related to cost estimate documentation compiled during the construction and reconstruction of oil and gas facilities. It describes the study and analysis of fundamental issues in the petroleum industry caused by economic instability and the deterioration of business strategy. Business process management is presented as business process modeling aimed at improving the studied business process, namely the main optimization criteria and recommendations for improving the above-mentioned business model.

  13. NASA Electronic Library System (NELS) optimization

    NASA Technical Reports Server (NTRS)

    Pribyl, William L.

    1993-01-01

    This is a compilation of NELS (NASA Electronic Library System) Optimization progress/problem, interim, and final reports for all phases. The NELS database was examined, particularly with respect to memory, disk contention, and CPU usage, to discover bottlenecks. Methods to increase the speed of the NELS code were investigated. The tasks included restructuring the existing code to interact with other components more effectively. Error-reporting code was added to help detect and remove bugs in NELS. Report-writing tools were recommended for integration with the ASV3 system. The Oracle database management system and tools were to be installed on a Sun workstation, intended for demonstration purposes.

  14. Making extreme computations possible with virtual machines

    NASA Astrophysics Data System (ADS)

    Reuter, J.; Chokoufe Nejad, B.; Ohl, T.

    2016-10-01

    State-of-the-art algorithms generate scattering amplitudes for high-energy physics at leading order for high-multiplicity processes as compiled code (in Fortran, C or C++). For complicated processes the size of these libraries can become tremendous (many GiB). We show that amplitudes can be translated to byte-code instructions, which reduces the size by an order of magnitude. The byte-code is interpreted by a virtual machine with runtimes comparable to compiled code and better scaling with additional legs. We study the properties of this algorithm as an extension of the Optimizing Matrix Element Generator (O'Mega). The byte-code matrix elements are available as an alternative input for the event generator WHIZARD. The byte-code interpreter can be implemented very compactly, which will help with a future implementation on massively parallel GPUs.
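
    The principle can be illustrated with a toy stack machine: an expression is encoded as compact byte-code and evaluated by an interpreter loop. O'Mega's real instruction set for amplitudes is far richer; this sketch only shows the mechanism.

```python
# Toy byte-code interpreter: encode an arithmetic expression as compact
# byte-code and evaluate it on a small stack machine.
PUSH, ADD, MUL = 0, 1, 2

def run(code, consts):
    stack, pc = [], 0
    while pc < len(code):
        op = code[pc]; pc += 1
        if op == PUSH:
            stack.append(consts[code[pc]]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 encoded as byte-code: push 2, push 3, add, push 4, mul
program = [PUSH, 0, PUSH, 1, ADD, PUSH, 2, MUL]
print(run(program, consts=[2.0, 3.0, 4.0]))  # 20.0
```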

  15. Spaceport Command and Control System Software Development

    NASA Technical Reports Server (NTRS)

    Mahlin, Jonathan Nicholas

    2017-01-01

    There is an immense challenge in organizing personnel across a large agency such as NASA, or even over a subset of that, like a center's Engineering directorate. Workforce inefficiencies and challenges are bound to grow over time without oversight and management. It is also not always possible to hire new employees to fill workforce gaps, so available resources must be utilized more efficiently. The goal of this internship was to develop software that improves organizational efficiency by aiding managers and making employee information viewable and editable in an intuitive manner. This semester I created an application for managers that aids in optimizing the allocation of employee resources for a single division, with the possibility of scaling upwards. My duties this semester consisted of developing the frontend and backend software for this task. The application provides user-friendly information displays and documentation of the workforce, allowing NASA to diligently track the status and skills of its workforce. This tool should be able to show whether current employees are being effectively utilized and whether new hires are necessary to fill skill gaps.

  16. Efficient data management tools for the heterogeneous big data warehouse

    NASA Astrophysics Data System (ADS)

    Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.

    2016-09-01

    Traditional RDBMSs are well suited to normalized data structures. The RDBMS has served well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields like social networks, the oil-gas industry, experiments at the Large Hadron Collider, etc. Several challenges have been raised recently concerning the scalability of data-warehouse-like workloads against transactional schemas, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. The paper evaluates new database technologies like HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied and large amounts of data. The evaluation considers the performance, throughput and scalability of these technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, as well as describing the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the NoSQL database organization, and how it could be useful for further data analytics.
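
    A minimal sketch of the migration pattern described above, assuming a local MongoDB instance and the pymongo driver; the table, fields and data are invented.

```python
# Sketch of the back-end migration pattern: rows from a relational
# source become documents in a NoSQL store. Assumes a MongoDB server
# on localhost; schema and data are invented for illustration.
import sqlite3
from pymongo import MongoClient

# Toy relational source (in practice an existing warehouse database).
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE events (id INTEGER, ts TEXT, payload TEXT)")
src.execute("INSERT INTO events VALUES (1, '2016-01-01', 'x')")
src.row_factory = sqlite3.Row

# Flatten each row into a document.
docs = [{k: row[k] for k in row.keys()}
        for row in src.execute("SELECT id, ts, payload FROM events")]

# Bulk-load the documents into the NoSQL store.
dst = MongoClient("mongodb://localhost:27017")["warehouse"]["events"]
dst.insert_many(docs)
```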

  17. Multiparadigm Design Environments

    DTIC Science & Technology

    1992-01-01

    following results: 1. New methods for programming in terms of conceptual models 2. Design of object-oriented languages 3. Compiler optimization and... experimented with object-based methods for programming directly in terms of conceptual models, object-oriented language design, computer program... expect these results to have a strong influence on future work in conceptual programming.

  18. General Algebraic Modeling System Tutorial | High-Performance Computing |

    Science.gov Websites

    Here's a basic tutorial for modeling optimization problems with the General Algebraic Modeling System (GAMS). Overview: the GAMS (General Algebraic Modeling System) package is essentially a compiler for a... The tutorial's example models power generation from two different fuels; the goal is to minimize the cost for one of the fuels while...
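
    As a rough illustration of the kind of problem the tutorial models, here is a toy two-fuel cost-minimization written as a linear program in Python (SciPy) rather than GAMS; the prices, capacities and demand figure are invented.

```python
# Toy two-fuel dispatch problem solved as a linear program. All numbers
# are invented for illustration; this is an analogue, not the GAMS tutorial.
from scipy.optimize import linprog

cost = [30.0, 45.0]          # $/MWh for fuel 1 and fuel 2 (assumed)
# Meet at least 100 MWh of demand: x1 + x2 >= 100  ->  -x1 - x2 <= -100
A_ub, b_ub = [[-1.0, -1.0]], [-100.0]
bounds = [(0, 80), (0, 80)]  # per-fuel capacity limits (assumed)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)        # optimal dispatch and total cost
```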

  19. Quality standards for predialysis education: results from a consensus conference

    PubMed Central

    Isnard Bagnis, Corinne; Crepaldi, Carlo; Dean, Jessica; Goovaerts, Tony; Melander, Stefan; Nilsson, Eva-Lena; Prieto-Velasco, Mario; Trujillo, Carmen; Zambon, Roberto; Mooney, Andrew

    2015-01-01

    This position statement was compiled following an expert meeting in March 2013, Zurich, Switzerland. Attendees were invited from a spread of European renal units with established and respected renal replacement therapy option education programmes. Discussions centred around optimal ways of creating an education team, setting realistic and meaningful objectives for patient education, and assessing the quality of education delivered. PMID:24957808

  20. 78 FR 4211 - Setting and Adjusting Patent Fees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-18

    ... United States economy depends on high quality and timely patents to protect new ideas and investments for... described in Part III of this final rule, namely, fostering innovation, facilitating effective... recovering revenue from back-end fees, the final fee schedule continues to foster innovation and ease access...

  1. Targeting multiple heterogeneous hardware platforms with OpenCL

    NASA Astrophysics Data System (ADS)

    Fox, Paul A.; Kozacik, Stephen T.; Humphrey, John R.; Paolini, Aaron; Kuller, Aryeh; Kelmelis, Eric J.

    2014-06-01

    The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations have substantial implementation differences. The abstractions provided by the OpenCL API are often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations often do not take advantage of potential performance gains from certain features due to hardware limitations and other factors. These factors make it challenging to produce code that is portable in practice, resulting in much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort offsets the principal advantage of OpenCL: portability. The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted to perform well across a wide range of hardware platforms. To this end, we explore some general practices for producing performant code that are effective across platforms. Additionally, we explore some ways of modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics. The minimum requirement for portability implies avoiding the use of OpenCL features that are optional, not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down to explicit vector operations. Static optimizations and branch elimination in device code help the platform compiler to effectively optimize programs. Modularization of some code is important to allow operations to be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in hardware-specific optimizations as necessary.
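
    One of the practices mentioned above, combining the C preprocessor with the OpenCL runtime's JIT compilation to specialize kernels per platform, can be sketched as follows; the example assumes the pyopencl bindings, and the injected constant is an invented tuning knob.

```python
# Sketch: specialise a kernel per platform by injecting preprocessor
# definitions at JIT-compile time via OpenCL build options.
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void scale(__global float *x) {
    int i = get_global_id(0);
    x[i] *= FACTOR;                /* FACTOR injected at build time */
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
# Hardware-specific constants become -D options at JIT-compile time.
prg = cl.Program(ctx, KERNEL).build(options=["-DFACTOR=2.0f"])

x = np.arange(8, dtype=np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                hostbuf=x)
prg.scale(queue, x.shape, None, buf)
cl.enqueue_copy(queue, x, buf)
print(x)   # doubled in place
```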

  2. A high performance data parallel tensor contraction framework: Application to coupled electro-mechanics

    NASA Astrophysics Data System (ADS)

    Poya, Roman; Gil, Antonio J.; Ortigosa, Rogelio

    2017-07-01

    The paper presents aspects of implementation of a new high performance tensor contraction framework for the numerical analysis of coupled and multi-physics problems on streaming architectures. In addition to explicit SIMD instructions and smart expression templates, the framework introduces domain specific constructs for the tensor cross product and its associated algebra recently rediscovered by Bonet et al. (2015, 2016) in the context of solid mechanics. The two key ingredients of the presented expression template engine are as follows. First, the capability to mathematically transform complex chains of operations to simpler equivalent expressions, while potentially avoiding routes with higher levels of computational complexity and, second, to perform a compile time depth-first or breadth-first search to find the optimal contraction indices of a large tensor network in order to minimise the number of floating point operations. For optimisations of tensor contraction such as loop transformation, loop fusion and data locality optimisations, the framework relies heavily on compile time technologies rather than source-to-source translation or JIT techniques. Every aspect of the framework is examined through relevant performance benchmarks, including the impact of data parallelism on the performance of isomorphic and nonisomorphic tensor products, the FLOP and memory I/O optimality in the evaluation of tensor networks, the compilation cost and memory footprint of the framework and the performance of tensor cross product kernels. The framework is then applied to finite element analysis of coupled electro-mechanical problems to assess the speed-ups achieved in kernel-based numerical integration of complex electroelastic energy functionals. In this context, domain-aware expression templates combined with SIMD instructions are shown to provide a significant speed-up over the classical low-level style programming techniques.
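
    The contraction-order search described above is performed at compile time in C++; an analogous runtime search can be demonstrated with NumPy's einsum_path, which likewise picks an index ordering that minimizes floating point operations.

```python
# Contraction-order search for a small tensor network: einsum_path
# reports the FLOP cost of a naive left-to-right contraction versus
# the optimised ordering it found.
import numpy as np

A = np.random.rand(10, 40)
B = np.random.rand(40, 200)
C = np.random.rand(200, 5)

path, info = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='optimal')
print(info)                # FLOP counts: naive vs. optimised ordering
result = np.einsum('ij,jk,kl->il', A, B, C, optimize=path)
```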

  3. Wide-bandwidth high-resolution search for extraterrestrial intelligence

    NASA Technical Reports Server (NTRS)

    Horowitz, Paul

    1992-01-01

    This interim report summarizes the research accomplished during the initial 6-month period of the grant. Activities associated with antenna configurations, the channelizing downconverter, the fast Fourier transform array, the DSP (digital signal processing) array, and the backend and UNIX workstation are discussed. Publications submitted during the reporting period are listed.

  4. Space Images for NASA/JPL

    NASA Technical Reports Server (NTRS)

    Boggs, Karen; Gutheinz, Sandy C.; Watanabe, Susan M.; Oks, Boris; Arca, Jeremy M.; Stanboli, Alice; Peez, Martin; Whatmore, Rebecca; Kang, Minliang; Espinoza, Luis A.

    2010-01-01

    Space Images for NASA/JPL is an Apple iPhone application that allows the general public to access featured images from the Jet Propulsion Laboratory (JPL). A back-end infrastructure stores, tracks, and retrieves space images from the JPL Photojournal Web server, and catalogs the information into a streamlined rating infrastructure.

  5. 40 CFR 63.494 - Back-end process provisions-residual organic HAP and emission limitations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... producing butyl rubber, epichlorohydrin elastomer, neoprene, and nitrile butadiene rubber shall not exceed... processes at affected sources producing butyl rubber, epichlorohydrin elastomer, neoprene, and nitrile... submitted in accordance with § 63.499(f)(1). (i) For butyl rubber, the organic HAP emission limitation shall...

  6. A Flexible and Configurable Architecture for Automatic Control Remote Laboratories

    ERIC Educational Resources Information Center

    Kalúz, Martin; García-Zubía, Javier; Fikar, Miroslav; Cirka, Luboš

    2015-01-01

    In this paper, we propose a novel approach in hardware and software architecture design for implementation of remote laboratories for automatic control. In our contribution, we show the solution with flexible connectivity at back-end, providing features of multipurpose usage with different types of experimental devices, and fully configurable…

  7. Netmeld v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BERG, MICHAEL; RILEY, MARSHALL

    System assessments typically yield large quantities of data from disparate sources for an analyst to scrutinize for issues. Netmeld is used to parse input from different file formats, store the data in a common format, allow users to easily query it, and enable analysts to tie different analysis tools together using a common back-end.
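
    The pattern Netmeld embodies (format-specific parsers feeding one common, queryable store) can be sketched as follows; the schema and inputs are invented for illustration and are not Netmeld's actual data model.

```python
# Sketch: parsers for disparate formats feed one common schema that
# analysts (and other tools) can query through a single back-end.
import csv, io, json, sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE hosts (ip TEXT, hostname TEXT, source TEXT)")

def load_csv(text):
    for row in csv.DictReader(io.StringIO(text)):
        db.execute("INSERT INTO hosts VALUES (?, ?, ?)",
                   (row["ip"], row["hostname"], "csv"))

def load_json(text):
    for h in json.loads(text)["hosts"]:
        db.execute("INSERT INTO hosts VALUES (?, ?, ?)",
                   (h["ip"], h["name"], "json"))

load_csv("ip,hostname\n10.0.0.1,gw\n")
load_json('{"hosts": [{"ip": "10.0.0.2", "name": "dns1"}]}')

# Every analysis tool now queries the same back-end:
print(db.execute("SELECT * FROM hosts").fetchall())
```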

  8. VizieR Online Data Catalog: Molecular clumps in W51 giant molecular cloud (Parsons+, 2012)

    NASA Astrophysics Data System (ADS)

    Parsons, H.; Thompson, M. A.; Clark, J. S.; Chrysostomou, A.

    2013-04-01

    The W51 GMC was mapped using the Heterodyne Array Receiver Programme (HARP) receiver with the back-end digital autocorrelator spectrometer Auto-Correlation Spectral Imaging System (ACSIS) on the James Clerk Maxwell Telescope (JCMT). Data were taken in 2008 May. (2 data files).

  9. Pipeline Optimization Program (PLOP)

    DTIC Science & Technology

    2006-08-01

    the framework of the Dredging Operations Decision Support System (DODSS, https://dodss.wes.army.mil/wiki/0). PLOP compiles industry standards and... efficiency point (BEP). In the interest of an acceptable wear rate on the pump, industrial standards dictate that the flow rate remain within a percentage of the flow rate corresponding to the BEP. [Figure 2: pump class as a function of...] Pump Acceptability Rules. The facts for pump performance, industrial standards and pipeline and

  10. The Hermod Behavioral Synthesis System

    DTIC Science & Technology

    1988-06-08

    [Figure: Hermod system structure, showing libraries for technology-independent transformations, parsing and optimization, hardware generation, datapath, and control.] ... Proc. 22nd Design Automation Conference, ACM/IEEE, June 1985, pp. 475-481. [7] G. De Micheli, "Synthesis of Control Systems", in Design Systems for VLSI Circuits: Logic Synthesis and Silicon Compilation, G. De Micheli, A. Sangiovanni-Vincentelli, and P. Antognetti (editors), Martinus Nijhoff

  11. Machine-learned and codified synthesis parameters of oxide materials

    NASA Astrophysics Data System (ADS)

    Kim, Edward; Huang, Kevin; Tomala, Alex; Matthews, Sara; Strubell, Emma; Saunders, Adam; McCallum, Andrew; Olivetti, Elsa

    2017-09-01

    Predictive materials design has rapidly accelerated in recent years with the advent of large-scale resources, such as materials structure and property databases generated by ab initio computations. In the absence of analogous ab initio frameworks for materials synthesis, high-throughput and machine learning techniques have recently been harnessed to generate synthesis strategies for select materials of interest. Still, a community-accessible, autonomously-compiled synthesis planning resource which spans across materials systems has not yet been developed. In this work, we present a collection of aggregated synthesis parameters computed using the text contained within over 640,000 journal articles using state-of-the-art natural language processing and machine learning techniques. We provide a dataset of synthesis parameters, compiled autonomously across 30 different oxide systems, in a format optimized for planning novel syntheses of materials.

  12. RTE: A UNIX library with on-line documentation and sample programs for microwave radiative transfer calculations

    NASA Astrophysics Data System (ADS)

    Reynolds, J. C.; Schroeder, J. A.

    1993-03-01

    The FORTRAN library that the NOAA Wave Propagation Laboratory (WPL) developed to perform radiative transfer calculations for an upward-looking microwave radiometer is described. Although the theory and algorithms have been used for many years in WPL radiometer research, the Radiative Transfer Equation (RTE) software has combined them into a toolbox that is portable, readable, application independent, and easy to update. RTE has been optimized for the UNIX environment. However, the FORTRAN source code can be compiled on any platform that provides a Standard FORTRAN 77 compiler. RTE allows a user to do cloud modeling, calibrate radiometers, simulate hypothetical radiometer systems, develop retrieval techniques, and compute weighting functions. The radiative transfer model used is valid for channel frequencies below 1000 GHz in clear conditions and for frequencies below 100 GHz when clouds are present.

  13. Extending Automatic Parallelization to Optimize High-Level Abstractions for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D J; Willcock, J J

    2008-12-12

    Automatic introduction of OpenMP for sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has focused only on C and Fortran applications operating on primitive data types. C++ applications using high-level abstractions, such as STL containers and complex user-defined types, are largely ignored due to the lack of research compilers that can readily recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we automatically parallelize C++ applications using ROSE, a multiple-language source-to-source compiler infrastructure which preserves the high-level abstractions and gives us access to their semantics. Several representative parallelization candidate kernels are used to explore semantics-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. These kernels include an array-based computation loop, a loop with task-level parallelism, and a domain-specific tree traversal. Our work extends the applicability of automatic parallelization to modern applications using high-level abstractions and exposes more opportunities to take advantage of multicore processors.

  14. Optimizing a mobile robot control system using GPU acceleration

    NASA Astrophysics Data System (ADS)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  15. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper employ computational algorithms or procedure implementations developed in Matlab to simulate agent-based models, using clusters to run the program as a high-performance parallel computation. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  16. Fringe pattern demodulation using the one-dimensional continuous wavelet transform: field-programmable gate array implementation.

    PubMed

    Abid, Abdulbasit

    2013-03-01

    This paper presents a thorough discussion of the proposed field-programmable gate array (FPGA) implementation for fringe pattern demodulation using the one-dimensional continuous wavelet transform (1D-CWT) algorithm. This algorithm is also known as wavelet transform profilometry. Initially, the 1D-CWT is programmed using the C programming language and compiled into VHDL using the ImpulseC tool. This VHDL code is implemented on the Altera Cyclone IV GX EP4CGX150DF31C7 FPGA. A fringe pattern image with a size of 512×512 pixels is presented to the FPGA, which processes the image using the 1D-CWT algorithm. The FPGA requires approximately 100 ms to process the image and produce a wrapped phase map. For performance comparison purposes, the 1D-CWT algorithm is programmed in C and compiled using the Intel compiler version 13.0; the compiled code is run on a Dell Precision state-of-the-art workstation. The time required to process the fringe pattern image is approximately 1 s. In order to further reduce the execution time, the 1D-CWT is reprogrammed using the Intel Integrated Performance Primitives (IPP) library, version 7.1. The execution time was reduced to approximately 650 ms. This confirms that at least a sixfold speedup was gained from the FPGA implementation over a state-of-the-art workstation executing a heavily optimized implementation of the 1D-CWT algorithm.
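
    For illustration, a direct (unnormalized, unoptimized) 1D CWT can be written in a few lines of NumPy; this mirrors the per-scale convolution the FPGA pipeline accelerates, with a Ricker mother wavelet assumed for concreteness.

```python
# Direct 1D continuous wavelet transform: convolve the signal with a
# scaled mother wavelet at each scale. Normalisation omitted; the
# Ricker wavelet and the test signal are illustrative assumptions.
import numpy as np

def ricker(points, a):
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_1d(signal, scales):
    rows = []
    for a in scales:
        w = ricker(min(10 * int(a), len(signal)), a)
        rows.append(np.convolve(signal, w, mode="same"))
    return np.vstack(rows)   # one row of coefficients per scale

sig = np.sin(np.linspace(0, 8 * np.pi, 512))   # stand-in fringe line
coeffs = cwt_1d(sig, scales=np.arange(1, 16))
print(coeffs.shape)   # (15, 512)
```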

  17. Engineering of Data Acquiring Mobile Software and Sustainable End-User Applications

    NASA Technical Reports Server (NTRS)

    Smith, Benton T.

    2013-01-01

    Data acquiring software and its supporting infrastructure should be designed with the following two points in mind: the reusability and organization of stored online and remote data and content, and an assessment of whether abandoning a platform-optimized design in favor of a multi-platform solution significantly reduces the performance of an end-user application. Furthermore, in-house applications that control or process instrument-acquired data for end users should be designed with a communication and control interface such that the application's modules can be reused as plug-in components in larger software systems. These principles are applied using two loosely related projects: a mobile application, and a website containing live and simulated data. For the intelligent-devices mobile application AIDM, the end-user interface has a platform- and data-type-optimized design, while the database and back-end applications store this information in an organized manner and restrict access to authorized end-user applications. Finally, the content for the website was derived from a database such that the content is uniform across all applications accessing it. With these projects ongoing, I have concluded from my research that the methods presented are feasible for both projects, and that a multi-platform design only marginally drops the performance of the mobile application.

  18. 40 CFR 63.496 - Back-end process provisions-procedures to determine compliance with residual organic HAP...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... methods of determining this quantity are production records, measurement of stream characteristics, and... HAP (or TOC, minus methane and ethane) emissions in all process vent streams and primary and secondary... heater. (B) Paragraph (b)(5)(iii) of this section is applicable, except that TOC (minus methane and...

  19. 40 CFR 63.496 - Back-end process provisions-procedures to determine compliance with residual organic HAP...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... methods of determining this quantity are production records, measurement of stream characteristics, and... HAP (or TOC, minus methane and ethane) emissions in all process vent streams and primary and secondary... heater. (B) Paragraph (b)(5)(iii) of this section is applicable, except that TOC (minus methane and...

  20. 40 CFR 63.496 - Back-end process provisions-procedures to determine compliance with residual organic HAP...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... methods of determining this quantity are production records, measurement of stream characteristics, and... HAP (or TOC, minus methane and ethane) emissions in all process vent streams and primary and secondary... heater. (B) Paragraph (b)(5)(iii) of this section is applicable, except that TOC (minus methane and...

  1. 40 CFR 63.496 - Back-end process provisions-procedures to determine compliance with residual organic HAP...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... methods of determining this quantity are production records, measurement of stream characteristics, and... HAP (or TOC, minus methane and ethane) emissions in all process vent streams and primary and secondary... heater. (B) Paragraph (b)(5)(iii) of this section is applicable, except that TOC (minus methane and...

  2. 40 CFR 63.494 - Back-end process provisions-residual organic HAP and emission limitations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... produced by the emulsion process, polybutadiene rubber and styrene butadiene rubber produced by the... styrene butadiene rubber produced by the emulsion process: (i) A monthly weighted average of 0.40 kg... than a solution or emulsion process, polybutadiene rubber produced by any process other than a solution...

  3. 76 FR 46313 - Notice of Issuance of Final Determination Concerning Iridium Satellite Telephones

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-02

    ... modulates them into radio streams that communicate with the Iridium gateway network infrastructure using a... (DSP) cores, made in China, and two radio frequency (RF) backend chips, made in Taiwan. The bill of... marking of a cellular phone. CBP found that a digital mobile telephone was substantially transformed in...

  4. 40 CFR 63.500 - Back-end process provisions-carbon disulfide limitations for styrene butadiene rubber by emulsion...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... engineering assessment, as described in paragraph (c)(2) of this section. (1) The owner or operator may choose... run. (2) The owner or operator may use engineering assessment to demonstrate compliance with the...

  5. 40 CFR 63.500 - Back-end process provisions-carbon disulfide limitations for styrene butadiene rubber by emulsion...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... engineering assessment, as described in paragraph (c)(2) of this section. (1) The owner or operator may choose... run. (2) The owner or operator may use engineering assessment to demonstrate compliance with the...

  6. 40 CFR 63.500 - Back-end process provisions-carbon disulfide limitations for styrene butadiene rubber by emulsion...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... engineering assessment, as described in paragraph (c)(2) of this section. (1) The owner or operator may choose... run. (2) The owner or operator may use engineering assessment to demonstrate compliance with the...

  7. 40 CFR 63.500 - Back-end process provisions-carbon disulfide limitations for styrene butadiene rubber by emulsion...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... engineering assessment, as described in paragraph (c)(2) of this section. (1) The owner or operator may choose... run. (2) The owner or operator may use engineering assessment to demonstrate compliance with the...

  8. An Autonomic Framework for Integrating Security and Quality of Service Support in Databases

    ERIC Educational Resources Information Center

    Alomari, Firas

    2013-01-01

    The back-end databases of multi-tiered applications are a major data security concern for enterprises. The abundance of these systems and the emergence of new and different threats require multiple and overlapping security mechanisms. Therefore, providing multiple and diverse database intrusion detection and prevention systems (IDPS) is a critical…

  9. 40 CFR 63.498 - Back-end process provisions-recordkeeping.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... be the crumb rubber dry weight of the rubber leaving the stripper. (iv) The organic HAP content of... be the crumb rubber dry weight of the crumb rubber leaving the stripper. (iii) The hourly average of... test runs. (1) The uncontrolled residual organic HAP content in the latex or dry crumb rubber, as...

  10. 40 CFR 63.498 - Back-end process provisions-recordkeeping.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... be the crumb rubber dry weight of the rubber leaving the stripper. (iv) The organic HAP content of... stripper. (B) For solution processes, this quantity shall be the crumb rubber dry weight of the crumb rubber leaving the stripper. (iii) The hourly average of all stripper parameter results; (iv) If one or...

  11. 40 CFR 63.498 - Back-end process provisions-recordkeeping.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... be the crumb rubber dry weight of the rubber leaving the stripper. (iv) The organic HAP content of... be the crumb rubber dry weight of the crumb rubber leaving the stripper. (iii) The hourly average of... test runs. (1) The uncontrolled residual organic HAP content in the latex or dry crumb rubber, as...

  12. 40 CFR 63.498 - Back-end process provisions-recordkeeping.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... be the crumb rubber dry weight of the rubber leaving the stripper. (iv) The organic HAP content of... be the crumb rubber dry weight of the crumb rubber leaving the stripper. (iii) The hourly average of... test runs. (1) The uncontrolled residual organic HAP content in the latex or dry crumb rubber, as...

  13. 40 CFR 63.498 - Back-end process provisions-recordkeeping.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... be the crumb rubber dry weight of the rubber leaving the stripper. (iv) The organic HAP content of... be the crumb rubber dry weight of the crumb rubber leaving the stripper. (iii) The hourly average of... test runs. (1) The uncontrolled residual organic HAP content in the latex or dry crumb rubber, as...

  14. A novel web-enabled healthcare solution on health vault system.

    PubMed

    Liao, Lingxia; Chen, Min; Rodrigues, Joel J P C; Lai, Xiaorong; Vuong, Son

    2012-06-01

    Complicated Electronic Medical Record (EMR) systems have created problems regarding easy implementation and interoperability for Web-enabled healthcare solutions, which are normally provided by independent healthcare givers with limited IT knowledge and interest. An EMR system with a well-designed, user-friendly interface, such as the Microsoft HealthVault system, used as the back-end platform of a Web-enabled healthcare application, is one approach to dealing with these problems. This paper analyzes patient-oriented Web-enabled healthcare service applications as the new trend in delivering healthcare, moving from hospital/clinic-centric to patient-centric care, along with current e-healthcare applications and the main back-end EMR systems. We then present a novel Web-enabled healthcare solution based on the Microsoft HealthVault EMR system that meets customers' needs for low total cost, easy development and maintenance, and good interoperability. A sample system is given to show how the solution can be fulfilled, evaluated, and validated. We expect this paper to provide a deep understanding of the available EMR systems, leading to insights for new solutions and approaches for next-generation EMR systems.

  15. Supply of enriched uranium for research reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, H.

    1997-08-01

    Since the RERTR meeting in Newport, USA, in 1990, the author has delivered a series of papers on the front-end of the fuel cycle for research reactors. In these papers the author underlined the need for unified specifications for enriched uranium metal suitable for the production of fuel elements, and made proposals with regard to the re-use of highly enriched uranium reprocessed in Europe. With regard to the fuel cycle of research reactors, the research reactor community has since 1989 concentrated more on the problems of its back-end, since the USA stopped accepting spent research reactor fuel on December 31, 1988. Now that these back-end problems appear to have been solved by AEA's ability to reprocess and the preparedness of the USA to again accept spent research reactor fuel, the author focuses with this paper again on the front-end of the fuel cycle: the question of whether there will be a safe supply of low and high enriched uranium for research reactors in the future.

  16. VO-KOREL: A Fourier Disentangling Service of the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Škoda, Petr; Hadrava, Petr; Fuchs, Jan

    2012-04-01

    VO-KOREL is a web service exploiting Virtual Observatory technology to provide astronomers with an intuitive graphical front-end and a distributed computing back-end running the most recent version of the Fourier disentangling code KOREL. The system integrates the ideas of the e-shop basket, preserving the privacy of every user through transfer encryption and access authentication, with the features of a laboratory notebook, allowing easy housekeeping of both input parameters and final results, and it explores the newly emerging technology of cloud computing. The web-based front-end allows the user to submit data and parameter files, edit parameters, manage a job list, resubmit or cancel running jobs and, mainly, watch the textual and graphical results of the disentangling process. The main part of the back-end is a simple job queue submission system executing multiple instances of the FORTRAN code KOREL in parallel; this may be easily extended for GRID-based deployment on massively parallel computing clusters. A short introduction to the underlying technologies is given, briefly mentioning the advantages as well as the bottlenecks of the design.
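
    The back-end pattern described above (a queue of jobs drained by a fixed pool of workers, each running one solver instance) can be sketched as follows; the `korel` executable name, job directories and worker count are placeholders, not VO-KOREL's actual implementation.

```python
# Sketch of a job-queue back-end: a fixed pool of workers drains a list
# of jobs, each worker running one solver instance in its job directory.
# The "korel" binary and job paths are hypothetical placeholders.
from concurrent.futures import ProcessPoolExecutor
import subprocess

def run_job(job_dir):
    # One solver instance per job; output captured for the notebook.
    return subprocess.run(["korel"], cwd=job_dir,
                          capture_output=True, text=True).returncode

jobs = ["jobs/0001", "jobs/0002", "jobs/0003"]
with ProcessPoolExecutor(max_workers=2) as pool:
    for job, rc in zip(jobs, pool.map(run_job, jobs)):
        print(job, "ok" if rc == 0 else f"failed ({rc})")
```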

  17. Oasis: A high-level/high-performance open source Navier-Stokes solver

    NASA Astrophysics Data System (ADS)

    Mortensen, Mikael; Valen-Sendstad, Kristian

    2015-03-01

    Oasis is a high-level/high-performance finite element Navier-Stokes solver written from scratch in Python using building blocks from the FEniCS project (fenicsproject.org). The solver is unstructured and targets large-scale applications in complex geometries on massively parallel clusters. Oasis utilizes MPI and interfaces, through FEniCS, to the linear algebra backend PETSc. Oasis advocates a high-level, programmable user interface through the creation of highly flexible Python modules for new problems. Through the high-level Python interface the user is placed in complete control of every aspect of the solver. A version of the solver, that is using piecewise linear elements for both velocity and pressure, is shown to reproduce very well the classical, spectral, turbulent channel simulations of Moser et al. (1999). The computational speed is strongly dominated by the iterative solvers provided by the linear algebra backend, which is arguably the best performance any similar implicit solver using PETSc may hope for. Higher order accuracy is also demonstrated and new solvers may be easily added within the same framework.

  18. Visual EKF-SLAM from Heterogeneous Landmarks †

    PubMed Central

    Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L.

    2016-01-01

    Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping (SLAM) from vision performs both the spatial and temporal fusion of these data on a map as a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks); from this perspective, a comparison between landmark parametrizations and an evaluation of how the heterogeneity improves the accuracy of camera localization; the development of a front-end active-search process for linear landmarks integrated into SLAM; and the experimentation methodology. PMID:27070602
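
    The back-end's estimation step can be illustrated with a generic EKF predict/update cycle; the sketch below (NumPy) uses abstract motion and observation models with caller-supplied Jacobians, not the paper's specific point/line landmark parametrizations.

```python
# Generic EKF predict/update step of the kind a SLAM back-end performs.
# Models f, h and Jacobians F, H are supplied by the caller.
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # Predict: propagate state and covariance through the motion model.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the landmark observation.
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Tiny linear example: identity motion and observation models.
x, P = np.zeros(2), np.eye(2)
F = H = np.eye(2); Q = R = 0.01 * np.eye(2)
x, P = ekf_step(x, P, u=None, z=np.array([0.1, -0.2]),
                f=lambda x, u: x, F=F, h=lambda x: x, H=H, Q=Q, R=R)
print(x)
```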

  19. The backend design of an environmental monitoring system upon real-time prediction of groundwater level fluctuation under the hillslope.

    PubMed

    Lin, Hsueh-Chun; Hong, Yao-Ming; Kan, Yao-Chiang

    2012-01-01

    The groundwater level is a critical factor in evaluating hillside landslides. A monitoring system built on a real-time prediction platform with online analytical functions is important for forecasting the groundwater level from instantaneously monitored data when heavy precipitation raises the groundwater level under the hillslope and causes instability. This study designs the backend of an environmental monitoring system with efficient machine learning algorithms and a knowledge bank for predicting groundwater level fluctuation. A Web-based platform built on a model-view-controller architecture is established with Web services technology and an engineering data warehouse to support online analytical processing and feed risk assessment parameters back for real-time prediction. The proposed system incorporates models for hydrological computation, machine learning, Web services, and online prediction to satisfy a variety of risk assessment requirements and hazard prevention approaches. The rainfall data monitored in the potential landslide areas at Lu-Shan, Nantou, and Li-Shan, Taichung, in Taiwan are applied to examine the system design.

  20. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.

  1. CyNEST: a maintainable Cython-based interface for the NEST simulator

    PubMed Central

    Zaytsev, Yury V.; Morrison, Abigail

    2014-01-01

    NEST is a simulator for large-scale networks of spiking point neuron models (Gewaltig and Diesmann, 2007). Originally, simulations were controlled via the Simulation Language Interpreter (SLI), a built-in scripting facility implementing a language derived from PostScript (Adobe Systems, Inc., 1999). The introduction of PyNEST (Eppler et al., 2008), the Python interface for NEST, enabled users to control simulations using Python. As the majority of NEST users found PyNEST easier to use and to combine with other applications, it immediately displaced SLI as the default NEST interface. However, developing and maintaining PyNEST has become increasingly difficult over time. This is partly because adding new features requires writing low-level C++ code intermixed with calls to the Python/C API, which is unrewarding. Moreover, the Python/C API evolves with each new version of Python, which results in a proliferation of version-dependent code branches. In this contribution we present the re-implementation of PyNEST in the Cython language, a superset of Python that additionally supports the declaration of C/C++ types for variables and class attributes, and provides a convenient foreign function interface (FFI) for invoking C/C++ routines (Behnel et al., 2011). Code generation via Cython allows the production of smaller and more maintainable bindings, including increased compatibility with all supported Python releases without additional burden for NEST developers. Furthermore, this novel approach opens up the possibility to support alternative implementations of the Python language at no cost given a functional Cython back-end for the corresponding implementation, and also enables cross-compilation of Python bindings for embedded systems and supercomputers alike. PMID:24672470

  2. Sensor metadata blueprints and computer-aided editing for disciplined SensorML

    NASA Astrophysics Data System (ADS)

    Tagliolato, Paolo; Oggioni, Alessandro; Fugazza, Cristiano; Pepe, Monica; Carrara, Paola

    2016-04-01

    The need for continuous, accurate, and comprehensive environmental knowledge has led to an increase in sensor observation systems and networks. The Sensor Web Enablement (SWE) initiative has been promoted by the Open Geospatial Consortium (OGC) to foster interoperability among sensor systems. The provision of metadata according to the prescribed SensorML schema is a key component for achieving this, yet the availability of correct and exhaustive metadata cannot be taken for granted. On the one hand, it is awkward for users to provide sensor metadata because of the lack of user-oriented, dedicated tools. On the other, the specification of invariant information for a given sensor category or model (e.g., observed properties and units of measurement, manufacturer information, etc.) can be labor- and time-consuming. Moreover, the provision of these details is error prone and subjective, i.e., it may differ greatly across distinct descriptions of the same system. We provide a user-friendly, template-driven metadata authoring tool composed of a backend web service and an HTML5/JavaScript client. This results in a form-based user interface that conceals the high complexity of the underlying format. The tool also allows for plugging in external data sources that provide authoritative definitions for the aforementioned invariant information. Leveraging these functionalities, we compiled a set of SensorML profiles, that is, sensor metadata blueprints allowing end users to focus only on the metadata items related to their specific deployment. The natural extension of this scenario is the involvement of end users and sensor manufacturers in the crowd-sourced evolution of this collection of prototypes. We describe the components and workflow of our framework for computer-aided management of sensor metadata.
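
    A minimal sketch of the blueprint idea, assuming a drastically reduced stand-in for a real SensorML document: invariant fields for a sensor model are pre-filled by the template, and the user supplies only deployment-specific values.

```python
# Blueprint sketch: invariant fields for a sensor model live in a
# template; the user fills in only deployment-specific values. The XML
# below is a heavily reduced stand-in, not valid SensorML.
from string import Template

BLUEPRINT = Template("""\
<sml:PhysicalSystem xmlns:sml="http://www.opengis.net/sensorml/2.0">
  <sml:identifier>$station_id</sml:identifier>
  <!-- invariant for this sensor model, pre-filled by the blueprint -->
  <sml:manufacturer>$manufacturer</sml:manufacturer>
  <sml:observedProperty>$property</sml:observedProperty>
</sml:PhysicalSystem>
""")

doc = BLUEPRINT.substitute(
    station_id="lake-buoy-03",          # user-supplied, deployment-specific
    manufacturer="ACME Instruments",    # would come from the blueprint
    property="water_temperature",       # would come from the blueprint
)
print(doc)
```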

  3. An Overview of Starfish: A Table-Centric Tool for Interactive Synthesis

    NASA Technical Reports Server (NTRS)

    Tsow, Alex

    2008-01-01

    Engineering is an interactive process that requires intelligent interaction at many levels. My thesis [1] advances an engineering discipline for high-level synthesis and architectural decomposition that integrates perspicuous representation, designer interaction, and mathematical rigor. Starfish, the software prototype for the design method, implements a table-centric transformation system for reorganizing control-dominated system expressions into high-level architectures. Based on the digital design derivation (DDD) system, a designer-guided synthesis technique that applies correctness-preserving transformations to synchronous data flow specifications expressed as co-recursive stream equations, Starfish enhances user interaction and extends the reachable design space by incorporating four innovations: behavior tables, serialization tables, data refinement, and operator retiming. Behavior tables express systems of co-recursive stream equations as a table of guarded signal updates. Developers and users of the DDD system used manually constructed behavior tables to help them decide which transformations to apply and how to specify them. These design exercises produced several formally constructed hardware implementations: the FM9001 microprocessor, an SECD machine for evaluating LISP, and the SchemEngine, a garbage-collected machine for interpreting a byte-code representation of compiled Scheme programs. Bose and Tuna, two of DDD's developers, have subsequently commercialized the design derivation methodology at Derivation Systems, Inc. (DSI). DSI has formally derived and validated PCI bus interfaces and a Java byte-code processor; they further executed a contract to prototype SPIDER, NASA's ultra-reliable communications bus. To date, most derivations from DDD and DRS have targeted hardware due to its synchronous design paradigm. However, Starfish expressions are independent of the synchronization mechanism; there is no commitment to hardware or globally broadcast clocks. Though software back-ends for design derivation are limited to the DDD stream-interpreter, targeting synchronous or real-time software is not substantively different from targeting hardware.

  4. An Integrated Research Program for the Modeling, Analysis and Control of Aerospace Systems

    DTIC Science & Technology

    1992-03-03

    Fabiano, Jr. - Brown University Mitchell Feigenbaum - Rockefeller University Elena Fernandez - Instituto de Desarrollo Tecnologico para la Industria...system. The system runs under DEC Ultrix; we have installed the GKS graphics system and language compilers (FORTRAN and C). The DELIGHT.MIMO software, which links a sophisticated non-smooth optimization package to some linear system software, is on the system. The package was kindly furnished by

  5. An Integrated Research Program for the Modeling, Analysis and Control of Aerospace Systems

    DTIC Science & Technology

    1992-03-03

    Mitchell Feigenbaum - Rockefeller University Elena Fernandez - Instituto de Desarrollo Tecnologico para la Industria Quimica Wilfred M. Greenlee...Ultrix; we have installed the GKS graphics system and language compilers (FORTRAN and C). The DELIGHT.MIMO software, which links a sophisticated non-smooth optimization package to some linear system software, is on the system. The package was kindly furnished by Professor E. Polak, Electrical and

  6. Near Hartree-Fock quality GTO basis sets for the first- and third-row atoms

    NASA Technical Reports Server (NTRS)

    Partridge, Harry

    1989-01-01

    Energy-optimized Gaussian-type-orbital (GTO) basis sets of accuracy approaching that of numerical Hartree-Fock computations are compiled for the elements of the first and third rows of the periodic table. The methods employed in calculating the sets are explained; the applicability of the sets to electronic-structure calculations is discussed; and the results are presented in tables and briefly characterized.

  7. User Centric Job Monitoring - a redesign and novel approach in the STAR experiment

    NASA Astrophysics Data System (ADS)

    Arkhipkin, D.; Lauret, J.; Zulkarneeva, Y.

    2014-06-01

    User Centric Monitoring (or UCM) has been a long-awaited feature in STAR, whereby programs, workflows, and system "events" can be logged, broadcast, and later analyzed. UCM allows users to collect and filter available job monitoring information from various resources and presents it in a user-centric rather than an administrative-centric view. The first attempt at and implementation of "a" UCM approach was made in STAR in 2004 using a log4cxx plug-in back-end; it then evolved with an attempt to push toward a scalable database back-end (2006) and finally a Web-Service approach (2010, CSW4DB SBIR). The latter proved incomplete and did not address the evolving needs of the experiment, where streamlined messages for online (data acquisition) purposes as well as continuous support for data mining and event analysis need to coexist in a seamless, unified approach. The code also proved hard to maintain. This paper presents the next evolutionary step of the UCM toolkit: a redesign and redirection of our latest attempt, acknowledging and integrating recent technologies in a simpler, maintainable, and yet scalable manner. The extended version of the job logging package is built upon a three-tier approach based on Task, Job, and Event, and features a Web-Service based logging API, a responsive AJAX-powered user interface, and a database back-end relying on MongoDB, which is uniquely suited to STAR's needs. In addition, we present details of the integration of this logging package with the STAR offline and online software frameworks. Leveraging the reported experience of the ATLAS and CMS experiments with the ESPER engine, we discuss and show how such an approach has been implemented in STAR for meta-data event triggering, stream processing, and filtering. An ESPER-based solution fits well into the online data acquisition system, where many systems are monitored.
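
    A minimal sketch of the Task/Job/Event idea on a MongoDB back-end might look as follows in Python with pymongo; the collection and field names are illustrative, not the actual STAR schema.

        from datetime import datetime, timezone
        from pymongo import MongoClient

        db = MongoClient("mongodb://localhost:27017")["ucm"]

        def log_event(task_id, job_id, level, message):
            # Each event references its parent job and task, so user-centric
            # views can be built with simple queries instead of log scraping.
            db.events.insert_one({
                "task": task_id,
                "job": job_id,
                "level": level,
                "message": message,
                "ts": datetime.now(timezone.utc),
            })

        log_event("task-42", "job-7", "INFO", "reached event 100000")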

  8. coNCePTuaL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pakin, Scott

    2004-05-13

    A frequently reinvented wheel among network researchers is a suite of programs that test a network's performance. A problem with having umpteen versions of performance tests is that it leads to a variety in the way results are reported; colloquially, apples are often compared to oranges. Consider a bandwidth test. Does a bandwidth test run for a fixed number of iterations or a fixed length of time? Is bandwidth measured as ping-pong bandwidth (i.e., 2 * message length / round-trip time) or unidirectional throughput (N messages in one direction followed by a single acknowledgement message)? Is the acknowledgement message of minimal length or as long as the entire message? Does its length contribute to the total bandwidth? Is data sent unidirectionally or in both directions at once? How many warmup messages (if any) are sent before the timing loop? Is there a delay after the warmup messages (to give the network a chance to reclaim any scarce resources)? Are receives nonblocking (possibly allowing overlap in the NIC) or blocking? The motivation behind creating coNCePTuaL, a simple specification language designed for describing network benchmarks, is that it enables a benchmark to be described sufficiently tersely as to fit easily in a report or research paper, facilitating peer review of the experimental setup and timing measurements. Because coNCePTuaL code is simple to write, network tests can be developed and deployed with low turnaround times -- useful when the results of one test suggest a following test that should be written. Because coNCePTuaL is special-purpose, its run-time system can perform the following functions, which benchmark writers often neglect to implement: logging information about the environment under which the benchmark ran (operating system, CPU architecture and clock speed, timer type and resolution, etc.); aborting a program if it takes longer than a predetermined length of time to complete; and writing measurement data and descriptive statistics to a variety of output formats, including the input formats of various graph-plotting programs. coNCePTuaL is not limited to network performance tests, however. It can also be used for network verification. That is, coNCePTuaL programs can be used to locate failed links or to determine the frequency of bit errors -- even those that may sneak past the network's CRC hardware. In addition, because coNCePTuaL is a very high-level language, the coNCePTuaL compiler's backend has a great deal of potential. It would be possible for the backend to produce a variety of target formats such as Fortran + MPI, Perl + sockets, C + a network vendor's low-level messaging layer, and so forth. It could directly manipulate a network simulator. It could feed into a graphics program to produce a space-time diagram of a coNCePTuaL program. The possibilities are endless.
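
    The two bandwidth definitions contrasted above differ in ways that are easy to overlook; a small worked example (in Python, with illustrative numbers) makes the distinction explicit.

        def pingpong_bandwidth(msg_bytes, round_trip_s):
            # Ping-pong: 2 * message length / round-trip time.
            return 2 * msg_bytes / round_trip_s

        def unidirectional_throughput(msg_bytes, n_messages, elapsed_s):
            # N messages one way followed by a single acknowledgement;
            # the ack is excluded from the byte total here.
            return n_messages * msg_bytes / elapsed_s

        print(pingpong_bandwidth(1 << 20, 2e-3))              # ~1.05e9 B/s
        print(unidirectional_throughput(1 << 20, 1000, 1.1))  # ~9.5e8 B/s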

  9. Insertion of operation-and-indicate instructions for optimized SIMD code

    DOEpatents

    Eichenberger, Alexander E; Gara, Alan; Gschwind, Michael K

    2013-06-04

    Mechanisms are provided for inserting operation-and-indicate instructions for tracking and indicating exceptions in the execution of vectorized code. A portion of first code is received for compilation. The portion of first code is analyzed to identify non-speculative instructions performing designated non-speculative operations in the first code that are candidates for replacement by replacement operation-and-indicate instructions that perform the designated non-speculative operations and further perform an indication operation for indicating any exception conditions corresponding to special exception values present in vector register inputs to the replacement operation-and-indicate instructions. The replacement is performed and second code is generated based on the replacement of the at least one non-speculative instruction. The data processing system executing the compiled code is configured to store special exception values in vector output registers, in response to a speculative instruction generating an exception condition, without initiating exception handling.
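
    The replacement step can be pictured with a toy sketch (Python; the opcode names are invented and this is not the patented mechanism itself): designated non-speculative vector operations are swapped for "-and-indicate" variants that also flag special exception values.

        # Map designated ops to their operation-and-indicate variants.
        CANDIDATES = {"vadd": "vadd_indicate", "vmul": "vmul_indicate"}

        def insert_indicate_ops(ir_ops):
            # Non-candidate instructions pass through unchanged.
            return [CANDIDATES.get(op, op) for op in ir_ops]

        print(insert_indicate_ops(["load", "vadd", "vmul", "store"]))
        # ['load', 'vadd_indicate', 'vmul_indicate', 'store']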

  10. Barrier-breaking performance for industrial problems on the CRAY C916

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graffunder, S.K.

    1993-12-31

    Nine applications, including third-party codes, were submitted to the Gordon Bell Prize committee showing the CRAY C916 supercomputer providing record-breaking time to solution for industrial problems in several disciplines. Performance was obtained by balancing raw hardware speed; effective use of large, real, shared memory; compiler vectorization and autotasking; hand optimization; asynchronous I/O techniques; and new algorithms. The highest GFLOPS performance for the submissions was 11.1 GFLOPS out of a peak advertised performance of 16 GFLOPS for the CRAY C916 system. One program achieved a 15.45-fold speedup from the compiler with just two hand-inserted directives to scope variables properly for the mathematical library. New I/O techniques hide tens of gigabytes of I/O behind parallel computations. Finally, new iterative solver algorithms have demonstrated times to solution on 1 CPU as much as 70 times faster than the best direct solvers.

  11. A programmable optimization environment using the GAMESS-US and MERLIN/MCL packages. Applications on intermolecular interaction energies

    NASA Astrophysics Data System (ADS)

    Kalatzis, Fanis G.; Papageorgiou, Dimitrios G.; Demetropoulos, Ioannis N.

    2006-09-01

    The Merlin/MCL optimization environment and the GAMESS-US package were combined so as to offer an extended and efficient quantum chemistry optimization system, capable of implementing complex optimization strategies for generic molecular modeling problems. A communication and data exchange interface was established between the two packages, exploiting all Merlin features such as multiple optimizers, box constraints, user extensions, and a high-level programming language. An important feature of the interface is its ability to perform dimer computations by eliminating the basis set superposition error using the counterpoise (CP) method of Boys and Bernardi. Furthermore, it offers CP-corrected geometry optimizations using analytic derivatives. The unified optimization environment was applied to construct portions of the intermolecular potential energy surface of the weakly bound H-bonded complex C6H6-H2O by utilizing the high-level Merlin Control Language. The H-bonded dimer HF-H2O was also studied by CP-corrected geometry optimization. The ab initio electronic structure energies were calculated using the 6-31G** basis set at the Restricted Hartree-Fock and second-order Møller-Plesset levels, while all geometry optimizations were carried out using a quasi-Newton algorithm provided by Merlin. Program summary: Title of program: MERGAM. Catalogue identifier: ADYB_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYB_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer for which the program is designed and others on which it has been tested: the program is designed for machines running the UNIX operating system; it has been tested on the following architectures: IA32 (Linux with gcc/g77 v.3.2.3), AMD64 (Linux with the Portland Group compilers v.6.0), SUN64 (SunOS 5.8 with the Sun Workshop compilers v.5.2), and SGI64 (IRIX 6.5 with the MIPSpro compilers v.7.4). Installations: University of Ioannina, Greece. Operating systems or monitors under which the program has been tested: UNIX. Programming language used: ANSI C, ANSI Fortran-77. No. of lines in distributed program, including test data, etc.: 11 282. No. of bytes in distributed program, including test data, etc.: 49 458. Distribution format: tar.gz. Memory required to execute with typical data: memory requirements mainly depend on the selection of a GAMESS-US basis set and the number of atoms. No. of bits in a word: 32. No. of processors used: 1. Has the code been vectorized or parallelized?: no. Nature of physical problem: multidimensional geometry optimization is of great importance in any ab initio calculation since it is usually one of the most CPU-intensive tasks, especially on large molecular systems. For example, the geometric and energetic description of van der Waals and weakly bound H-bonded complexes requires the construction of the related important portions of the multidimensional intermolecular potential energy surface (IPES), so that the various views held about the nature of these bonds can be quantitatively tested. Method of solution: the Merlin/MCL optimization environment was interconnected with the GAMESS-US package to facilitate geometry optimization in quantum chemistry problems. Mapping the important portions of the IPES requires the capability to program optimization strategies; the Merlin/MCL environment was used for the implementation of such strategies. In this work, a CP-corrected geometry optimization was performed on the HF-H2O complex and an MCL program was developed to study portions of the potential energy surface of the C6H6-H2O complex. Restrictions on the complexity of the problem: the Merlin optimization environment and the GAMESS-US package must be installed. The MERGAM interface requires GAMESS-US input files that have been constructed in Cartesian coordinates. This restriction stems from a design-time requirement not to allow reorientation of atomic coordinates; this rule always holds when applying the COORD = UNIQUE keyword in a GAMESS-US input file. Typical running time: it depends on the size of the molecular system, the size of the basis set, and the method of electron correlation. Execution of the test run took approximately 5 min on a 2.8 GHz Intel Pentium CPU.
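
    For reference, the counterpoise correction mentioned above amounts to simple arithmetic once the component energies are available; in the sketch below (Python, with placeholder energies in hartree), the monomer energies are those computed in the full dimer basis, i.e., with ghost functions.

        def cp_interaction_energy(e_dimer, e_a_dimer_basis, e_b_dimer_basis):
            # Boys-Bernardi CP-corrected interaction energy: the basis set
            # superposition error cancels because every term is evaluated
            # in the same (dimer) basis.
            return e_dimer - e_a_dimer_basis - e_b_dimer_basis

        print(cp_interaction_energy(-152.070, -76.031, -76.025))  # -0.014 Eh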

  12. Scientific & Intelligence Exascale Visualization Analysis System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Money, James H.

    SIEVAS provides an immersive visualization framework for connecting multiple systems in real time for data science. SIEVAS provides the ability to connect multiple COTS and GOTS products in a seamless fashion for data fusion, data analysis, and viewing. It provides this capability by using a combination of microservices, real-time messaging, and a web-service-compliant back-end system.

  13. Provenance Store Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Gibson, Tara D.; Schuchardt, Karen L.

    2008-03-01

    Requirements for the provenance store and access API are developed. Existing RDF stores and APIs are evaluated against the requirements and performance benchmarks. The team's conclusion is to use MySQL as a database backend, with a possible move to Oracle in the near-term future. Both Jena's and Sesame's APIs will be supported, but new code will use the Jena API.

  14. IVS Technology Coordinator Report

    NASA Technical Reports Server (NTRS)

    Whitney, Alan

    2013-01-01

    This report of the Technology Coordinator includes the following: 1) continued work to implement the new VLBI2010 system, 2) the 1st International VLBI Technology Workshop, 3) a VLBI Digital-Backend Intercomparison Workshop, 4) DiFX software correlator development for geodetic VLBI, 5) a review of progress towards global VLBI standards, and 6) a welcome to new IVS Technology Coordinator Bill Petrachenko.

  15. The battle between Unix and Windows NT.

    PubMed

    Anderson, H J

    1997-02-01

    For more than a decade, Unix has been the dominant back-end operating system in health care. But that prominent position is being challenged by Windows NT, touted by its developer, Microsoft Corp., as the operating system of the future. CIOs and others are attempting to figure out which system is the best choice in the long run.

  16. 40 CFR 63.500 - Back-end process provisions-carbon disulfide limitations for styrene butadiene rubber by emulsion...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... accepted chemical engineering principles, measurable process parameters, or physical or chemical laws or... engineering assessment, as described in paragraph (c)(2) of this section. (1) The owner or operator may choose... sample run. (2) The owner or operator may use engineering assessment to demonstrate compliance with the...

  17. Front-End and Back-End Database Design and Development: Scholar's Academy Case Study

    ERIC Educational Resources Information Center

    Parks, Rachida F.; Hall, Chelsea A.

    2016-01-01

    This case study consists of a real database project for a charter school--Scholar's Academy--and provides background information on the school and its cafeteria processing system. Also included are functional requirements and some illustrative data. Students are tasked with the design and development of a database for the purpose of improving the…

  18. Information Tailoring Enhancements for Large Scale Social Data

    DTIC Science & Technology

    2016-03-15

    Implemented Temporal Analysis Algorithms for Advanced Analytics in Scraawl: we implemented our backend web service design for temporal analysis and created a prototype GUI web service for the Scraawl analytics dashboard. Upgraded the Scraawl computational framework to increase

  19. 40 CFR 63.497 - Back-end process provisions-monitoring provisions for control and recovery devices used to comply...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... thermocouple, ultra-violet beam sensor, or infrared sensor) capable of continuously detecting the presence of a..., as appropriate. (1) Where an incinerator is used, a temperature monitoring device equipped with a... temperature monitoring device shall be installed in the firebox or in the ductwork immediately downstream of...

  20. 40 CFR 63.497 - Back-end process provisions-monitoring provisions for control and recovery devices used to comply...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... thermocouple, ultra-violet beam sensor, or infrared sensor) capable of continuously detecting the presence of a..., as appropriate. (1) Where an incinerator is used, a temperature monitoring device equipped with a... temperature monitoring device shall be installed in the firebox or in the ductwork immediately downstream of...

  1. 40 CFR 63.497 - Back-end process provisions-monitoring provisions for control and recovery devices used to comply...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... thermocouple, ultra-violet beam sensor, or infrared sensor) capable of continuously detecting the presence of a..., as appropriate. (1) Where an incinerator is used, a temperature monitoring device equipped with a... temperature monitoring device shall be installed in the firebox or in the ductwork immediately downstream of...

  2. Teaching with a Dual-Channel Classroom Feedback System in the Digital Classroom Environment

    ERIC Educational Resources Information Center

    Yu, Yuan-Chih

    2017-01-01

    Teaching with a classroom feedback system can benefit both teaching and learning practices of interactivity. In this paper, we propose a dual-channel classroom feedback system integrated with a back-end e-Learning system. The system consists of learning agents running on the students' computers and a teaching agent running on the instructor's…

  3. Driving Ms. Data: Creating Data-Driven Possibilities

    ERIC Educational Resources Information Center

    Hoffman, Richard

    2005-01-01

    This article describes how data-driven Web sites help schools and districts maximize their IT resources by making online content more "self-service" for users. It shows how to set up the capacity to create data-driven sites. By definition, a data-driven Web site is one in which the content comes from some back-end data source, such as a…

  4. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics database is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  5. Experiences in using the CYBER 203 for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1982-01-01

    In this paper, the authors report on some of their experiences modifying two three-dimensional transonic flow programs (FLO22 and FLO27) for use on the NASA Langley Research Center CYBER 203. Both of the programs discussed were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine, including: (1) leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, (2) vectorizing parts of the existing algorithm in the program, and (3) incorporating a new vectorizable algorithm (ZEBRA I or ZEBRA II) in the program.

  6. Optimization of the Laser Properties of Polymer Films Doped with N,N´-Bis(3-methylphenyl)-N,N´-diphenylbenzidine

    PubMed Central

    Calzado, Eva M.; Boj, Pedro G.; Díaz-García, María A.

    2009-01-01

    This review compiles the work performed in the field of organic solid-state lasers with the hole-transporting organic molecule N,N´-bis(3-methylphenyl)-N,N´-diphenyl-benzidine system (TPD), in view of improving active laser material properties. The optimization of the amplified spontaneous emission characteristics, i.e., threshold, linewidth, emission wavelength and photostability, of polystyrene films doped with TPD in waveguide configuration has been achieved by investigating the influence of several materials parameters such as film thickness and TPD concentration. In addition, the influence in the emission properties of the inclusion of a second-order distributed feedback grating in the substrate is discussed.

  7. Domain Specific Language Support for Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mellor-Crummey, John

    A multi-institutional project known as D-TEC (short for "Domain-specific Technology for Exascale Computing") set out to explore technologies to support the construction of Domain Specific Languages (DSLs) to map application programs to exascale architectures. DSLs employ automated code transformation to shift the burden of delivering portable performance from application programmers to compilers. Two chief properties contribute: DSLs permit expression at a high level of abstraction so that a programmer's intent is clear to a compiler, and DSL implementations encapsulate human domain-specific optimization knowledge so that a compiler can be smart enough to achieve good results on specific hardware. Domain specificity is what makes these properties possible in a programming language. If leveraging domain specificity is the key to keeping exascale software tractable, a corollary is that many different DSLs will be needed to encompass the full range of exascale computing applications; moreover, a single application may well need to use several different DSLs in conjunction. As a result, developing a general toolkit for building domain-specific languages was a key goal for the D-TEC project. Different aspects of the D-TEC research portfolio were the focus of work at each of the partner institutions in the multi-institutional project. D-TEC research and development work at Rice University focused on three principal topics: understanding how to automate the tuning of code for complex architectures, research and development of the Rosebud DSL engine, and compiler technology to support complex execution platforms. This report provides a summary of the research and development work on the D-TEC project at Rice University.

  8. Dynamic Querying of Mass-Storage RDF Data with Rule-Based Entailment Regimes

    NASA Astrophysics Data System (ADS)

    Ianni, Giovambattista; Krennwallner, Thomas; Martello, Alessandra; Polleres, Axel

    RDF Schema (RDFS) as a lightweight ontology language is gaining popularity and, consequently, tools for scalable RDFS inference and querying are needed. SPARQL has recently become a W3C standard for querying RDF data, but it mostly provides means for querying simple RDF graphs only, whereas querying with respect to RDFS or other entailment regimes is left outside the current specification. In this paper, we show that SPARQL faces certain unwanted ramifications when querying ontologies in conjunction with RDF datasets that comprise multiple named graphs, and we provide an extension for SPARQL that remedies these effects. Moreover, since RDFS inference has a close relationship with logic rules, we generalize our approach to select a custom ruleset for specifying inferences to be taken into account in a SPARQL query. We show that our extensions are technically feasible by providing benchmark results for RDFS querying in our prototype system GiaBATA, which uses Datalog coupled with a persistent relational database as a back-end for implementing SPARQL with dynamic rule-based inference. By employing different optimization techniques, like magic set rewriting, our system remains competitive with state-of-the-art RDFS querying systems.
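
    The flavor of rule-based RDFS entailment can be conveyed with a tiny forward-chaining sketch in pure Python (this is not the GiaBATA/Datalog implementation): the rdfs:subClassOf typing rule is applied to a fixpoint before a query is answered.

        triples = {
            ("ex:Cat", "rdfs:subClassOf", "ex:Animal"),
            ("ex:felix", "rdf:type", "ex:Cat"),
        }

        def entail(triples):
            # Apply the rule (x type C), (C subClassOf D) => (x type D)
            # repeatedly until no new triples are produced.
            changed = True
            while changed:
                changed = False
                for s, p, o in list(triples):
                    if p != "rdf:type":
                        continue
                    for c, p2, sup in list(triples):
                        if p2 == "rdfs:subClassOf" and c == o \
                                and (s, "rdf:type", sup) not in triples:
                            triples.add((s, "rdf:type", sup))
                            changed = True
            return triples

        print(("ex:felix", "rdf:type", "ex:Animal") in entail(triples))  # True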

  9. Large Format, Background Limited Arrays of Kinetic Inductance Detectors for Sub-mm Astronomy

    NASA Astrophysics Data System (ADS)

    Baselmans, Jochem

    2018-01-01

    We present the development of large-format imaging arrays for sub-mm astronomy based upon microwave Kinetic Inductance Detectors and their read-out. In particular we focus on the arrays developed for the A-MKID instrument for the APEX telescope. A-MKID contains 2 focal plane arrays, covering a field of view of 15' x 15'. One array is optimized for the 350 GHz telluric window, the other for the 850 GHz window. Both arrays are constructed from four 61 x 61 mm detector chips, each of which contains up to 3400 detectors and up to 880 detectors per readout line. The detectors are lens-antenna coupled MKIDs made from NbTiN and aluminium that reach photon-noise-limited sensitivity in combination with a high optical coupling. The lens-antenna radiation coupling enables the use of 4K optics and a Lyot stop due to the intrinsic directivity of the detector beam, allowing a simple cryogenic architecture. We discuss the pixel design and verification, detector packaging, and the array performance. We will also discuss the readout system, which is a combination of a digital and analog back-end that can read out up to 4000 pixels simultaneously using frequency division multiplexing.

  10. Conformal and Spectrally Agile Ultra Wideband Phased Array Antenna for Communication and Sensing

    NASA Technical Reports Server (NTRS)

    Novak, M.; Alwan, Elias; Miranda, Felix; Volakis, John

    2015-01-01

    There is a continuing need to reduce the size and weight of satellite systems, and there is also strong interest in increasing the functional role of small- and nano-satellites (for instance SmallSats and CubeSats). To this end, a family of arrays is presented, demonstrating ultra-wideband operation across the numerous satellite communications and sensing frequencies up to the Ku-, Ka-, and millimeter-wave bands. An example design is demonstrated to operate from 3.5-18.5 GHz with VSWR < 2 at broadside, and validated through fabrication of an 8 x 8 prototype. This design is optimized for low cost, using Printed Circuit Board (PCB) fabrication. With the same fabrication technology, scaling is shown to be feasible up to a 9-49 GHz band. Further designs are discussed, which extend this wideband operation beyond the Ka-band, for instance from 20-80 GHz. Finally, we discuss recent efforts in the direct integration of such arrays with digital beamforming back-ends. It will be shown that using a novel on-site coding architecture, an orders-of-magnitude reduction in hardware size, power, and cost is accomplished in this transceiver.

  11. RRI-GBT MULTI-BAND RECEIVER: MOTIVATION, DESIGN, AND DEVELOPMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maan, Yogesh; Deshpande, Avinash A.; Chandrashekar, Vinutha

    2013-01-15

    We report the design and development of a self-contained multi-band receiver (MBR) system, intended for use with a single large aperture to facilitate sensitive and high time-resolution observations simultaneously in 10 discrete frequency bands sampling a wide spectral span (100-1500 MHz) in a nearly log-periodic fashion. The development of this system was primarily motivated by the need for tomographic studies of pulsar polar emission regions. Although the system design is optimized for the primary goal, it is also suited for several other interesting astronomical investigations. The system consists of a dual-polarization multi-band feed (with discrete responses corresponding to the 10 bands pre-selected as relatively radio frequency interference free), a common wide-band radio frequency front-end, and independent back-end receiver chains for the 10 individual sub-bands. The raw voltage time sequences corresponding to 16 MHz bandwidth each for the two linear polarization channels and the 10 bands are recorded at the Nyquist rate simultaneously. We present preliminary results from the tests and pulsar observations carried out with the Robert C. Byrd Green Bank Telescope using this receiver. The system performance implied by these results and possible improvements are also briefly discussed.

  12. Quality standards for predialysis education: results from a consensus conference.

    PubMed

    Isnard Bagnis, Corinne; Crepaldi, Carlo; Dean, Jessica; Goovaerts, Tony; Melander, Stefan; Nilsson, Eva-Lena; Prieto-Velasco, Mario; Trujillo, Carmen; Zambon, Roberto; Mooney, Andrew

    2015-07-01

    This position statement was compiled following an expert meeting in March 2013, Zurich, Switzerland. Attendees were invited from a spread of European renal units with established and respected renal replacement therapy option education programmes. Discussions centred around optimal ways of creating an education team, setting realistic and meaningful objectives for patient education, and assessing the quality of education delivered. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA.

  13. Toward Abstracting the Communication Intent in Applications to Improve Portability and Productivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mintz, Tiffany M; Hernandez, Oscar R; Kartsaklis, Christos

    Programming with communication libraries such as the Message Passing Interface (MPI) obscures the high-level intent of the communication in an application and makes static communication analysis difficult to do. Compilers are unaware of communication library specifics, leading to the exclusion of communication patterns from any automated analysis and optimizations. To overcome this, communication patterns can be expressed at higher levels of abstraction and incrementally added to existing MPI applications. In this paper, we propose the use of directives to clearly express the communication intent of an application in a way that is not specific to a given communication library. Our communication directives allow programmers to express communication among processes in a portable way, giving hints to the compiler on regions of computation that can be overlapped with communication and relaxing communication constraints on the ordering, completion, and synchronization of the communication imposed by specific libraries such as MPI. The directives can then be translated by the compiler into message passing calls that efficiently implement the intended pattern and be targeted to multiple communication libraries. Thus far, we have used the directives to express point-to-point communication patterns in C, C++ and Fortran applications, and have translated them to MPI and SHMEM.
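
    The overlap hint such a directive conveys corresponds, roughly, to the following hand-written mpi4py pattern (a sketch assuming an even number of ranks): start a nonblocking exchange, compute on independent data, then wait.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        peer = rank ^ 1  # pair ranks 0-1, 2-3, ... (assumes even world size)

        send_buf = np.full(1024, float(rank))  # buffer must outlive the Isend
        halo = np.zeros(1024)
        reqs = [comm.Isend(send_buf, dest=peer),
                comm.Irecv(halo, source=peer)]
        interior = np.arange(1_000_000, dtype=float).sum()  # overlapped work
        MPI.Request.Waitall(reqs)  # communication completes here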

  14. ROSE Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinlan, D.; Yi, Q.; Vuduc, R.

    2005-02-17

    ROSE is an object-oriented software infrastructure for source-to-source translation that provides an interface for programmers to write their own specialized translators for optimizing scientific applications. ROSE is a part of current research on telescoping languages, which provides optimizations of the use of libraries in scientific applications. ROSE defines approaches to extend the optimization techniques common in well-defined languages to the optimization of scientific applications using well-defined libraries. ROSE includes a rich set of tools for generating customized transformations to support optimization of application codes. We currently support full C and C++ (including template instantiation, etc.), with Fortran 90 support under development as part of a collaboration and contract with Rice to use their version of the open-source Open64 F90 front-end. ROSE represents an attempt to define an open compiler infrastructure to handle the full complexity of full-scale DOE application codes using the languages common to scientific computing within DOE. We expect that such an infrastructure will also be useful for the development of numerous tools that may then realistically expect to work on DOE full-scale applications.
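
    As a loose analogy to this style of source-to-source translation, the sketch below uses Python's own ast module (ROSE itself operates on C/C++/Fortran ASTs) to rewrite calls to a hypothetical slow_dot() into fast_dot() and re-emit source.

        import ast

        class ReplaceCall(ast.NodeTransformer):
            def visit_Call(self, node):
                self.generic_visit(node)
                if isinstance(node.func, ast.Name) and node.func.id == "slow_dot":
                    node.func.id = "fast_dot"  # library-aware substitution
                return node

        tree = ast.parse("y = slow_dot(a, b)")
        tree = ast.fix_missing_locations(ReplaceCall().visit(tree))
        print(ast.unparse(tree))  # y = fast_dot(a, b)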

  15. Optimization of lattice surgery is NP-hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon J.

    2017-09-01

    The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.

  16. Optimization of topological quantum algorithms using Lattice Surgery is hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon

    The traditional method for computation in the surface code or the Raussendorf model is the creation of holes or ''defects'' within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn our attention to the lattice surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic and achieves universality without using defects for encoding information. In both braid-based and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest physical qubit requirements, and prove that the complexity of optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.

  17. DFT Performance Prediction in FFTW

    NASA Astrophysics Data System (ADS)

    Gu, Liang; Li, Xiaoming

    Fastest Fourier Transform in the West (FFTW) is an adaptive FFT library that generates highly efficient Discrete Fourier Transform (DFT) implementations. It is one of the fastest FFT libraries available and it outperforms many adaptive or hand-tuned DFT libraries. Its success largely relies on the huge search space spanned by several FFT algorithms and a set of compiler generated C code (called codelets) for small size DFTs. FFTW empirically finds the best algorithm by measuring the performance of different algorithm combinations. Although the empirical search works very well for FFTW, the search process does not explain why the best plan found performs best, and the search overhead grows polynomially as the DFT size increases. The opposite of empirical search is model-driven optimization. However, it is widely believed that model-driven optimization is inferior to empirical search and is particularly powerless to solve problems as complex as the optimization of DFT.
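
    The empirical-search idea itself is simple to sketch (Python; this is a generic autotuner, not FFTW's planner): time each candidate implementation on the machine at hand and keep the fastest.

        import timeit

        def plan_a(x):
            return sum(x)

        def plan_b(x):
            total = 0
            for v in x:
                total += v
            return total

        data = list(range(10_000))
        candidates = {"builtin-sum": plan_a, "loop": plan_b}
        best = min(candidates,
                   key=lambda k: timeit.timeit(lambda: candidates[k](data),
                                               number=200))
        print("selected plan:", best)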

  18. From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation

    DOE PAGES

    Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...

    2013-01-01

    Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.

  19. Declarative language design for interactive visualization.

    PubMed

    Heer, Jeffrey; Bostock, Michael

    2010-01-01

    We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.

  20. State University of New York Institute of Technology (SUNYIT) Visiting Scholars Program

    DTIC Science & Technology

    2013-05-01

    team members, and build the necessary backend metal interconnections. Baek-Young Choi...Cooperative and Opportunistic Mobile Cloud for Energy Efficient Positioning; Department of Computer Science Electrical Engineering, University of...Missouri - Kansas City. The fast-growing popularity of smartphones and tablets enables the use of various intelligent mobile applications. As many of

  1. Marine Corps Budgetary Reprogramming Effectiveness

    DTIC Science & Technology

    2015-03-01

    infrastructure (Appropriations Act of Congress, 2008). The environmental restoration is a transfer account controlled by the DOD. Usually in the case of...at an average just over 11 percent and the Marine Corps encircle the backend of the DOD portion of reprogramming with the Marine Corps reprogramming...blue force tracker (BFT), radio systems, high mobility multipurpose wheeled vehicle (HMMWV), medium tactical vehicle replacement (MTVR), and

  2. Real-time Scheduling for GPUS with Applications in Advanced Automotive Systems

    DTIC Science & Technology

    2015-01-01

    Architecture of GPU tasklet scheduling infrastructure...throughput. This disparity is even greater when we consider mobile CPUs, such as those designed by ARM. For instance, the ARM Cortex-A15 series processor as...stub library that replaces the GPGPU runtime within each virtual machine. The stub library communicates API calls to a GPGPU backend user-space daemon

  3. Information Dynamics as Foundation for Network Management

    DTIC Science & Technology

    2014-12-04

    developed to adapt to channel dynamics in a mobile network environment. We devise a low-complexity online scheduling algorithm integrated with the...has been accepted for the Journal on Network and Systems Management in 2014. - RINC programmable platform for Infrastructure-as-a-Service public...backend servers. Rather than implementing load balancing in dedicated appliances, commodity SDN switches can perform this function. We design

  4. 40 CFR 63.495 - Back-end process provisions-procedures to determine compliance using stripping technology.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... is to be determined using the Methods specified in paragraph (e) of this section. (4) The quantity of... methods of determining this quantity are production records, measurement of stream characteristics, and... paragraph (d)(1)(i), (d)(1)(ii), or (d)(1)(iii) of this section. (i) When the latex is not blended with...

  5. 40 CFR 63.494 - Back-end process provisions-residual organic HAP and emission limitations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... be measured after the stripping operation (or the reactor(s), if the plant has no stripper(s)), as... operation (or the reactor(s), if the plant has no stripper(s)). The limitation shall be calculated and... = Controlled emissions in 2009, Mg/yr P2009 = Total elastomer product leaving the stripper in 2009, Mg/yr...

  6. 40 CFR 63.494 - Back-end process provisions-residual organic HAP and emission limitations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... be measured after the stripping operation (or the reactor(s), if the plant has no stripper(s)), as... operation (or the reactor(s), if the plant has no stripper(s)). The limitation shall be calculated and... = Controlled emissions in 2009, Mg/yr P2009 = Total elastomer product leaving the stripper in 2009, Mg/yr...

  7. Readout, first- and second-level triggers of the new Belle silicon vertex detector

    NASA Astrophysics Data System (ADS)

    Friedl, M.; Abe, R.; Abe, T.; Aihara, H.; Asano, Y.; Aso, T.; Bakich, A.; Browder, T.; Chang, M. C.; Chao, Y.; Chen, K. F.; Chidzik, S.; Dalseno, J.; Dowd, R.; Dragic, J.; Everton, C. W.; Fernholz, R.; Fujii, H.; Gao, Z. W.; Gordon, A.; Guo, Y. N.; Haba, J.; Hara, K.; Hara, T.; Harada, Y.; Haruyama, T.; Hasuko, K.; Hayashi, K.; Hazumi, M.; Heenan, E. M.; Higuchi, T.; Hirai, H.; Hitomi, N.; Igarashi, A.; Igarashi, Y.; Ikeda, H.; Ishino, H.; Itoh, K.; Iwaida, S.; Kaneko, J.; Kapusta, P.; Karawatzki, R.; Kasami, K.; Kawai, H.; Kawasaki, T.; Kibayashi, A.; Koike, S.; Korpar, S.; Križan, P.; Kurashiro, H.; Kusaka, A.; Lesiak, T.; Limosani, A.; Lin, W. C.; Marlow, D.; Matsumoto, H.; Mikami, Y.; Miyake, H.; Moloney, G. R.; Mori, T.; Nakadaira, T.; Nakano, Y.; Natkaniec, Z.; Nozaki, S.; Ohkubo, R.; Ohno, F.; Okuno, S.; Onuki, Y.; Ostrowicz, W.; Ozaki, H.; Peak, L.; Pernicka, M.; Rosen, M.; Rozanska, M.; Sato, N.; Schmid, S.; Shibata, T.; Stamen, R.; Stanič, S.; Steininger, H.; Sumisawa, K.; Suzuki, J.; Tajima, H.; Tajima, O.; Takahashi, K.; Takasaki, F.; Tamura, N.; Tanaka, M.; Taylor, G. N.; Terazaki, H.; Tomura, T.; Trabelsi, K.; Trischuk, W.; Tsuboyama, T.; Uchida, K.; Ueno, K.; Ueno, K.; Uozaki, N.; Ushiroda, Y.; Vahsen, S.; Varner, G.; Varvell, K.; Velikzhanin, Y. S.; Wang, C. C.; Wang, M. Z.; Watanabe, M.; Watanabe, Y.; Yamada, Y.; Yamamoto, H.; Yamashita, Y.; Yamashita, Y.; Yamauchi, M.; Yanai, H.; Yang, R.; Yasu, Y.; Yokoyama, M.; Ziegler, T.; Žontar, D.

    2004-12-01

    A major upgrade of the Silicon Vertex Detector (SVD 2.0) of the Belle experiment at the KEKB factory was installed, along with new front-end and back-end electronics systems, during the summer shutdown period in 2003 to cope with higher particle rates, improve the track resolution, and meet the increasing requirements of radiation tolerance. The SVD 2.0 detector modules are read out by VA1TA chips, which provide "fast or" (hit) signals that are combined by the back-end FADCTF modules into coarse but immediate level-0 track trigger signals at rates of several tens of kHz. Moreover, the digitized detector signals are compared to threshold lookup tables in the FADCTFs to pass on hit information on a single-strip basis to the subsequent level-1.5 trigger system, which reduces the rate below the kHz range. Both the FADCTF and level-1.5 electronics make use of parallel real-time processing in Field Programmable Gate Arrays (FPGAs), while further data acquisition and event building is done by PC farms running Linux. The new readout system hardware is described and the first results obtained with cosmics are shown.

  8. TRIO: Burst Buffer Based I/O Orchestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Oral, H Sarp; Pritchard, Michael

    The growing computing power on leadership HPC systems is often accompanied by ever-escalating failure rates. Checkpointing is a common defensive mechanism used by scientific applications for failure recovery. However, directly writing the large and bursty checkpointing dataset to the parallel filesystem can incur significant I/O contention on storage servers. Such contention in turn degrades the raw bandwidth utilization of storage servers and prolongs the average job I/O time of concurrent applications. Recently, burst buffers have been proposed as an intermediate layer to absorb the bursty I/O traffic from compute nodes to the storage backend. But an I/O orchestration mechanism is still desired to efficiently move checkpointing data from burst buffers to the storage backend. In this paper, we propose a burst buffer based I/O orchestration framework, named TRIO, to intercept and reshape the bursty writes into better sequential write traffic to storage servers. Meanwhile, TRIO coordinates the flushing orders among concurrent burst buffers to alleviate the contention on storage server bandwidth. Our experimental results reveal that TRIO can deliver 30.5% higher bandwidth and reduce the average job I/O time by 37% on average for data-intensive applications in various checkpointing scenarios.
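
    The reshaping idea can be sketched with a toy write-coalescing buffer in Python (illustrative only, not TRIO itself): many small bursty writes are absorbed in memory and issued as a few large sequential appends.

        import io

        class BurstBuffer:
            def __init__(self, path, threshold=1 << 20):
                self.path, self.threshold = path, threshold
                self.buf = io.BytesIO()

            def write(self, chunk: bytes):
                self.buf.write(chunk)
                if self.buf.tell() >= self.threshold:
                    self.flush()

            def flush(self):
                # One large append keeps the storage-server traffic sequential.
                with open(self.path, "ab") as f:
                    f.write(self.buf.getvalue())
                self.buf = io.BytesIO()

        bb = BurstBuffer("checkpoint.dat")
        for _ in range(4096):
            bb.write(b"x" * 512)   # 2 MiB of small writes -> 2 large flushes
        bb.flush()                 # drain the remainder, if any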

  9. Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL

    NASA Astrophysics Data System (ADS)

    Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong

    2011-12-01

    We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it "multi-tier". The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, are discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing will be outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described. And application characteristics of GUMS and VOMS which enable effective clustering will be explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.

  10. Simple Smartphone-Based Guiding System for Visually Impaired People

    PubMed Central

    Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying

    2017-01-01

    Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region convolutional neural network algorithm or the you only look once algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The results of obstacle recognition in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them.

  11. Simple Smartphone-Based Guiding System for Visually Impaired People.

    PubMed

    Lin, Bor-Shing; Lee, Cheng-Che; Chiang, Pei-Ying

    2017-06-13

    Visually impaired people are often unaware of dangers in front of them, even in familiar environments. Furthermore, in unfamiliar environments, such people require guidance to reduce the risk of colliding with obstacles. This study proposes a simple smartphone-based guiding system for solving the navigation problems for visually impaired people and achieving obstacle avoidance to enable visually impaired people to travel smoothly from a beginning point to a destination with greater awareness of their surroundings. In this study, a computer image recognition system and smartphone application were integrated to form a simple assisted guiding system. Two operating modes, online mode and offline mode, can be chosen depending on network availability. When the system begins to operate, the smartphone captures the scene in front of the user and sends the captured images to the backend server to be processed. The backend server uses the faster region convolutional neural network algorithm or the you only look once algorithm to recognize multiple obstacles in every image, and it subsequently sends the results back to the smartphone. The results of obstacle recognition in this study reached 60%, which is sufficient for assisting visually impaired people in realizing the types and locations of obstacles around them.
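
    The online-mode round trip described in both records reduces to a simple client call; the sketch below (Python with requests; the endpoint URL and JSON shape are invented) posts a captured frame and receives labeled obstacle boxes from the backend detector.

        import requests

        def recognize(image_path, server="http://backend.example/detect"):
            with open(image_path, "rb") as f:
                resp = requests.post(server, files={"image": f}, timeout=5)
            resp.raise_for_status()
            # Hypothetical response: [{"label": "chair", "box": [x, y, w, h]}, ...]
            return resp.json()

        for obstacle in recognize("frame.jpg"):
            print(obstacle["label"], obstacle["box"])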

  12. A Secure RFID Tag Authentication Protocol with Privacy Preserving in Telecare Medicine Information System.

    PubMed

    Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi

    2015-08-01

    Radio Frequency Identification (RFID) based solutions are widely used for providing many healthcare applications, including patient monitoring, object traceability, drug administration systems, and telecare medicine information systems (TMIS). In order to reduce malpractice and ensure patient privacy, in 2015 Srivastava et al. proposed a hash-based RFID tag authentication protocol for TMIS. Their protocol uses a lightweight hash operation and a synchronized secret value shared between the back-end server and the tag, and is more secure and efficient than other related RFID authentication protocols. Unfortunately, in this paper we demonstrate that Srivastava et al.'s tag authentication protocol has a serious security problem: an adversary may use a stolen/lost reader to connect to the medical back-end server that stores information associated with tagged objects, and this privacy damage means the adversary could maliciously reveal medical data obtained from stolen/lost readers. Therefore, we propose a secure and efficient RFID tag authentication protocol to overcome such security flaws and improve system efficiency. Compared with Srivastava et al.'s protocol, the proposed protocol not only inherits the advantages of Srivastava et al.'s authentication protocol for TMIS but also provides better security with high system efficiency.
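
    A generic hash-based, synchronized-secret round (not Srivastava et al.'s exact message flow) can be sketched as follows in Python: the tag proves knowledge of the shared secret without transmitting it, and both sides then advance the secret.

        import hashlib, os

        def h(*parts: bytes) -> bytes:
            return hashlib.sha256(b"|".join(parts)).digest()

        secret = os.urandom(16)        # shared by back-end server and tag

        challenge = os.urandom(16)     # server -> tag
        tag_response = h(secret, challenge)           # tag -> server
        assert tag_response == h(secret, challenge)   # server-side check

        # Both sides advance the synchronized secret to resist replay/tracking.
        secret = h(secret, b"update")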

  13. WPS mediation: An approach to process geospatial data on different computing backends

    NASA Astrophysics Data System (ADS)

    Giuliani, Gregory; Nativi, Stefano; Lehmann, Anthony; Ray, Nicolas

    2012-10-01

    The OGC Web Processing Service (WPS) specification allows generating information by processing distributed geospatial data made available through Spatial Data Infrastructures (SDIs). However, current SDIs have limited analytical capacities, and various problems emerge when trying to use them in data- and computing-intensive domains such as the environmental sciences. These problems are usually not, or only partially, solvable using single computing resources. Therefore, the Geographic Information (GI) community is trying to benefit from the superior storage and computing capabilities offered by distributed computing (e.g., Grids, Clouds) methods and technologies. Currently, there is no commonly agreed approach to grid-enable WPS. No implementation allows one to seamlessly execute a geoprocessing calculation following user requirements on different computing backends, ranging from a stand-alone GIS server up to computer clusters and large Grid infrastructures. Considering this issue, this paper presents a proof of concept that mediates different geospatial and Grid software packages and proposes an extension of the WPS specification through two optional parameters. The applicability of this approach will be demonstrated using a Normalized Difference Vegetation Index (NDVI) mediated WPS process, highlighting benefits and issues that need to be further investigated to improve performance.
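
    The sketch below shows how a client might pass such optional parameters in a WPS 1.0.0 key-value Execute request; the endpoint, the process identifier, the input encoding, and the two extension parameter names (backend, jobqueue) are all hypothetical, since the paper only proposes that two optional parameters be added:

    ```python
    import requests

    # Hypothetical endpoint; 'backend' and 'jobqueue' stand in for the two
    # optional parameters proposed in the paper -- they are not part of the
    # WPS 1.0.0 standard.
    WPS_URL = "http://example.org/wps"

    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": "ndvi",          # the NDVI process of the case study
        "datainputs": "red=@xlink:href=http://example.org/red.tif;"
                      "nir=@xlink:href=http://example.org/nir.tif",
        "backend": "grid",             # invented: stand-alone | cluster | grid
        "jobqueue": "long",            # invented: scheduling hint
    }
    response = requests.get(WPS_URL, params=params)
    print(response.status_code)
    ```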

  14. Reconfigurable radio receiver with fractional sample rate converter and multi-rate ADC based on LO-derived sampling clock

    NASA Astrophysics Data System (ADS)

    Park, Sungkyung; Park, Chester Sungchung

    2018-03-01

    A composite radio receiver back-end and digital front-end, made up of a delta-sigma analogue-to-digital converter (ADC) with a high-speed low-noise sampling clock generator, and a fractional sample rate converter (FSRC), is proposed and designed for a multi-mode reconfigurable radio. The proposed radio receiver architecture contributes to saving chip area and thus lowering the design cost. To enable inter-radio access technology handover and ultimately software-defined radio reception, a reconfigurable radio receiver consisting of a multi-rate ADC with its sampling clock derived from a local oscillator, followed by a rate-adjustable FSRC for decimation, is designed. Clock phase noise and timing jitter are examined to support the effectiveness of the proposed radio receiver. An FSRC is modelled and simulated with a cubic polynomial interpolator based on the Lagrange method, and its spectral-domain view is examined in order to verify its effect on aliasing, nonlinearity and signal-to-noise ratio, giving insight into the design of the decimation chain. The sampling clock path and the radio receiver back-end data path are designed in a 90-nm CMOS process technology with a 1.2 V supply.
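
    A minimal NumPy sketch of the cubic Lagrange interpolation at the heart of such an FSRC; this is illustrative only (the paper's design is a hardware datapath), and the conversion ratio used here is arbitrary:

    ```python
    import numpy as np

    def lagrange_cubic_resample(x, ratio):
        """Fractional sample-rate conversion by cubic Lagrange interpolation.
        x: input samples; ratio: input rate / output rate."""
        n_out = int((len(x) - 3) / ratio)
        y = np.empty(n_out)
        for n in range(n_out):
            t = n * ratio + 1.0        # +1 keeps a left neighbour in the stencil
            i = int(t)                 # base sample index
            mu = t - i                 # fractional offset in [0, 1)
            s = x[i - 1:i + 3]         # 4-point stencil around the target
            w = np.array([-mu * (mu - 1) * (mu - 2) / 6,
                          (mu + 1) * (mu - 1) * (mu - 2) / 2,
                          -(mu + 1) * mu * (mu - 2) / 2,
                          (mu + 1) * mu * (mu - 1) / 6])
            y[n] = w @ s               # interpolated output sample
        return y

    x = np.sin(2 * np.pi * 0.01 * np.arange(1000))
    y = lagrange_cubic_resample(x, ratio=48 / 44.1)   # e.g. 48 kHz -> 44.1 kHz
    ```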

  15. Spectral optimization and uncertainty quantification in combustion modeling

    NASA Astrophysics Data System (ADS)

    Sheen, David Allan

    Reliable simulations of reacting flow systems require a well-characterized, detailed chemical model as a foundation. Accuracy of such a model can be assured, in principle, by a multi-parameter optimization against a set of experimental data. However, the inherent uncertainties in the rate evaluations and experimental data leave a model still characterized by some finite kinetic rate parameter space. Without a careful analysis of how this uncertainty space propagates into the model's predictions, those predictions can at best be trusted only qualitatively. In this work, the Method of Uncertainty Minimization using Polynomial Chaos Expansions is proposed to quantify these uncertainties. In this method, the uncertainty in the rate parameters of the as-compiled model is quantified. Then, the model is subjected to a rigorous multi-parameter optimization, as well as a consistency-screening process. Lastly, the uncertainty of the optimized model is calculated using an inverse spectral optimization technique, and then propagated into a range of simulation conditions. An as-compiled, detailed H2/CO/C1-C4 kinetic model is combined with a set of ethylene combustion data to serve as an example. The idea that the hydrocarbon oxidation model should be understood and developed in a hierarchical fashion has been a major driving force in kinetics research for decades. How this hierarchical strategy works at a quantitative level, however, has never been addressed. In this work, we use ethylene and propane combustion as examples and explore the question of hierarchical model development quantitatively. The Method of Uncertainty Minimization using Polynomial Chaos Expansions is utilized to quantify the amount of information that a particular combustion experiment, and thereby each data set, contributes to the model. This knowledge is applied to explore the relationships among the combustion chemistry of hydrogen/carbon monoxide, ethylene, and larger alkanes. Frequently, new data will become available, and it will be desirable to know the effect that inclusion of these data has on the optimized model. Two cases are considered here. In the first, a study of H2/CO mass burning rates has recently been published, wherein the experimentally-obtained results could not be reconciled with any extant H2/CO oxidation model. It is shown that an optimized H2/CO model can be developed that will reproduce the results of the new experimental measurements. In addition, the high precision of the new experiments provides a strong constraint on the reaction rate parameters of the chemistry model, manifested in a significant improvement in the precision of simulations. In the second case, species time histories were measured during n-heptane oxidation behind reflected shock waves. The highly precise nature of these measurements is expected to impose critical constraints on chemical kinetic models of hydrocarbon combustion. The results show that while an as-compiled, prior reaction model of n-alkane combustion can be accurate in its prediction of the detailed species profiles, the kinetic parameter uncertainty in the model remains too large to obtain a precise prediction of the data. Constraining the prior model against the species time histories within the measurement uncertainties led to notable improvements in the precision of model predictions against the species data as well as the global combustion properties considered. Lastly, we show that while the capability of the multispecies measurement presents a step-change in our precise knowledge of the chemical processes in hydrocarbon combustion, accurate data on global combustion properties are still necessary to predict fuel combustion.
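
    A toy sketch of the polynomial chaos idea underlying the method, assuming a single normalized rate parameter and an invented observable; the actual MUM-PCE treatment is multivariate and coupled to an optimization:

    ```python
    import math
    import numpy as np
    from numpy.polynomial import hermite_e as He

    rng = np.random.default_rng(0)

    def observable(z):
        # Toy stand-in for a combustion observable (e.g. a flame speed)
        # driven by one normalized kinetic rate parameter z ~ N(0, 1).
        return np.exp(0.3 * z)

    z = rng.standard_normal(5000)        # samples of the uncertain parameter
    y = observable(z)

    # Least-squares fit of a degree-3 probabilists' Hermite expansion.
    c = He.hermefit(z, y, deg=3)

    # For this basis, mean = c0 and variance = sum_{k>=1} k! * c_k^2.
    var = sum(math.factorial(k) * c[k] ** 2 for k in range(1, 4))
    print(c[0], var ** 0.5)              # predicted mean and 1-sigma spread
    ```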

  16. Human factors in incident reporting

    NASA Technical Reports Server (NTRS)

    Jones, S. G.

    1993-01-01

    The paper proposes that a cooperative research effort be undertaken by academic institutions and industry organizations toward the compilation of a human factors database in conjunction with technical information. Team members in any discipline can benefit and learn from observing positive examples of decision making and performance by crews under stressful or less-than-optimal circumstances. The opportunity to note trends in interpersonal and interactive behaviors and to categorize them in terms of more or less desirable outcomes should not be missed.

  17. [Limited access to the international medical literature in Russia].

    PubMed

    Jargin, Sergei V

    2012-06-01

    Limited access to foreign professional literature in the former Soviet Union had consequences for public health: persistence of some outdated methods and approaches. Several examples are discussed in this letter. The shortage of foreign literature has been partly compensated by domestic editions, sometimes containing compilations from foreign sources, borrowings without references, and mistranslations. International literature is on average scarcely quoted in Russian language scientific publications. Today, however, there are grounds for optimism: the economic upturn must bring improvements.

  18. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 3 software suite can be compiled for Microsoft Windows® and Linux® operating systems; the source code is available in a Microsoft Visual Studio® 2013 solution; Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.

  19. Cedar-a large scale multiprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gajski, D.; Kuck, D.; Lawrie, D.

    1983-01-01

    This paper presents an overview of Cedar, a large scale multiprocessor being designed at the University of Illinois. This machine is designed to accommodate several thousand high performance processors which are capable of working together on a single job, or they can be partitioned into groups of processors where each group of one or more processors can work on separate jobs. Various aspects of the machine are described including the control methodology, communication network, optimizing compiler and plans for construction.

  20. Co-arrays in the Next Fortran Standard

    DOE PAGES

    Reid, John; Numrich, Robert W.

    2007-01-01

    The WG5 committee, at its meeting in Delft, May 2005, decided to include co-arrays in the next Fortran Standard. A Fortran program containing co-arrays is interpreted as if it were replicated a fixed number of times and all copies were executed asynchronously. Each copy has its own set of data objects and is called an image. The array syntax of Fortran is extended with additional trailing subscripts in square brackets to give a clear and straightforward representation of access to data on other images. References without square brackets are to local data, so code that can run independently is uncluttered. Any occurrence of square brackets is a warning about communication between images. The additional syntax requires support in the compiler, but it has been designed to be easy to implement and to give the compiler scope both to apply its optimizations within each image and to optimize the communication between images. The extension includes execution control statements for synchronizing images and intrinsic procedures to return the number of images, to return the index of the current image, and to perform collective operations. The paper does not attempt to describe the full details of the feature as it now appears in the draft of the new standard. Instead, we describe a subset and demonstrate the use of this subset with examples.

  1. Hopelessness is associated with decreased heart rate variability during championship chess games.

    PubMed

    Schwarz, Alfons M; Schächinger, Hartmut; Adler, Rolf H; Goetz, Stefan M

    2003-01-01

    Clinical observations suggest that negative affects such as helplessness/hopelessness (HE/HO) may induce autonomic duration; affects were assessed for every move after reconstruction of the games. In all games compiled, 18 situations of intense confidence/optimism and 20 of intense helplessness/hopelessness were observed. Intense affects of HE/HO were associated with decreasing HF-HRV (Fisher exact test, p = .003), increasing "nervousness" (p = .0005), decreasing "optimism" (p = .0005), and decreasing "calmness" (p = .0005). Investigation of championship chess players with an ELO strength ≥ 2300 in a natural field setting revealed increasing HE/HO being associated with reduced HF-HRV, suggestive of vagal withdrawal. Thus, our data may help link negative mood states, autonomic nervous system disturbances, and cardiac events.

  2. Use of CYBER 203 and CYBER 205 computers for three-dimensional transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Melson, N. D.; Keller, J. D.

    1983-01-01

    Experiences are discussed for modifying two three-dimensional transonic flow computer programs (FLO 22 and FLO 27) for use on the CDC CYBER 203 computer system. Both programs were originally written for use on serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine: leaving the program in a scalar form (i.e., serial computation) with compiler software used to optimize and vectorize the program, vectorizing parts of the existing algorithm in the program, and incorporating a vectorizable algorithm (ZEBRA I or ZEBRA II) in the program. Comparison runs of the programs were made on CDC CYBER 175, CYBER 203, and two-pipe CDC CYBER 205 computer systems.
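
    The kind of restructuring at stake can be sketched in NumPy: the scalar form mirrors a serial loop nest, while the whole-array form is what a vector machine (or vectorizing compiler) rewards. The relaxation stencil here is invented for illustration, not taken from FLO 22/27:

    ```python
    import numpy as np

    def jacobi_scalar(u):
        # Serial form, analogous to the original scalar loops.
        v = u.copy()
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                v[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
        return v

    def jacobi_vector(u):
        # Whole-array form: the restructuring a vector machine rewards.
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
        return v

    u = np.random.rand(64, 64)
    assert np.allclose(jacobi_scalar(u), jacobi_vector(u))
    ```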

  3. An Atmospheric General Circulation Model with Chemistry for the CRAY T3E: Design, Performance Optimization and Coupling to an Ocean Model

    NASA Technical Reports Server (NTRS)

    Farrara, John D.; Drummond, Leroy A.; Mechoso, Carlos R.; Spahr, Joseph A.

    1998-01-01

    The design, implementation and performance optimization on the CRAY T3E of an atmospheric general circulation model (AGCM) which includes the transport of, and chemical reactions among, an arbitrary number of constituents is reviewed. The parallel implementation is based on a two-dimensional (longitude and latitude) data domain decomposition. Initial optimization efforts centered on minimizing the impact of substantial static and weakly-dynamic load imbalances among processors through load redistribution schemes. Recent optimization efforts have centered on single-node optimization. Strategies employed include loop unrolling, both manually and through the compiler, the use of an optimized assembler-code library for special function calls, and restructuring of parts of the code to improve data locality. Data exchanges and synchronizations involved in coupling different data-distributed models can account for a significant fraction of the running time. Therefore, the required scattering and gathering of data must be optimized. In systems such as the T3E, there is much more aggregate bandwidth in the total system than in any particular processor. This suggests a distributed design. The design and implementation of such a distributed 'Data Broker' as a means to efficiently couple the components of our climate system model are described.
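
    A toy sketch of static load redistribution of the kind described, with invented per-column costs; the AGCM's actual scheme operates on its two-dimensional domain decomposition:

    ```python
    def redistribute(costs, nproc):
        """Greedy static load balancing: give each grid column (with its
        measured cost) to the currently least-loaded processor. A sketch of
        the load-redistribution idea, not the AGCM's actual scheme."""
        loads = [0.0] * nproc
        owner = {}
        for col in sorted(range(len(costs)), key=lambda i: -costs[i]):
            p = min(range(nproc), key=loads.__getitem__)
            loads[p] += costs[col]
            owner[col] = p
        return owner, loads

    # Columns near the equator cost more (e.g. more convection/chemistry).
    costs = [3.0, 2.5, 1.0, 0.5, 2.8, 1.2, 0.7, 3.1]
    print(redistribute(costs, nproc=3))
    ```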

  4. 40 CFR 63.496 - Back-end process provisions-procedures to determine compliance using control or recovery devices.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... total organic HAP (or TOC, minus methane and ethane) emissions in all process vent streams and primary... TOC (minus methane and ethane) may be measured instead of total organic HAP. (C) The mass rates shall... and outlet of the control device shall be the sum of all total organic HAP (or TOC, minus methane and...

  5. A Case Study in Software Adaptation

    DTIC Science & Technology

    2002-01-01

    1 A Case Study in Software Adaptation Giuseppe Valetto Telecom Italia Lab Via Reiss Romoli 274 10148, Turin, Italy +39 011 2288788...configuration of the service; monitoring of database connectivity from within the service; monitoring of crashes and shutdowns of IM servers; monitoring of...of the IM server all share a relational database and a common runtime state repository, which make up the backend tier, and allow replicas to

  6. Quantifying the Effectiveness of Crowd-Sourced Serious Games

    DTIC Science & Technology

    2014-09-01

    of All Metrics Used in the Thesis . . . . . . . . . . . . . . 37 Table 5.1 Average DAU and MAU for Selected Mobile , Social, and Online Games...of Sample VeriGames . . . . . . . . . . . . . . . . . . . . 41 Table 5.4 ER of Some Mobile , Social and Online Games and Developers . . 41 Table 5.5 ER...a code segment. A backend verification engine then combines the assertions produced from all related game instances and tries to obtain conditions

  7. Crowdsourcing Physical Network Topology Mapping With Net.Tagger

    DTIC Science & Technology

    2016-03-01

    backend server infrastructure. This includes a full security audit, better web services handling, and integration with the OSM stack and dataset to...a novel approach to network infrastructure mapping that combines smartphone apps with crowdsourced collection to gather data for offline aggregation...and analysis. The project aims to build a map of physical network infrastructure such as fiber-optic cables, facilities, and access points. The

  8. CrossTalk. The Journal of Defense Software Engineering. Volume 26, Number 5

    DTIC Science & Technology

    2013-10-01

    to a backend domain managed by the cyber criminal. Mobile bots can perform piggybacking on legitimate applications and steal data by controlling...technology infrastructure for managing identities, interfaces (web and/or mobile ), and agreements with service providers. The necessary capabilities and...platforms of unknown or dubious origin, global access by mobile (and largely insecure) devices, eroded trust boundaries, and the possibility of malevolent

  9. Investigating Quantum Modulation States

    DTIC Science & Technology

    2016-03-01

    Coherent state quantum data encryption is highly interoperable with current classical optical infrastructure in both fiber and free space optical networks...hub’s field of regard has a transmit/receive module that are endpoints of the Lyot filter stage tree within the hub’s backend electro-optics control... mobile airborne and space-borne networking. Just like any laser communication technology, QC links are affected by several sources of distortions

  10. Effective Use of Java Data Objects in Developing Database Applications; Advantages and Disadvantages

    DTIC Science & Technology

    2004-06-01

    DATA OBJECTS IN DEVELOPING DATABASE APPLICATIONS. ADVANTAGES AND DISADVANTAGES Paschalis Zilidis June 2004 Thesis Advisor: Thomas...Objects in Developing Database Applications. Advantages and Disadvantages 6. AUTHOR(S) Paschalis ZILIDIS 5. FUNDING NUMBERS 7. PERFORMING...database for the backend datastore. The major disadvantage of this approach is the well-known “impedance mismatch” in which some form of mapping is

  11. A computer-based time study system for timber harvesting operations

    Treesearch

    Jingxin Wang; Joe McNeel; John Baumgras

    2003-01-01

    A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front-end of the time study system resides on MS Windows CE and the back-end is supported by MS Access. The system consists of three major components: a handheld system, data transfer interface, and data storage...

  12. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web-application for display and analysis of geo-science data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript and perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS -- the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back-end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for data base access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used. Other back-end services may include generation of Google Earth layers using KML; generation of maps via WMS or ArcIMS protocols; and data manipulation with Unix utilities.

  13. Advancements to Visualization Control System (VCS, part of UV-CDAT), a Visualization Package Designed for Climate Scientists

    NASA Astrophysics Data System (ADS)

    Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.

    2017-12-01

    Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package built primarily for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D and 3D visualization functions such as scatter plots and line graphs for 1D data; boxfill, meshfill, isofill and isoline for 2D scalar data; vector glyphs and streamlines for 2D vector data; and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, our plotting routines include projections, Skew-T plots and Taylor diagrams. While VCS provided a user-friendly API, its previous implementation relied on a slow-performing vector graphics (Cairo) backend that is suitable for smaller datasets and non-interactive graphics. The LLNL and Kitware team has added a new backend to VCS that uses the Visualization Toolkit (VTK) as its visualization engine. VTK is one of the most popular open-source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and a pipeline-processing architecture results in a highly performant VCS library, and its multitude of supported data formats and visualization algorithms eases the adoption of new visualization methods and new data formats in VCS. In this presentation, we describe recent contributions to VCS, including new visualization plots, continuous integration testing using Conda and CircleCI, and tutorials and examples using Jupyter notebooks, as well as upgrades that we are planning in the near future to improve its ease of use and reliability and extend its capabilities.

  14. Automatic provisioning, deployment and orchestration for load-balancing THREDDS instances

    NASA Astrophysics Data System (ADS)

    Cofino, A. S.; Fernández-Tejería, S.; Kershaw, P.; Cimadevilla, E.; Petri, R.; Pryor, M.; Stephens, A.; Herrera, S.

    2017-12-01

    THREDDS is a widely used web server that provides different scientific communities with data access and discovery. Due to THREDDS's lack of horizontal scalability and automatic configuration management and deployment, this service usually deals with service downtimes and time-consuming configuration tasks, mainly when it is used intensively, as is usual within the scientific community (e.g., climate). Instead of the typical installation and configuration of single or multiple independent, manually configured THREDDS servers, this work presents an automatically provisioned, deployed and orchestrated cluster of THREDDS servers. This solution is based on Ansible playbooks, used to automatically control the deployment and configuration setup of the infrastructure and to manage the datasets available in THREDDS instances. The playbooks are based on modules (or roles) for different backend and frontend load-balancing setups and solutions. The frontend load-balancing system enables horizontal scalability by delegating requests to backend workers, consisting of a variable number of instances of the THREDDS server. This implementation allows different infrastructure and deployment scenario setups to be configured, as more workers are easily added to the cluster by simply declaring them as Ansible variables and executing the playbooks; it also provides fault tolerance and better reliability, since if any of the workers fails another instance of the cluster can take over. In order to test the proposed solution, two real scenarios are analyzed in this contribution: the JASMIN Group Workspaces at CEDA and the User Data Gateway (UDG) at the Data Climate Service of the University of Cantabria. On the one hand, the proposed configuration has provided CEDA with a higher-level and more scalable Group Workspaces (GWS) service than the previous one based on Unix permissions, also improving the data discovery and data access experience. On the other hand, the UDG has improved its scalability by allowing requests to be distributed to the backend workers instead of being served by a single THREDDS worker. In conclusion, the proposed configuration represents a significant improvement with respect to configurations based on non-collaborative THREDDS instances.

  15. LQG/LTR Optimal Attitude Control of Small Flexible Spacecraft Using Free-Free Boundary Conditions

    DTIC Science & Technology

    2006-08-03

    particular on attitude control of flexible space structures. Croopnick et al. [50] present a literature survey in the areas of attitude control...modeling and control of space structures is compiled by Nurre et al. [161]. One important thing to note from the surveys listed above is the focus on the...papers surveyed by Croopnick et al. in 1979, by Meirovitch in 1979, Balas in 1982, and Nurre et al. in 1984. The focus of the papers included in all

  16. Vane Pump Casing Machining of Dumpling Machine Based on CAD/CAM

    NASA Astrophysics Data System (ADS)

    Huang, Yusen; Li, Shilong; Li, Chengcheng; Yang, Zhen

    The automatic dumpling forming machine, also called a dumpling machine, makes dumplings through mechanical motions. This paper adopts a stuffing delivery mechanism featuring an improved, specially designed vane pump casing, which contributes to the formation of the dumplings. Its 3D modeling in Pro/E software, machining process planning, milling path optimization, simulation based on UG, and compilation of the post-processing program were introduced and verified. The results indicated that the adoption of CAD/CAM offers firms the potential to pursue new innovative strategies.

  17. Efficient processing of two-dimensional arrays with C or C++

    USGS Publications Warehouse

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
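
    Although the study concerns C and C++, one of its factors, memory-access order, can be illustrated in NumPy: traversing a C-order array along rows follows the storage layout, while traversing down columns strides across it, with correspondingly poorer locality:

    ```python
    import numpy as np, timeit

    a = np.random.rand(2000, 2000)   # C-order (row-major) storage

    def sum_row_order(m):
        # Traverse in storage order: contiguous, cache-friendly access.
        return sum(m[i, :].sum() for i in range(m.shape[0]))

    def sum_col_order(m):
        # Traverse across storage order: strided, cache-unfriendly access.
        return sum(m[:, j].sum() for j in range(m.shape[1]))

    print(timeit.timeit(lambda: sum_row_order(a), number=5))
    print(timeit.timeit(lambda: sum_col_order(a), number=5))
    ```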

  18. Understanding Portability of a High-Level Programming Model on Contemporary Heterogeneous Architectures

    DOE PAGES

    Sabne, Amit J.; Sakdhnagool, Putt; Lee, Seyong; ...

    2015-07-13

    Accelerator-based heterogeneous computing is gaining momentum in the high-performance computing arena. However, the increased complexity of heterogeneous architectures demands more generic, high-level programming models. OpenACC is one such attempt to tackle this problem. Although the abstraction provided by OpenACC offers productivity, it raises questions concerning both functional and performance portability. In this article, the authors propose HeteroIR, a high-level, architecture-independent intermediate representation, to map high-level programming models, such as OpenACC, to heterogeneous architectures. They present a compiler approach that translates OpenACC programs into HeteroIR and accelerator kernels to obtain OpenACC functional portability. They then evaluate the performance portability obtained by OpenACC with their approach on 12 OpenACC programs on Nvidia CUDA, AMD GCN, and Intel Xeon Phi architectures. They study the effects of various compiler optimizations and OpenACC program settings on these architectures to provide insights into the achieved performance portability.

  19. A Multiprocessor SoC Architecture with Efficient Communication Infrastructure and Advanced Compiler Support for Easy Application Development

    NASA Astrophysics Data System (ADS)

    Urfianto, Mohammad Zalfany; Isshiki, Tsuyoshi; Khan, Arif Ullah; Li, Dongju; Kunieda, Hiroaki

    This paper presents a Multiprocessor System-on-Chip (MPSoC) architecture used as an execution platform for the new C-language-based MPSoC design framework we are currently developing. The MPSoC architecture is based on an existing SoC platform with a commercial RISC core acting as the host CPU. We extend the existing SoC with a multiprocessor-array block that is used as the main engine to run parallel applications modeled in our design framework. Utilizing several optimizations provided by our compiler, efficient intercommunication between processing elements with minimum overhead is implemented. A host interface is designed to integrate the existing RISC core with the multiprocessor array. The experimental results show that an efficacious integration is achieved, proving that the designed communication module can be used to efficiently incorporate off-the-shelf processors as processing elements for MPSoC architectures designed using our framework.

  20. Optimization of the coherence function estimation for multi-core central processing unit

    NASA Astrophysics Data System (ADS)

    Cheremnov, A. G.; Faerman, V. A.; Avramchuk, V. S.

    2017-02-01

    The paper considers the use of parallel processing on a multi-core central processing unit for optimizing the evaluation of the coherence function arising in digital signal processing. The coherence function, along with other methods of spectral analysis, is commonly used for vibration diagnosis of rotating machinery and its particular nodes. An algorithm is given for evaluating the function for signals represented by digital samples. The algorithm is analyzed with respect to its software implementation and computational problems. Optimization measures are described, including algorithmic, architectural and compiler optimizations, and their results are assessed for multi-core processors from different manufacturers. The speed-up of parallel execution relative to sequential execution was studied, and results are presented for Intel Core i7-4720HQ and AMD FX-9590 processors. The results show the comparatively high efficiency of the optimization measures taken. In particular, acceleration indicators and average CPU utilization were significantly improved, showing a high degree of parallelism in the constructed calculation functions. The developed software underwent state registration and will be used as part of a software and hardware solution for rotating machinery fault diagnosis and pipeline leak location with the acoustic correlation method.
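
    A minimal sketch of coherence-function estimation for such a diagnosis setting, using SciPy's Welch-based estimator on two synthetic sensor signals sharing a 1 kHz component (all signal parameters here are invented):

    ```python
    import numpy as np
    from scipy import signal

    fs = 10_000.0
    t = np.arange(0, 2.0, 1 / fs)
    rng = np.random.default_rng(1)

    # Two sensor channels sharing a 1 kHz component plus independent noise,
    # the typical correlation-based vibration / leak-location setup.
    common = np.sin(2 * np.pi * 1_000 * t)
    x = common + 0.5 * rng.standard_normal(t.size)
    y = np.roll(common, 25) + 0.5 * rng.standard_normal(t.size)

    f, cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
    print(f[np.argmax(cxy)])   # should sit near 1 kHz
    ```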

  1. Compiling software for a hierarchical distributed processing system

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendents; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendent.
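
    A toy sketch of the tiered distribution idea in the claim, with invented node and binary names: each tier keeps the binaries it runs and forwards to a child only what that child's subtree needs:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        children: list = field(default_factory=list)

        def walk(self):
            yield self
            for c in self.children:
                yield from c.walk()

    def distribute(node, artifacts, placement=None):
        """artifacts maps binary -> target node name. Each tier keeps what it
        runs and forwards to a child only the binaries needed somewhere in
        that child's subtree (a sketch of the scheme, names invented)."""
        placement = {} if placement is None else placement
        placement[node.name] = {b for b, t in artifacts.items() if t == node.name}
        for child in node.children:
            subtree = {n.name for n in child.walk()}
            needed = {b: t for b, t in artifacts.items() if t in subtree}
            if needed:                      # prune branches that need nothing
                distribute(child, needed, placement)
        return placement

    tree = Node("root", [Node("io", [Node("io-0"), Node("io-1")]), Node("compute")])
    print(distribute(tree, {"ionode.bin": "io-0", "app.bin": "compute"}))
    ```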

  2. Thermal optimality of net ecosystem exchange of carbon dioxide and underlying mechanisms.

    PubMed

    Niu, Shuli; Luo, Yiqi; Fei, Shenfeng; Yuan, Wenping; Schimel, David; Law, Beverly E; Ammann, Christof; Arain, M Altaf; Arneth, Almut; Aubinet, Marc; Barr, Alan; Beringer, Jason; Bernhofer, Christian; Black, T Andrew; Buchmann, Nina; Cescatti, Alessandro; Chen, Jiquan; Davis, Kenneth J; Dellwik, Ebba; Desai, Ankur R; Etzold, Sophia; Francois, Louis; Gianelle, Damiano; Gielen, Bert; Goldstein, Allen; Groenendijk, Margriet; Gu, Lianhong; Hanan, Niall; Helfter, Carole; Hirano, Takashi; Hollinger, David Y; Jones, Mike B; Kiely, Gerard; Kolb, Thomas E; Kutsch, Werner L; Lafleur, Peter; Lawrence, David M; Li, Linghao; Lindroth, Anders; Litvak, Marcy; Loustau, Denis; Lund, Magnus; Marek, Michal; Martin, Timothy A; Matteucci, Giorgio; Migliavacca, Mirco; Montagnani, Leonardo; Moors, Eddy; Munger, J William; Noormets, Asko; Oechel, Walter; Olejnik, Janusz; Kyaw Tha Paw U; Pilegaard, Kim; Rambal, Serge; Raschi, Antonio; Scott, Russell L; Seufert, Günther; Spano, Donatella; Stoy, Paul; Sutton, Mark A; Varlagin, Andrej; Vesala, Timo; Weng, Ensheng; Wohlfahrt, Georg; Yang, Bai; Zhang, Zhongda; Zhou, Xuhui

    2012-05-01

    • It is well established that individual organisms can acclimate and adapt to temperature to optimize their functioning. However, thermal optimization of ecosystems, as an assemblage of organisms, has not been examined at broad spatial and temporal scales. • Here, we compiled data from 169 globally distributed sites of eddy covariance and quantified the temperature response functions of net ecosystem exchange (NEE), an ecosystem-level property, to determine whether NEE shows thermal optimality and to explore the underlying mechanisms. • We found that the temperature response of NEE followed a peak curve, with the optimum temperature (corresponding to the maximum magnitude of NEE) being positively correlated with annual mean temperature over years and across sites. Shifts of the optimum temperature of NEE were mostly a result of temperature acclimation of gross primary productivity (upward shift of optimum temperature) rather than changes in the temperature sensitivity of ecosystem respiration. • Ecosystem-level thermal optimality is a newly revealed ecosystem property, presumably reflecting associated evolutionary adaptation of organisms within ecosystems, and has the potential to significantly regulate ecosystem-climate change feedbacks. The thermal optimality of NEE has implications for understanding fundamental properties of ecosystems in changing environments and benchmarking global models.
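
    As a rough sketch of extracting an optimum temperature from a peak-shaped response (the study's actual response-function form may differ), one can fit a quadratic to synthetic NEE data and read off the vertex:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    T = np.linspace(0, 35, 80)                 # air temperature, degC
    # Synthetic NEE (uptake negative), peaking near 22 degC by construction.
    nee = -(8 - 0.03 * (T - 22) ** 2) + rng.normal(0, 0.4, T.size)

    c = np.polyfit(T, -nee, 2)                 # fit magnitude of uptake
    T_opt = -c[1] / (2 * c[0])                 # vertex of the parabola
    print(round(T_opt, 1))                     # recovers ~22 degC
    ```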

  3. COLA: Optimizing Stream Processing Applications via Graph Partitioning

    NASA Astrophysics Data System (ADS)

    Khandekar, Rohit; Hildrum, Kirsten; Parekh, Sujay; Rajan, Deepak; Wolf, Joel; Wu, Kun-Lung; Andrade, Henrique; Gedik, Buğra

    In this paper, we describe an optimization scheme for fusing compile-time operators into reasonably-sized run-time software units called processing elements (PEs). Such PEs are the basic deployable units in System S, a highly scalable distributed stream processing middleware system. Finding a high quality fusion significantly benefits the performance of streaming jobs. In order to maximize throughput, our solution approach attempts to minimize the processing cost associated with inter-PE stream traffic while simultaneously balancing load across the processing hosts. Our algorithm computes a hierarchical partitioning of the operator graph based on a minimum-ratio cut subroutine. We also incorporate several fusion constraints in order to support real-world System S jobs. We experimentally compare our algorithm with several other reasonable alternative schemes, highlighting the effectiveness of our approach.
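
    A compact sketch of the graph-partitioning idea, using spectral bisection on a tiny operator graph; COLA itself uses a hierarchical minimum-ratio-cut subroutine with fusion and load-balance constraints, which this does not reproduce:

    ```python
    import numpy as np

    def spectral_bisect(adj):
        """Two-way partition of an operator graph via the Fiedler vector.
        adj: symmetric matrix of inter-operator stream traffic weights."""
        lap = np.diag(adj.sum(axis=1)) - adj     # graph Laplacian
        vals, vecs = np.linalg.eigh(lap)
        fiedler = vecs[:, 1]                     # 2nd-smallest eigenvalue's vector
        return fiedler >= np.median(fiedler)     # balanced split into two PEs

    adj = np.array([[0, 5, 1, 0],
                    [5, 0, 1, 0],
                    [1, 1, 0, 4],
                    [0, 0, 4, 0]], dtype=float)
    print(spectral_bisect(adj))   # heavy edges stay inside a processing element
    ```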

  4. Characterization of seven United States coal regions. The development of optimal terrace pit coal mining systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wimer, R.L.; Adams, M.A.; Jurich, D.M.

    1981-02-01

    This report characterizes seven United States coal regions in the Northern Great Plains, Rocky Mountain, Interior, and Gulf Coast coal provinces. Descriptions include those of the Fort Union, Powder River, Green River, Four Corners, Lower Missouri, Illinois Basin, and Texas Gulf coal resource regions. The resource characterizations describe the geologic, geographic, hydrologic, environmental and climatological conditions of each region, coal ranks and qualities, extent of reserves, reclamation requirements, and current mining activities. The report was compiled as a basis for the development of hypothetical coal mining situations for comparison of conventional and terrace pit surface mining methods, under contract to the Department of Energy, Contract No. DE-AC01-79ET10023, entitled The Development of Optimal Terrace Pit Coal Mining Systems.

  5. BioMon: A Google Earth Based Continuous Biomass Monitoring System (Demo Paper)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vatsavai, Raju

    2009-01-01

    We demonstrate a novel Google Earth-based visualization system for continuous monitoring of biomass at regional and global scales. This system is integrated with a back-end spatiotemporal data mining system that continuously detects changes using high-temporal-resolution MODIS images. In addition to the visualization, we demonstrate novel query features of the system that provide insights into the current conditions of the landscape.

  6. An Object-Oriented View of Backend Databases in a Mobile Environment for Navy and Marine Corps Applications

    DTIC Science & Technology

    2006-09-01

    Each of these layers will be described in more detail to include relevant technologies ( Java , PDA, Hibernate , and PostgreSQL) used to implement...Logic Layer -Object-Relational Mapper ( Hibernate ) Data 35 capable in order to interface with Java applications. Based on meeting the selection...further discussed. Query List Application Logic Layer HibernateApache - Java Servlet - Hibernate Interface -OR Mapper -RDBMS Interface

  7. European Space Software Repository ESSR

    NASA Astrophysics Data System (ADS)

    Livschitz, Jakob; Blommestijn, Robert

    2016-08-01

    The paper and presentation will present the status of the ESSR (European Space Software Repository); see [1]. They will describe the development phases, outline the web portal functionality and explain the process steps behind it. Not only the front-end but also the back-end will be discussed. The ESSR web portal went live internally at ESA on May 15th, 2015, and worldwide on September 19th, 2015. Currently the ESSR is in operation.

  8. Cloud-Based Perception and Control of Sensor Nets and Robot Swarms

    DTIC Science & Technology

    2016-04-01

    distributed stream processing framework provides the necessary API and infrastructure to develop and execute such applications in a cluster of computation...streaming DDDAS applications based on challenges they present to the backend Cloud control system. Figure 2 Parallel SLAM Application 3 1) Set of...the art deep learning- based object detectors can recognize among hundreds of object classes and this capability would be very useful for mobile

  9. A Combination Therapy of JO-I and Chemotherapy in Ovarian Cancer Models

    DTIC Science & Technology

    2013-10-01

    which consists of a 3PAR storage backend and is sharing data via a highly available NetApp storage gateway and 2 high throughput commodity storage...Environment is configured as self- service Enterprise cloud and currently hosts more than 700 virtual machines. The network infrastructure consists of...technology infrastructure and information system applications designed to integrate, automate, and standardize operations. These systems fuse state of

  10. Fuel Regression Characteristics of Cascaded Multistage Impinging-Jet (CAMUI) Type Hybrid Rocket

    NASA Astrophysics Data System (ADS)

    Itoh, Mitsunori; Maeda, Takenori; Kakikura, Akihito; Kaneko, Yudai; Mori, Kazuhiro; Nakashima, Takuji; Wakita, Masashi; Uematsu, Tsutomu; Totani, Tsuyoshi; Oshima, Nobuyuki; Nagata, Harunori

    A series of lab-scale firing tests was conducted to investigate the fuel regression characteristics of the Cascaded Multistage Impinging-jet (CAMUI) type hybrid rocket. The alternative fuel grain used in this rocket consists of a number of cylindrical fuel blocks with two ports, aligned along the axis of the combustion chamber with a small gap. The ports are staggered with respect to those of neighboring blocks so that the combustion gas flow impinges on the forward-end surface of each block. In this fuel grain, the forward-end surfaces, back-end surfaces and ports of the fuel blocks contribute as burning surfaces. Polyethylene and LOX were used as propellants, and the tests were conducted at chamber pressures of 0.5-2 MPa and mass fluxes of 50-200 kg/m2s. The main results obtained in this study are as follows: the regression rate of each surface was obtained as a function of the propellant mass flux and the local equivalence ratio of the combustion gas, and at back-end surfaces the regression rate is highly sensitive to the gap height between neighboring fuel blocks. These fuel regression characteristics will serve as fundamental data for improving the optimal design of the fuel grain.

  11. Building an organic block storage service at CERN with Ceph

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel; Wiebalck, Arne

    2014-06-01

    Emerging storage requirements, such as the need for block storage for both OpenStack VMs and file services like AFS and NFS, have motivated the development of a generic backend storage service for CERN IT. The goals for such a service include (a) vendor neutrality, (b) horizontal scalability with commodity hardware, (c) fault tolerance at the disk, host, and network levels, and (d) support for geo-replication. Ceph is an attractive option due to its native block device layer RBD which is built upon its scalable, reliable, and performant object storage system, RADOS. It can be considered an "organic" storage solution because of its ability to balance and heal itself while living on an ever-changing set of heterogeneous disk servers. This work will present the outcome of a petabyte-scale test deployment of Ceph by CERN IT. We will first present the architecture and configuration of our cluster, including a summary of best practices learned from the community and discovered internally. Next the results of various functionality and performance tests will be shown: the cluster has been used as a backend block storage system for AFS and NFS servers as well as a large OpenStack cluster at CERN. Finally, we will discuss the next steps and future possibilities for Ceph at CERN.
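
    For flavor, a sketch of creating and writing an RBD image through the Python bindings; this assumes the python-rados/python-rbd packages are installed and a cluster is reachable, and the pool and image names are invented:

    ```python
    import rados
    import rbd

    # Connect to the cluster using the standard configuration file.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("volumes")   # hypothetical pool name
        try:
            rbd.RBD().create(ioctx, "vm-disk-0", 10 * 1024**3)  # 10 GiB image
            image = rbd.Image(ioctx, "vm-disk-0")
            try:
                image.write(b"hello block layer", 0)            # block-level I/O
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
    ```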

  12. Rule-based topology system for spatial databases to validate complex geographic datasets

    NASA Astrophysics Data System (ADS)

    Martinez-Llario, J.; Coll, E.; Núñez-Andrés, M.; Femenia-Ribera, C.

    2017-06-01

    A rule-based topology software system providing a highly flexible and fast procedure to enforce integrity in spatial relationships among datasets is presented. This improved topology rule system is built over the spatial extension Jaspa. Both projects are open source, freely available software developed by the corresponding author of this paper. Currently, there is no spatial DBMS that implements a rule-based topology engine (considering that the topology rules are designed and performed in the spatial backend). If the topology rules are applied in the frontend (as in many GIS desktop programs), ArcGIS is the most advanced solution. The system presented in this paper has several major advantages over the ArcGIS approach: it can be extended with new topology rules, it has a much wider set of rules, and it can mix feature attributes with topology rules as filters. In addition, the topology rule system can work with various DBMSs, including PostgreSQL, H2 or Oracle, and the logic is performed in the spatial backend. The proposed topology system allows users to check the complex spatial relationships among features (from one or several spatial layers) that require some complex cartographic datasets, such as the data specifications proposed by INSPIRE in Europe and the Land Administration Domain Model (LADM) for Cadastral data.
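
    A minimal illustration of a single "must not overlap" rule evaluated in Python with Shapely; the real system expresses such rules inside the spatial backend (Jaspa over the DBMS), which this sketch does not reproduce:

    ```python
    from shapely.geometry import Polygon

    # Toy parcel layer; B overlaps A (a violation), C only touches A (OK).
    parcels = {
        "A": Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]),
        "B": Polygon([(3, 1), (6, 1), (6, 3), (3, 3)]),
        "C": Polygon([(4, 4), (7, 4), (7, 6), (4, 6)]),
    }

    def rule_no_overlap(layer):
        # Pairwise check of one topology rule over a single layer.
        names = list(layer)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if layer[a].overlaps(layer[b]):
                    yield (a, b)

    print(list(rule_no_overlap(parcels)))   # -> [('A', 'B')]
    ```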

  13. IEEE802.15.6 NB portable BAN clinic and M2M international standardization.

    PubMed

    Kuroda, Masahiro; Nohara, Yasunobu

    2013-01-01

    The increase in non-communicable diseases (NCDs) will change the direction of health services to emphasize the role of preventive medicine in healthcare. The first short-range medical body area network (BAN) standard, IEEE802.15.6, is expected to be used for secure and user-friendly sensor devices in portable medical equipment. A BAN is an enabler for uploading medical data to a backend system for remote diagnosis and treatment. Machine-to-Machine (M2M) infrastructure is also a key technology for providing flexible and affordable services extending electronic health record (EHR) systems. This paper proposes a BAN-based portable clinic that collects health-check data from user-friendly medical devices and sensors and sends the data to a local backend server, and it evaluates the clinic in the field under actual usage. We discuss issues experienced during actual deployment of the system and focus on integrating it into upcoming healthcare M2M infrastructure to achieve affordable and dependable clinic services. We explain the components and workflow of the clinic and the system model. The system is set up at a temporary health center and has a network link to a remote medical help center. The paper concludes with our plan to introduce our system to contribute to internationally standardized preventive medicine.

  14. Treecode with a Special-Purpose Processor

    NASA Astrophysics Data System (ADS)

    Makino, Junichiro

    1991-08-01

    We describe an implementation of the modified Barnes-Hut tree algorithm for a gravitational N-body calculation on a GRAPE (GRAvity PipE) backend processor. GRAPE is a special-purpose computer for N-body calculations. It receives the positions and masses of particles from a host computer and then calculates the gravitational force at each coordinate specified by the host. To use this GRAPE processor with the hierarchical tree algorithm, the host computer must maintain a list of all nodes that exert force on a particle. If we create this list for each particle of the system at each timestep, the number of floating-point operations on the host and that on GRAPE would become comparable, and the increased speed obtained by using GRAPE would be small. In our modified algorithm, we create a list of nodes for many particles. Thus, the amount of the work required of the host is significantly reduced. This algorithm was originally developed by Barnes in order to vectorize the force calculation on a Cyber 205. With this algorithm, the computing time of the force calculation becomes comparable to that of the tree construction, if the GRAPE backend processor is sufficiently fast. The obtained speed-up factor is 30 to 50 for a RISC-based host computer and GRAPE-1A with a peak speed of 240 Mflops.
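
    A schematic sketch of the division of labor described: the host builds one interaction list valid for a whole group of particles, and the backend then sums forces directly over that list. The opening criterion and all parameters here are illustrative, and a real code walks the tree recursively rather than filtering a flat node array:

    ```python
    import numpy as np

    def group_interaction_list(node_pos, node_size, centre, radius, theta=0.75):
        # Host-side step: accept a tree node for the WHOLE group if it is
        # small/distant enough for every group member (sketch only; a real
        # code opens rejected cells recursively).
        d = np.linalg.norm(node_pos - centre, axis=1) - radius
        return np.nonzero(node_size / np.maximum(d, 1e-12) < theta)[0]

    def backend_forces(pos, list_pos, list_mass, eps=1e-2):
        # Backend-side step: softened pairwise sums over the shared list,
        # the pipelined operation a GRAPE board performs in hardware.
        dx = list_pos[None, :, :] - pos[:, None, :]
        r2 = (dx ** 2).sum(-1) + eps ** 2
        return (list_mass[None, :, None] * dx / r2[..., None] ** 1.5).sum(1)

    rng = np.random.default_rng(0)
    node_pos, node_size = rng.random((64, 3)) * 10, rng.random(64)
    group = rng.random((8, 3)) * 0.5 + 4.75          # particles near (5, 5, 5)
    idx = group_interaction_list(node_pos, node_size, np.array([5.0, 5.0, 5.0]), 0.5)
    acc = backend_forces(group, node_pos[idx], np.ones(idx.size))
    ```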

  15. TELICS—A Telescope Instrument Control System for Small/Medium Sized Astronomical Observatories

    NASA Astrophysics Data System (ADS)

    Srivastava, Mudit K.; Ramaprakash, A. N.; Burse, Mahesh P.; Chordia, Pravin A.; Chillal, Kalpesh S.; Mestry, Vilas B.; Das, Hillol K.; Kohok, Abhay A.

    2009-10-01

    For any modern astronomical observatory, it is essential to have an efficient interface between the telescope and its back-end instruments. However, for small and medium-sized observatories, this requirement is often limited by tight financial constraints. Therefore a simple yet versatile and low-cost control system is required for such observatories to minimize cost and effort. Here we report the development of a modern, multipurpose instrument control system TELICS (Telescope Instrument Control System) to integrate the controls of various instruments and devices mounted on the telescope. TELICS consists of an embedded hardware unit known as a common control unit (CCU) in combination with Linux-based data acquisition and user interface. The hardware of the CCU is built around the ATmega 128 microcontroller (Atmel Corp.) and is designed with a backplane, master-slave architecture. A Qt-based graphical user interface (GUI) has been developed and the back-end application software is based on C/C++. TELICS provides feedback mechanisms that give the operator good visibility and a quick-look display of the status and modes of instruments as well as data. TELICS has been used for regular science observations since 2008 March on the 2 m, f/10 IUCAA Telescope located at Girawali in Pune, India.

  16. Back-end of the fuel cycle - Indian scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wattal, P.K.

    Nuclear power has a key role in meeting the energy demands of India. This can be sustained by ensuring robust technology for the back end of the fuel cycle. Considering the modest indigenous resources of U and a huge Th reserve, India has adopted a three-stage Nuclear Power Programme (NPP) based on a 'closed fuel cycle' approach. This 'recovery and recycle' option serves the twin objectives of ensuring an adequate supply of nuclear fuel and reducing the long-term radio-toxicity of the wastes. Reprocessing of the spent fuel by the Purex process is currently employed. High Level Liquid Waste (HLW) generated during reprocessing is vitrified and undergoes interim storage. Back-end technologies are constantly modified to address waste volume minimization and radio-toxicity reduction. Long-term management of HLW in the Indian context would involve partitioning of long-lived minor actinides and recovery of valuable fission products, specifically cesium. Recovery of minor actinides from HLW and their recycle is highly desirable for the sustained growth of India's NPP. In this context, a programme for developing and deploying partitioning technologies on an industrial scale is being pursued. The partitioned elements could be transmuted in Fast Reactors (FRs) or Accelerator Driven Systems (ADS) as an integral part of a sustainable Indian NPP. (authors)

  17. Gigabit Digital Filter Bank: Digital Backend Subsystem in the VERA Data-Acquisition System

    NASA Astrophysics Data System (ADS)

    Iguchi, Satoru; Kurayama, Tomoharu; Kawaguchi, Noriyuki; Kawakami, Kazuyuki

    2005-02-01

    The VERA terminal is a new data-acquisition system developed for the VERA project, a project to construct a new Japanese VLBI array dedicated to making a 3-D map of our Milky Way Galaxy by means of high-precision astrometry. New technology, a gigabit digital filter, was introduced in the development. The importance and advantages of a digital filter for radio astronomy have been studied as follows: (1) the digital filter can realize a variety of observation modes and maintain compatibility with different data-acquisition systems (Kiuchi et al. 1997; Iguchi et al. 2000a); (2) the folding noise occurring in the sampling process can be reduced in combination with a higher-order sampling technique (Iguchi, Kawaguchi 2002); and (3) an ideal sharp cut-off bandedge and flat amplitude/phase responses can be approached by using the large number of taps made available by LSIs with many logic cells (Iguchi et al. 2000a). We developed custom Finite Impulse Response (FIR) filter chips and manufactured the Gigabit Digital Filter Banks (GDFBs) as a digital backend subsystem in the VERA terminal. In this paper, the design and development of the GDFB are presented in detail, and the performance and demonstrations of the developed GDFB are shown.
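
    A sketch of the kind of sharp-cutoff FIR filtering that a large tap count makes possible, using SciPy's window-method design; the sample rate, tap count, and cutoff here are illustrative, not VERA's:

    ```python
    import numpy as np
    from scipy import signal

    fs = 1024e6                        # illustrative input sample rate
    ntaps = 256                        # large tap count -> sharp band edge

    # Sharp-cutoff low-pass FIR; more taps buy a steeper transition band
    # and flatter passband, the property the GDFB design exploits.
    taps = signal.firwin(ntaps, cutoff=16e6, fs=fs)

    x = np.random.default_rng(0).standard_normal(1 << 16)
    y = signal.lfilter(taps, 1.0, x)   # one filtered, decimatable sub-band
    ```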

  18. EEG acquisition system based on active electrodes with common-mode interference suppression by Driving Right Leg circuit.

    PubMed

    Guermandi, Marco; Bigucci, Alessandro; Franchi Scarselli, Eleonora; Guerrieri, Roberto

    2015-01-01

    We present a system for the acquisition of EEG signals based on active electrodes and implementing a Driving Right Leg (DgRL) circuit. The DgRL allows for single-ended amplification and analog-to-digital conversion while still guaranteeing common-mode rejection in excess of 110 dB. This allows the system to acquire high-quality EEG signals, essentially removing power-line interference, for both wet and dry-contact electrodes. The front-end amplification stage is integrated on the electrode, minimizing the system's sensitivity to electrode contact quality, cable movement and common-mode interference. The A/D conversion stage can either be integrated in the remote back-end or placed on the head as well, allowing for all-digital communication to the back-end. Noise integrated in the band from 0.5 to 100 Hz is between 0.62 and 1.3 μV, depending on the configuration. Current consumption for the amplification and A/D conversion of one channel is 390 μA. Thanks to its low noise, high level of interference suppression and quick setup capabilities, the system is particularly suitable for use outside clinical environments, such as in home care, brain-computer interfaces or consumer-oriented applications.

  19. Implementation of a direct-imaging and FX correlator for the BEST-2 array

    NASA Astrophysics Data System (ADS)

    Foster, G.; Hickish, J.; Magro, A.; Price, D.; Zarb Adami, K.

    2014-04-01

    A new digital backend has been developed for the Basic Element for SKA Training II (BEST-2) array at Radiotelescopi di Medicina, INAF-IRA, Italy, which allows concurrent operation of an FX correlator, and a direct-imaging correlator and beamformer. This backend serves as a platform for testing some of the spatial Fourier transform concepts which have been proposed for use in computing correlations on regularly gridded arrays. While spatial Fourier transform-based beamformers have been implemented previously, this is, to our knowledge, the first time a direct-imaging correlator has been deployed on a radio astronomy array. Concurrent observations with the FX and direct-imaging correlator allow for direct comparison between the two architectures. Additionally, we show the potential of the direct-imaging correlator for time-domain astronomy, by passing a subset of beams though a pulsar and transient detection pipeline. These results provide a timely verification for spatial Fourier transform-based instruments that are currently in commissioning. These instruments aim to detect highly redshifted hydrogen from the epoch of reionization and/or to perform wide-field surveys for time-domain studies of the radio sky. We experimentally show the direct-imaging correlator architecture to be a viable solution for correlation and beamforming.
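
    A minimal NumPy sketch of the FX architecture for a single baseline: channelize both inputs with an FFT (the F stage), then cross-multiply and time-average (the X stage). The BEST-2 backend is, of course, a real-time FPGA implementation, and the channel count here is arbitrary:

    ```python
    import numpy as np

    def fx_correlate(x1, x2, nchan=512):
        # F stage: chop each stream into blocks and FFT to nchan+1 channels.
        nblk = len(x1) // (2 * nchan)
        s1 = np.fft.rfft(x1[:nblk * 2 * nchan].reshape(nblk, 2 * nchan), axis=1)
        s2 = np.fft.rfft(x2[:nblk * 2 * nchan].reshape(nblk, 2 * nchan), axis=1)
        # X stage: cross-multiply and average in time to form visibilities.
        return (s1 * np.conj(s2)).mean(axis=0)

    rng = np.random.default_rng(0)
    tone = np.cos(2 * np.pi * 0.1 * np.arange(1 << 16))
    vis = fx_correlate(tone + rng.standard_normal(1 << 16),
                       np.roll(tone, 3) + rng.standard_normal(1 << 16))
    ```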

  20. Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey

    NASA Astrophysics Data System (ADS)

    Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Maire, Jérôme; Marchis, Franck; Graham, James R.; Macintosh, Bruce; Ammons, S. Mark; Bailey, Vanessa P.; Barman, Travis S.; Bruzzone, Sebastian; Bulger, Joanna; Cotten, Tara; Doyon, René; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Goodsell, Stephen; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn M.; Larkin, James E.; Marley, Mark S.; Metchev, Stanimir; Nielsen, Eric L.; Oppenheimer, Rebecca; Palmer, David W.; Patience, Jennifer; Poyneer, Lisa A.; Pueyo, Laurent; Rajan, Abhijith; Rantakyrö, Fredrik T.; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane J.

    2018-01-01

    The Gemini Planet Imager Exoplanet Survey (GPIES) is a multiyear direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines (DRPs) together to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our DRPs. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.
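
    The abstract does not describe the MySQL schema; purely as an illustration of the file-indexing idea behind the database and web front-end, a toy sketch using sqlite3 so it runs anywhere (table names, columns, and values are all invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")        # stand-in for the MySQL backend
con.execute("""CREATE TABLE files (
    path TEXT PRIMARY KEY, target TEXT, obs_date TEXT,
    mode TEXT, reduction_level INTEGER)""")
con.execute("INSERT INTO files VALUES (?,?,?,?,?)",
            ("S20180101S0042_spdc.fits", "HR 8799", "2018-01-01", "spec", 2))
# A front-end web server would issue queries like this one:
rows = con.execute(
    "SELECT path FROM files WHERE target=? AND reduction_level=2",
    ("HR 8799",)).fetchall()
```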

  1. RFI in the 0.5 to 10.8 GHz Band at the Allen Telescope Array

    NASA Astrophysics Data System (ADS)

    Backus, Peter R.; Kilsdonk, T. N.; Allen Telescope Array Team

    2007-05-01

    Thanks to funding from the Paul G. Allen Foundation (and other philanthropic supporters) for the technology development and first phase of construction, the first 42 elements of the Allen Telescope Array (ATA-42) are being commissioned for rapid surveys of the astrophysical and technological sky. Because of the innovative design of this array, which will eventually include 350 elements, traditional radio astronomy and SETI are enabled simultaneously, 24x7. The array has been designed to provide an optimal snapshot image of a very large field of view and, simultaneously, 16 (dual-polarization) phased beams within the field of view to be analyzed by a suite of backend processors. Four independent 100 MHz bands may be tuned anywhere within the instantaneous receiver bandwidth from 0.5 to 11.2 GHz. One key to the success of rapid surveys for astrophysical or technological signals is a quiet background. This poster presents the results of initial high-spectral-resolution surveys, made with the 6.1 meter dishes, of the background spectrum from 0.5 to 10.8 GHz at the Hat Creek Radio Observatory, where the ATA is being constructed, and compares it with the background spectrum from 1.2-3 GHz at other observatories where SETI observations have been conducted within the past 11 years.

  2. GETPrime 2.0: gene- and transcript-specific qPCR primers for 13 species including polymorphisms

    PubMed Central

    David, Fabrice P.A.; Rougemont, Jacques; Deplancke, Bart

    2017-01-01

    GETPrime (http://bbcftools.epfl.ch/getprime) is a database with a web frontend providing gene- and transcript-specific, pre-computed qPCR primer pairs. The primers have been optimized for genome-wide specificity and for allowing the selective amplification of one or several splice variants of most known genes. To ease selection, primers have also been ranked according to defined criteria such as genome-wide specificity (with BLAST), amplicon size, and isoform coverage. Here, we report a major upgrade (2.0) of the database: eight new species (yeast, chicken, macaque, chimpanzee, rat, platypus, pufferfish, and Anolis carolinensis) now complement the five already included in the previous version (human, mouse, zebrafish, fly, and worm). Furthermore, the genomic reference has been updated to Ensembl v81 (while keeping earlier versions for backward compatibility) as a result of re-designing the back-end database and automating the import of relevant sections of the Ensembl database in a species-independent fashion. This also allowed us to map known polymorphisms to the primers (on average three per primer for human), with the aim of reducing experimental error when targeting specific strains or individuals. Another consequence is that the inclusion of future Ensembl releases and other species has now become a relatively straightforward task. PMID:28053161

  3. Effect of UV curing time on physical and electrical properties and reliability of low dielectric constant materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kao, Kai-Chieh; Cheng, Yi-Lung, E-mail: yjcheng@ncnu.edu.tw; Chang, Wei-Yuan

    2014-11-01

    This study comprehensively investigates the effect of ultraviolet (UV) curing time on the physical, electrical, and reliability characteristics of porous low-k materials. Following UV irradiation for various periods, the depth profiles of the chemical composition in the low-k dielectrics were homogeneous. Initially, the UV curing process preferentially removed porogen-related CHx groups and then modified Si-CH3 and cage Si-O bonds to form network Si-O bonds. The lowest dielectric constant (k value) was thus obtained at a UV curing time of 300 s. Additionally, UV irradiation made porogen-based low-k materials hydrophobic, to an extent that increased with UV curing time. With a short curing time (<300 s), the porogen was not completely removed and the residues degraded reliability performance. A long curing time (>300 s) was associated with improved mechanical strength, electrical performance, and reliability of the low-k materials, but none of these increased linearly with UV curing time. Therefore, UV curing is necessary, but the process time must be optimized for porous low-k materials in back-end-of-line integration at technology nodes of 45 nm and below.

  4. An end-to-end workflow for engineering of biological networks from high-level specifications.

    PubMed

    Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun

    2012-08-17

    We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.

  5. Development of informational-communicative system, created to improve medical help for family medicine doctors.

    PubMed

    Smiianov, Vladyslav A; Dryha, Natalia O; Smiianova, Olha I; Obodyak, Victor K; Zudina, Tatyana O

    2018-01-01

    Introduction: Mobile health services do not yet have a settled definition. As a research object the field is called mHealth, described by the Global Observatory for eHealth as medical and public health practice supported by mobile devices (mobile phones or smartphones), patient-monitoring units, personal computers, and other wireless communication devices. The active use of SMS in programs supporting patients' adherence to treatment regimens was quite predictable. Mobile and electronic devices are only beginning to be adopted in medicine. Thus, to address the problems of reforming the health protection system, a memorandum on cooperation in creating an E-Health system in Ukraine was signed. The aim: To develop an ICS for monitoring, and for optimizing the system that informs patients with non-communicable diseases, at the first level of medical care. Materials and methods: In this research we used a systematic approach, meta-analysis, the design of information-analytical system schemes, and descriptive modeling. For the backend (the server side of the site) we used the following technologies: 1) the Apache web server; 2) the PHP programming language; 3) the Yii 2 PHP framework. For the frontend (the client side of the site): 1) Bootstrap 3; 2) the Vue.js framework. Results and conclusions: The created two-way "doctor-patient" and "patient-doctor" channel will allow doctors of family medicine (DFMs) to conduct interactive dispensary care and avoid uncontrolled disease progression. The doctor will monitor the patient's basic physical health data and the course of treatment. The main goal is an automated system that lets the doctor send regular or ad-hoc notifications, receive patients' answers to questionnaires, and exchange information with the patient, which will optimize the work of DFMs.

  6. Spiral: Automated Computing for Linear Transforms

    NASA Astrophysics Data System (ADS)

    Püschel, Markus

    2010-09-01

    Writing fast software has become extraordinarily difficult. For optimal performance, programs and their underlying algorithms have to be adapted to take full advantage of the platform's parallelism, memory hierarchy, and available instruction set. To make things worse, the best implementations are often platform-dependent and platforms are constantly evolving, which quickly renders libraries obsolete. We present Spiral, a domain-specific program generation system for important functionality used in signal processing and communication including linear transforms, filters, and other functions. Spiral completely replaces the human programmer. For a desired function, Spiral generates alternative algorithms, optimizes them, compiles them into programs, and intelligently searches for the best match to the computing platform. The main idea behind Spiral is a mathematical, declarative, domain-specific framework to represent algorithms and the use of rewriting systems to generate and optimize algorithms at a high level of abstraction. Experimental results show that the code generated by Spiral competes with, and sometimes outperforms, the best available human-written code.
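
    Spiral's actual rewriting system is far richer, but the core idea of generating and searching alternative algorithms can be sketched; below, a toy recursive search over Cooley-Tukey factorizations of a DFT under a crude arithmetic-cost model (the cost model and base cases are invented for illustration):

```python
# Toy breakdown-rule search: expand DFT(n) via the Cooley-Tukey rule
# DFT(n) = (DFT(k) x I_m) T (I_k x DFT(m)) L  for each factorization n = k*m,
# and keep the factorization with the cheapest (modeled) arithmetic cost.
def cost(n):
    if n <= 2:
        return n          # base case: trivial cost model
    best = float("inf")
    for k in range(2, n):
        if n % k == 0:
            m = n // k
            # k DFTs of size m, m DFTs of size k, plus n twiddle multiplies
            best = min(best, m * cost(k) + k * cost(m) + n)
    return best if best != float("inf") else n * n   # prime size: naive DFT

print(cost(64))   # the search implicitly enumerates radix choices
```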

  7. A dynamic model for costing disaster mitigation policies.

    PubMed

    Altay, Nezih; Prasad, Sameer; Tata, Jasmine

    2013-07-01

    The optimal level of investment in mitigation strategies is usually difficult to ascertain in the context of disaster planning. This research develops a model to provide such direction by relying on cost of quality literature. This paper begins by introducing a static approach inspired by Joseph M. Juran's cost of quality management model (Juran, 1951) to demonstrate the non-linear trade-offs in disaster management expenditure. Next it presents a dynamic model that includes the impact of dynamic interactions of the changing level of risk, the cost of living, and the learning/investments that may alter over time. It illustrates that there is an optimal point that minimises the total cost of disaster management, and that this optimal point moves as governments learn from experience or as states get richer. It is hoped that the propositions contained herein will help policymakers to plan, evaluate, and justify voluntary disaster mitigation expenditures. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.
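
    As a worked illustration of the static trade-off (not the paper's actual model), invented prevention and expected-loss curves whose sum has an interior minimum:

```python
import numpy as np

x = np.linspace(0.01, 1.0, 1000)    # mitigation investment level (illustrative)
prevention = 5.0 * x**2             # rises with investment
expected_loss = 2.0 / x             # falls as risk is mitigated
total = prevention + expected_loss
x_opt = x[np.argmin(total)]         # the optimum shifts if either curve shifts,
print(round(x_opt, 3))              # e.g. through learning or rising wealth
```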

  8. The design and implementation of a parallel unstructured Euler solver using software primitives

    NASA Technical Reports Server (NTRS)

    Das, R.; Mavriplis, D. J.; Saltz, J.; Gupta, S.; Ponnusamy, R.

    1992-01-01

    This paper is concerned with the implementation of a three-dimensional unstructured-grid Euler solver on massively parallel distributed-memory computer architectures. The goal is to minimize solution time by achieving high computational rates with a numerically efficient algorithm. An unstructured multigrid algorithm with an edge-based data structure has been adopted, and a number of optimizations have been devised and implemented in order to accelerate the parallel communication rates. The implementation is carried out by creating a set of software tools, which provide an interface between the parallelization issues and the sequential code, while providing a basis for future automatic run-time compilation support. Large practical unstructured-grid problems are solved on the Intel iPSC/860 hypercube and the Intel Touchstone Delta machine. The quantitative effects of the various optimizations are demonstrated, and we show that their combined effect leads to roughly a factor-of-three performance improvement. The overall solution efficiency is compared with that obtained on the CRAY-YMP vector supercomputer.

  9. Lanczos eigensolution method for high-performance computers

    NASA Technical Reports Server (NTRS)

    Bostic, Susan W.

    1991-01-01

    The theory, computational analysis, and applications of a Lanczos algorithm on high-performance computers are presented. The computationally intensive steps of the algorithm are identified as the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplications. These computational steps are optimized to exploit the vector and parallel capabilities of high-performance computers. The savings in computational time from applying optimization techniques such as variable-band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large-scale structural analysis applications are described: the buckling of a composite blade-stiffened panel with a cutout, and the vibration analysis of a high-speed civil transport. The sequential computational time for the panel problem on a CONVEX computer was reduced from 181.6 seconds to 14.1 seconds with the optimized vector algorithm. The best computational time for the transport problem, with 17,000 degrees of freedom, was 23 seconds on the Cray Y-MP using an average of 3.63 processors.
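
    A minimal dense-NumPy sketch of the Lanczos recurrence itself (the production solver described above uses variable-band/sparse kernels and factorized solves; the sizes and test matrix here are invented):

```python
import numpy as np

def lanczos(A, m):
    """m steps of the Lanczos recurrence; returns the tridiagonal T
    whose eigenvalues approximate extremal eigenvalues of symmetric A."""
    n = A.shape[0]
    Q = np.zeros((n, m + 1))
    Q[:, 0] = np.random.randn(n)
    Q[:, 0] /= np.linalg.norm(Q[:, 0])
    alpha, beta = np.zeros(m), np.zeros(m)
    for j in range(m):
        w = A @ Q[:, j]                  # matrix-vector multiply (the hot spot)
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)

A = np.random.randn(200, 200); A = A + A.T    # symmetric test matrix
theta = np.linalg.eigvalsh(lanczos(A, 30))    # Ritz values
```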

  10. Exploring the Cost and Functionality of MEDCOM Web Services

    DTIC Science & Technology

    2005-10-24

    Software Name 24. What backend database software supports your intranet/Internet content? (check all that apply): Oracle; Microsoft SQL Server ... Department of Defense (DoD) service branches, which funded and deployed an Internet portal, TRICARE Online, to serve as an information conduit between the ... public website, the information contained on the intranet is traditionally limited to the members of the hosting command. The local information serves as

  11. Survey of Collaboration Technologies in Multi-level Security Environments

    DTIC Science & Technology

    2014-04-28

    infrastructure or resources. In this research program, the security implications of the US Air Force GeoBase (the US ... The problem is that in many cases ... design structure. ORA uses a Java interface for ease of use, and a C++ computational backend. The current version ORA1.2 software is available on the ... information: culture, policy, governance, economics and resources, and technology and infrastructure. This plan, the DoD Information Sharing

  12. Covariance and Uncertainty Realism in Space Surveillance and Tracking

    DTIC Science & Technology

    2016-06-27

    control infrastructure , there are also further complications in the implementation of centralized scheduling of some of the SSN sensors due to their...this data however. 5.8.3 Long-Term Long-term developments of JSpOC processing, net-centric interfaces and sensor backends will provide the...with particle filters for mobile sensor network control. In Proceedings of the 45th IEEE Conference on Decision and Control, pages 1019–1024, December

  13. Liberating Virtual Machines from Physical Boundaries through Execution Knowledge

    DTIC Science & Technology

    2015-12-01

    trivial infrastructures such as VM distribution networks, clients need to wait for an extended period of time before launching a VM. In cloud settings...hardware support. MobiDesk [28] efficiently supports virtual desktops in mobile environments by decou- pling the user’s workload from host systems and...experiment set-up. VMs are migrated between a pair of source and destination hosts, which are connected through a backend 10 Gbps network for

  14. Mapping, Awareness, and Virtualization Network Administrator Training Tool (MAVNATT) Architecture and Framework

    DTIC Science & Technology

    2015-06-01

    unit may setup and teardown the entire tactical infrastructure multiple times per day. This tactical network administrator training is a critical...language and runs on Linux and Unix based systems. All provisioning is based around the Nagios Core application, a powerful backend solution for network...start up a large number of virtual machines quickly. CORE supports the simulation of fixed and mobile networks. CORE is open-source, written in Python

  15. Common Ground: An Interactive Visual Exploration and Discovery for Complex Health Data

    DTIC Science & Technology

    2014-04-01

    annotate other ontologies for the visual interface client. Finally, we are actively working on software development of both a backend server and the...the following infrastructure and resources. For the development and management of the ontologies, we installed a framework consisting of a server...that is being developed by Google. Using these 9 technologies, we developed an HTML5 client that runs on Windows, Mac OSX, Linux and mobile systems

  16. Motion Simulation Research Related Short Term Training Attachment to TARDEC

    DTIC Science & Technology

    2013-04-01

    CASSI group has five main areas of focus, which are, ground vehicle power and mobility , vehicle electronics and architecture, intelligent ground...control, steering as well as seats can all be changed to mock the necessary vehicle. Originally it was designed for a High Mobility Multipurpose Wheeled...necessary outputs to the motion base. SimCreator is a software package, similar to SimuLink. Most of the backend coding is done in C++. RTI accounts

  17. Cloud-Based Distributed Control of Unmanned Systems

    DTIC Science & Technology

    2015-04-01

    during mission execution. At best, the data is saved onto hard-drives and is accessible only by the local team. Data history in a form available and...following open source technologies: GeoServer, OpenLayers, PostgreSQL , and PostGIS are chosen to implement the back-end database and server. A brief...geospatial map data. 3. PostgreSQL : An SQL-compliant object-relational database that easily scales to accommodate large amounts of data - upwards to

  18. ORA User’s Guide 2007

    DTIC Science & Technology

    2007-07-01

    July 2007 CMU-ISRI-07-115 Institute for Software Research School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213...ORA uses a Java interface for ease of use, and a C++ computational backend. The current version ORA1.2 software is available on the CASOS website...06-1-0104, N00014-06-1-0921, the AFOSR for “ Computational Modeling of Cultural Dimensions in Adversary Organization (MURI)”, the ARL for Assessing C2

  19. Proceedings of the 6th annual Speakeasy conference. [Chicago, August 17-18, 1978

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1978-01-01

    This meeting on the Speakeasy programming language and its applications included papers on the following subjects: graphics (graphics under Speakeasy, Speakeasy on a mini, color graphics), time series (OASIS - a user-oriented system at USDA, writing input-burdened linkules), applications (weather and crop yield analysis system, property investment analysis system), data bases under Speakeasy (relational data base, applications of relational data bases), survey analysis (survey analysis package from Liege, sic and its future under Speakeasy), and new features in Speakeasy (partial differential equations, the Speakeasy compiler and optimization). (RWR)

  20. Simulated single molecule microscopy with SMeagol.

    PubMed

    Lindén, Martin; Ćurić, Vladimir; Boucharin, Alexis; Fange, David; Elf, Johan

    2016-08-01

    SMeagol is a software tool to simulate highly realistic microscopy data based on spatial systems biology models, in order to facilitate development, validation and optimization of advanced analysis methods for live cell single molecule microscopy data. SMeagol runs on Matlab R2014 and later, and uses compiled binaries in C for reaction-diffusion simulations. Availability and implementation: Documentation, source code and binaries for Mac OS, Windows and Ubuntu Linux can be downloaded from http://smeagol.sourceforge.net. Contact: johan.elf@icm.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  1. A Mathematical Approach for Compiling and Optimizing Hardware Implementations of DSP Transforms

    DTIC Science & Technology

    2010-08-01

    [Extraction residue from performance plots: throughput (billion samples per second) and performance (Gflop/s) versus area (slices) for a floating-point DFT 64 on a Xilinx Virtex-6 FPGA.]

  2. Understanding the Cray X1 System

    NASA Technical Reports Server (NTRS)

    Cheung, Samson

    2004-01-01

    This paper helps the reader understand the characteristics of the Cray X1 vector supercomputer system, and provides hints and information to enable the reader to port codes to the system. It provides a comparison between the basic performance of the X1 platform and other platforms that are available at NASA Ames Research Center. A set of codes, solving the Laplacian equation with different parallel paradigms, is used to understand some features of the X1 compiler. An example code from the NAS Parallel Benchmarks is used to demonstrate performance optimization on the X1 platform.
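
    As a toy illustration of the kind of loop-level vectorization at stake on the X1 (not the paper's actual benchmark code), a Jacobi sweep for the Laplacian test problem in scalar-loop form and in the equivalent whole-array form a vectorizing compiler aims to reach:

```python
import numpy as np

def jacobi_loops(u):
    """Scalar-loop Jacobi sweep: what a vectorizing compiler must transform."""
    v = u.copy()
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            v[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return v

def jacobi_array(u):
    """Equivalent whole-array form, analogous to hand-vectorized code."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])
    return v

u = np.random.rand(128, 128)
assert np.allclose(jacobi_loops(u), jacobi_array(u))
```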

  3. Serogroup C Neisseria meningitidis invasive infection: analysis of the possible vaccination strategies for a mass campaign.

    PubMed

    Chiappini, Elena; Venturini, Elisabetta; Bonsignori, Francesca; Galli, Luisa; de Martino, Maurizio

    2010-11-01

    The serogroup C meningococcal conjugate vaccine has been available since 1999. In the absence of randomized controlled trials supporting a specific schedule, each country has adopted a different vaccination programme. Here, we analyse the positive and negative aspects of the different vaccination strategies. While waiting for the introduction of other antimeningococcal vaccines, covering also group B meningococci, further studies on the effectiveness of an optimal schedule to be adopted in European countries are needed. © 2010 The Author(s)/Journal Compilation © 2010 Foundation Acta Paediatrica.

  4. Availability of Neutronics Benchmarks in the ICSBEP and IRPhEP Handbooks for Computational Tools Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Briggs, J. Blair; Ivanova, Tatiana

    2017-02-01

    In the past several decades, numerous experiments have been performed worldwide to support reactor operations, measurements, design, and nuclear safety. Those experiments represent an extensive international investment in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, current, and future research activities. Those valuable assets form the basis for recording, developing, and validating our nuclear methods and integral nuclear data [1]. The loss of these experimental data, which has occurred all too often in recent years, is tragic. The high cost to repeat many of these measurements can be prohibitive, if not impossible, to surmount. Two international projects were developed, under the direction of the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD NEA), to address the challenges of not just data preservation, but evaluation of the data to determine its merit for modern and future use. The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was established to identify and verify comprehensive critical benchmark data sets; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data [2]. Similarly, the International Reactor Physics Experiment Evaluation Project (IRPhEP) was established to preserve integral reactor physics experimental data, including separate- or special-effects data for nuclear energy and technology applications [3]. Annually, contributors from around the world continue to collaborate in the evaluation and review of selected benchmark experiments for preservation and dissemination. The extensively peer-reviewed integral benchmark data can then be utilized by nuclear design and safety analysts to validate the analytical tools, methods, and data needed for next-generation reactor design, safety analysis requirements, and all other front- and back-end activities contributing to the overall nuclear fuel cycle, where quality neutronics calculations are paramount.

  5. Ada (Trade Name) Compiler Validation Summary Report. Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H1200 and H800.

    DTIC Science & Technology

    1987-04-30

    ADA (TRADE NAME) COMPILER VALIDATION SUMMARY REPORT / HARRIS CORPORATION ... INFORMATION SYSTEMS AND TECHNOLOGY CENTER, W-P AFB, OH ... Compiler Validation Summary Report: 30 APR 1986 to 30 APR 1987. Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H1200 and H800 ... the United States Government (Ada Joint Program Office). Ada® Compiler Validation Summary Report: Compiler Name: HARRIS Ada Compiler, Version 1.0 ... Host

  6. Ada (Tradename) Compiler Validation Summary Report. Harris Corporation. Harris Ada Compiler, Version 1.0. Harris H700 and H60.

    DTIC Science & Technology

    1986-06-28

    Report: 28 JUN 1986 to 28 JUN 1987. Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H700 and H60 ... PERFORMING ORG. REPORT ... CLASSIFICATION OF THIS PAGE (When Data Entered) ... SUPPLEMENTARY NOTES: Ada® Compiler Validation Summary Report: Compiler Name: HARRIS Ada Compiler ... AVF-VSR-43.1086 Ada® COMPILER VALIDATION SUMMARY REPORT: Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H700 and H60. Completion of

  7. CIL: Compiler Implementation Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gries, David

    1969-03-01

    This report is a manual for the proposed Compiler Implementation Language, CIL. It is not an expository paper on the subject of compiler writing or compiler-compilers. The language definition may change as work progresses on the project. It is designed for writing compilers for the IBM 360 computers.

  8. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  9. A comprehensive and scalable database search system for metaproteomics.

    PubMed

    Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W

    2016-08-16

    Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbara Chapman

    OpenMP was not well recognized at the beginning of the project, around 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years it has gradually been adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standards organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future many-core) platforms and for distributed-memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

  11. Functional Programming with C++ Template Metaprograms

    NASA Astrophysics Data System (ADS)

    Porkoláb, Zoltán

    Template metaprogramming is an emerging new direction of generative programming. With clever definitions of templates we can force the C++ compiler to execute algorithms at compilation time. Among the application areas of template metaprograms are expression templates, static interface checking, code optimization with adaptation, language embedding and active libraries. However, as template metaprogramming was not an original design goal, the C++ language is not capable of elegant expression of metaprograms. The complicated syntax leads to the creation of code that is hard to write, understand and maintain. Although template metaprogramming has a strong relationship with functional programming, this is not reflected in the language syntax and existing libraries. In this paper we give a short and incomplete introduction to C++ templates and the basics of template metaprogramming. We highlight the role of template metaprograms, and some important and widely used idioms. We give an overview of the possible application areas as well as debugging and profiling techniques. We suggest a pure functional-style programming interface for C++ template metaprograms in the form of embedded Haskell code which is transformed to standard-compliant C++ source.

  12. PolyCheck: Dynamic Verification of Iteration Space Transformations on Affine Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Wenlei; Krishnamoorthy, Sriram; Pouchet, Louis-noel

    2016-01-11

    High-level compiler transformations, especially loop transformations, are widely recognized as critical optimizations to restructure programs to improve data locality and expose parallelism. Guaranteeing the correctness of program transformations is essential, and to date three main approaches have been developed: proof of equivalence of affine programs, matching the execution traces of programs, and checking bit-by-bit equivalence of the outputs of the programs. Each technique suffers from limitations in either the kind of transformations supported, space complexity, or the sensitivity to the testing dataset. In this paper, we take a novel approach addressing all three limitations to provide an automatic bug checker to verify any iteration-reordering transformations on affine programs, including non-affine transformations, with space consumption proportional to the original program data, and robust to arbitrary datasets of a given size. We achieve this by exploiting the structure of affine program control- and data-flow to generate, at compile time, lightweight checker code to be executed within the transformed program. Experimental results assess the correctness and effectiveness of our method, and its increased coverage over previous approaches.
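
    PolyCheck generates its checker code at compile time from the polyhedral structure; for contrast, here is the naive dataset-sensitive baseline it improves upon, checking a candidate loop interchange by comparing outputs bit-by-bit on one random input (the kernel and sizes are invented):

```python
import numpy as np

def original(A, B):
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            for k in range(A.shape[1]):
                C[i, j] += A[i, k] * B[k, j]
    return C

def transformed(A, B):                      # candidate ikj loop interchange
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        for k in range(A.shape[1]):
            for j in range(B.shape[1]):
                C[i, j] += A[i, k] * B[k, j]
    return C

A, B = np.random.rand(16, 16), np.random.rand(16, 16)
# Dataset-sensitive check: a passing run is evidence, not a proof,
# which is exactly the limitation PolyCheck is designed to remove.
assert np.allclose(original(A, B), transformed(A, B))
```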

  13. Implementation of a Multimodal Mobile System for Point-of-Sale Surveillance: Lessons Learned From Case Studies in Washington, DC, and New York City.

    PubMed

    Cantrell, Jennifer; Ganz, Ollie; Ilakkuvan, Vinu; Tacelosky, Michael; Kreslake, Jennifer; Moon-Howard, Joyce; Aidala, Angela; Vallone, Donna; Anesetti-Rothermel, Andrew; Kirchner, Thomas R

    2015-01-01

    In tobacco control and other fields, point-of-sale surveillance of the retail environment is critical for understanding industry marketing of products and informing public health practice. Innovations in mobile technology can improve existing, paper-based surveillance methods, yet few studies describe in detail how to operationalize the use of technology in public health surveillance. The aims of this paper are to share implementation strategies and lessons learned from 2 tobacco, point-of-sale surveillance projects to inform and prepare public health researchers and practitioners to implement new mobile technologies in retail point-of-sale surveillance systems. From 2011 to 2013, 2 point-of-sale surveillance pilot projects were conducted in Washington, DC, and New York, New York, to capture information about the tobacco retail environment and test the feasibility of a multimodal mobile data collection system, which included capabilities for audio or video recording data, electronic photographs, electronic location data, and a centralized back-end server and dashboard. We established a preimplementation field testing process for both projects, which involved a series of rapid and iterative tests to inform decisions and establish protocols around key components of the project. Important components of field testing included choosing a mobile phone that met project criteria, establishing an efficient workflow and accessible user interfaces for each component of the system, training and providing technical support to fieldworkers, and developing processes to integrate data from multiple sources into back-end systems that can be utilized in real-time. A well-planned implementation process is critical for successful use and performance of multimodal mobile surveillance systems. Guidelines for implementation include (1) the need to establish and allow time for an iterative testing framework for resolving technical and logistical challenges; (2) developing a streamlined workflow and user-friendly interfaces for data collection; (3) allowing for ongoing communication, feedback, and technology-related skill-building among all staff; and (4) supporting infrastructure for back-end data systems. Although mobile technologies are evolving rapidly, lessons learned from these case studies are essential for ensuring that the many benefits of new mobile systems for rapid point-of-sale surveillance are fully realized.

  14. Implementation of a Multimodal Mobile System for Point-of-Sale Surveillance: Lessons Learned From Case Studies in Washington, DC, and New York City

    PubMed Central

    Ganz, Ollie; Ilakkuvan, Vinu; Tacelosky, Michael; Kreslake, Jennifer; Moon-Howard, Joyce; Aidala, Angela; Vallone, Donna; Anesetti-Rothermel, Andrew; Kirchner, Thomas R

    2015-01-01

    Background In tobacco control and other fields, point-of-sale surveillance of the retail environment is critical for understanding industry marketing of products and informing public health practice. Innovations in mobile technology can improve existing, paper-based surveillance methods, yet few studies describe in detail how to operationalize the use of technology in public health surveillance. Objective The aims of this paper are to share implementation strategies and lessons learned from 2 tobacco, point-of-sale surveillance projects to inform and prepare public health researchers and practitioners to implement new mobile technologies in retail point-of-sale surveillance systems. Methods From 2011 to 2013, 2 point-of-sale surveillance pilot projects were conducted in Washington, DC, and New York, New York, to capture information about the tobacco retail environment and test the feasibility of a multimodal mobile data collection system, which included capabilities for audio or video recording data, electronic photographs, electronic location data, and a centralized back-end server and dashboard. We established a preimplementation field testing process for both projects, which involved a series of rapid and iterative tests to inform decisions and establish protocols around key components of the project. Results Important components of field testing included choosing a mobile phone that met project criteria, establishing an efficient workflow and accessible user interfaces for each component of the system, training and providing technical support to fieldworkers, and developing processes to integrate data from multiple sources into back-end systems that can be utilized in real-time. Conclusions A well-planned implementation process is critical for successful use and performance of multimodal mobile surveillance systems. Guidelines for implementation include (1) the need to establish and allow time for an iterative testing framework for resolving technical and logistical challenges; (2) developing a streamlined workflow and user-friendly interfaces for data collection; (3) allowing for ongoing communication, feedback, and technology-related skill-building among all staff; and (4) supporting infrastructure for back-end data systems. Although mobile technologies are evolving rapidly, lessons learned from these case studies are essential for ensuring that the many benefits of new mobile systems for rapid point-of-sale surveillance are fully realized. PMID:27227138

  15. CHIME and probing the origin of fast radio bursts

    NASA Astrophysics Data System (ADS)

    Connor, Liam Dean

    The time-variable long-wavelength sky harbours a number of known but unsolved astrophysical problems, and surely many more undiscovered phenomena. With modern tools such problems will become tractable, and new classes of astronomical objects will be revealed. These tools include digital telescopes made from powerful computing clusters, and improved theoretical methods. In this thesis we employ such devices to understand better several puzzles in the time-domain radio sky. Our primary focus is on the origin of fast radio bursts (FRBs), a new class of transients of which there seem to be thousands per sky per day. We offer a model in which FRBs are extragalactic but non-cosmological pulsars in young supernova remnants. Since this theoretical work was done, observations have corroborated the picture of FRBs as young rotating neutron stars, including the non-Poissonian repetition of FRB 121102. We also present statistical arguments regarding the nature and location of FRBs. These include reinstituting the classic V/Vmax-test to measure the brightness distribution of FRBs, i.e., constraining ∂log N/∂log S. We find consistency with a Euclidean distribution. This means current observations cannot distinguish between a cosmological population and a more local uniform population, unless added assumptions are made. We also showed that the rate of FRBs at low frequencies is consistent with the rate at 1.4 GHz, which is promising for upcoming high-impact experiments. One of these is the Canadian Hydrogen Intensity Mapping Experiment (CHIME). We outline this instrument and its three back-ends: a cosmology experiment whose goal is to measure dark energy through 21 cm intensity mapping, a pulsar back-end, and an FRB project that is expected to be by far the fastest survey in the foreseeable future. We describe the creation of a digital beamforming back-end on the CHIME Pathfinder, which acts as a test-bed for the three final experiments just described. We also discuss the commissioning of a 24/7 real-time VLBI FRB search between the Pathfinder's synthetic beam and the Algonquin Radio Observatory (ARO) 46 m telescope, including early results. Finally, we present a study of the microstructure in B0329+54's individual pulses in full-polarization and present results on its quasi-periodic structure.

  16. Algorithm for fast event parameters estimation on GEM acquired data

    NASA Astrophysics Data System (ADS)

    Linczuk, Paweł; Krawczyk, Rafał D.; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Wojeński, Andrzej; Chernyshova, Maryna; Czarski, Tomasz

    2016-09-01

    We present a study of a software-hardware environment for developing fast, high-throughput, low-latency computation methods that can be used as a back-end in High Energy Physics (HEP) and other High Performance Computing (HPC) systems fed by high-volume input from electronic sensor-based front-ends. The paper discusses and tests parallelization possibilities on Intel HPC solutions, with consideration of applications in Gas Electron Multiplier (GEM) measurement systems.

  17. Modeling and Simulation Behavior Validation Methodology and Extension Model Validation for the Individual Soldier

    DTIC Science & Technology

    2015-03-01

    domains. Major model functions include: • Ground combat: light and heavy forces. • Air mobile forces. • Future forces. • Fixed-wing and rotary-wing ... Constraints: • Study must be completed no later than 31 December 2014. • Entity behavior limited to select COMBATXXI Mobility, Unmanned Aerial System ... and SQL backend, as well as any open application programming interface (API). • Allows data transparency and data-driven navigation through the model

  18. Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging

    DTIC Science & Technology

    2016-01-01

    satisfying journeys in my life. I would like to thank Ryan for his guidance through the truly exciting world of mobile robotics and robotic perception. ... [table-of-contents residue: 1.3.2 Multi-session and Multi-robot SLAM; 1.3.3 Robust Techniques for SLAM Backends; 1.4 ...] ... CHAPTER 1 Introduction. 1.1 The Importance of SLAM in Autonomous Robotics. Autonomous mobile robots are becoming a promising aid in a wide

  19. Development and ESCC evaluation of a monolithic silicon phototransistor array for optical encoders

    NASA Astrophysics Data System (ADS)

    Bregoli, M.; Ceriani, S.; Erspan, M.; Collini, A.; Ficorella, F.; Giacomini, G.; Bellutti, P.; How, L. S.; Hernandez, S.; Lundmark, K.

    2017-11-01

    Optoelettronica Italia Srl, better known as Optoi, is an Italian company working in optoelectronics and microelectronics and focusing on back-end technologies. The growing volume of activities in the aerospace field has recently led to the creation of a dedicated company unit, which collaborates with ESA, CNES and ASI. In this context, Optoi's key partner for the microelectronic front-end is Fondazione Bruno Kessler (FBK), specifically its Micro Nano Facility (MNF).

  20. Tools for Modeling & Simulation of Molecular and Nanoelectronics Devices

    DTIC Science & Technology

    2012-06-14

    implemented a prototype DFT simulation software using two different open-source Finite Element (FE) libraries: DEAL.II and FEniCS. These two libraries have been ... ATK. In the first part of this Phase I project we investigated two different candidate finite element libraries, DEAL.II and FEniCS. Although both ... element libraries, Deal.II and FEniCS/dolfin, for use as back-ends to a finite element DFT in ATK, Quantum Insight and QuantumWise A/S, October 2011.

  1. DataSpread: Unifying Databases and Spreadsheets.

    PubMed

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-08-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current "pane" (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases.
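
    As a toy sketch of the cell-to-tuple reconciliation described above (sqlite3 stands in for PostgreSQL, and the positional schema is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")   # PostgreSQL in the real system
con.execute("CREATE TABLE sheet (row INTEGER, col INTEGER, value TEXT, "
            "PRIMARY KEY (row, col))")

def set_cell(row, col, value):      # a front-end edit propagated to the back-end
    con.execute("INSERT OR REPLACE INTO sheet VALUES (?,?,?)", (row, col, value))

def get_pane(r0, r1, c0, c1):       # fetch only the currently visible "pane"
    return con.execute(
        "SELECT row, col, value FROM sheet "
        "WHERE row BETWEEN ? AND ? AND col BETWEEN ? AND ?",
        (r0, r1, c0, c1)).fetchall()

set_cell(1, 1, "hello")             # cells are schema-free key-value tuples
print(get_pane(1, 40, 1, 10))
```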

  2. DataSpread: Unifying Databases and Spreadsheets

    PubMed Central

    Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya

    2015-01-01

    Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current “pane” (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases. PMID:26900487

  3. mREST Interface Specification

    NASA Technical Reports Server (NTRS)

    McCartney, Patrick; MacLean, John

    2012-01-01

    mREST is an implementation of the REST architecture specific to the management and sharing of data in a system of logical elements. The purpose of this document is to clearly define the mREST interface protocol. The interface protocol covers all of the interaction between mREST clients and mREST servers. System-level requirements are not specifically addressed. In an mREST system, there are typically some backend interfaces between a Logical System Element (LSE) and the associated hardware/software system. For example, a network camera LSE would have a backend interface to the camera itself. These interfaces are specific to each type of LSE and are not covered in this document. There are also frontend interfaces that may exist in certain mREST manager applications. For example, an electronic procedure execution application may have a specialized interface for configuring the procedures. This interface would be application specific and outside of this document scope. mREST is intended to be a generic protocol which can be used in a wide variety of applications. A few scenarios are discussed to provide additional clarity but, in general, application-specific implementations of mREST are not specifically addressed. In short, this document is intended to provide all of the information necessary for an application developer to create mREST interface agents. This includes both mREST clients (mREST manager applications) and mREST servers (logical system elements, or LSEs).

  4. Versatile Stimulation Back-End With Programmable Exponential Current Pulse Shapes for a Retinal Visual Prosthesis.

    PubMed

    Maghami, Mohammad Hossein; Sodagar, Amir M; Sawan, Mohamad

    2016-11-01

    This paper reports on the design, implementation, and test of a stimulation back-end for an implantable retinal prosthesis. In addition to traditional rectangular pulse shapes, the circuit features biphasic stimulation pulses with both rising and falling exponential shapes, whose time constants are digitally programmable. A class-B second-generation current conveyor is used as a wide-swing, high-output-resistance stimulation current driver, delivering stimulation current pulses of up to ±96 μA to the target tissue. The duration of the generated current pulses is programmable within the range of 100 μs to 3 ms. Current-mode digital-to-analog converters (DACs) are used to program the amplitudes of the stimulation pulses. Fabricated using the IBM 130 nm process, the circuit occupies 1.5×1.5 mm² of silicon area. According to the measurements, the DACs exhibit DNL and INL of 0.23 LSB and 0.364 LSB, respectively. Experimental results indicate that the stimuli generator meets the expected requirements when connected to an electrode-tissue impedance of as high as 25 kΩ. The maximum power consumption of the proposed design is 3.4 mW when delivering biphasic rectangular pulses to the target load. A charge pump block handles the up-conversion of the standard 1.2-V supply voltage to ±3.3 V.
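
    A small numeric sketch of the programmable biphasic exponential pulse shapes described above (the amplitude, width, time constant, and sampling rate are illustrative, not the chip's actual parameters):

```python
import numpy as np

def biphasic_exponential(amp_ua=96, width_s=1e-3, tau_s=250e-6,
                         rising=True, fs=1e6):
    """One cathodic + one anodic phase with a programmable exponential shape."""
    t = np.arange(int(width_s * fs)) / fs
    shape = (1 - np.exp(-t / tau_s)) if rising else np.exp(-t / tau_s)
    phase = amp_ua * shape / shape.max()          # peak at programmed amplitude
    return np.concatenate([-phase, phase[::-1]])  # mirrored phase balances charge

pulse = biphasic_exponential()
print(abs(pulse.sum()))   # ~0: net charge is balanced by construction
```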

  5. GeantV: from CPU to accelerators

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Arora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Sehgal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While modern CPU architectures are being targeted first, resources such as GPGPUs, Intel® Xeon Phi, Atom or ARM cannot be ignored anymore by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been mainly engineered for CPUs having vector units, but we have foreseen from the early stages a bridge to arbitrary accelerators. A software layer consisting of architecture/technology-specific backends currently supports this concept. This approach makes it possible to abstract out basic types such as scalar/vector, but also to formalize generic computation kernels transparently using library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus it insulates the core application and algorithms from the technology layer. This keeps our application maintainable in the long term and resilient to changes on the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel® Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results for using the GeantV transport kernel on GPUs.
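
    GeantV's real backends are C++ template types over Vc, CUDA, or intrinsics; the layering idea can nonetheless be sketched in Python, with scalar and NumPy stand-ins for the scalar/vector backends (all names here are invented for illustration):

```python
import math
import numpy as np

class ScalarBackend:
    sqrt = staticmethod(math.sqrt)

class VectorBackend:                 # stands in for Vc/CUDA/intrinsics backends
    sqrt = staticmethod(np.sqrt)

def momentum_magnitude(backend, px, py, pz):
    """One generic kernel, written once against the backend interface."""
    return backend.sqrt(px * px + py * py + pz * pz)

print(momentum_magnitude(ScalarBackend, 1.0, 2.0, 2.0))      # one track
print(momentum_magnitude(VectorBackend,
                         *np.random.rand(3, 1024)))          # a basket of tracks
```

    Because the kernel only touches the backend interface, swapping technologies changes the backend class, not the physics code, which is the insulation property described above.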

  6. Integrated Decision-Making Tool to Develop Spent Fuel Strategies for Research Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beatty, Randy L; Harrison, Thomas J

    IAEA Member States operating or having previously operated a research reactor are responsible for the safe and sustainable management and disposal of associated radioactive waste, including research reactor spent nuclear fuel (RRSNF). This includes the safe disposal of RRSNF or the corresponding equivalent waste returned after spent fuel reprocessing. One key challenge to developing general recommendations lies in the diversity of spent fuel types, locations, and national/regional circumstances, rather than in mass or volume alone. This is especially true given that RRSNF inventories are relatively small, and research reactors are rarely operated at the high power level or duration typical of commercial power plants. Presently, many countries lack an effective long-term policy for managing RRSNF. This paper presents results of the International Atomic Energy Agency (IAEA) Coordinated Research Project (CRP) #T33001 on Options and Technologies for Managing the Back End of the Research Reactor Nuclear Fuel Cycle, which includes an integrated decision-making tool called BRIDE (Back-end Research reactor Integrated Decision Evaluation). This is a multi-attribute decision-making tool that combines the total estimated cost of each life-cycle scenario with non-economic factors such as public acceptance and technical maturity, and ranks optional back-end scenarios specific to each member state's situation in order to develop a strategic plan with a preferred or recommended option for managing spent fuel from research reactors.

  7. Genomics Portals: integrative web-platform for mining genomics data.

    PubMed

    Shinde, Kaustubh; Phatak, Mukta; Freudenberg, Johannes M; Chen, Jing; Li, Qian; Joshi, Vineet K; Hu, Zhen; Ghosh, Krishnendu; Meller, Jaroslaw; Medvedovic, Mario

    2010-01-13

    A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into functioning of living systems. Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc), and the integration with an extensive knowledge base that can be used in such analysis. The integrated access to primary genomics data, functional knowledge and analytical tools makes Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals backend databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org.

  8. Genomics Portals: integrative web-platform for mining genomics data

    PubMed Central

    2010-01-01

    Background: A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into functioning of living systems. Results: Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc), and the integration with an extensive knowledge base that can be used in such analysis. Conclusion: The integrated access to primary genomics data, functional knowledge and analytical tools makes Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals backend databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org. PMID:20070909

  9. The instrument control software package for the Habitable-Zone Planet Finder spectrometer

    NASA Astrophysics Data System (ADS)

    Bender, Chad F.; Robertson, Paul; Stefansson, Gudmundur Kari; Monson, Andrew; Anderson, Tyler; Halverson, Samuel; Hearty, Frederick; Levi, Eric; Mahadevan, Suvrath; Nelson, Matthew; Ramsey, Larry; Roy, Arpita; Schwab, Christian; Shetrone, Matthew; Terrien, Ryan

    2016-08-01

    We describe the Instrument Control Software (ICS) package that we have built for The Habitable-Zone Planet Finder (HPF) spectrometer. The ICS controls and monitors instrument subsystems, facilitates communication with the Hobby-Eberly Telescope facility, and provides user interfaces for observers and telescope operators. The backend is built around the asynchronous network software stack provided by the Python Twisted engine, and is linked to a suite of custom hardware communication protocols. This backend is accessed through Python-based command-line and PyQt graphical frontends. In this paper we describe several of the customized subsystem communication protocols that provide access to and help maintain the hardware systems that comprise HPF, and show how asynchronous communication benefits the numerous hardware components. We also discuss our Detector Control Subsystem, built as a set of custom Python wrappers around a C-library that provides native Linux access to the SIDECAR ASIC and Hawaii-2RG detector system used by HPF. HPF will be one of the first astronomical instruments on sky to utilize this native Linux capability through the SIDECAR Acquisition Module (SAM) electronics. The ICS we have created is very flexible, and we are adapting it for NEID, NASA's Extreme Precision Doppler Spectrometer for the WIYN telescope; we will describe this adaptation, and describe the potential for use in other astronomical instruments.
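
    As an illustration of the asynchronous style the ICS backend is built on, the following is a minimal Twisted line-based server; the port number, command name, and reply format are invented for this example and are not the actual HPF subsystem protocol:

      from twisted.internet import reactor
      from twisted.internet.protocol import Factory
      from twisted.protocols.basic import LineReceiver

      class SubsystemProtocol(LineReceiver):
          # Toy line-based protocol standing in for one hardware subsystem.
          def lineReceived(self, line):
              command = line.decode().strip()
              if command == "status":
                  self.sendLine(b"OK temperature=180.002K")  # illustrative reply
              else:
                  self.sendLine(b"ERR unknown command")

      # Each subsystem service listens on its own port; 9000 is arbitrary.
      reactor.listenTCP(9000, Factory.forProtocol(SubsystemProtocol))
      reactor.run()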

  10. Financing Long-Term Services And Supports: Options Reflect Trade-Offs For Older Americans And Federal Spending.

    PubMed

    Favreault, Melissa M; Gleckman, Howard; Johnson, Richard W

    2015-12-01

    About half of older Americans will need a high level of assistance with routine activities for a prolonged period of time. This help is commonly referred to as long-term services and supports (LTSS). Under current policies, these individuals will fund roughly half of their paid care out of pocket. Partly as a result of high costs and uncertainty, relatively few people purchase private long-term care insurance or save sufficiently to fully finance LTSS; many will eventually turn to Medicaid for help. To show how policy changes could expand insurance's role in financing these needs, we modeled several new insurance options. Specifically, we looked at a front-end-only benefit that provides coverage relatively early in the period of disability but caps benefits, a back-end benefit with no lifetime limit, and a combined comprehensive benefit. We modeled mandatory and voluntary versions of each option, and subsidized and unsubsidized versions of each voluntary option. We identified important differences among the alternatives, highlighting relevant trade-offs that policy makers can consider in evaluating proposals. If the primary goal is to significantly increase insurance coverage, the mandatory options would be more successful than the voluntary versions. If the major aim is to reduce Medicaid costs, the comprehensive and back-end mandatory options would be most beneficial.

  11. Ada (Trade Name) Compiler Validation Summary Report: Harris Corporation Harris Ada Compiler, Version 1.3 Harris HCX-7.

    DTIC Science & Technology

    1987-06-03

    Ada Compiler Validation Summary Report for the Harris Corporation Harris Ada Compiler, Version 1.3, hosted and targeted on the Harris HCX-7. Completion of on-site testing: 3 June 1987. This Validation Summary Report gives an account of the testing of this Ada implementation.

  12. Introduction of digital soil mapping techniques for the nationwide regionalization of soil condition in Hungary; the first results of the DOSoReMI.hu (Digital, Optimized, Soil Related Maps and Information in Hungary) project

    NASA Astrophysics Data System (ADS)

    Pásztor, László; Laborczi, Annamária; Szatmári, Gábor; Takács, Katalin; Bakacsi, Zsófia; Szabó, József; Dobos, Endre

    2014-05-01

    Due to former soil surveys and mapping activities, a significant amount of soil information has accumulated in Hungary. Present soil data requirements are mainly fulfilled with these available datasets, either by their direct usage or after certain specific and generally fortuitous thematic and/or spatial inference. Due to the increasingly frequent discrepancies between the available and the expected data, there may be notable imperfections in the accuracy and reliability of the delivered products. With a recently started project (DOSoReMI.hu; Digital, Optimized, Soil Related Maps and Information in Hungary) we would like to significantly extend the potential for satisfying countrywide soil information requirements in Hungary. We have started to compile digital soil-related maps that optimally fulfil national and international demands from the points of view of thematic, spatial, and temporal accuracy. The spatial resolution of the targeted countrywide digital thematic maps is at least 1:50,000 (approx. 50-100 m raster resolution). DOSoReMI.hu results are also planned to contribute to the European part of GSM.net products. In addition to the auxiliary spatial data themes related to soil forming factors and/or indicative environmental elements, we lean heavily on the various national soil databases. The set of applied digital soil mapping techniques is gradually being broadened, incorporating and eventually integrating geostatistical, data mining, and GIS tools. In this paper we present the first results. Regression kriging (RK) has been used for the spatial inference of certain quantitative data, such as particle size distribution components, rootable depth, and organic matter content; in the course of RK-based mapping, spatially segmented categorical information provided by the SMUs of the Digital Kreybig Soil Information System (DKSIS) has also been used in the form of indicator variables. Classification and regression trees (CART) were used to improve the spatial resolution of category-type soil maps (thematic downscaling), such as genetic soil type and soil productivity maps; the approach was justified by the fact that certain thematic soil maps are not available in the required scale. Decision trees were applied to understand the soil-landscape models involved in existing soil maps and to post-formalize survey/compilation rules. The relationships identified and expressed in decision rules made the creation of spatially refined maps possible with the aid of high-resolution environmental auxiliary variables; among these covariables, a special role was played by larger-scale spatial soil information with diverse attributes. As a next step, the testing of random forests for the same purposes has been started. Due to the simultaneous richness of available Hungarian legacy soil data, spatial inference methods, and auxiliary environmental information, there is high versatility in the possible approaches for compiling a given soil-related map, which suggests an opportunity for optimization: for the creation of an object-specific soil-related map with predefined parameters (resolution, accuracy, reliability, etc.), one may seek the optimum set of soil data, method, and auxiliary covariables for the available resources (data costs, computation requirements, etc.). The first findings on the inclusion and joint usage of spatial soil data, as well as on the consistency of various evaluations of the result maps, will also be presented. Acknowledgement: Our work has been supported by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
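
    As a sketch of the CART-based thematic downscaling step described above, the following trains a decision tree on legacy-map categories sampled at point locations and predicts them on a finer covariate grid; the data, covariate count, and tree parameters are synthetic stand-ins, not the project's actual inputs:

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      covariates = rng.random((5000, 3))               # stand-in covariate stack
      soil_class = (covariates[:, 0] * 4).astype(int)  # stand-in legacy map labels

      # Learn the soil-landscape relationship implicit in the legacy map.
      tree = DecisionTreeClassifier(max_depth=8, min_samples_leaf=50)
      tree.fit(covariates, soil_class)

      # Predict on the full-resolution grid to obtain the downscaled category map.
      fine_grid = rng.random((100000, 3))
      downscaled = tree.predict(fine_grid)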

  13. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    NASA Astrophysics Data System (ADS)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

    To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive objects of the oil and gas complex are extremely pressing. The problem is especially acute for facilities where a loss of DE accuracy would inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. This work addresses the problem of selecting the optimal variant of the error detection system according to a validation criterion. Known methods for solving such problems have exponential time complexity. Thus, with a view to reducing the time needed to solve the problem, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems; the advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on combining algorithms [1].

  14. Optimizing Interactive Development of Data-Intensive Applications

    PubMed Central

    Interlandi, Matteo; Tetali, Sai Deep; Gulzar, Muhammad Ali; Noor, Joseph; Condie, Tyson; Kim, Miryung; Millstein, Todd

    2017-01-01

    Modern Data-Intensive Scalable Computing (DISC) systems are designed to process data through batch jobs that execute programs (e.g., queries) compiled from a high-level language. These programs are often developed interactively by posing ad-hoc queries over the base data until a desired result is generated. We observe that there can be significant overlap in the structure of these queries used to derive the final program. Yet, each successive execution of a slightly modified query is performed anew, which can significantly increase the development cycle. Vega is an Apache Spark framework that we have implemented for optimizing a series of similar Spark programs, likely originating from a development or exploratory data analysis session. Spark developers (e.g., data scientists) can leverage Vega to significantly reduce the amount of time it takes to re-execute a modified Spark program, reducing the overall time to market for their Big Data applications. PMID:28405637
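
    Vega's reuse of overlapping query structure can be approximated manually in plain PySpark by caching the shared prefix of successive ad-hoc queries; the path and filter predicates below are illustrative, and Vega performs this reuse automatically rather than requiring explicit cache() calls:

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("reuse-demo").getOrCreate()

      # Expensive shared prefix of two similar ad-hoc queries.
      errors = (spark.read.text("hdfs:///logs/app.log")   # illustrative path
                     .filter("value LIKE '%ERROR%'")
                     .cache())                            # materialize once

      print(errors.count())                                    # first query
      print(errors.filter("value LIKE '%timeout%'").count())   # modified follow-up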

  15. Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models

    NASA Astrophysics Data System (ADS)

    Thon, Ingo

    One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximate methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution, which is normally prohibitively slow.

  16. Deconvolution Methods and Systems for the Mapping of Acoustic Sources from Phased Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Humphreys, Jr., William M. (Inventor); Brooks, Thomas F. (Inventor)

    2012-01-01

    Mapping coherent/incoherent acoustic sources as determined from a phased microphone array. A linear configuration of equations and unknowns is formed by accounting for a reciprocal influence of one or more cross-beamforming characteristics thereof at varying grid locations among the plurality of grid locations. An equation derived from the linear configuration of equations and unknowns can then be iteratively determined. The equation can be attained by the solution requirement of a constraint equivalent to the physical assumption that the coherent sources have only in-phase coherence. The size of the problem may then be reduced using zoning methods. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with a phased microphone array (microphones arranged in an optimized grid pattern including a plurality of grid locations) in order to compile an output presentation thereof, thereby removing beamforming characteristics from the resulting output presentation.

  17. Deconvolution methods and systems for the mapping of acoustic sources from phased microphone arrays

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F. (Inventor); Humphreys, Jr., William M. (Inventor)

    2010-01-01

    A method and system for mapping acoustic sources determined from a phased microphone array. A plurality of microphones are arranged in an optimized grid pattern including a plurality of grid locations thereof. A linear configuration of N equations and N unknowns can be formed by accounting for a reciprocal influence of one or more beamforming characteristics thereof at varying grid locations among the plurality of grid locations. A full-rank equation derived from the linear configuration of N equations and N unknowns can then be iteratively determined. Full rank can be attained by the solution requirement of the positivity constraint, equivalent to the physical assumption of statistically independent noise sources at each of the N locations. An optimized noise source distribution is then generated over an identified aeroacoustic source region associated with the phased microphone array in order to compile an output presentation thereof, thereby removing the beamforming characteristics from the resulting output presentation.
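
    The iterative solution with a positivity constraint described above is essentially a Gauss-Seidel sweep with clipping at zero, which is the core of the published DAMAS algorithm; a compact sketch, where A is the (N, N) matrix of beamforming influence coefficients (assumed to have a nonzero diagonal) and b the beamform map at the N grid locations:

      import numpy as np

      def damas_iterate(A, b, n_iter=1000):
          # Solve A x = b for a nonnegative source distribution x via
          # Gauss-Seidel sweeps, clamping each component at zero to enforce
          # the positivity constraint.
          N = len(b)
          x = np.zeros(N)
          for _ in range(n_iter):
              for i in range(N):
                  r = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
                  x[i] = max(r / A[i, i], 0.0)
          return x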

  18. Testing-Based Compiler Validation for Synchronous Languages

    NASA Technical Reports Server (NTRS)

    Garoche, Pierre-Loic; Howar, Falk; Kahsai, Temesghen; Thirioux, Xavier

    2014-01-01

    In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test-suite with high behavioral coverage and geared towards discovery of faults for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.

  19. User Driven Image Stacking for ODI Data and Beyond via a Highly Customizable Web Interface

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Gopu, A.; Young, M. D.; Kotulla, R.

    2015-09-01

    While some astronomical archives have begun serving standard calibrated data products, the process of producing stacked images remains a challenge left to the end-user. The benefits of astronomical image stacking are well established, and dither patterns are recommended for almost all observing targets. Some archives automatically produce stacks of limited scientific usefulness without any fine-grained user or operator configurability. In this paper, we present PPA Stack, a web based stacking framework within the ODI - Portal, Pipeline, and Archive system. PPA Stack offers a web user interface with built-in heuristics (based on pointing, filter, and other metadata information) to pre-sort images into a set of likely stacks while still allowing the user or operator complete control over the images and parameters for each of the stacks they wish to produce. The user interface, designed using AngularJS, provides multiple views of the input dataset and parameters, all of which are synchronized in real time. A backend consisting of a Python application optimized for ODI data, wrapped around the SWarp software, handles the execution of stacking workflow jobs on Indiana University's Big Red II supercomputer, and the subsequent ingestion of the combined images back into the PPA archive. PPA Stack is designed to enable seamless integration of other stacking applications in the future, so users can select the most appropriate option for their science.

  20. Incorporation of thorium in the rhabdophane structure: Synthesis and characterization of Pr1-2xCaxThxPO4·nH2O solid solutions

    NASA Astrophysics Data System (ADS)

    Qin, Danwen; Mesbah, Adel; Gausse, Clémence; Szenknect, Stéphanie; Dacheux, Nicolas; Clavier, Nicolas

    2017-08-01

    Thorium incorporation in the rhabdophane structure as Pr1-2xCaxThxPO4·nH2O solid solutions was successfully achieved and resulted in the preparation of a low-temperature precursor of the monazite-cheralite type Pr1-2xCaxThxPO4. The rhabdophane compounds are considered potential neoformed phases in the case of release of actinides from the phosphate-based ceramic wasteforms envisaged to host radionuclides in the back-end of the nuclear fuel cycle. A multiparametric study was thus undertaken to specify the wet chemistry conditions (starting stoichiometry, temperature, heating time) leading to single-phase Pr1-2xCaxThxPO4·nH2O powdered samples. The excess of calcium appeared to be a prevailing factor, with a suggested initial Ca:Th ratio of 10. Similarly, the recommended heating time should exceed 4 days, while the optimal synthesis temperature is 110 °C. Under these conditions, the stability domain of Pr1-2xCaxThxPO4·nH2O ranged from x = 0.00 to x = 0.15. After heating at 1100 °C in air for 6 h, rhabdophane-type samples were fully converted into the highly durable Pr1-2xCaxThxPO4 cheralite ceramic wasteform.

  1. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance: tasking and bulk-synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
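
    The tensor contractions that dominate these calculations reduce to matrix-matrix multiplication; a toy NumPy illustration of two typical coupled-cluster contractions (this is not Libtensor's API, and it ignores the symmetry blocking the library exploits; dimensions and random data are stand-ins):

      import numpy as np

      o, v = 16, 64                       # occupied and virtual orbital counts
      t2 = np.random.rand(o, o, v, v)     # doubles amplitudes t_{ij}^{ab}
      eri = np.random.rand(o, o, v, v)    # antisymmetrized integrals <ij||ab>

      # Full contraction to a scalar, reducible to DGEMM by the library:
      # E_corr = 1/4 * sum_{ijab} <ij||ab> t_{ij}^{ab}
      e_corr = 0.25 * np.einsum("ijab,ijab->", eri, t2)

      # Partial contraction producing a new four-index intermediate W_{ijkl}:
      w = np.einsum("ijab,klab->ijkl", t2, eri)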

  2. A faster and more reliable data acquisition system for the full performance of the SciCRT

    DOE PAGES

    Sasai, Y.; Matsubara, Y.; Itow, Y.; ...

    2017-01-03

    The SciBar Cosmic Ray Telescope (SciCRT) is a massive scintillator tracker that observes cosmic rays in a very high-altitude environment in Mexico. The fully active tracker is based on the Scintillator Bar (SciBar) detector developed as a near detector for the KEK-to-Kamioka long-baseline neutrino oscillation experiment (K2K) in Japan. Since the original data acquisition (DAQ) system was developed for the accelerator experiment, we decided to develop a new robust DAQ system optimized to the needs of our cosmic-ray experiment at the top of Mt. Sierra Negra (4600 m). One of our special requirements is to achieve a 10 times faster readout rate. We started by developing a new fast readout back-end board (BEB) based on 100 Mbps SiTCP, a hardware network processor developed for DAQ systems in high energy physics experiments. The new BEB is potentially 20 times faster than the current one in the case of observing neutrons. We installed the new DAQ system, including the new BEBs, in part of the SciCRT in July 2015, and the system has been operating since then. In this article, we describe the development, the basic performance of the new BEB, the status after the installation in the SciCRT, and the future performance.

  3. Cross-scale efficient tensor contractions for coupled cluster computations through multiple programming model backends

    DOE PAGES

    Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel; ...

    2017-03-08

    Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from compute-bound DGEMMs to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load imbalance: tasking and bulk-synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.

  4. GETPrime 2.0: gene- and transcript-specific qPCR primers for 13 species including polymorphisms.

    PubMed

    David, Fabrice P A; Rougemont, Jacques; Deplancke, Bart

    2017-01-04

    GETPrime (http://bbcftools.epfl.ch/getprime) is a database with a web frontend providing gene- and transcript-specific, pre-computed qPCR primer pairs. The primers have been optimized for genome-wide specificity and for allowing the selective amplification of one or several splice variants of most known genes. To ease selection, primers have also been ranked according to defined criteria such as genome-wide specificity (with BLAST), amplicon size, and isoform coverage. Here, we report a major upgrade (2.0) of the database: eight new species (yeast, chicken, macaque, chimpanzee, rat, platypus, pufferfish, and Anolis carolinensis) now complement the five already included in the previous version (human, mouse, zebrafish, fly, and worm). Furthermore, the genomic reference has been updated to Ensembl v81 (while keeping earlier versions for backward compatibility) as a result of re-designing the back-end database and automating the import of relevant sections of the Ensembl database in species-independent fashion. This also allowed us to map known polymorphisms to the primers (on average three per primer for human), with the aim of reducing experimental error when targeting specific strains or individuals. Another consequence is that the inclusion of future Ensembl releases and other species has now become a relatively straightforward task.

  5. The automated data processing architecture for the GPI Exoplanet Survey

    NASA Astrophysics Data System (ADS)

    Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Graham, James R.; Macintosh, Bruce

    2017-09-01

    The Gemini Planet Imager Exoplanet Survey (GPIES) is a multi-year direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the GPIES Data Cruncher, combines multiple data reduction pipelines together to intelligently process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow-up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our data reduction pipelines. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real-time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.

  6. C++ Tensor Toolbox user manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plantenga, Todd D.; Kolda, Tamara Gibson

    2012-04-01

    The C++ Tensor Toolbox is a software package for computing tensor decompositions. It is based on the Matlab Tensor Toolbox, and is particularly optimized for sparse data sets. This user manual briefly overviews tensor decomposition mathematics, software capabilities, and installation of the package. Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors in C++. The Toolbox compiles into libraries and is intended for use with custom applications written by users.

  7. ACEE composite structures technology

    NASA Technical Reports Server (NTRS)

    Klotzsche, M. (Compiler)

    1984-01-01

    The NASA Aircraft Energy Efficiency (ACEE) Composite Primary Aircraft Structures Program has made significant progress in the development of technology for advanced composites in commercial aircraft. Commercial airframe manufacturers have demonstrated technology readiness and cost effectiveness of advanced composites for secondary and medium primary components and have initiated a concerted program to develop the data base required for efficient application to safety-of-flight wing and fuselage structures. Oral presentations were compiled into five papers. Topics addressed include: damage tolerance and failsafe testing of composite vertical stabilizer; optimization of composite multi-row bolted joints; large wing joint demonstration components; and joints and cutouts in fuselage structure.

  8. Methods for recalibration of mass spectrometry data

    DOEpatents

    Tolmachev, Aleksey V [Richland, WA; Smith, Richard D [Richland, WA

    2009-03-03

    Disclosed are methods for recalibrating mass spectrometry data that provide improvement in both mass accuracy and precision by adjusting for experimental variance in parameters that have a substantial impact on mass measurement accuracy. Optimal coefficients are determined using correlated pairs of mass values compiled by matching sets of measured and putative mass values that minimize overall effective mass error and mass error spread. Coefficients are subsequently used to correct mass values for peaks detected in the measured dataset, providing recalibration thereof. Sub-ppm mass measurement accuracy has been demonstrated on a complex fungal proteome after recalibration, providing improved confidence for peptide identifications.
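
    A minimal sketch of the recalibration idea: fit correction coefficients over matched measured/putative mass pairs by least squares, then apply them to all detected peaks. The choice of regressors (here mass and log intensity) is an assumption made for illustration, not the patent's exact functional form:

      import numpy as np

      def recalibrate(measured, putative, intensity):
          # measured, putative : matched mass-value pairs (equal-length arrays)
          # intensity          : a parameter assumed to influence mass error
          # Fit corrected = measured * (1 + a + b*measured + c*log(intensity))
          # by minimizing the summed squared relative (ppm-scale) error.
          rel_err = (putative - measured) / measured
          X = np.column_stack([np.ones_like(measured), measured, np.log(intensity)])
          coeffs, *_ = np.linalg.lstsq(X, rel_err, rcond=None)
          return measured * (1.0 + X @ coeffs), coeffs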

  9. Ada Compiler Validation Summary Report: Certificate Number 910626S1. 11173 U.S. Navy Ada/L, Version 4.0 (/Optimize) VAX 855 = AN/UYK-43 (EMR) (Bare Board).

    DTIC Science & Technology

    1991-07-30

    Ada Compiler Validation Summary Report, Certificate Number 910626S1.11173, for the U.S. Navy Ada/L compiler, Version 4.0 (/OPTIMIZE), with VAX 855 host and AN/UYK-43 (EMR) (Bare Board) target. Prepared by the National Institute of Standards and Technology, Gaithersburg, MD, USA. This Validation Summary Report (VSR) gives an account of the testing of this Ada implementation against the Ada Compiler Validation Capability (ACVC).

  10. Ada Compiler Validation Summary Report. Certificate Number: 910626S1. 11178, U.S. Navy Ada/M, Version 4.0 (/OPTIMIZE) VAX 11/785 = AN/UYK-44 (EMR) (Bare Board).

    DTIC Science & Technology

    1991-07-30

    Ada Compiler Validation Summary Report, Certificate Number 910626S1.11178, for the U.S. Navy Ada/M compiler, Version 4.0 (/OPTIMIZE), with VAX 11/785 host and AN/UYK-44 (EMR) (Bare Board) target. Prepared by the National Institute of Standards and Technology, Gaithersburg, MD, USA. This Validation Summary Report (VSR) gives an account of the testing of this Ada implementation against the Ada Compiler Validation Capability (ACVC).

  11. Arity Raising in Manticore

    NASA Astrophysics Data System (ADS)

    Bergstrom, Lars; Reppy, John

    Compilers for polymorphic languages are required to treat values in programs in an abstract and generic way at the source level. The challenges of optimizing the boxing of raw values, flattening of argument tuples, and raising the arity of functions that handle complex structures to reduce memory usage are old ones, but take on newfound import with processors that have twice as many registers. We present a novel strategy that uses both control-flow and type information to provide an arity raising implementation addressing these problems. This strategy is conservative - no matter the execution path, the transformed program will not perform extra operations.

  12. Reconfigurable Wave Velocity Transmission Lines for Phased Arrays

    NASA Technical Reports Server (NTRS)

    Host, Nick; Chen, Chi-Chih; Volakis, John L.; Miranda, Felix

    2013-01-01

    Phased array antennas offer many advantages over mechanically steered systems. However, they are also more complex, heavier, and, most importantly, more costly. This paper presents a concept which overcomes these drawbacks by eliminating the entire phased array backend (including phase shifters). Instead, a wave-velocity-reconfigurable transmission line is used in a series-fed array arrangement to allow phase shifting with one small (100 mil) mechanical motion. Different configurations of the reconfigurable wave velocity transmission line are discussed, and simulated and experimental results are presented.

  13. A study of the age attribute in a query tool for a clinical data warehouse.

    PubMed

    Scheufele, Elisabeth Lee; Dubey, Anil Kumar; Murphy, Shawn N

    2008-11-06

    The RPDR, a clinical data warehouse with a user-friendly Querytool, allows researchers to perform studies on patient data. Currently, the RPDR represents age as the patient's age at the present time, which is problematic in situations where age at the time of the event is more appropriate. We will modify the Querytool to consider this by assessing the perception of age via survey, testing backend query solutions, and developing modifications based on these results.

  14. Control Infrastructure for a Pulsed Ion Accelerator

    NASA Astrophysics Data System (ADS)

    Persaud, A.; Regis, M. J.; Stettler, M. W.; Vytla, V. K.

    2016-10-01

    We report on updates to the accelerator controls for the Neutralized Drift Compression Experiment II, a pulsed induction-type accelerator for heavy ions. The control infrastructure is built around a LabVIEW interface combined with an Apache Cassandra backend for data archiving. Recent upgrades added the storing and retrieving of device settings into the database, as well as ZeroMQ as a message broker that replaces LabVIEW's shared variables. Converting to ZeroMQ also allows easy access via other programming languages, such as Python.
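
    A minimal pyzmq sketch of the request/reply pattern that replaces the LabVIEW shared variables; running both sockets in one process is only for demonstration, and the setting name is invented:

      import zmq

      ctx = zmq.Context()

      # Server side: a device service replying to setting queries.
      rep = ctx.socket(zmq.REP)
      rep.bind("tcp://*:5555")

      # Client side (any language with a ZeroMQ binding, e.g. Python or LabVIEW).
      req = ctx.socket(zmq.REQ)
      req.connect("tcp://localhost:5555")

      req.send_json({"get": "marx_voltage"})   # setting name is illustrative
      print(rep.recv_json())                   # server receives the query
      rep.send_json({"marx_voltage": 12.5})    # ... and replies
      print(req.recv_json())                   # client gets the stored setting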

  15. Control Infrastructure for a Pulsed Ion Accelerator

    DOE PAGES

    Persaud, A.; Regis, M. J.; Stettler, M. W.; ...

    2016-07-27

    We report on updates to the accelerator controls for the Neutralized Drift Compression Experiment II, a pulsed induction-type accelerator for heavy ions. The control infrastructure is built around a LabVIEW interface combined with an Apache Cassandra backend for data archiving. Recent upgrades added the storing and retrieving of device settings into the database, as well as ZeroMQ as a message broker that replaces LabVIEW's shared variables. Converting to ZeroMQ also allows easy access via other programming languages, such as Python.

  16. Common command-and-control user interface for current force UGS

    NASA Astrophysics Data System (ADS)

    Stolovy, Gary H.

    2009-05-01

    The Current Force Unattended Ground Sensors (UGS) comprise the OmniSense, Scorpion, and Silent Watch systems. As deployed by U.S. Army Central Command in 2006, sensor reports from the three systems were integrated into a common Graphical User Interface (GUI), with three separate vendor-specific applications for Command-and-Control (C2) functions. This paper describes the requirements, system architecture, implementation, and testing of an upgrade to the Processing, Exploitation, and Dissemination back-end server to incorporate common remote Command-and-Control capabilities.

  17. Performance Evaluation of a Database System in a Multiple Backend Configurations,

    DTIC Science & Technology

    1984-10-01

    By tracing requests entering and leaving a system process, the internal performance measurements of MDBS have been carried out. Methodologies for constructing test databases are described; requests access directory data via the AT, EDIT, and CDT. In designing the test database, one of the key concepts is the choice of the directory attributes. These requests are selected for internal timing since they retrieve the smallest portion of the test database.

  18. Design and Analysis of A Multi-Backend Database System for Performance Improvement, Functionality Expansion and Capacity Growth. Part II.

    DTIC Science & Technology

    1981-08-01

    Sections 5.5.1 and 5.5.2 describe the detached and attached execution of transactions, and Section 5.5.3 the choice of transaction execution for access control. The basic access control mechanism covers statistical security and value-dependent security. Section 5.5 describes the process of request execution with access control for insert and non-insert requests in MDBS (see Chapter 4).

  19. Connecting the SISIS-SunRise Library System to Central Identity Management

    NASA Astrophysics Data System (ADS)

    Ebner, Ralf; Pretz, Edwin

    We report on concepts and implementations for provisioning data from the person management systems of the Technische Universität München (TUM), via the central metadirectory at the Leibniz Supercomputing Centre (LRZ), into the SISIS-SunRise library system of the TUM university library (TUB). Three implementation variants are discussed, ranging from the generation and transfer of simple CSV files, through an OpenLDAP-based concept as a backend for the SISIS database, to the final implementation using the OCLC IDM Connector.

  20. Performance evaluation of multiple (32 channels) sub-nanosecond TDC implemented in low-cost FPGA

    NASA Astrophysics Data System (ADS)

    Lichard, P.; Konstantinou, G.; Villar Vilanueva, A.; Palladino, V.

    2014-03-01

    The NA62 experiment's straw tracker frontend board serves as a gas-tight detector cover and integrates two CARIOCA chips, a low-cost FPGA (Cyclone III, Altera), and a set of 400 Mbit/s links to the backend. The FPGA houses 16 pairs of sub-nanosecond-resolution TDCs with derandomizers and an output link serializer. Evaluation methods, including simulations, and performance results of the system in the lab and on a detector prototype are presented.

  1. Using business analytics to improve outcomes.

    PubMed

    Rivera, Jose; Delaney, Stephen

    2015-02-01

    Orlando Health has brought its hospital and physician practice revenue cycle systems into better balance using four sets of customized analytics: Physician performance analytics gauge the total net revenue for every employed physician. Patient-pay analytics provide financial risk scores for all patients on both the hospital and physician practice sides. Revenue management analytics bridge the gap between the back-end central business office and front-end physician practice managers and administrators. Enterprise management analytics allow the hospitals and physician practices to share important information about common patients.

  2. HAL/S-FC compiler system specifications

    NASA Technical Reports Server (NTRS)

    1976-01-01

    This document specifies the informational interfaces within the HAL/S-FC compiler, and between the compiler and the external environment. This Compiler System Specification is for the HAL/S-FC compiler and its associated run time facilities which implement the full HAL/S language. The HAL/S-FC compiler is designed to operate stand-alone on any compatible IBM 360/370 computer and within the Software Development Laboratory (SDL) at NASA/JSC, Houston, Texas.

  3. High Level Rule Modeling Language for Airline Crew Pairing

    NASA Astrophysics Data System (ADS)

    Mutlu, Erdal; Birbil, Ş. Ilker; Bülbül, Kerem; Yenigün, Hüsnü

    2011-09-01

    The crew pairing problem is an airline optimization problem where a set of least costly pairings (consecutive flights to be flown by a single crew) that covers every flight in a given flight network is sought. A pairing is defined by using a very complex set of feasibility rules imposed by international and national regulatory agencies, and also by the airline itself. The cost of a pairing is also defined by using complicated rules. When an optimization engine generates a sequence of flights from a given flight network, it has to check all these feasibility rules to ensure whether the sequence forms a valid pairing. Likewise, the engine needs to calculate the cost of the pairing by using certain rules. However, the rules used for checking the feasibility and calculating the costs are usually not static. Furthermore, the airline companies carry out what-if-type analyses through testing several alternate scenarios in each planning period. Therefore, embedding the implementation of feasibility checking and cost calculation rules into the source code of the optimization engine is not a practical approach. In this work, a high level language called ARUS is introduced for describing the feasibility and cost calculation rules. A compiler for ARUS is also implemented in this work to generate a dynamic link library to be used by crew pairing optimization engines.

  4. Parallel tiled Nussinov RNA folding loop nest generated using both dependence graph transitive closure and loop skewing.

    PubMed

    Palkowski, Marek; Bielecki, Wlodzimierz

    2017-06-02

    RNA secondary structure prediction is a compute-intensive task that lies at the core of several search algorithms in bioinformatics. Fortunately, RNA folding approaches such as Nussinov base-pair maximization involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. Polyhedral compilation techniques have proven to be a powerful tool for the optimization of dense array codes. However, classical affine loop nest transformations used with these techniques do not effectively optimize dynamic programming codes for RNA structure prediction. The purpose of this paper is to present a novel approach for generating a parallel tiled Nussinov RNA loop nest with significantly higher performance than known related codes. This effect is achieved by improving code locality and parallelizing the calculation. In order to improve code locality, we apply our previously published technique of automatic loop nest tiling to all three loops of the Nussinov loop nest. This approach first forms original rectangular 3D tiles and then corrects them to establish their validity by means of the transitive closure of a dependence graph. To produce parallel code, we apply the loop skewing technique to the tiled Nussinov loop nest. The technique is implemented as part of the publicly available polyhedral source-to-source TRACO compiler. Generated code was run on modern Intel multi-core processors and coprocessors. We present the speed-up factors of the generated parallel Nussinov RNA code and demonstrate that it is considerably faster than related codes in which only the two outer loops of the Nussinov loop nest are tiled.
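
    For reference, the untiled Nussinov recurrence that the TRACO-generated code parallelizes is the classic O(n^3) loop nest below; this is a plain Python sketch of the base-pair maximization itself, without the tiling and skewing transformations the paper applies:

      def nussinov(seq, min_loop=1):
          # N[i][j] = maximum number of base pairs in seq[i..j].
          pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                   ("G", "U"), ("U", "G")}
          n = len(seq)
          N = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):   # increasing subsequence length
              for i in range(n - span):
                  j = i + span
                  best = max(N[i + 1][j], N[i][j - 1])
                  if (seq[i], seq[j]) in pairs:
                      best = max(best, N[i + 1][j - 1] + 1)
                  for k in range(i + 1, j):     # bifurcation into two subproblems
                      best = max(best, N[i][k] + N[k + 1][j])
                  N[i][j] = best
          return N[0][n - 1]

      print(nussinov("GGGAAAUCC"))   # small example sequence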

  5. Strengthening Software Authentication with the ROSE Software Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, G

    2006-06-15

    Many recent nonproliferation and arms control software projects include a software authentication regime. These include U.S. Government-sponsored projects both in the United States and in the Russian Federation (RF). This trend toward requiring software authentication is only accelerating. Demonstrating assurance that software performs as expected without hidden "backdoors" is crucial to a project's success. In this context, "authentication" is defined as determining that a software package performs only its intended purpose and performs said purpose correctly and reliably over the planned duration of an agreement. In addition to visual inspections by knowledgeable computer scientists, automated tools are needed to highlight suspicious code constructs, both to aid visual inspection and to guide program development. While many commercial tools are available for portions of the authentication task, they are proprietary and not extensible. An open-source, extensible tool can be customized to the unique needs of each project (projects can have both common and custom rules to detect flaws and security holes). Any such extensible tool has to be based on a complete language compiler. ROSE is precisely such a compiler infrastructure developed within the Department of Energy (DOE) and targeted at the optimization of scientific applications and user-defined libraries within large-scale applications (typically applications of a million lines of code). ROSE is a robust, source-to-source analysis and optimization infrastructure currently addressing large, million-line DOE applications in C and C++ (handling the full C, C99, C++ languages and with current collaborations to support Fortran90). We propose to extend ROSE to address a number of security-specific requirements, and apply it to software authentication for nonproliferation and arms control projects.

  6. How can we Optimize Global Satellite Observations of Glacier Velocity and Elevation Changes?

    NASA Astrophysics Data System (ADS)

    Willis, M. J.; Pritchard, M. E.; Zheng, W.

    2015-12-01

    We have started a global compilation of glacier surface elevation change rates measured by altimeters and differencing of Digital Elevation Models and glacier velocities measured by Synthetic Aperture Radar (SAR) and optical feature tracking as well as from Interferometric SAR (InSAR). Our goal is to compile statistics on recent ice flow velocities and surface elevation change rates near the fronts of all available glaciers using literature and our own data sets of the Russian Arctic, Patagonia, Alaska, Greenland and Antarctica, the Himalayas, and other locations. We quantify the percentage of the glaciers on the planet that can be regarded as fast flowing glaciers, with surface velocities of more than 50 meters per year, while also recording glaciers that have elevation change rates of more than 2 meters per year. We examine whether glaciers have significant interannual variations in velocities, or have accelerated or stagnated where time series of ice motions are available. We use glacier boundaries and identifiers from the Randolph Glacier Inventory. Our survey highlights glaciers that are likely to react quickly to changes in their mass accumulation rates. The study also identifies geographical areas where our knowledge of glacier dynamics remains poor. Our survey helps guide how frequently observations must be made in order to provide quality satellite-derived velocity and ice elevation observations at a variety of glacier thermal regimes, speeds and widths. Our objectives are to determine to what extent the joint NASA and Indian Space Research Organization Synthetic Aperture Radar mission (NISAR) will be able to provide global precision coverage of ice speed changes and to determine how to optimize observations from the global constellation of satellite missions to record important changes to glacier elevations and velocities worldwide.

  7. Executor Framework for DIRAC

    NASA Astrophysics Data System (ADS)

    Casajus Ramo, A.; Graciani Diaz, R.

    2012-12-01

    The DIRAC framework for distributed computing has been designed as a group of collaborating components, agents, and servers, with a persistent database back-end. Components communicate with each other using DISET, an in-house protocol that provides Remote Procedure Call (RPC) and file transfer capabilities. This approach has given DIRAC a modular and stable design by enforcing stable interfaces across releases, but it made it complicated to scale further with commodity hardware. To scale DIRAC further, components needed to exchange more queries, and using RPC to do so requires a lot of processing power just to handle the secure handshake needed to establish each connection. DISET now provides a way to keep stable connections and to send and receive queries between components: only one handshake is required to send and receive any number of queries. Using this new communication mechanism, DIRAC now provides a new type of component called an Executor. Executors process any task (such as resolving the input data of a job) sent to them by a task dispatcher. The task dispatcher takes care of persisting the state of the tasks to the storage backend and of distributing them among all the Executors based on the requirements of each task. In case of high load, additional Executors can be started to process the extra load and stopped once the tasks have been processed. This new approach of handling tasks makes Executors easy to replace and replicate, thus enabling DIRAC to scale beyond the previous approach based on polling agents.
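
    An in-process analogue of the dispatcher/Executor pattern using a work queue; DIRAC's actual implementation persists task state and communicates over DISET, both of which this sketch omits:

      import queue
      import threading

      tasks = queue.Queue()
      done = []

      def executor(worker_id):
          # Process tasks handed out by the dispatcher until a sentinel arrives.
          while True:
              task = tasks.get()
              if task is None:
                  break
              done.append((worker_id, f"resolved input data for job {task}"))
              tasks.task_done()

      # Dispatcher: distribute tasks among the Executors; scale workers with load.
      workers = [threading.Thread(target=executor, args=(i,)) for i in range(4)]
      for w in workers:
          w.start()
      for job_id in range(20):
          tasks.put(job_id)
      for _ in workers:          # one sentinel per worker to shut down cleanly
          tasks.put(None)
      for w in workers:
          w.join()
      print(len(done), "tasks processed")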

  8. Ultrasound phase rotation beamforming on multi-core DSP.

    PubMed

    Ma, Jieming; Karadayi, Kerem; Ali, Murtaza; Kim, Yongmin

    2014-01-01

    Phase rotation beamforming (PRBF) is a commonly-used digital receive beamforming technique. However, due to its high computational requirement, it has traditionally been supported by hardwired architectures, e.g., application-specific integrated circuits (ASICs) or more recently field-programmable gate arrays (FPGAs). In this study, we investigated the feasibility of supporting software-based PRBF on a multi-core DSP. To alleviate the high computing requirement, the analog front-end (AFE) chips integrating quadrature demodulation in addition to analog-to-digital conversion were defined and used. With these new AFE chips, only delay alignment and phase rotation need to be performed by DSP, substantially reducing the computational load. We implemented the delay alignment and phase rotation modules on a Texas Instruments C6678 DSP with 8 cores. We found it takes 200 μs to beamform 2048 samples from 64 channels using 2 cores. With 4 cores, 20 million samples can be beamformed in one second. Therefore, ADC frequencies up to 40 MHz with 2:1 decimation in AFE chips or up to 20 MHz with no decimation can be supported as long as the ADC-to-DSP I/O requirement can be met. The remaining 4 cores can work on back-end processing tasks and applications, e.g., color Doppler or ultrasound elastography. One DSP being able to handle both beamforming and back-end processing could lead to low-power and low-cost ultrasound machines, benefiting ultrasound imaging in general, particularly portable ultrasound machines.
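
    The per-channel work left to the DSP, delay alignment followed by phase rotation and summation, can be written compactly; a NumPy sketch with invented array shapes (64 channels, 2048 samples of complex baseband data from the AFE), not the authors' DSP implementation:

      import numpy as np

      def phase_rotation_beamform(iq, coarse_delay, phi):
          # iq           : (channels, samples) complex baseband data
          # coarse_delay : per-channel delay in whole samples
          # phi          : per-channel residual phase in radians
          channels, samples = iq.shape
          out = np.zeros(samples, dtype=complex)
          for c in range(channels):
              aligned = np.roll(iq[c], coarse_delay[c])   # coarse (sample) delay
              out += aligned * np.exp(-1j * phi[c])       # fine phase rotation
          return out

      iq = np.random.randn(64, 2048) + 1j * np.random.randn(64, 2048)
      out = phase_rotation_beamform(iq, np.zeros(64, dtype=int), np.zeros(64))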

  9. Combining Fog Computing with Sensor Mote Machine Learning for Industrial IoT.

    PubMed

    Lavassani, Mehrzad; Forsström, Stefan; Jennehag, Ulf; Zhang, Tingting

    2018-05-12

    Digitalization is a global trend becoming ever more important to our connected and sustainable society. This trend also affects industry where the Industrial Internet of Things is an important part, and there is a need to conserve spectrum as well as energy when communicating data to a fog or cloud back-end system. In this paper we investigate the benefits of fog computing by proposing a novel distributed learning model on the sensor device and simulating the data stream in the fog, instead of transmitting all raw sensor values to the cloud back-end. To save energy and to communicate as few packets as possible, the updated parameters of the learned model at the sensor device are communicated in longer time intervals to a fog computing system. The proposed framework is implemented and tested in a real world testbed in order to make quantitative measurements and evaluate the system. Our results show that the proposed model can achieve a 98% decrease in the number of packets sent over the wireless link, and the fog node can still simulate the data stream with an acceptable accuracy of 97%. We also observe an end-to-end delay of 180 ms in our proposed three-layer framework. Hence, the framework shows that a combination of fog and cloud computing with a distributed data modeling at the sensor device for wireless sensor networks can be beneficial for Industrial Internet of Things applications.
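
    A toy sketch of the reporting scheme: the mote maintains a simple local model and transmits only its parameters at intervals, rather than every raw sample. The model form (exponentially smoothed mean and variance), smoothing factor, and reporting interval are assumptions for illustration, not the paper's exact algorithm:

      import random

      ALPHA = 0.05        # smoothing factor of the on-mote model (assumed)
      REPORT_EVERY = 60   # send parameters once per 60 samples (assumed)

      mean, var, packets = 20.0, 1.0, []
      for tick in range(3600):                     # one simulated hour at 1 Hz
          x = random.gauss(20.0, 2.0)              # stand-in sensor reading
          mean = (1 - ALPHA) * mean + ALPHA * x    # update the local model
          var = (1 - ALPHA) * var + ALPHA * (x - mean) ** 2
          if tick % REPORT_EVERY == 0:
              packets.append({"mean": mean, "var": var})   # one small packet
      # The fog node regenerates an approximate stream from these parameters.
      print(len(packets), "parameter packets instead of 3600 raw samples")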

  10. GeantV: From CPU to accelerators

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-01-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While modern CPU architectures are being targeted first, resources such as GPGPU, Intel® Xeon Phi, Atom or ARM cannot be ignored anymore by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been engineered mainly for CPUs with vector units, but we have foreseen from the early stages a bridge to arbitrary accelerators. A software layer consisting of architecture/technology-specific backends currently supports this concept. This approach makes it possible to abstract out basic types such as scalar/vector, and also to formalize generic computation kernels that transparently use library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus it insulates the core application and algorithms from the technology layer, keeping the application maintainable in the long term and resilient to changes on the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel® Xeon Phi (KNC) architecture. We present a scalability and vectorization study conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. Lastly, we describe current work and preliminary results on using the GeantV transport kernel on GPUs.
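
    The backend idea can be illustrated with a toy example: one generic kernel written against an abstract backend type, instantiated once with scalar and once with vectorized basic types. GeantV's real layer is built from C++ templates over Vc/CUDA/intrinsics; the Python/NumPy sketch below only mirrors the structure.

    ```python
    # Illustrative backend abstraction: the same kernel source runs on a
    # scalar backend (one track) or a vector backend (a basket of tracks).
    import math
    import numpy as np

    class ScalarBackend:
        Real = float
        sqrt = staticmethod(math.sqrt)

    class VectorBackend:
        Real = np.ndarray
        sqrt = staticmethod(np.sqrt)

    def distance_kernel(backend, x, y, z):
        """Generic kernel: identical source for scalar and vectorized execution."""
        return backend.sqrt(x * x + y * y + z * z)

    print(distance_kernel(ScalarBackend, 3.0, 4.0, 0.0))      # one track
    xs, ys, zs = (np.random.rand(1024) for _ in range(3))
    print(distance_kernel(VectorBackend, xs, ys, zs)[:4])     # a basket of tracks
    ```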

  11. Combining Fog Computing with Sensor Mote Machine Learning for Industrial IoT

    PubMed Central

    Lavassani, Mehrzad; Forsström, Stefan; Jennehag, Ulf; Zhang, Tingting

    2018-01-01

    Digitalization is a global trend becoming ever more important to our connected and sustainable society. This trend also affects industry, where the Industrial Internet of Things is an important part, and there is a need to conserve spectrum as well as energy when communicating data to a fog or cloud back-end system. In this paper we investigate the benefits of fog computing by proposing a novel distributed learning model on the sensor device and simulating the data stream in the fog, instead of transmitting all raw sensor values to the cloud back-end. To save energy and communicate as few packets as possible, the updated parameters of the model learned on the sensor device are communicated to a fog computing system at longer time intervals. The proposed framework is implemented and tested in a real-world testbed in order to make quantitative measurements and evaluate the system. Our results show that the proposed model can achieve a 98% decrease in the number of packets sent over the wireless link, while the fog node can still simulate the data stream with an acceptable accuracy of 97%. We also observe an end-to-end delay of 180 ms in our proposed three-layer framework. Hence, the framework shows that combining fog and cloud computing with distributed data modeling at the sensor device can be beneficial for Industrial Internet of Things applications in wireless sensor networks. PMID:29757227

  12. Integration of XRootD into the cloud infrastructure for ALICE data analysis

    NASA Astrophysics Data System (ADS)

    Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey

    2015-12-01

    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, an ALICE Analysis Facility (AAF), and possible usage for local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). A key feature of the solution is that Ceph is used both as the backend for the OpenStack Cinder Block Storage service and, at the same time, as the storage backend for XRootD, with data redundancy and availability preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, which is based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured, converted into Puppet manifests describing node configurations, and integrated into Packstack. This solution can be easily deployed, maintained and used even by small groups with limited computing resources and by small organizations, which usually lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.
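
    As a quick connectivity check for such a setup, the librados Python bindings can write and read an object in a Ceph pool shared by the storage services. The pool name "xrootd-se", the object name and the conffile path below are illustrative assumptions, not values from the deployment described.

    ```python
    # Minimal sketch: verify read/write access to a Ceph pool via librados.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("xrootd-se")             # hypothetical pool name
        ioctx.write_full("alice-test-object", b"payload")   # Ceph handles replication
        assert ioctx.read("alice-test-object") == b"payload"
        ioctx.close()
    finally:
        cluster.shutdown()
    ```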

  13. Exploring NASA OMI Level 2 Data With Visualization

    NASA Technical Reports Server (NTRS)

    Wei, Jennifer; Yang, Wenli; Johnson, James; Zhao, Peisheng; Gerasimov, Irina; Pham, Long; Vicente, Gilberto

    2014-01-01

    Satellite data products are important for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits are best achieved when the satellite data are well utilized and interpreted, for example as model inputs or for monitoring extreme events (volcanic eruptions, dust storms, etc.). Unfortunately, this is not always the case, despite the abundance and relative maturity of numerous satellite data products provided by NASA and other organizations. Such obstacles may be avoided by allowing users to visualize satellite data as "images" with accurate pixel-level (Level-2) information, including pixel coverage area delineation and science-team-recommended quality screening for individual geophysical parameters. We present a prototype service from the Goddard Earth Sciences Data and Information Services Center (GES DISC) supporting Aura OMI Level-2 data with GIS-like capabilities. Functionality includes selecting data sources (e.g., multiple parameters under the same scene, like NO2 and SO2, or the same parameter with different aggregation methods, like NO2 in the OMNO2G and OMNO2D products), user-defined area-of-interest and temporal extents, zooming, panning, overlaying, sliding, and data subsetting, reformatting, and reprojection. The system will allow any user-defined portal interface (front-end) to connect to our back-end server with OGC standard-compliant Web Map Service (WMS) and Web Coverage Service (WCS) calls. This back-end service should greatly enhance the system's expandability to integrate additional outside data/map sources.
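
    For illustration, a front-end portal would reach such a back-end with a standard OGC WMS 1.3.0 GetMap request, as in the Python sketch below. The endpoint URL and layer name are hypothetical; the query parameters follow the WMS 1.3.0 standard.

    ```python
    # Sketch of a WMS GetMap call a portal front-end could issue.
    from urllib.parse import urlencode

    endpoint = "https://example.gsfc.nasa.gov/wms"   # hypothetical endpoint
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": "OMI_L2_NO2",                      # hypothetical layer name
        "STYLES": "",
        "CRS": "EPSG:4326",
        "BBOX": "-30,-120,60,-60",                   # area of interest (lat/lon axis order in 1.3.0)
        "WIDTH": 1024,
        "HEIGHT": 768,
        "FORMAT": "image/png",
        "TIME": "2014-01-01",                        # temporal extent
    }
    print(endpoint + "?" + urlencode(params))
    ```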

  14. Exploring NASA OMI Level 2 Data With Visualization

    NASA Technical Reports Server (NTRS)

    Wei, Jennifer C.; Yang, Wenli; Johnson, James; Zhao, Peisheng; Gerasimov, Irina; Pham, Long; Vicente, Gilberto

    2014-01-01

    Satellite data products are important for a wide variety of applications that can bring far-reaching benefits to the science community and the broader society. These benefits are best achieved when the satellite data are well utilized and interpreted, for example as model inputs or for monitoring extreme events (volcanic eruptions, dust storms, etc.). Unfortunately, this is not always the case, despite the abundance and relative maturity of numerous satellite data products provided by NASA and other organizations. Such obstacles may be avoided by allowing users to visualize satellite data as images with accurate pixel-level (Level-2) information, including pixel coverage area delineation and science-team-recommended quality screening for individual geophysical parameters. We present a prototype service from the Goddard Earth Sciences Data and Information Services Center (GES DISC) supporting Aura OMI Level-2 data with GIS-like capabilities. Functionality includes selecting data sources (e.g., multiple parameters under the same scene, like NO2 and SO2, or the same parameter with different aggregation methods, like NO2 in the OMNO2G and OMNO2D products), user-defined area-of-interest and temporal extents, zooming, panning, overlaying, sliding, and data subsetting, reformatting, and reprojection. The system will allow any user-defined portal interface (front-end) to connect to our back-end server with OGC standard-compliant Web Map Service (WMS) and Web Coverage Service (WCS) calls. This back-end service should greatly enhance the system's expandability to integrate additional outside data/map sources.

  15. Security Enhancement Mechanism Based on Contextual Authentication and Role Analysis for 2G-RFID Systems

    PubMed Central

    Tang, Wan; Chen, Min; Ni, Jin; Yang, Ximin

    2011-01-01

    The traditional Radio Frequency Identification (RFID) system, in which the information maintained in tags is passive and static, has no intelligent decision-making ability to adapt to application and environment dynamics. The Second-Generation RFID (2G-RFID) system, referred to as 2G-RFID-sys, is an evolution of the traditional RFID system intended to ensure better quality of service in future networks. Due to the openness of the active mobile codes in the 2G-RFID system, conveying intelligence in this way raises a critical issue: how can we make sure that the backend system will interpret and execute mobile codes correctly, without misuse, so as to avoid malicious attacks? To address this issue, this paper expands the concept of Role-Based Access Control (RBAC) by introducing context-aware computing, and then designs a secure middleware for backend systems, named the Two-Level Security Enhancement Mechanism, or 2L-SEM, to ensure the usability and validity of mobile code through contextual authentication and role analysis. According to the given contextual restrictions, 2L-SEM can filter out the illegal and invalid mobile codes contained in tags. Finally, a reference architecture and a typical application are given to illustrate the implementation of 2L-SEM in a 2G-RFID system, along with simulation results that evaluate how the proposed mechanism guarantees secure execution of mobile codes in the system. PMID:22163983
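
    The flavour of such a two-level check can be sketched in a few lines of Python: a mobile code read from a tag must first pass contextual authentication, then role analysis. The rule tables and names below are invented for the illustration and are not taken from the 2L-SEM design.

    ```python
    # Illustrative two-level filter for mobile codes from RFID tags.
    ROLE_PERMISSIONS = {
        "warehouse-reader": {"read_inventory"},
        "admin-reader": {"read_inventory", "update_firmware"},
    }
    ALLOWED_CONTEXTS = {
        "read_inventory": {"warehouse", "loading-dock"},
        "update_firmware": {"maintenance-bay"},
    }

    def authorize(mobile_code, role, context):
        op = mobile_code["operation"]
        # Level 1: contextual authentication -- is this operation valid in this context?
        if context not in ALLOWED_CONTEXTS.get(op, set()):
            return False
        # Level 2: role analysis -- is the reader's role permitted to run it?
        return op in ROLE_PERMISSIONS.get(role, set())

    code = {"operation": "update_firmware", "payload": "..."}
    print(authorize(code, "warehouse-reader", "warehouse"))      # False: filtered out
    print(authorize(code, "admin-reader", "maintenance-bay"))    # True: executed
    ```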

  16. Security enhancement mechanism based on contextual authentication and role analysis for 2G-RFID systems.

    PubMed

    Tang, Wan; Chen, Min; Ni, Jin; Yang, Ximin

    2011-01-01

    The traditional Radio Frequency Identification (RFID) system, in which the information maintained in tags is passive and static, has no intelligent decision-making ability to adapt to application and environment dynamics. The Second-Generation RFID (2G-RFID) system, referred to as 2G-RFID-sys, is an evolution of the traditional RFID system intended to ensure better quality of service in future networks. Due to the openness of the active mobile codes in the 2G-RFID system, conveying intelligence in this way raises a critical issue: how can we make sure that the backend system will interpret and execute mobile codes correctly, without misuse, so as to avoid malicious attacks? To address this issue, this paper expands the concept of Role-Based Access Control (RBAC) by introducing context-aware computing, and then designs a secure middleware for backend systems, named the Two-Level Security Enhancement Mechanism, or 2L-SEM, to ensure the usability and validity of mobile code through contextual authentication and role analysis. According to the given contextual restrictions, 2L-SEM can filter out the illegal and invalid mobile codes contained in tags. Finally, a reference architecture and a typical application are given to illustrate the implementation of 2L-SEM in a 2G-RFID system, along with simulation results that evaluate how the proposed mechanism guarantees secure execution of mobile codes in the system.

  17. NoSQL technologies for the CMS Conditions Database

    NASA Astrophysics Data System (ADS)

    Sipos, Roland

    2015-12-01

    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue; the need for consistent and highly available access to the Conditions is therefore a strong reason to revisit different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. The definition of the database infrastructure is based on the need to store the conditions as BLOBs. Because of this, each condition can reach a size that may require special treatment (splitting) in these NoSQL databases. As large binary objects may be problematic in several database systems, and also to establish an accurate baseline, a testing framework extension was implemented to measure how these databases handle arbitrary binary data. Based on the evaluation, prototypes using a document store, a column-oriented store, and a plain key-value store were deployed. An adaptation layer for accessing the backends was developed in the CMS Offline software to provide transparent support for these NoSQL databases in the CMS context. Additional data modelling approaches, considerations in the software layer, and the deployment and automation of the databases are also covered in the research. In this paper we present the results of the evaluation as well as a performance comparison of the prototypes studied.
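
    The BLOB-splitting treatment mentioned above amounts to cutting a large value into chunks under the store's size limit and reassembling them on read. The following Python sketch uses an in-memory dict as a stand-in for the NoSQL store; the chunk size, key scheme and payload name are illustrative.

    ```python
    # Minimal sketch of BLOB splitting for a key-value backend.
    CHUNK_SIZE = 1 << 20          # e.g. stay under a 1 MiB value-size limit

    store = {}                    # stand-in for the key-value database

    def put_blob(key, blob: bytes):
        chunks = [blob[i:i + CHUNK_SIZE] for i in range(0, len(blob), CHUNK_SIZE)]
        store[key] = len(chunks)                 # index record: number of chunks
        for n, chunk in enumerate(chunks):
            store[f"{key}#{n}"] = chunk          # one key per chunk

    def get_blob(key) -> bytes:
        n_chunks = store[key]
        return b"".join(store[f"{key}#{n}"] for n in range(n_chunks))

    payload = b"\x00" * (5 * CHUNK_SIZE + 123)   # a >5 MiB condition BLOB
    put_blob("EcalPedestals@run273158", payload) # illustrative key name
    assert get_blob("EcalPedestals@run273158") == payload
    ```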

  18. Candy from the 47 Tuc shop

    NASA Astrophysics Data System (ADS)

    Barr, Ewan; Possenti, Andrea; Johnston, Simon; Kramer, Michael; Burgay, Marta; Freire, Paulo; Eatough, Ralph; van Straten, Willem; Keane, Evan; Kerr, Matthew; Champion, David; Jameson, Andrew; Ng, Cherry; Tiburzi, Caterina; Flynn, Chris; Caleb, Manisha; Morello, Vincent

    2014-04-01

    Studies of the 23 millisecond pulsars (MSPs) in the globular cluster (GC) 47 Tucanae (47 Tuc) have provided a wealth of interesting science. Timing of these MSPs (all with periods between 2 and 8 ms) has enabled us to probe the environment and dynamics of the cluster in impressive detail and has allowed us to place constraints on the masses of several of the pulsars. However, previous studies of the cluster were limited by the hardware used: the AFB provided 1-bit digitisation and relatively coarse 88 μs sampling. Through the use of the CASPSR backend, we will observe 47 Tuc and its 23 MSPs with unprecedented time resolution. For timing we will achieve almost two orders of magnitude better time resolution, allowing us to resolve fine features in the profiles of several cluster pulsars and greatly improving their timing accuracy. For searching, the CASPSR backend will provide a coherently dedispersed filterbank file at the cluster's dispersion measure. We will search these data with state-of-the-art GPU-accelerated search codes to hunt for compact binary systems. In tandem with timing and searching, we will perform real-time transient searches to hunt for fast radio bursts (FRBs). Through shadowing with the Molonglo telescope, we will be able to provide the best localisation, spectral index measurement and repetition limits for any detected FRB. This will provide much-needed information for multi-wavelength follow-up and will give new insight into the origins of these mysterious events.
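
    To give a sense of scale for the dedispersion involved, the standard cold-plasma dispersion delay can be evaluated in a couple of lines of Python. The band edges below are illustrative; 47 Tuc's dispersion measure of roughly 24.4 pc cm^-3 sets the delay that a coherent-dedispersion backend such as CASPSR removes.

    ```python
    # Dispersion delay across an observing band for a given DM,
    # using the standard constant 4.149 ms GHz^2 pc^-1 cm^3.
    def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
        """Delay (ms) between the low and high band edges for dispersion measure dm."""
        return 4.149 * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

    dm = 24.4                                      # approx. DM of 47 Tuc, pc cm^-3
    print(dispersion_delay_ms(dm, 1.182, 1.582))   # ~32 ms across an illustrative 400 MHz L-band
    ```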

  19. Candy from the 47 Tuc shop

    NASA Astrophysics Data System (ADS)

    Barr, Ewan; Possenti, Andrea; Johnston, Simon; Kramer, Michael; Burgay, Marta; Freire, Paulo; Eatough, Ralph; van Straten, Willem; Keane, Evan; Kerr, Matthew; Champion, David; Jameson, Andrew; Ng, Cherry; Tiburzi, Caterina; Flynn, Chris; Caleb, Manisha; Morello, Vincent

    2014-10-01

    Studies of the 23 millisecond pulsars (MSPs) in the globular cluster (GC) 47 Tucanae (47 Tuc) have provided a wealth of interesting science. Timing of these MSPs (all with periods between 2 and 8 ms) has enabled us to probe the environment and dynamics of the cluster in impressive detail and has allowed us to place constraints on the masses of several of the pulsars. However, previous studies of the cluster were limited by the hardware used: the AFB provided 1-bit digitisation and relatively coarse 88 μs sampling. Through the use of the CASPSR backend, we will observe 47 Tuc and its 23 MSPs with unprecedented time resolution. For timing we will achieve almost two orders of magnitude better time resolution, allowing us to resolve fine features in the profiles of several cluster pulsars and greatly improving their timing accuracy. For searching, the CASPSR backend will provide a coherently dedispersed filterbank file at the cluster's dispersion measure. We will search these data with state-of-the-art GPU-accelerated search codes to hunt for compact binary systems. In tandem with timing and searching, we will perform real-time transient searches to hunt for fast radio bursts (FRBs). Through shadowing with the Molonglo telescope, we will be able to provide the best localisation, spectral index measurement and repetition limits for any detected FRB. This will provide much-needed information for multi-wavelength follow-up and will give new insight into the origins of these mysterious events.

  20. Semantic Repositories for eGovernment Initiatives: Integrating Knowledge and Services

    NASA Astrophysics Data System (ADS)

    Palmonari, Matteo; Viscusi, Gianluigi

    In recent years, public-sector investments in eGovernment initiatives have depended on making existing governmental ICT systems and infrastructures more reliable. Furthermore, we are witnessing a change in the focus of public-sector management, from the disaggregation, competition and performance measurement typical of the New Public Management (NPM) to new models of governance aiming at the reintegration of services under a new perspective on bureaucracy, namely a holistic approach to policy making that exploits the extensive digitalization of administrative operations. In this scenario, the major challenges are to support effective access to information both at the front-end level, by means of highly modular and customizable content provision, and at the back-end level, by means of information integration initiatives. Repositories of information about data and services that exploit semantic models and technologies can support these goals by bridging the gap between the data-level representations and the human-level knowledge involved in accessing information and searching for services. Moreover, semantic repository technologies can bring a new level of automation to the tasks involved in interoperability programs, related both to data integration techniques and to service-oriented computing approaches. In this chapter, we discuss the above topics by referring to techniques and experiences in which repositories based on conceptual models and ontologies are used at different levels in eGovernment initiatives: at the back-end level, to produce a comprehensive view of the information managed in the public administrations' (PA) information systems, and at the front-end level, to support effective service delivery.
