Sample records for installation process parallel

  1. Parallel Processing with Digital Signal Processing Hardware and Software

    NASA Technical Reports Server (NTRS)

    Swenson, Cory V.

    1995-01-01

    The assembly and testing of a parallel processing system are described; the system will allow a user to move a Digital Signal Processing (DSP) application from the design stage to the execution/analysis stage through the use of several software tools and hardware devices. The system will be used to demonstrate the feasibility of the Algorithm To Architecture Mapping Model (ATAMM) dataflow paradigm for static multiprocessor solutions of DSP applications. The individual components comprising the system are described, followed by the installation procedure, research topics, and initial program development.

  2. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN, on shared-memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray Y-MP, Convex 220, Flex/32, Encore, Sequent, and Alliant computers on which it is installed.

  3. Numerical modelling of series-parallel cooling systems in power plant

    NASA Astrophysics Data System (ADS)

    Regucki, Paweł; Lewkowicz, Marek; Kucięba, Małgorzata

    2017-11-01

    The paper presents a mathematical model allowing one to study series-parallel hydraulic systems such as the cooling system of a power boiler's auxiliary devices or a closed cooling system including condensers and cooling towers. The analytical approach is based on a set of non-linear algebraic equations solved using numerical techniques. As a result of the iterative process, a set of volumetric flow rates of water through all the branches of the investigated hydraulic system is obtained. The calculations indicate the influence of changes in the pipeline's geometrical parameters on the total cooling water flow rate in the analysed installation. Such an approach makes it possible to analyse different variants of the modernization of the studied systems, as well as to indicate their critical elements. Based on these results, an investor can choose the variant of reconstruction of the installation that is optimal from the economic point of view. As examples of such calculations, two hydraulic installations are described. One is a boiler auxiliary cooling installation including two screw ash coolers. The other is a closed cooling system consisting of cooling towers and condensers.
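The iterative approach described above can be sketched in miniature. The following Python snippet is an illustrative toy of our own, not the paper's model: two parallel branches with resistance coefficients k1 and k2 share a fixed total flow Q, both branches must see the same pressure drop dp = k·q², and Newton's method is applied to the resulting non-linear residual.

```python
# Toy series-parallel flow balance (illustrative; coefficients are made up):
# equal pressure drop across two parallel branches gives the residual
#   f(q1) = k1*q1**2 - k2*(Q - q1)**2 = 0,
# which is solved iteratively with Newton's method.

def branch_flows(k1, k2, Q, tol=1e-10, max_iter=50):
    q1 = Q / 2.0                          # initial guess: equal split
    for _ in range(max_iter):
        f = k1 * q1 ** 2 - k2 * (Q - q1) ** 2
        df = 2.0 * k1 * q1 + 2.0 * k2 * (Q - q1)
        step = f / df
        q1 -= step
        if abs(step) < tol:
            break
    return q1, Q - q1

q1, q2 = branch_flows(k1=2.0, k2=8.0, Q=3.0)
# With k2/k1 = 4, equal pressure drops require q1 = 2*q2, so q1 = 2, q2 = 1.
```

A real cooling network couples many such residuals into a larger system of equations, but the iteration pattern is the same.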

  4. Six Years of Parallel Computing at NAS (1987 - 1993): What Have we Learned?

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In the fall of 1987 the age of parallelism at NAS began with the installation of a 32K-processor CM-2 from Thinking Machines. In 1987 this was described as an "experiment" in parallel processing. In the six years since, NAS has acquired a series of parallel machines and conducted an active research and development effort focused on the use of highly parallel machines for applications in the computational aerosciences. In this time period parallel processing for scientific applications evolved from a fringe research topic into one of the main activities at NAS. In this presentation I will review the history of parallel computing at NAS in the context of the major progress that has been made in the field in general. I will attempt to summarize the lessons we have learned so far and the contributions NAS has made to the state of the art. Based on these insights I will comment on the current state of parallel computing (including the HPCC effort) and try to predict some trends for the next six years.

  5. STS-26 Discovery, OV-103, SSME (2019) installed in position number one at KSC

    NASA Image and Video Library

    1988-01-10

    S88-29076 (10 Jan 1988) --- KSC employees work together to carefully guide a 7,000-pound main engine into the number one position in Discovery's aft compartment. Because of the engine's weight and size, special handling equipment is needed to perform the installation. Discovery is currently being prepared for the upcoming STS-26 mission in bay 1 of the Orbiter Processing Facility. This engine, 2019, arrived at KSC on Jan. 6 and was installed Jan. 10. The other two engines are scheduled to be installed later this month. The shuttle's three main liquid-fueled engines provide the main propulsion for the orbiter vehicle. The cluster of three engines operates in parallel with the solid rocket boosters during the initial ascent.

  6. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing SUNSPARCs' network with PVM (Parallel Virtual Machine), a software system for linking clusters of machines. Second, a set of three basic applications was selected: a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the factor restricting performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps) which will allow us to extend our study to newer applications, performance metrics, and configurations.
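The conclusion above, that communication latency restricts speedup, can be illustrated with a toy timing model (our own illustration, not the project's measurements): parallel elapsed time is modeled as the serial compute time divided among p workers plus a per-message latency cost, and speedup is the ratio of serial to parallel time.

```python
# Toy speedup model (illustrative numbers, not measured PVM data):
#   t_parallel = t_serial / p + n_messages * latency
#   speedup    = t_serial / t_parallel

def speedup(t_serial, p, n_messages, latency):
    t_parallel = t_serial / p + n_messages * latency
    return t_serial / t_parallel

# Coarse-grain decomposition: few messages, near-linear speedup on 8 nodes.
coarse = speedup(t_serial=100.0, p=8, n_messages=10, latency=0.05)
# Fine-grain decomposition: frequent messages erase the gain entirely.
fine = speedup(t_serial=100.0, p=8, n_messages=2000, latency=0.05)
# coarse ≈ 7.7, fine ≈ 0.89 — the fine-grain version is slower than serial.
```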

  7. Feasibility study for the implementation of NASTRAN on the ILLIAC 4 parallel processor

    NASA Technical Reports Server (NTRS)

    Field, E. I.

    1975-01-01

    The ILLIAC IV, a fourth-generation multiprocessor using parallel processing hardware concepts, is operational at Moffett Field, California. Its capability to excel at matrix manipulation makes the ILLIAC well suited for performing structural analyses using the finite element displacement method. The feasibility of modifying the NASTRAN (NASA structural analysis) computer program to make effective use of the ILLIAC IV was investigated. The characteristics are summarized of the ILLIAC and of the ARPANET, a telecommunications network which spans the continent and makes the ILLIAC accessible to nearly all major industrial centers in the United States. Two distinct approaches are studied: retaining NASTRAN as it now operates on many of the host computers of the ARPANET to process the input and output while using the ILLIAC only for the major computational tasks, and installing NASTRAN to operate entirely in the ILLIAC environment. Though both alternatives offer similar and significant increases in computational speed over modern third-generation processors, the full installation of NASTRAN on the ILLIAC is recommended. Specifications are presented for performing that task, with corresponding manpower estimates and schedules.

  8. Simultaneous Range-Velocity Processing and SNR Analysis of AFIT’s Random Noise Radar

    DTIC Science & Technology

    2012-03-22

    reducing the overall processing time. Two computers, equipped with NVIDIA® GPUs, were used to process the collected data. The specifications for each...gather the results back to the CPU. Another company, AccelerEyes®, has developed a product called Jacket® that claims to be better than the parallel... Specifications of the two processing computers: number of processing cores, 4 and 8; processor speed, 3.33 GHz and 3.07 GHz; installed memory, 48 GB each; GPU make, NVIDIA; GPU models, Tesla 1060 and Tesla C2070...

  9. Feasibility study: Liquid hydrogen plant, 30 tons per day

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The design considerations of the plant are discussed in detail along with management planning, objective schedules, and cost estimates. The processing scheme is aimed at ultimate use of coal as the basic raw material. For back-up, and to provide assurance of a dependable and steady supply of hydrogen, a parallel and redundant facility for gasifying heavy residual oil will be installed. Both the coal and residual oil gasifiers will use the partial oxidation process.

  10. Applications of High Speed Networks

    DTIC Science & Technology

    1991-09-01

    plished in order to achieve a degree of parallelism by constructing a distributed switch. The type of switch, self-routing, processes the packet...control more than a dozen missiles in flight, and the four Mark 99 target illuminators direct missiles in the terminal phase. The self-contained Phalanx...military installations, weapon system response and expected missile performance against a threat. Projects are already underway transposing of

  11. TU-AB-BRC-12: Optimized Parallel MonteCarlo Dose Calculations for Secondary MU Checks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, S; Nazareth, D; Bellor, M

    Purpose: Secondary MU checks are an important tool used during a physics review of a treatment plan. Commercial software packages offer varying degrees of theoretical dose calculation accuracy, depending on the modality involved. Dose calculations of VMAT plans are especially prone to error due to the large approximations involved. Monte Carlo (MC) methods are not commonly used due to their long run times. We investigated two methods to increase the computational efficiency of MC dose simulations with the BEAMnrc code. Distributed computing resources, along with optimized code compilation, will allow for accurate and efficient VMAT dose calculations. Methods: The BEAMnrc package was installed on a high-performance computing cluster accessible to our clinic. MATLAB and Python scripts were developed to convert a clinical VMAT DICOM plan into BEAMnrc input files. The BEAMnrc installation was optimized by running the VMAT simulations through profiling tools, which indicated the behavior of the constituent routines in the code, e.g., the bremsstrahlung splitting routine and the specified random number generator. This information aided in determining the most efficient parallel compilation configuration for the specific CPUs available on our cluster, resulting in the fastest VMAT simulation times. Our method was evaluated with calculations involving 10^8-10^9 particle histories, which are sufficient to verify patient dose using VMAT. Results: Parallelization allowed the calculation of patient dose on the order of 10-15 hours with 100 parallel jobs. Due to the compiler optimization process, further speed increases of 23% were achieved when compared with the open-source compiler BEAMnrc packages. Conclusion: Analysis of the BEAMnrc code allowed us to optimize the compiler configuration for VMAT dose calculations. In future work, the optimized MC code, in conjunction with the parallel processing capabilities of BEAMnrc, will be applied to provide accurate and efficient secondary MU checks.
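The strategy above, splitting an MC run across 100 independent jobs, can be sketched as a job-partitioning step. The function below is our own illustration, not part of BEAMnrc: it divides a total history count among n_jobs without losing histories to rounding, and gives each job a distinct random seed so the jobs' random streams differ.

```python
# Illustrative job splitter for an embarrassingly parallel Monte Carlo run
# (our own sketch; BEAMnrc's actual job-submission mechanism differs).

def split_histories(total_histories, n_jobs, base_seed=12345):
    per_job, remainder = divmod(total_histories, n_jobs)
    jobs = []
    for i in range(n_jobs):
        jobs.append({
            "job_id": i,
            "histories": per_job + (1 if i < remainder else 0),
            "seed": base_seed + i,        # distinct seed per job
        })
    return jobs

jobs = split_histories(10 ** 8, 100)
# 100 jobs of 10**6 histories each; the remainder logic handles uneven splits.
```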

  12. Evaluation of Orientation Performance of Attention Patterns for Blind Person.

    PubMed

    Fujisawa, Shoichiro; Ishibashi, Tatsuki; Sato, Katsuya; Ito, Sin-Ichi; Sueda, Osamu

    2017-01-01

    Tactile walking surface indicators (TWSIs) are installed on footpaths to support independent travel for the blind. There are two types of TWSIs: attention patterns and guiding patterns. The attention pattern is usually installed at crosswalk entrances. The direction of the crossing can be acquired from the rows of projections of the attention pattern through the soles of the shoes. In addition, the truncated domes or cones of the attention pattern are arranged in a square grid, parallel or diagonal at 45 degrees to the principal direction of travel. However, the international standard (ISO) allows a wide range of sizes. In this research, the direction-indicating performance was compared at the same intervals for the five diameters specified by the international standard. As a result of the experiment, the diagonal array does not indicate the direction of travel, but the projection rows do indicate the direction of travel in the parallel array. When the attention pattern is installed at a crosswalk entrance, a parallel array should be installed in the direction of the crossing.

  13. Comparison of Origin 2000 and Origin 3000 Using NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Turney, Raymond D.

    2001-01-01

    This report describes results of benchmark tests on the Origin 3000 system currently being installed at the NASA Ames National Advanced Supercomputing facility. This machine will ultimately contain 1024 R14K processors. The first part of the system, installed in November 2000 and named mendel, is an Origin 3000 with 128 R12K processors. For comparison purposes, the tests were also run on lomax, an Origin 2000 with R12K processors. The BT, LU, and SP application benchmarks in the NAS Parallel Benchmark Suite and the kernel benchmark FT were chosen to determine system performance and measure the impact of changes on the machine as it evolves. Having been written to measure performance on Computational Fluid Dynamics applications, these benchmarks are assumed appropriate to represent the NAS workload. Since the NAS runs both message passing (MPI) and shared-memory, compiler directive type codes, both MPI and OpenMP versions of the benchmarks were used. The MPI versions used were the latest official release of the NAS Parallel Benchmarks, version 2.3. The OpenMP versions used were PBN3b2, a beta version that is in the process of being released. NPB 2.3 and PBN 3b2 are technically different benchmarks, and NPB results are not directly comparable to PBN results.

  14. Scalable computing for evolutionary genomics.

    PubMed

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    Genomic data analysis in evolutionary biology is becoming so computationally intensive that analysis of multiple hypotheses and scenarios takes too long on a single desktop computer. In this chapter, we discuss techniques for scaling computations through parallelization of calculations, after giving a quick overview of advanced programming techniques. Unfortunately, parallel programming is difficult and requires special software design. The alternative, especially attractive for legacy software, is to introduce poor man's parallelization by running whole programs in parallel as separate processes, using job schedulers. Such pipelines are often deployed on bioinformatics computer clusters. Recent advances in PC virtualization have made it possible to run a full computer operating system, with all of its installed software, on top of another operating system, inside a "box," or virtual machine (VM). Such a VM can flexibly be deployed on multiple computers, in a local network, e.g., on existing desktop PCs, and even in the Cloud, to create a "virtual" computer cluster. Many bioinformatics applications in evolutionary biology can be run in parallel, running processes in one or more VMs. Here, we show how a ready-made bioinformatics VM image, named BioNode, effectively creates a computing cluster and pipeline in a few steps. This allows researchers to scale up computations from their desktop, using available hardware, anytime it is required. BioNode is based on Debian Linux and can run on networked PCs and in the Cloud. Over 200 bioinformatics and statistical software packages of interest to evolutionary biology are included, such as PAML, Muscle, MAFFT, MrBayes, and BLAST. Most of these software packages are maintained through the Debian Med project. In addition, BioNode contains convenient configuration scripts for parallelizing bioinformatics software.
Where Debian Med encourages packaging free and open source bioinformatics software through one central project, BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.
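The chapter's "poor man's parallelization", running whole programs in parallel as separate processes, can be sketched with Python's standard library alone. In this sketch the command is a placeholder (a python one-liner standing in for a legacy analysis tool); on a cluster, a job scheduler would play the dispatcher role across machines.

```python
# "Poor man's parallelization": launch independent copies of an existing
# serial program as separate OS processes, instead of rewriting the program.

import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_one(task_id):
    # Placeholder command; in practice this would be the legacy binary
    # invoked once per input file.
    cmd = [sys.executable, "-c", f"print('task {task_id} done')"]
    return subprocess.run(cmd, capture_output=True, text=True)

# The threads only dispatch and wait; the real work runs in child processes.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_one, range(4)))
```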

  15. EPA requirements and programs

    NASA Technical Reports Server (NTRS)

    Koutsandreas, J. D.

    1975-01-01

    The proposed ERTS-DCS system is designed to allow EPA the capability to evaluate, through demonstrable hardware, the effectiveness of automated data collection techniques. The total effectiveness of any system is dependent upon many factors, which include equipment cost, installation, maintainability, logistic support, growth potential, flexibility, and failure rate. This can best be accomplished by installing the system at an operational environmental control agency (CAMP station) to ensure that valid data are being obtained and processed. Consequently, it is imperative that the equipment interface must not compromise the validity of the sensor data, nor should the experimental system affect the present operations of the CAMP station. Since both the system presently in use and the automatic system would operate in parallel, confirmation and comparison are readily obtained.

  16. Concurrent Cuba

    NASA Astrophysics Data System (ADS)

    Hahn, T.

    2016-10-01

    The parallel version of the multidimensional numerical integration package Cuba is presented and achievable speed-ups are discussed. The parallelization is based on the fork/wait POSIX functions, needs no extra software installed, imposes almost no constraints on the integrand function, and works largely automatically.
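The fork/wait approach can be illustrated in Python on a POSIX system. This is a toy of our own, not Cuba's implementation: each forked child evaluates the integrand on its own stride of midpoint samples and returns its partial sum to the parent through a pipe, after which the parent waits on each child.

```python
# POSIX fork/wait parallel integration sketch (illustrative, not Cuba's code).
# Each child sums every `workers`-th midpoint sample and pipes its partial
# sum back; the parent accumulates the sums and reaps the children.

import os
import struct

def integrate(f, a, b, n=100000, workers=4):
    h = (b - a) / n
    children = []
    for w in range(workers):
        rfd, wfd = os.pipe()
        pid = os.fork()
        if pid == 0:                      # child: evaluate one stride
            partial = sum(f(a + (i + 0.5) * h) for i in range(w, n, workers)) * h
            os.write(wfd, struct.pack("d", partial))
            os._exit(0)
        os.close(wfd)                     # parent keeps only the read end
        children.append((rfd, pid))
    total = 0.0
    for rfd, pid in children:
        total += struct.unpack("d", os.read(rfd, 8))[0]
        os.close(rfd)
        os.waitpid(pid, 0)                # the "wait" half of fork/wait
    return total

approx = integrate(lambda x: x * x, 0.0, 1.0)   # midpoint rule for the integral of x^2 on [0,1]
```

Note that the integrand runs in the child's copy of the process image, which is why this scheme imposes almost no constraints on the integrand function.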

  17. UFMulti: A new parallel processing software system for HEP

    NASA Astrophysics Data System (ADS)

    Avery, Paul; White, Andrew

    1989-12-01

    UFMulti is a multiprocessing software package designed for general purpose high energy physics applications, including physics and detector simulation, data reduction, and DST physics analysis. The system is particularly well suited for installations where several workstations or computers are connected through a local area network (LAN). The initial configuration of the software is currently running on VAX/VMS machines, with a planned extension to ULTRIX, using the new RISC CPUs from Digital, in the near future.

  18. LAURA Users Manual: 5.3-48528

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil

    2010-01-01

    This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the FUN3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.

  19. LAURA Users Manual: 5.5-64987

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, William L.

    2013-01-01

    This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the Fun3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.

  20. LAURA Users Manual: 5.4-54166

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil

    2011-01-01

    This users manual provides in-depth information concerning installation and execution of Laura, version 5. Laura is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 Laura code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, Laura now shares gas-physics modules, MPI modules, and other low-level modules with the Fun3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.

  1. LAURA Users Manual: 5.2-43231

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil

    2009-01-01

    This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multiphysics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the FUN3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.

  2. Laura Users Manual: 5.1-41601

    NASA Technical Reports Server (NTRS)

    Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil

    2009-01-01

    This users manual provides in-depth information concerning installation and execution of LAURA, version 5. LAURA is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 LAURA code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multiphysics coupling. As a result, LAURA now shares gas-physics modules, MPI modules, and other low-level modules with the FUN3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.

  3. GRAMM-X public web server for protein–protein docking

    PubMed Central

    Tovchigrechko, Andrey; Vakser, Ilya A.

    2006-01-01

    Protein docking software GRAMM-X and its web interface () extend the original GRAMM Fast Fourier Transformation methodology by employing smoothed potentials, a refinement stage, and knowledge-based scoring. The web server frees users from the complex installation of database-dependent parallel software and from maintaining the large hardware resources needed for protein docking simulations. Docking problems submitted to the GRAMM-X server are processed by a 320-processor Linux cluster. The server was extensively tested by benchmarking, several months of public use, and participation in the CAPRI server track. PMID:16845016

  4. Large space structures fabrication experiment. [on-orbit fabrication of graphite/thermoplastic beams

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The fabrication machine used for the rolltrusion and on-orbit forming of graphite thermoplastic (CTP) strip material into structural sections is described. The basic process was analytically developed parallel with, and integrated into the conceptual design of, a flight experiment machine for producing a continuous triangular cross section truss. The machine and its associated ancillary equipment are mounted on a Space Lab pallet. Power, thermal control, and instrumentation connections are made during ground installation. Observation, monitoring, caution and warning, and control panels and displays are installed at the payload specialist station in the orbiter. The machine is primed before flight by initiation of beam forming, to include attachment of the first set of cross members and anchoring of the diagonal cords. Control of the experiment will be from the orbiter mission specialist station. Normal operation is by automatic processing control software. Machine operating data are displayed and recorded on the ground. Data is processed and formatted to show progress of the major experiment parameters including stable operation, physical symmetry, joint integrity, and structural properties.

  5. KSC-08pd0431

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- Space shuttle Atlantis is towed into the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. Photo credit: NASA/Jack Pfaller

  6. KSC-08pd0430

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- Space shuttle Atlantis is towed toward the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. Photo credit: NASA/Jack Pfaller

  7. 180 MW/180 KW pulse modulator for S-band klystron of LUE-200 linac of IREN installation of JINR

    NASA Astrophysics Data System (ADS)

    Su, Kim Dong; Sumbaev, A. P.; Shvetsov, V. N.

    2014-09-01

    A proposal for the development of a pulse modulator with 180 MW pulse power and 180 kW average power for the pulsed S-band klystrons of the LUE-200 linac of the IREN installation at the Laboratory of Neutron Physics (FLNP) at JINR is formulated. The main requirements, key parameters, and element base of the modulator are presented. A variant of the basic scheme based on a 14- (or 11-) stage, two-parallel PFN with a thyratron switch (TGI2-10K/50) and six parallel high-voltage power supplies (CCPS) is considered.

  8. Management tools for the 21st century environmental office: The role of office automation and information technology. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fittipaldi, J.J.; Sliwinski, B.J.

    1991-06-01

    Army environmental planning and compliance activities continue to grow in magnitude and complexity, straining the resources of installation environmental offices. New efficiencies must be found to meet the increasing demands of planning and compliance imperatives. This study examined how office automation/information technology (OA/IT) may boost productivity in U.S. Army Training and Doctrine Command (TRADOC) installation environmental offices between now and the year 2000. A survey of four TRADOC installation environmental offices revealed that the workload often exceeds the capacity of staff. Computer literacy among personnel varies widely, limiting the benefits available from OA/IT now in use. Since environmental personnel are primarily gatherers and processors of information, better implementation of OA/IT could substantially improve work quality and productivity. Advanced technologies expected to reach the consumer market during the 1990s will dramatically increase the potential productivity of environmental office personnel. Multitasking operating environments will allow simultaneous automation of communications, document processing, and engineering software. Increased processor power and parallel processing techniques will spur simplification of the user interface and greater software capabilities in general. The authors conclude that full implementation of this report's OA/IT recommendations could double TRADOC environmental office productivity by the year 2000.

  9. Co-gasification of solid waste and lignite - a case study for Western Macedonia.

    PubMed

    Koukouzas, N; Katsiadakis, A; Karlopoulos, E; Kakaras, E

    2008-01-01

    Co-gasification of solid waste and coal is a very attractive and efficient way of generating power, but also an alternative way, apart from conventional technologies such as incineration and landfill, of treating waste materials. The technology of co-gasification can result in very clean power plants using a wide range of solid fuels, but there are considerable economic and environmental challenges. The aim of this study is to present the available existing co-gasification techniques and projects for coal and solid wastes and to investigate the techno-economic feasibility, concerning the installation and operation of a 30 MW(e) co-gasification power plant based on integrated gasification combined cycle (IGCC) technology, using lignite and refuse derived fuel (RDF), in the region of Western Macedonia prefecture (WMP), Greece. The gasification block was based on the British Gas-Lurgi (BGL) gasifier, while the gas clean-up block was based on cold gas purification. The competitive advantages of co-gasification systems can be defined both by the fuel feedstock and production flexibility but also by their environmentally sound operation. It also offers the benefit of commercial application of the process by-products, gasification slag and elemental sulphur. Co-gasification of coal and waste can be performed through parallel or direct gasification. Direct gasification constitutes a viable choice for installations with capacities of more than 350 MW(e). Parallel gasification, without extensive treatment of produced gas, is recommended for gasifiers of small to medium size installed in regions where coal-fired power plants operate. The preliminary cost estimation indicated that the establishment of an IGCC RDF/lignite plant in the region of WMP is not profitable, due to high specific capital investment and in spite of the lower fuel supply cost.
The technology of co-gasification is not yet mature, and therefore high capital outlays are needed to set up a direct co-gasification plant. The estimated cost of electricity was not competitive with the prices dominating the Greek electricity market, and thus further economic evaluation is required. The project would become acceptable if modular construction of the unit were first adopted near operating power plants, based on parallel co-gasification, gradually incorporating the remaining process steps (gas purification, power generation) with the aim of eventually establishing a true direct co-gasification plant.

  10. Impacts of forest management on runoff and erosion

    Treesearch

    William J. Elliot; Brandon D. Glaza

    2009-01-01

    In a parallel study, ten small watersheds (about 5 ha) were installed in the Priest River Experimental Forest (PREF) in northern Idaho, and another ten were installed in the Boise Basin Experimental Forest (BBEF) in central Idaho. The long-term objective of the study is to compare the effects of different forest management activities on runoff and...

  11. KSC-08pd0428

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- Space shuttle Atlantis is towed along a two-mile tow-way to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  12. DooSo6: Easy Collaboration over Shared Projects

    NASA Astrophysics Data System (ADS)

    Ignat, Claudia-Lavinia; Oster, Gérald; Molli, Pascal

Existing tools for supporting parallel work feature some disadvantages that prevent them from being widely used. Very often they require a complex installation and the creation of accounts for all group members. Users need to learn and deal with complex commands to use these collaborative tools efficiently. Some tools require users to abandon their favourite editors and force them to use a certain co-authorship application. In this paper, we propose the DooSo6 collaboration tool that offers support for parallel work, requires no installation and no creation of accounts, and is easy to use, users being able to continue working with their favourite editors. User authentication is achieved by means of a capability-based mechanism.
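The paper does not spell out the token format behind that capability-based mechanism; purely as an illustrative sketch (the names, the HMAC construction, and the token layout here are assumptions, not DooSo6's actual design), a capability can be realised as an unforgeable signed string whose mere possession authorises access, with no per-user account needed:

```python
import hashlib
import hmac
import secrets

# Key known only to the sharing service (hypothetical name).
SERVER_KEY = secrets.token_bytes(32)

def grant_capability(project_id, rights):
    """Issue an unforgeable token; holding the token *is* the authorization."""
    msg = f"{project_id}:{rights}".encode()
    tag = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return f"{project_id}:{rights}:{tag}"

def check_capability(token):
    """Accept any token whose HMAC tag verifies; no user account is consulted."""
    project_id, rights, tag = token.rsplit(":", 2)
    msg = f"{project_id}:{rights}".encode()
    good = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, good)

token = grant_capability("doc42", "rw")
```

A holder of a valid token can then access the shared project without registering; tampering with any part of the token invalidates its tag.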

  13. A joint numerical and experimental study of the jet of an aircraft engine installation with advanced techniques

    NASA Astrophysics Data System (ADS)

    Brunet, V.; Molton, P.; Bézard, H.; Deck, S.; Jacquin, L.

    2012-01-01

This paper describes the results obtained during the European Union JEDI (JEt Development Investigations) project carried out in cooperation between ONERA and Airbus. The aim of these studies was first to acquire a complete database of a modern-type engine jet installation set under a wall-to-wall swept wing in various transonic flow conditions. Interactions between the engine jet, the pylon, and the wing were studied thanks to advanced measurement techniques. In parallel, accurate Reynolds-averaged Navier-Stokes (RANS) simulations were carried out, from simple ones with the Spalart-Allmaras model to more complex ones like the DRSM-SSG (Differential Reynolds Stress Model of Speziale-Sarkar-Gatski) turbulence model. In the end, Zonal Detached Eddy Simulations (Z-DES) were also performed to compare different simulation techniques. All numerical results were accurately validated against the experimental database acquired in parallel. This complete and complex study of a modern civil aircraft engine installation yielded many advances in understanding and in simulation methods. Furthermore, a setup for engine jet installation studies has been validated for possible future work in the S3Ch transonic research wind tunnel. The main conclusions are summed up in this paper.

  14. Fuel cells provide a revenue-generating solution to power quality problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, J.M. Jr.

Electric power quality and reliability are becoming increasingly important as computers and microprocessors assume a larger role in commercial, health care and industrial buildings and processes. At the same time, constraints on transmission and distribution of power from central stations are making local areas vulnerable to low voltage, load addition limitations, power quality and power reliability problems. Many customers currently utilize some form of premium power in the form of standby generators and/or UPS systems. These include customers where continuous power is required because of health and safety or security reasons (hospitals, nursing homes, places of public assembly, air traffic control, military installations, telecommunications, etc.). These also include customers with industrial or commercial processes which can't tolerate an interruption of power because of product loss or equipment damage. The paper discusses the use of the PC25 fuel cell power plant for backup and parallel power supplies for critical industrial applications. Several PC25 installations are described: the use of propane in a PC25; the use by rural cooperatives; and a demonstration of PC25 technology using landfill gas.

  15. Parallel Work of CO2 Ejectors Installed in a Multi-Ejector Module of Refrigeration System

    NASA Astrophysics Data System (ADS)

    Bodys, Jakub; Palacz, Michal; Haida, Michal; Smolka, Jacek; Nowak, Andrzej J.; Banasiak, Krzysztof; Hafner, Armin

    2016-09-01

A performance analysis of the fixed ejectors installed in a multi-ejector module of a CO2 refrigeration system is presented in this study. The serial and the parallel work of the four fixed-geometry units that compose the multi-ejector pack was analysed. The numerical simulations were performed with a validated Homogeneous Equilibrium Model (HEM). The computational tool ejectorPL was used in all the tests, for typical transcritical parameters at the motive nozzle. A wide range of operating conditions for supermarket applications in three different European climate zones was taken into consideration. The obtained results show the high and stable performance of all the ejectors in the multi-ejector pack.

  16. Evaluation of selected chemical processes for production of low-cost silicon, phase 3

    NASA Technical Reports Server (NTRS)

    Blocher, J. M., Jr.; Browning, M. F.; Seifert, D. A.

    1981-01-01

A Process Development Unit (PDU), which consisted of the four major units of the process, was designed, installed, and experimentally operated. The PDU was sized to 50 MT/yr. The deposition took place in a fluidized-bed reactor. As a consequence of the experiments, improvements in the design and operation of these units were undertaken and their experimental limitations were partially established. A parallel program of experimental work demonstrated that zinc can be vaporized, for introduction into the fluidized-bed reactor, by direct induction-coupled r.f. energy. Residual zinc in the product can be removed by heat treatment below the melting point of silicon. Current efficiencies of 94 percent and above, and power efficiencies around 40 percent, are achievable in the laboratory-scale electrolysis of ZnCl2.

  17. KSC-08pd0424

    NASA Image and Video Library

    2008-02-20

KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility runway at NASA's Kennedy Space Center, space shuttle Atlantis is towed to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  18. KSC-08pd0426

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility runway at NASA's Kennedy Space Center, space shuttle Atlantis is towed to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  19. KSC-08pd0429

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility runway at NASA's Kennedy Space Center, space shuttle Atlantis is towed toward the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  20. KSC-08pd0427

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility runway at NASA's Kennedy Space Center, space shuttle Atlantis is towed to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  1. KSC-08pd0425

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- From the Shuttle Landing Facility runway at NASA's Kennedy Space Center, space shuttle Atlantis is towed to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  2. Oahu wind power survey, first report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramage, C.S.; Daniels, P.A.; Schroeder, T.A.

    1977-05-01

A wind power survey has been conducted on Oahu since summer 1975. At seventeen potentially windy sites, calibrated anemometers and wind vanes were installed and recordings made on computer-processable magnetic tape cassettes. From monthly mean wind speeds--normalized by comparison with Honolulu Airport mean winds--it was concluded that about 23 mi/hr represented the highest average annual wind speed likely to be attained on Oahu and that the Koko Head and Kahuku areas gave the most promise for wind energy generation. Diurnal variation of the wind in these areas roughly parallels the diurnal variation of electric power demand.
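The normalization described above is simple arithmetic: scale a site's monthly mean by the ratio of the reference station's long-term mean to its mean for that same month. A minimal sketch with invented numbers (the report's actual station data are not reproduced here) is:

```python
# Hypothetical one-month means (mi/hr) for a candidate site and for the
# reference Honolulu Airport station, plus the airport's long-term annual mean.
site_month_mean = 21.4
airport_month_mean = 11.2
airport_annual_mean = 12.0

# If the airport ran below its long-term average this month, the site
# presumably did too; scale the site's monthly mean up (or down) accordingly
# to estimate its long-term annual mean from a short record.
site_annual_estimate = site_month_mean * (airport_annual_mean / airport_month_mean)
```

With these invented values the estimate is about 22.9 mi/hr, i.e. close to the ~23 mi/hr ceiling the survey reports for Oahu.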

  3. KSC-08pd0423

    NASA Image and Video Library

    2008-02-20

    KENNEDY SPACE CENTER, FLA. -- On the Shuttle Landing Facility runway at NASA's Kennedy Space Center, a tractor tow vehicle is backed up to space shuttle Atlantis for towing to the Orbiter Processing Facility, or OPF, where processing Atlantis for another flight will take place. Towing normally begins within four hours after landing and is completed within six hours unless removal of time-sensitive experiments is required on the runway. In the OPF, turnaround processing procedures on Atlantis will include various post-flight deservicing and maintenance functions, which are carried out in parallel with payload removal and the installation of equipment needed for the next mission. After a round trip of nearly 5.3 million miles, Atlantis and crew returned to Earth with a landing at 9:07 a.m. EST to complete the STS-122 mission. Photo credit: NASA/Jack Pfaller

  4. Method of installing a control room console in a nuclear power plant

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1994-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  5. Dismantling of Highly Contaminated Process Installations of the German Reprocessing Facility (WAK) - Status of New Remote Handling Technology - 13287

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dux, Joachim; Friedrich, Daniel; Lutz, Werner

    2013-07-01

Decommissioning and dismantling of the former German Pilot Reprocessing Plant Karlsruhe (WAK), including the Vitrification Facility (VEK), is being executed in different project steps related to the reprocessing, HLLW storage and vitrification complexes /1/. While inside the reprocessing building the total inventory of process equipment has already been dismantled and disposed of, the HLLW storage and vitrification complex has been out of operation since vitrification and tank rinsing procedures were finalized in 2010. This paper describes the progress made in dismantling the shielded boxes of the highly contaminated laboratory as a precondition for getting access to the hot cells of the HLLW storage. The major challenges of the dismantling of this laboratory were the high dose rates of up to 700 mSv/h and the locking technology for the removal of the hot cell installations. In parallel, extensive prototype testing of different carrier systems and power manipulators to be applied in dismantling the HLLW tanks and other hot cell equipment is ongoing. First experiences with the new manipulator carrier system and a new master-slave manipulator with force reflection will be reported. (authors)

  6. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (key components of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near-completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  7. Selection of the surface water treatment technology - a full-scale technological investigation.

    PubMed

    Pruss, Alina

    2015-01-01

A technological investigation was carried out over a period of 2 years to evaluate surface water treatment technology. The study was performed in Poland, in three stages. From November 2011 to July 2012, in the first stage, flow tests with a capacity of 0.1-1.5 m³/h were performed simultaneously in three types of technical installations differing in their coagulation modules. The outcome of the first stage was the choice of the technology for further investigation. The second stage was performed between September 2012 and March 2013 on a full-scale water treatment plant. Three large technical installations, operated in parallel, were analysed: coagulation with sludge flotation, micro-sand ballasted coagulation with sedimentation, and coagulation with sedimentation and sludge recirculation. The capacity of the installations ranged from 10 to 40 m³/h. The third stage was also performed in a full-scale water treatment plant and was aimed at optimising the selected technology. This article presents the results of the second stage of the full-scale investigation. The critical treatment process for the analysed water was coagulation in an acidic environment (6.5 < pH < 7.0), carried out in a system with rapid mixing, a flocculation chamber, preliminary separation of coagulation products, and removal of residual suspended solids through filtration.

  8. Method for six-legged robot stepping on obstacles by indirect force estimation

    NASA Astrophysics Data System (ADS)

    Xu, Yilin; Gao, Feng; Pan, Yang; Chai, Xun

    2016-07-01

Adaptive gaits for legged robots often require force sensors installed on the foot-tips; however, impact, temperature or humidity can affect or even damage those sensors. Efforts have been made to realize indirect force estimation on legged robots whose leg structures are based on planar mechanisms. Robot Octopus III is a six-legged robot using spatial parallel mechanism (UP-2UPS) legs. This paper proposes a novel method to realize indirect force estimation on a walking robot based on a spatial parallel mechanism. The direct kinematics model and the inverse kinematics model are established. The force Jacobian matrix is derived based on the kinematics model. Thus, the indirect force estimation model is established. Then, the relation between the output torques of the three motors installed on one leg and the external force exerted on the foot tip is described. Furthermore, an adaptive tripod static gait is designed. The robot alters its leg trajectory to step on obstacles by using the proposed adaptive gait. Both the indirect force estimation model and the adaptive gait are implemented and optimized in a real-time control system. An experiment is carried out to validate the indirect force estimation model. The adaptive gait is tested in another experiment. Experiment results show that the robot can successfully step on a 0.2 m-high obstacle. This paper thus proposes a novel method for a six-legged robot with spatial parallel mechanism legs to overcome obstacles while avoiding the installation of electric force sensors in the harsh environment of the robot's foot tips.
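The statics behind indirect force estimation of this kind reduce to tau = J^T * F, so the foot-tip force follows by solving the transposed-Jacobian linear system for F given the measured motor torques. A minimal sketch with a hypothetical 3x3 leg Jacobian (the paper's actual UP-2UPS Jacobian is not reproduced here) is:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(a, b):
    """Solve the 3x3 system a x = b by Cramer's rule."""
    d = det3(a)
    x = []
    for j in range(3):
        m = [row[:] for row in a]
        for i in range(3):
            m[i][j] = b[i]
        x.append(det3(m) / d)
    return x

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

# Hypothetical leg Jacobian J (maps joint rates to foot-tip velocity).
J = [[0.30, 0.05, 0.00],
     [0.02, 0.25, 0.04],
     [0.00, 0.03, 0.28]]

tau = [6.0, 5.0, 8.4]  # measured motor torques [N*m]

# Statics: tau = J^T F  =>  solve J^T F = tau for the foot-tip force F [N].
F = solve3(transpose(J), tau)
```

Monitoring F this way replaces a physical foot-tip force sensor: a jump in the estimated contact force signals that the swinging leg has hit an obstacle.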

  9. Function algorithms for MPP scientific subroutines, volume 1

    NASA Technical Reports Server (NTRS)

    Gouch, J. G.

    1984-01-01

Design documentation and user documentation for function algorithms for the Massively Parallel Processor (MPP) are presented. The contract specifies development of MPP assembler instructions to perform the following functions: natural logarithm; exponential (e to the x power); square root; sine; cosine; and arctangent. To fulfill the requirements of the contract, parallel array and scalar implementations for these functions were developed on the PDP-11/34 Program Development and Management Unit (PDMU) that is resident at the MPP testbed installation located at the NASA Goddard facility.

  10. Drywall stilt dermatosis.

    PubMed

    Lewis, E J; Prawer, S E; Crutchfield, C E

    1996-12-01

    We describe a previously unreported occupational dermatosis occurring in a worker employed in drywall installation and finishing. This 50-year-old man presented with bilaterally symmetrical, parallel, linear crusted erosions on his anteromedial legs after wearing drywall stilts. The pathophysiology of this condition is considered.

  11. NEPTUNE Canada Regional Cabled Ocean Observatory: Installed and Online!

    NASA Astrophysics Data System (ADS)

    Barnes, C. R.; Best, M.; Bornhold, B.; Johnson, F.; Phibbs, P.; Pirenne, B.

    2009-12-01

Through summer 2009, NEPTUNE Canada installed a regional cabled ocean observatory across the northern Juan de Fuca Plate, in the north-eastern Pacific. This provides continuous power and high bandwidth to collect integrated data on physical, chemical, geological, and biological gradients at temporal resolutions relevant to the dynamics of the earth-ocean system. As the data are freely and openly available through the Internet, this advance opens the ocean to the world. Building this $100M facility required the integration of hardware, software, and people networks. The hardware includes an 800 km powered fibre-optic backbone cable (installed 2007); development of Nodes and Junction Boxes; and acquisition and development of Instruments, including mobile platforms: (a) a 400 m Vertical Profiler (NGK Ocean) for accessing the full upper-slope water column, and (b) a Crawler (Jacobs University, Bremen) to investigate exposed hydrates. In parallel, software and hardware systems are acquiring, archiving, and delivering continuous real-time data. A web environment combining this data access with analysis and visualization, collaborative tools, interoperability, and instrument control is in place and expanding. A network of scientists, engineers and technicians is contributing to the process in every phase. The currently installed experiments were planned through workshops and international proposal competitions. At inshore Folger Passage (Barkley Sound, west Vancouver Island), studies of the controls on biological productivity will evaluate the effects of marine processes on invertebrates, fish and marine mammals. Experiments around Barkley Canyon will quantify changes in biological and chemical activity associated with nutrients and cross-shelf sediment transport at the shelf/slope break and through the canyon.
Along the mid-continental slope, exposed and shallowly buried hydrates allow monitoring of changes in their distribution, structure, and venting, and of their relationships to earthquakes, slope failures and plate motions. Circulation obviation retrofit kits (CORKs) at the mid-plate ODP 1026-7 wells will monitor real-time changes in crustal temperature and pressure in response to earthquakes, hydrothermal convection or plate strain. At Endeavour Ridge (instruments installed 2010), complex interactions among volcanic, tectonic, hydrothermal and biological processes will be quantified at the western edge of the Juan de Fuca plate. Across the network, high-resolution seismic information will elucidate tectonic processes and earthquakes, and a tsunami system will determine open-ocean tsunami amplitude, propagation direction, and speed. The infrastructure has capacity to expand, and we invite participation in experiments, data analysis and technology development; for information and opportunities: http://www.neptunecanada.ca. NEPTUNE Canada will transform our understanding of biological, chemical, physical, and geological processes across an entire tectonic plate, from the shelf to the deep sea (17-2700 m). Real-time continuous monitoring, archiving, and long time series allow scientists to capture the temporal nature, characteristics, and linkages of these natural processes in ways never before possible.

  12. Light-weight Parallel Python Tools for Earth System Modeling Workflows

    NASA Astrophysics Data System (ADS)

    Mickelson, S. A.; Paul, K.; Xu, H.; Dennis, J.; Brown, D. I.

    2015-12-01

With the growth in computing power over the last 30 years, earth system modeling codes have become increasingly data-intensive. As an example, it is expected that the data required for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR6) will increase by more than 10x, to an expected 25 PB per climate model. Faced with this daunting challenge, developers of the Community Earth System Model (CESM) have chosen to change the format of their data for long-term storage from time-slice to time-series, in order to reduce the download bandwidth needed for later analysis and post-processing by climate scientists. Hence, efficient tools are required to (1) perform the transformation of the data from time-slice to time-series format and (2) compute climatology statistics, needed for many diagnostic computations, on the resulting time-series data. To address the first of these challenges, we have developed a parallel Python tool for converting time-slice model output to time-series format. To address the second, we have developed a parallel Python tool to perform fast time-averaging of time-series data. These tools are designed to be light-weight and easy to install, to have very few dependencies, and to be easily inserted into the Earth system modeling workflow with negligible disruption. In this work, we present the motivation, approach, and testing results for these two light-weight parallel Python tools, as well as our plans for future research and development.
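The second tool's core operation, averaging each variable's time series independently, parallelises naturally over variables. A minimal sketch (a thread pool stands in for the real tools' task-parallel back end, and the variable names and data are invented) is:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical time-series output: one list of monthly values per variable.
series = {
    "TS":    [271.1, 272.3, 274.8, 278.0, 281.5, 284.2,
              285.9, 285.1, 282.4, 278.6, 274.9, 272.0],
    "PRECT": [2.1, 1.9, 2.4, 3.0, 3.6, 4.1, 4.4, 4.2, 3.5, 2.9, 2.3, 2.0],
}

def climatology(item):
    """Long-term mean over a variable's series: the simplest climatology statistic."""
    name, values = item
    return name, sum(values) / len(values)

# Fan the per-variable averaging out across workers, one task per variable;
# variables are independent, so no coordination between tasks is needed.
with ThreadPoolExecutor(max_workers=4) as pool:
    means = dict(pool.map(climatology, series.items()))
```

Because each variable is averaged independently, the same pattern scales from a thread pool on one node to an MPI-style task decomposition across many.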

  13. Velocity diagnostics of electron beams within a 140 GHz gyrotron

    NASA Astrophysics Data System (ADS)

    Polevoy, Jeffrey Todd

    1989-06-01

Experimental measurements of the average axial velocity v_parallel of the electron beam within the M.I.T. 140 GHz MW gyrotron have been performed. The method involves the simultaneous measurement of the radial electrostatic potential of the electron beam, V_p, and the beam current, I_b. V_p is measured through the use of a capacitive probe installed near or within the gyrotron cavity, while I_b is measured with a previously installed Rogowski coil. Three capacitive probes have been designed and built, and two have operated within the gyrotron. The probe results are repeatable and consistent with theory. The measurements of v_parallel and calculations of the corresponding transverse-to-longitudinal beam velocity ratio alpha = v_perp/v_parallel at the cavity have been made at various gyrotron operation parameters. These measurements will provide insight into the causes of discrepancies between theoretical RF interaction efficiencies and experimental efficiencies obtained in experiments with the M.I.T. 140 GHz MW gyrotron. The expected values of v_parallel and alpha are determined through the use of a computer code (EGUN) which is used to model the cathode and anode regions of the gyrotron. It also computes the trajectories and velocities of the electrons within the gyrotron. There is good correlation between the expected and measured values of alpha at low alpha, with the expected values from EGUN often falling within the standard errors of the measured values.
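The probe principle lends itself to a small numerical sketch: for a beam of current I_b inside a conducting drift tube, the measured potential depression V_p fixes the beam's line charge density, and lambda = I_b / v_parallel then yields the axial velocity; the total speed from the relativistic gun voltage gives alpha. All operating values below are illustrative assumptions, not measurements from this work:

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity [F/m]
C = 2.998e8           # speed of light [m/s]
ME_C2_EV = 511.0e3    # electron rest energy [eV]

# Hypothetical operating point (illustrative values only):
I_b = 35.0                 # beam current [A]
V_p = 5222.0               # probe-measured potential depression [V]
V_gun = 80.0e3             # accelerating voltage [V]
R_wall_over_R_beam = 2.0   # drift-tube to beam radius ratio

# For a beam in a conducting tube, V_p = lambda * ln(Rw/Rb) / (2 pi eps0),
# and lambda = I_b / v_parallel, so the axial velocity follows directly:
v_par = I_b * math.log(R_wall_over_R_beam) / (2 * math.pi * EPS0 * V_p)

# Total speed from the relativistic beam energy, then alpha = v_perp / v_par:
gamma = 1.0 + V_gun / ME_C2_EV
v_tot = C * math.sqrt(1.0 - 1.0 / gamma**2)
v_perp = math.sqrt(v_tot**2 - v_par**2)
alpha = v_perp / v_par
```

With these assumed values the sketch gives v_par of order 1e8 m/s and alpha around 1.5, in the range typical of gyrotron beams.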

  14. Computer vision for driver assistance systems

    NASA Astrophysics Data System (ADS)

    Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner

    1998-07-01

Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely the initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is the integrative coupling of different algorithms providing partly redundant information.

  15. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems.

    PubMed

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun; Wang, Gi-Nam

    2016-01-01

Operational faults and behavioural anomalies associated with PLC control processes often take place in a manufacturing system. Real-time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called the PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme for this purpose. Our experiments show that PLAT is significantly fast, provides real-time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data logging system to identify operational faults and behavioural anomalies effectively.
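The hash-table indexing idea can be sketched as follows: every signal transition observed during known-good cycles becomes a key in a hash set (the nominal model), so each logged transition from a live run is checked in O(1) time. The event names and the transition-pair model here are illustrative assumptions, not PLAT's actual record format:

```python
# Nominal log: signal transitions observed during known-good PLC cycles
# (hypothetical event names for a clamp-and-drill station).
nominal_log = [
    ("START", "CLAMP"), ("CLAMP", "DRILL"),
    ("DRILL", "UNCLAMP"), ("UNCLAMP", "START"),
]

# "Nominal model": a hash-based index of every transition seen in good runs.
nominal_index = set(nominal_log)

def find_anomalies(log):
    """Return transitions absent from the nominal model (one O(1) lookup each)."""
    return [(a, b) for a, b in zip(log, log[1:]) if (a, b) not in nominal_index]

# A live run in which the drill retracts back to CLAMP mid-cycle:
run = ["START", "CLAMP", "DRILL", "CLAMP", "DRILL", "UNCLAMP", "START"]
anomalies = find_anomalies(run)
```

Because the check is a constant-time set lookup per logged event, this kind of scan keeps up with a data logger in real time and uses memory proportional only to the number of distinct nominal transitions.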

  16. PLAT: An Automated Fault and Behavioural Anomaly Detection Tool for PLC Controlled Manufacturing Systems

    PubMed Central

    Ghosh, Arup; Qin, Shiming; Lee, Jooyeoun

    2016-01-01

    Operational faults and behavioural anomalies associated with PLC control processes often take place in a manufacturing system. Real-time identification of these operational faults and behavioural anomalies is necessary in the manufacturing industry. In this paper, we present an automated tool, called the PLC Log-Data Analysis Tool (PLAT), that can detect them by using log-data records of the PLC signals. PLAT automatically creates a nominal model of the PLC control process and employs a novel hash-table-based indexing and searching scheme to serve these purposes. Our experiments show that PLAT is significantly fast, provides real-time identification of operational faults and behavioural anomalies, and can execute within a small memory footprint. In addition, PLAT can easily handle a large manufacturing system with a reasonable computing configuration and can be installed in parallel to the data-logging system to identify operational faults and behavioural anomalies effectively. PMID: 27974882

  17. Meteorological radar services: a brief discussion and a solution in practice

    NASA Astrophysics Data System (ADS)

    Nicolaides, K. A.

    2014-08-01

    The Department of Meteorology is the organization designated by the Civil Aviation Department and by the National Supervisory Authority of the Republic of Cyprus as an air navigation service provider, based on the regulations of the Single European Sky. The Department of Meteorology also holds and maintains an ISO 9001:2008 quality system for the provision of meteorological and climatological services to the aeronautical and maritime communities, as well as to the general public. To fulfill its obligations, the Department of Meteorology relies on the rather dense network of meteorological stations, with long historical data series, installed and maintained by the Department, in parallel with modelling and Numerical Weather Prediction (NWP), along with training and the acquisition of expertise. Among the instruments available to the community of meteorologists is the meteorological radar, a basic tool for the needs of very short/short range forecasting (nowcasting). The Department of Meteorology installed a C-band radar at the «Throni» site in the mid-1990s and thereby expanded its capabilities in nowcasting, aviation safety, and the issuance of warnings. The radar has undergone several upgrades, but its rather old technology has since been overtaken. At present the Department of Meteorology is in the process of procuring meteorological radar services through a public procurement procedure. Two networked X-band meteorological radars will be installed (the project is now in the infrastructure establishment phase while the hardware is being assembled) and maintained by Space Hellas (the contractor) for a period of 13 years. The present article should be read as a review of the efforts of the Department of Meteorology to support its weather forecasters with meteorological radar data.

  18. [The relation of workspace and installation space of epicyclic kinematics with six degrees of freedom].

    PubMed

    Pott, Peter P; Schwarz, Markus L R

    2007-10-01

    The kinematics of a robotic device significantly determines its installation space when it comes to technical realisation. With regard to the deployment of robotic manipulators in surgery, manipulators with a preferably small installation space are needed. This study describes six versions of novel epicyclic kinematics with six degrees of freedom (DOF). First, the functionality of the kinematics was analysed using Gruebler's formula. Subsequently, the ratio of workspace to installation space was determined quantitatively using Matlab algorithms. To qualitatively describe the shape of the workspace, the Matlab visualisation features were utilised. The well-known Hexapod was used for comparison. The assessed kinematics had 6-DOF functionality. It became apparent that one version of the epicyclic kinematics, having two 3-DOF disk systems mounted in parallel, featured a particularly good ratio of workspace to installation space, approximately four times better than that of the Hexapod. The workspaces of all epicyclic kinematics assessed were convex and compact in shape. It could be shown that a novel epicyclic kinematics has a notably advantageous ratio of workspace to installation space and thus seems well suited for deployment in robotic devices for surgical procedures.

  19. 78 FR 26350 - Columbia Gas Transmission, LLC; Notice of Intent To Prepare an Environmental Assessment for the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-06

    ... installation of a pig launcher \\2\\, a pig receiver, and a mainline valve. According to Columbia, its project is... constructed parallel to an existing pipeline to increase capacity. \\2\\ A ``pig'' is a tool that the pipeline...

  20. Design of robotic cells based on relative handling modules with use of SolidWorks system

    NASA Astrophysics Data System (ADS)

    Gaponenko, E. V.; Anciferov, S. I.

    2018-05-01

    The article presents a schematic engineering solution for a robotic cell with six degrees of freedom for the machining of complex parts, consisting of a base with a tool installation module and a part machining module, both built as parallel-structure mechanisms. The output links of the part machining module and the tool installation module can each move along the X, Y, and Z coordinate axes. A 3D model of the complex was designed in the SolidWorks system; it will subsequently be used for engineering calculations, mathematical analysis, and the production of all required documentation.

  1. Teaching infant car seat installation via interactive visual presence: An experimental trial.

    PubMed

    Schwebel, David C; Johnston, Anna; Rouse, Jenni

    2017-02-17

    A large portion of child restraint systems (car seats) are installed incorrectly, especially when first-time parents install infant car seats. Expert instruction greatly improves the accuracy of car seat installation but is labor intensive and difficult to obtain for many parents. This study was designed to evaluate the efficacy of 3 ways of communicating instructions for proper car seat installation: phone conversation; HelpLightning, a mobile application (app) that offers virtual interactive presence permitting both verbal and interactive (telestration) visual communication; and the manufacturer's user manual. A sample of 39 young adults of child-bearing age who had no previous experience installing car seats were recruited and randomly assigned to install an infant car seat using guidance from one of those 3 communication sources. Both the phone and interactive app were more effective means to facilitate accurate car seat installation compared to the user manual. There was a trend for the app to offer superior communication compared to the phone, but that difference was not significant in most assessments. The phone and app groups also installed the car seat more efficiently and perceived the communication to be more effective and their installation to be more accurate than those in the user manual group. Interactive communication may help parents install car seats more accurately than using the manufacturer's manual alone. This was an initial study with a modestly sized sample; if results are replicated in future research, there may be reason to consider centralized "call centers" that provide verbal and/or interactive visual instruction from remote locations to parents installing car seats, paralleling the model of centralized Poison Control centers in the United States.

  2. 40 CFR 63.10010 - What are my monitoring, installation, operation, and maintenance requirements?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... that emissions are controlled with a common control device or series of control devices, are discharged... parallel control devices or multiple series of control devices are discharged to the atmosphere through... quality control activities (including, as applicable, calibration checks and required zero and span...

  3. 40 CFR 63.10010 - What are my monitoring, installation, operation, and maintenance requirements?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... that emissions are controlled with a common control device or series of control devices, are discharged... parallel control devices or multiple series of control devices are discharged to the atmosphere through... quality control activities (including, as applicable, calibration checks and required zero and span...

  4. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multicore, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembling the threaded blocks into a flow graph that accomplishes the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approximately 50 Mbps) software-defined radios to be designed and implemented solely in C/C++ software, lowering development costs and facilitating reuse and extensibility.
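    The abstract's "threaded block" pattern (independent threads connected by POSIX pipes) can be sketched as a toy flow graph; this is an illustration of the general pattern, not the NASA software, and the byte-level "processing" step stands in for a real DSP operation:

```python
import os
import threading

# Each block runs in its own thread and talks to its neighbours via POSIX pipes.
def block(read_fd, write_fd, fn):
    with os.fdopen(read_fd, "rb") as src, os.fdopen(write_fd, "wb") as dst:
        for chunk in src:          # one line of bytes per "sample frame"
            dst.write(fn(chunk))
    # closing dst signals end-of-stream to the downstream block

r1, w1 = os.pipe()   # source -> processing block
r2, w2 = os.pipe()   # processing block -> sink

t = threading.Thread(target=block, args=(r1, w2, lambda b: b.upper()))
t.start()

with os.fdopen(w1, "wb") as source:
    source.write(b"hello\n")       # feed one frame, then close (EOF)
with os.fdopen(r2, "rb") as sink:
    out = sink.read()
t.join()
print(out)  # b'HELLO\n'
```

    Because each block only sees its pipe endpoints, blocks can be rearranged into different flow graphs, and the operating system schedules the threads across however many cores are available.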

  5. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multi-core, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembling the threaded blocks into a flow graph that accomplishes the desired signal processing. This software architecture allows the software to scale effortlessly between single-CPU/single-core computers and multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approximately 50 Mbps) software-defined radios to be designed and implemented solely in C/C++ software, lowering development costs and facilitating reuse and extensibility.

  6. Modeling borehole microseismic and strain signals measured by a distributed fiber optic sensor

    NASA Astrophysics Data System (ADS)

    Mellors, R. J.; Sherman, C. S.; Ryerson, F. J.; Morris, J.; Allen, G. S.; Messerly, M. J.; Carr, T.; Kavousi, P.

    2017-12-01

    The advent of distributed fiber optic sensors installed in boreholes provides a new and data-rich perspective on the subsurface environment. This includes the long-term capability for vertical seismic profiles, monitoring of active borehole processes such as well stimulation, and measurement of microseismic signals. The distributed fiber sensor, which measures strain (or strain rate), is an active sensor with its highest sensitivity parallel to the fiber, and it is subject to varying types of noise, both external and internal. We take a systems approach and include the response of the electronics, fiber/cable, and subsurface to improve interpretation of the signals. This aids in understanding noise sources, assessing error bounds on amplitudes, and developing appropriate algorithms for improving the image. Ultimately, a robust understanding will allow identification of areas for future improvement and possible optimization in fiber and cable design. The subsurface signals are simulated in two ways: 1) a massively parallel multi-physics code capable of modeling hydraulic stimulation of a heterogeneous reservoir with a pre-existing discrete fracture network, and 2) a parallelized 3D finite-difference code for high-frequency seismic signals. Geometry and parameters for the simulations are derived from fiber deployments, including the Marcellus Shale Energy and Environment Laboratory (MSEEL) project in West Virginia. The combination mimics both the low-frequency strain signals generated during the fracture process and the high-frequency signals from microseismic events and perforation shots. Results are compared with available fiber data and demonstrate that quantitative interpretation of the fiber data provides valuable constraints on the fracture geometry and microseismic activity. These constraints appear difficult, if not impossible, to obtain otherwise.

  7. Conceptual design of a hybrid parallel mechanism for mask exchanging of TMT

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Zhou, Hongfei; Li, Kexuan; Zhou, Zengxiang; Zhai, Chao

    2015-10-01

    The mask exchange system is an important part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). To solve the problem of the stiffness of the MOBIE mask exchange system changing with the gravity vector, the hybrid parallel mechanism design method was introduced throughout this research. By combining the high stiffness and precision of a parallel structure with the large moving range of a serial structure, a conceptual design of a hybrid parallel mask exchange system based on a 3-RPS parallel mechanism is presented. According to the position requirements of the MOBIE, a SolidWorks structural model of the hybrid parallel mask exchange robot was established, and an appropriate installation position that does not interfere with the related components and the light path in the MOBIE of TMT was identified. Simulation results in SolidWorks suggest that the 3-RPS parallel platform has good stiffness properties in different gravity vector directions. Furthermore, through analysis of the mechanism theory, the inverse kinematics of the 3-RPS parallel platform were solved, and the mathematical relationship between the attitude angle of the moving platform and the angles of the ball hinges on the moving platform was established, in order to analyze the attitude adjustment capability of the hybrid parallel mask exchange robot. The proposed conceptual design offers guidance for the design of the mask exchange system of the MOBIE on TMT.

  8. Performance Comparison of a Matrix Solver on a Heterogeneous Network Using Two Implementations of MPI: MPICH and LAM

    NASA Technical Reports Server (NTRS)

    Phillips, Jennifer K.

    1995-01-01

    Two of the most popular current implementations of the Message-Passing Interface (MPI) standard were contrasted: MPICH, by Argonne National Laboratory, and LAM, by the Ohio Supercomputer Center at Ohio State University. A parallel skyline matrix solver was adapted to run in a heterogeneous environment using MPI. The Message-Passing Interface Forum, held in May 1994, led to a specification of library functions that implement the message-passing model of parallel communication. LAM, which creates its own environment, is more robust in a highly heterogeneous network; MPICH uses the environment native to the machine architecture. While neither of these freeware implementations provides the performance of native message-passing or of vendors' implementations, MPICH begins to approach that performance on the SP-2. The machines used in this study were an IBM RS6000, three Sun4s, an SGI, and the IBM SP-2. Each machine is unique, and a few required specific modifications during installation. When installed correctly, both implementations worked well with only minor problems.

  9. Passive Active Multi-Junction 3, 7 GHZ launcher for Tore-Supra Long Pulse Experiments. Manufacturing Process and Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guilhem, D.; Achard, J.; Bertrand, B.

    2009-11-26

    The design and fabrication of a new Lower Hybrid (LH) actively cooled antenna based on the passive-active concept is part of the CIMES project (Components for the Injection of Matter and Energy in Steady-state). A major objective of the Tore-Supra program is to achieve 1000 s pulses with this LH launcher, by routinely coupling >3 MW of LH wave power at 3.7 GHz to the plasma with a parallel index n∥ = 1.7 ± 0.2. The launcher is on its way to completing its validation tests (low-power radio frequency (RF) measurements, vacuum and hydraulic leak tests) and will be installed and commissioned on plasma during the fall of 2009.

  10. A Scalable Software Architecture Booting and Configuring Nodes in the Whitney Commodity Computing Testbed

    NASA Technical Reports Server (NTRS)

    Fineberg, Samuel A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    The Whitney project is integrating commodity off-the-shelf PC hardware and software technology to build a parallel supercomputer with hundreds to thousands of nodes. To build such a system, one must have a scalable software model, and the installation and maintenance of the system software must be completely automated. We describe the design of an architecture for booting, installing, and configuring nodes in such a system with particular consideration given to scalability and ease of maintenance. This system has been implemented on a 40-node prototype of Whitney and is to be used on the 500 processor Whitney system to be built in 1998.

  11. Partitioning in parallel processing of production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oflazer, K.

    1987-01-01

    This thesis presents research on certain issues related to parallel processing of production systems. It first presents a parallel production system interpreter that has been implemented on a four-processor multiprocessor. This parallel interpreter is based on Forgy's OPS5 interpreter and exploits production-level parallelism in production systems. Runs on the multiprocessor system indicate that it is possible to obtain speed-up of around 1.7 in the match computation for certain production systems when productions are split into three sets that are processed in parallel. The next issue addressed is that of partitioning a set of rules to processors in a parallel interpreter with production-level parallelism, and the extent of additional improvement in performance. The partitioning problem is formulated and an algorithm for approximate solutions is presented. The thesis next presents a parallel processing scheme for OPS5 production systems that allows some redundancy in the match computation. This redundancy enables the processing of a production to be divided into units of medium granularity, each of which can be processed in parallel. Subsequently, a parallel processor architecture for implementing the parallel processing algorithm is presented.

  12. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows (as diverse as optical character recognition [OCR], document classification, and barcode reading) to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
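    As a minimal sketch of the second category (parallel processing by image region), the toy example below cuts a grayscale "image" into horizontal strips, maps the same analysis function over the strips in parallel, and reduces the partial results; the dark-pixel count stands in for a real task such as skew detection:

```python
from concurrent.futures import ThreadPoolExecutor

# The "image" is a plain 2D list of grayscale values; the task and the
# threshold are illustrative assumptions, not from the paper.
def count_dark(strip, threshold=64):
    return sum(1 for row in strip for px in row if px < threshold)

def parallel_count_dark(image, n_strips=4):
    h = max(1, len(image) // n_strips)
    strips = [image[i:i + h] for i in range(0, len(image), h)]  # map step
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(count_dark, strips))                # reduce step

image = [[0, 200, 10], [255, 30, 255], [90, 91, 5]]
print(parallel_count_dark(image, n_strips=2))  # 4 dark pixels
```

    The same map-reduce shape carries over directly to process pools or cluster schedulers when the per-strip work is heavy enough to amortize the distribution cost.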

  13. InSAR Deformation Time Series Processed On-Demand in the Cloud

    NASA Astrophysics Data System (ADS)

    Horn, W. B.; Weeden, R.; Dimarchi, H.; Arko, S. A.; Hogenson, K.

    2017-12-01

    During this past year, ASF has developed a cloud-based on-demand processing system known as HyP3 (http://hyp3.asf.alaska.edu/), the Hybrid Pluggable Processing Pipeline, for Synthetic Aperture Radar (SAR) data. The system makes it easy for a user who doesn't have the time or inclination to install and use complex SAR processing software to leverage SAR data in their research or operations. One such processing algorithm is the generation of a deformation time series product, a series of images representing ground displacements over time, which can be computed from a time series of interferometric SAR (InSAR) products. The set of software tools necessary to generate this useful product is difficult to install, configure, and use. Moreover, for a long time series with many images, processing just the interferograms can take days. Principally built by three undergraduate students at the ASF DAAC, the deformation time series processing relies on the new Amazon Batch service, which enables processing of jobs with complex interconnected dependencies in a straightforward and efficient manner. In the case of generating a deformation time series product from a stack of single-look complex SAR images, the system uses Batch to serialize the up-front processing, interferogram generation, optional tropospheric correction, and deformation time series generation. The most time-consuming portion is the interferogram generation, because even for a fairly small stack of images many interferograms need to be processed. By using AWS Batch, the interferograms are all generated in parallel; the entire process completes in hours rather than days. Additionally, the individual interferograms are saved in Amazon's cloud storage, so that when new data are acquired in the stack, an updated time series product can be generated with minimal additional processing.
This presentation will focus on the development techniques and enabling technologies that were used in developing the time series processing in the ASF HyP3 system. Data and process flow from job submission through to order completion will be shown, highlighting the benefits of the cloud for each step.
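    The fan-out/fan-in dependency structure described in this record (all interferograms generated in parallel, then one time-series step that waits on them all) can be sketched as below; this is not HyP3 or AWS Batch code, and the function bodies are placeholders for the expensive SAR steps:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def make_interferogram(pair):
    a, b = pair
    return f"ifg_{a}_{b}"      # placeholder for interferogram generation

def time_series(ifgs):
    return sorted(ifgs)        # placeholder for the time-series inversion

scenes = ["s1", "s2", "s3"]
pairs = list(combinations(scenes, 2))
with ThreadPoolExecutor() as pool:         # fan-out: all pairs in parallel
    ifgs = list(pool.map(make_interferogram, pairs))
result = time_series(ifgs)                 # fan-in: runs after all complete
print(result)
```

    In AWS Batch the same shape is expressed as job dependencies rather than an in-process pool, which is what lets new scenes trigger only the incremental interferograms.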

  14. Analysis of conditions favourable for small vertical axis wind turbines between building passages in urban areas of Sweden

    NASA Astrophysics Data System (ADS)

    Awan, Muhammad Rizwan; Riaz, Fahid; Nabi, Zahid

    2017-05-01

    This paper presents an analysis of installing vertical axis wind turbines between building passages on an island in Stockholm, Sweden. Based on the idea of wind speed amplification due to the venturi effect in passages, practical measurements were carried out to study the wind profile for a range of passage widths in parallel building passages. The highest increase in wind speed was observed in building passages located on the periphery of the island, where wind enters from the free field. Wind mapping was performed on the island to choose the most favourable location to install the vertical axis wind turbines (VAWT). Using the annual wind speed data for the location and the measured amplification factor, the energy potential of the street was calculated. This analysis verified that small vertical axis wind turbines can be installed on the passage centre line, provided that enough space is left for traffic and pedestrians.
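    A rough, illustrative version of such an energy-potential estimate follows; all numbers, the amplification factor, and the power-coefficient assumption are invented, not taken from the paper:

```python
# Annual energy from free-field wind speed, a passage amplification factor,
# and a small VAWT, using P = 0.5 * rho * A * v^3 * Cp. Because power scales
# with v^3, even a modest amplification factor matters a great deal.
def annual_energy_kwh(v_free, amp, rotor_area, cp=0.3, rho=1.225, hours=8760):
    v = amp * v_free                          # amplified speed in the passage
    power_w = 0.5 * rho * rotor_area * v**3 * cp
    return power_w * hours / 1000.0

# 4 m/s free-field wind, 1.3x amplification, 2 m^2 rotor (all assumed):
print(round(annual_energy_kwh(v_free=4.0, amp=1.3, rotor_area=2.0), 1))  # 452.7
```

    The v³ dependence is the reason passage-periphery sites, with the largest measured amplification, dominate the estimated yield.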

  15. Automated installation methods for photovoltaic arrays

    NASA Astrophysics Data System (ADS)

    Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.

    1982-11-01

    Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reducing these costs were investigated. Installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding, and installation of the panels) and concluding with the termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated, including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary hardware designs for the standard installation method, the automated/mechanized method, and a mix of standard and mechanized procedures were identified to determine which process most effectively reduced installation costs. In addition, the costs associated with each type of installation method and with the design, development, and fabrication of new installation hardware were generated.

  16. Batteries for autonomous renewable energy systems

    NASA Astrophysics Data System (ADS)

    Sheridan, Norman R.

    Now that the Coconut Island plant has been running successfully for three years, it is appropriate to review the design decisions that were made with regard to the battery and to consider how these might be changed for future systems. The following aspects are discussed: type, package, energy storage, voltage, parallel operation, installation, charging, watering, life and quality assurance.

  17. 75 FR 42820 - Notice of Availability of a Final Environmental Assessment (Final EA) and a Finding of No...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-22

    ... include: demolition of approximately 6,435 feet of Airport Road; construction of approximately 6,405 feet of relocated Airport Road; installation of ILS components on the north end of Runway 20; construction of access roads and equipment shelter buildings; construction of the parallel taxiway/ramp expansion...

  18. 78 FR 67358 - Columbia Gas Transmission, LLC; Notice of Availability of the Environmental Assessment for the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    .... Columbia would also construct miscellaneous aboveground equipment including the installation of a pig launcher,\\2\\ pig receiver, mainline valve, and a gas heater. \\1\\ A pipeline loop is a segment of pipe constructed parallel to an existing pipeline to increase capacity. \\2\\ A ``pig'' is a tool that the pipeline...

  19. The JASMIN Analysis Platform - bridging the gap between traditional climate data practices and data-centric analysis paradigms

    NASA Astrophysics Data System (ADS)

    Pascoe, Stephen; Iwi, Alan; kershaw, philip; Stephens, Ag; Lawrence, Bryan

    2014-05-01

    The advent of large-scale data and the consequent analysis problems have led to two new challenges for the research community: how to share such data to get the maximum value, and how to carry out efficient analysis. Solving both challenges requires a form of parallelisation: the first is social parallelisation (involving trust and information sharing), the second data parallelisation (involving new algorithms and tools). The JASMIN infrastructure supports both kinds of parallelism by providing a multi-tenant environment with petabyte-scale storage, VM provisioning, and batch cluster facilities. The JASMIN Analysis Platform (JAP) is an analysis software layer for JASMIN which emphasises ease of transition from a researcher's local environment to JASMIN. JAP brings together tools traditionally used by multiple communities and configures them to work together, enabling users to move analysis from their local environment to JASMIN without rewriting code. JAP also provides facilities to exploit JASMIN's parallel capabilities whilst maintaining a familiar analysis environment wherever possible. Modern open-source analysis tools typically have many dependent packages, increasing the installation burden on system administrators. When one considers a suite of tools, often with both common and conflicting dependencies, analysis pipelines can become locked to a particular installation simply because of the effort required to reconstruct the dependency tree. JAP addresses this problem by providing a consistent suite of RPMs compatible with Red Hat Enterprise Linux and CentOS 6.4. Researchers can install JAP locally, either as RPMs or through a pre-built VM image, giving them confidence that moving analysis to JASMIN will not disrupt their environment. Analysis parallelisation is in its infancy in the climate sciences, with few tools capable of exploiting any parallel environment beyond manual scripting of the use of multiple processors. JAP begins to bridge this gap through a variety of higher-level tools for parallelisation and job scheduling, such as IPython-parallel and MPI support for interactive analysis languages. We find that enabling even simple parallelisation of workflows, together with the state-of-the-art I/O performance of JASMIN storage, provides many users with the large increases in efficiency they need to scale their analyses to contemporary data volumes and tackle new, previously inaccessible problems.

  20. Development and fabrication of the vacuum systems for an elliptically polarized undulator at Taiwan Photon Source

    NASA Astrophysics Data System (ADS)

    Chang, Chin-Chun; Chan, Che-Kai; Wu, Ling-Hui; Shueh, Chin; Shen, I.-Ching; Cheng, Chia-Mu; Yang, I.-Chen

    2017-05-01

    Three sets of vacuum systems were developed and fabricated for the elliptically polarized undulators (EPU) of a 3-GeV synchrotron facility. These chambers were shaped by low-roughness extrusion and oil-free machining; the design combines aluminium and stainless steel. A bimetallic material connecting the EPU chamber to the vacuum system achieves the vacuum sealing and resolves the leakage issue caused by the difference in thermal expansion induced by the bake process. The interior of the EPU chamber contains a non-evaporable-getter strip pump in a narrow space to absorb photon-stimulated desorption, and an RF bridge design at the two ends of the chamber to decrease the impedance effect. To fabricate these chambers and evaluate their performance, a computer simulation was performed to optimize the structure. During machining and welding, deformation was kept to a minimum, less than 0.1 mm over nearly 4 m. During installation, a linear slider provides stable and precise movement parallel to the electron beam direction, allowing the EPU chamber to be moved smoothly and reducing twisting during the baking process. The pressure of the EPU chamber reached less than 2×10⁻⁸ Pa after baking. These vacuum systems for the EPU magnets were installed in the electron storage ring of the Taiwan Photon Source in May 2015 and have operated normally at 300 mA continuously since, with a beam lifetime of over 12 h.

  1. Scale factor measure method without turntable for angular rate gyroscope

    NASA Astrophysics Data System (ADS)

    Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua

    2018-03-01

In this paper, a scale-factor test method that requires no turntable is designed for angular rate gyroscopes. The test system consists of a test device, a data acquisition circuit, and data processing software based on the LabVIEW platform. Taking advantage of a gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with the gyroscope under test. By rocking the test device about the edge that is parallel to the gyroscopes' input axes, the scale factor of the gyroscope under test is obtained in real time by the data processing software. The method is fast and keeps the test system small and easy to carry or move. Repeated measurements of a quartz MEMS gyroscope's scale factor with this method differ by less than 0.2%, and they differ from turntable measurements by less than 1%, indicating good accuracy and repeatability of the test system.
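Since both gyroscopes see the same angular rate, the unknown scale factor follows from the ratio of their outputs; a least-squares slope over many paired samples averages out noise. The sketch below is one hedged reading of that idea, with illustrative variable names and values, not the paper's data.

```python
# Co-mounted reference-gyro sketch: the measured gyro's scale factor is the
# known scale factor of the standard gyro times the output ratio, estimated
# here as a least-squares slope through the origin.

def scale_factor(std_out, meas_out, k_std):
    # Least-squares slope of measured output vs. standard output,
    # scaled by the known scale factor of the standard gyroscope.
    num = sum(s * m for s, m in zip(std_out, meas_out))
    den = sum(s * s for s in std_out)
    return k_std * num / den

std = [0.0, 1.0, 2.0, 3.0]   # standard gyro output (arbitrary units)
meas = [0.0, 2.0, 4.0, 6.0]  # measured gyro output (arbitrary units)
print(scale_factor(std, meas, k_std=0.5))  # 1.0
```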

  2. Fabrication and installation of the Solar Two central receiver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Litwin, R.Z.; Rogers, R.D.

The heart of the Solar Two power plant is the molten salt central receiver that has been designed, fabricated, and installed over an 18 month schedule. During this time, the receiver system from Solar One was also completely disassembled and removed. The receiver tower structure, for the most part, was left intact because Solar Two was designed to fit this structure such that construction time and costs could be minimized. In order to meet this aggressive schedule, receiver panel fabrication required the parallel production of many components. The sequence for assembly of the four major receiver panel components (i.e., tubes, header assembly, strongback, and header oven covers) and key fabrication activities such as welding are described. Once the receiver panels were complete, their installation at the site was begun, and the order in which receiver system components were installed in the tower is described. The completion of the Solar Two receiver proved the fabricability of this important system. However, successful operation of the system at Solar Two is needed to demonstrate the technical feasibility of the molten salt central receiver concept.

  3. The improved broadband Real-Time Seismic Network in Romania

    NASA Astrophysics Data System (ADS)

    Neagoe, C.; Ionescu, C.

    2009-04-01

Starting in 2002, the National Institute for Earth Physics (NIEP) has developed its real-time digital seismic network. The network consists of 96 seismic stations, of which 48 broadband and short-period stations and two seismic arrays transmit in real time. The real-time seismic stations are equipped with Quanterra Q330 and K2 digitizers, broadband seismometers (STS2, CMG40T, CMG 3ESP, CMG3T), and Kinemetrics EpiSensor strong-motion sensors (+/- 2g). The SeedLink and Antelope (installed on MARMOT) program packages are used for real-time (RT) data acquisition and exchange. Communication from the digital seismic stations to the National Data Center in Bucharest is provided by 5 carriers (GPRS, VPN, satellite communication, leased radio line, and internet), which also serve as back-up communication lines. The processing centre runs BRTT's Antelope 4.10 data acquisition and processing software on 2 workstations for real-time processing and post-processing. The Antelope Real-Time System also provides automatic event detection, arrival picking, event location, and magnitude calculation, with graphical display and reporting in near real time after a local or regional event occurs. A system to collect macroseismic information over the internet was also implemented at the data center, from which macroseismic intensity maps are generated. In the near future, SeisComP3 data acquisition and processing software will be installed on a workstation at the data center and will run in parallel with the Antelope software as a back-up. The present network will also be expanded: in the first half of 2009, NIEP will install 8 additional broadband stations on Romanian territory, which will likewise transmit to the data center in real time. The Romanian Seismic Network permanently exchanges real-time waveform data with IRIS, ORFEUS, and different European countries through the internet.
In Romania, the magnitude and location of an earthquake are now available within a few minutes after it occurs. One of the greatest challenges in the near future is to provide shaking intensity maps and other ground-motion parameters within 5 minutes post-event, on the internet and in GIS-based format, in order to improve emergency response, public information, preparedness, and hazard mitigation.

  4. Some thoughts about parallel process and psychotherapy supervision: when is a parallel just a parallel?

    PubMed

    Watkins, C Edward

    2012-09-01

In a way not done before, Tracey, Bludworth, and Glidden-Tracey ("Are there parallel processes in psychotherapy supervision: An empirical examination," Psychotherapy, 2011, advance online publication, doi:10.1037/a0026246) have shown us that parallel process in psychotherapy supervision can indeed be rigorously and meaningfully researched, and their groundbreaking investigation provides a nice prototype for future supervision studies to emulate. In what follows, I offer a brief complementary comment to Tracey et al., addressing one matter that seems to be a potentially important conceptual and empirical parallel process consideration: When is a parallel just a parallel? PsycINFO Database Record (c) 2012 APA, all rights reserved.

  5. Seeing the forest for the trees: Networked workstations as a parallel processing computer

    NASA Technical Reports Server (NTRS)

    Breen, J. O.; Meleedy, D. M.

    1992-01-01

Unlike traditional 'serial' processing computers, in which one central processing unit performs one instruction at a time, parallel processing computers contain several processing units, thereby performing several instructions at once. Many of today's fastest supercomputers achieve their speed by employing thousands of processing elements working in parallel. Few institutions can afford these state-of-the-art parallel processors, but many already have the makings of a modest parallel processing system. Workstations on existing high-speed networks can be harnessed as nodes in a parallel processing environment, bringing the benefits of parallel processing to many. While such a system cannot rival the industry's latest machines, many common tasks can be accelerated greatly by spreading the processing burden and exploiting idle network resources. We study several aspects of this approach, from algorithms for selecting nodes to speed gains in specific tasks. With ever-increasing volumes of astronomical data, it becomes all the more necessary to utilize our computing resources fully.

  6. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  7. An experimental study of the noise generating mechanisms in supersonic jets

    NASA Technical Reports Server (NTRS)

    Mclaughlin, D. K.

    1979-01-01

Flow fluctuation measurements with normal and X-wire hot-wire probes and acoustic measurements with a traversing condenser microphone were carried out in small air jets in the Mach number range from M = 0.9 to 2.5. One of the most successful studies involved a moderate Reynolds number M = 2.1 jet, for which the large-scale turbulence properties and the noise radiation were characterized. A parallel study involved similar measurements on a low Reynolds number M = 0.9 jet. These measurements show that there are important differences in the noise generation process of the M = 0.9 jet in comparison with low supersonic Mach number (M = 1.4) jets. Problems encountered while performing X-wire measurements in low Reynolds number jets at M = 2.1 and 2.5, and in installing a vacuum pump, are discussed.

  8. The Tera Multithreaded Architecture and Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Bokhari, Shahid H.; Mavriplis, Dimitri J.

    1998-01-01

    The Tera Multithreaded Architecture (MTA) is a new parallel supercomputer currently being installed at San Diego Supercomputing Center (SDSC). This machine has an architecture quite different from contemporary parallel machines. The computational processor is a custom design and the machine uses hardware to support very fine grained multithreading. The main memory is shared, hardware randomized and flat. These features make the machine highly suited to the execution of unstructured mesh problems, which are difficult to parallelize on other architectures. We report the results of a study carried out during July-August 1998 to evaluate the execution of EUL3D, a code that solves the Euler equations on an unstructured mesh, on the 2 processor Tera MTA at SDSC. Our investigation shows that parallelization of an unstructured code is extremely easy on the Tera. We were able to get an existing parallel code (designed for a shared memory machine), running on the Tera by changing only the compiler directives. Furthermore, a serial version of this code was compiled to run in parallel on the Tera by judicious use of directives to invoke the "full/empty" tag bits of the machine to obtain synchronization. This version achieves 212 and 406 Mflop/s on one and two processors respectively, and requires no attention to partitioning or placement of data issues that would be of paramount importance in other parallel architectures.

  9. Precipitation measurements on wind-swept slopes

    Treesearch

    Austin E. Helmers

    1954-01-01

    Precipitation catch for three calendar years is compared for four types of gage installation on a wind-swept south-facing slope with a 22° gradient at elevation 5500 ft. The 1950 precipitation catch by (1) weighing-recording gage with the orifice and an Alter type wind shield sloped parallel to the ground surface, (2) unshielded nonrecording gage with orifice sloped...

  10. The response and recovery of coastal beach-dune systems to storms

    NASA Astrophysics Data System (ADS)

    Farrell, Eugene; Lynch, Kevin; Wilkes Orozco, Sinead; Castro Camba, Guillermo

    2017-04-01

This two-year field monitoring project examines the response and recovery of a coastal beach-dune system on the west coast of Ireland (The Maharees, Co. Kerry) to storms. Historic analyses were completed using maps, aerial photography, and DGPS surveys with the Digital Shoreline Analysis System. The results establish that the average shoreline recession along the 1.2 km site has been 72 m during the past 115 years. The coastal monitoring experiment aims to link micro-scale aeolian processes and meso-scale beach-dune behaviour to identify and quantify sediment exchange between the beach and dune under different meteorological and hydrodynamic conditions. Geomorphological changes on the beach and near-shore bar migration were monitored using repeated monthly DGPS surveys and drone technology. Topographical data were correlated with atmospheric data from a locally installed Campbell Scientific automatic weather station, oceanographic data from secondary sources, and photogrammetry from a camera installed at the site that collects pictures every 10 minutes during daylight hours. Changes in surface elevation at the top of the foredune caused by aeolian processes are measured using erosion-pin transects. The preliminary results illustrate that natural beach-building processes, including elevated foreshores and backshores and nearshore sand-bar migration, initiate system recovery after storms across the entire 1.2 km stretch of coastline. In parallel with the scientific work, the local community has mobilized and is working closely with the lead scientists to implement short-term coastal management strategies such as signage, information booklets, sand-trap fencing, walkways, wooden revetments, and dune planting, in order to support the end goal of obtaining government financial support for a larger, long-term coastal protection plan.

  11. The source of dual-task limitations: Serial or parallel processing of multiple response selections?

    PubMed Central

    Marois, René

    2014-01-01

    Although it is generally recognized that the concurrent performance of two tasks incurs costs, the sources of these dual-task costs remain controversial. The serial bottleneck model suggests that serial postponement of task performance in dual-task conditions results from a central stage of response selection that can only process one task at a time. Cognitive-control models, by contrast, propose that multiple response selections can proceed in parallel, but that serial processing of task performance is predominantly adopted because its processing efficiency is higher than that of parallel processing. In the present study, we empirically tested this proposition by examining whether parallel processing would occur when it was more efficient and financially rewarded. The results indicated that even when parallel processing was more efficient and was incentivized by financial reward, participants still failed to process tasks in parallel. We conclude that central information processing is limited by a serial bottleneck. PMID:23864266

  12. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  13. Containment of groundwater contamination plumes: minimizing drawdown by aligning capture wells parallel to regional flow

    NASA Astrophysics Data System (ADS)

    Christ, John A.; Goltz, Mark N.

    2004-01-01

    Pump-and-treat systems that are installed to contain contaminated groundwater migration typically involve placement of extraction wells perpendicular to the regional groundwater flow direction at the down gradient edge of a contaminant plume. These wells capture contaminated water for above ground treatment and disposal, thereby preventing further migration of contaminated water down gradient. In this work, examining two-, three-, and four-well systems, we compare well configurations that are parallel and perpendicular to the regional groundwater flow direction. We show that orienting extraction wells co-linearly, parallel to regional flow, results in (1) a larger area of aquifer influenced by the wells at a given total well flow rate, (2) a center and ultimate capture zone width equal to the perpendicular configuration, and (3) more flexibility with regard to minimizing drawdown. Although not suited for some scenarios, we found orienting extraction wells parallel to regional flow along a plume centerline, when compared to a perpendicular configuration, reduces drawdown by up to 7% and minimizes the fraction of uncontaminated water captured.
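The drawdown comparison between configurations rests on superposing single-well solutions. Below is a minimal sketch using the steady-state Thiem equation for confined-aquifer wells; the radius of influence, pumping rates, transmissivity, and coordinates are illustrative assumptions, not the paper's values.

```python
# Superposition of steady-state Thiem drawdowns from multiple wells:
# s = Q / (2*pi*T) * ln(R / r), summed over all wells.
import math

def drawdown(point, wells, Q, T, R=500.0):
    # Total drawdown at `point` (m) from wells each pumping at rate Q
    # (m^3/d), with transmissivity T (m^2/d) and radius of influence R (m).
    s = 0.0
    for wx, wy in wells:
        r = math.hypot(point[0] - wx, point[1] - wy)
        s += Q / (2 * math.pi * T) * math.log(R / r)
    return s

# Two wells aligned parallel to regional flow (x-axis) vs. perpendicular:
parallel = [(-50.0, 0.0), (50.0, 0.0)]
perpendicular = [(0.0, -50.0), (0.0, 50.0)]
print(round(drawdown((0.0, 100.0), parallel, Q=200.0, T=100.0), 3))  # 0.954
```

Evaluating such sums over a grid of points is how the influence areas and drawdowns of the two-, three-, and four-well layouts can be compared.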

  14. 78 FR 53702 - User Fees for Processing Installment Agreements and Offers in Compromise

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-30

    ... User Fees for Processing Installment Agreements and Offers in Compromise AGENCY: Internal Revenue... document contains proposed amendments to the regulations that provide user fees for installment agreements... agencies to prescribe regulations that establish charges for services provided by the agencies (user fees...

  15. P43-S Computational Biology Applications Suite for High-Performance Computing (BioHPC.net)

    PubMed Central

    Pillardy, J.

    2007-01-01

    One of the challenges of high-performance computing (HPC) is user accessibility. At the Cornell University Computational Biology Service Unit, which is also a Microsoft HPC institute, we have developed a computational biology application suite that allows researchers from biological laboratories to submit their jobs to the parallel cluster through an easy-to-use Web interface. Through this system, we are providing users with popular bioinformatics tools including BLAST, HMMER, InterproScan, and MrBayes. The system is flexible and can be easily customized to include other software. It is also scalable; the installation on our servers currently processes approximately 8500 job submissions per year, many of them requiring massively parallel computations. It also has a built-in user management system, which can limit software and/or database access to specified users. TAIR, the major database of the plant model organism Arabidopsis, and SGN, the international tomato genome database, are both using our system for storage and data analysis. The system consists of a Web server running the interface (ASP.NET C#), Microsoft SQL server (ADO.NET), compute cluster running Microsoft Windows, ftp server, and file server. Users can interact with their jobs and data via a Web browser, ftp, or e-mail. The interface is accessible at http://cbsuapps.tc.cornell.edu/.

  16. Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.

    1997-12-01

Imaging applications such as filtering, image transforms, and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to another, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify, at a high level of abstraction, the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP makes it possible to efficiently combine parallel storage access routines with sequential image processing operations. This paper shows how processing- and I/O-intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper analyzes theoretically the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
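The overlap of data access and processing that CAP generates can be illustrated with a hand-written two-stage pipeline connected by a bounded queue; this is a generic sketch of the pipelining idea, not CAP itself, and the toy "read" and "process" steps are illustrative.

```python
# Two-stage pipeline: an I/O stage feeds a processing stage through a
# bounded queue, so reading the next tile overlaps processing the current one.
import queue
import threading

def reader(tiles, q):
    # I/O stage: fetch tiles and hand them to the processing stage.
    for tile in tiles:
        q.put(tile)
    q.put(None)  # sentinel: no more tiles

def pipeline(tiles):
    q = queue.Queue(maxsize=2)  # bounded queue keeps both stages busy
    t = threading.Thread(target=reader, args=(tiles, q))
    t.start()
    results = []
    while (tile := q.get()) is not None:
        results.append(tile * 2)  # processing stage (toy operation)
    t.join()
    return results

print(pipeline([1, 2, 3]))  # [2, 4, 6]
```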

  17. Making a splash!

    PubMed

    Weston, David

    2012-05-01

    The installation of a birthing pool can be a costly and time consuming process. This article provides some practical tips for making the installation run as smoothly as possible, saving work--and money--in the process. This article gives some advice as to what needs to be considered before you go ahead with installing a pool.

  18. The QCDSP project —a status report

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Chen, Ping; Christ, Norman; Edwards, Robert; Fleming, George; Gara, Alan; Hansen, Sten; Jung, Chulwoo; Kaehler, Adrian; Kasow, Steven; Kennedy, Anthony; Kilcup, Gregory; Luo, Yubin; Malureanu, Catalin; Mawhinney, Robert; Parsons, John; Sexton, James; Sui, Chengzhong; Vranas, Pavlos

    1998-01-01

    We give a brief overview of the massively parallel computer project underway for nearly the past four years, centered at Columbia University. A 6 Gflops and a 50 Gflops machine are presently being debugged for installation at OSU and SCRI respectively, while a 0.4 Tflops machine is under construction for Columbia and a 0.6 Tflops machine is planned for the new RIKEN Brookhaven Research Center.

  19. The Goddard Space Flight Center Program to develop parallel image processing systems

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1972-01-01

    Parallel image processing which is defined as image processing where all points of an image are operated upon simultaneously is discussed. Coherent optical, noncoherent optical, and electronic methods are considered parallel image processing techniques.

  20. Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.

    PubMed

    Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E

    2016-01-19

    Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system.

  1. Thread concept for automatic task parallelization in image analysis

    NASA Astrophysics Data System (ADS)

    Lueckenhaus, Maximilian; Eckstein, Wolfgang

    1998-09-01

Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context, but follow different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of an automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs while taking into account the available hardware. The tests made with our system prototype show that the thread concept combined with the agent paradigm is suitable for speeding up image processing by automatic parallelization of image analysis tasks.
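As a rough data-parallel rendition of the thread concept described (not the authors' implementation), threads sharing one context can each process a different band of an image; the tiny 8-bit "image" and the threshold operation are illustrative.

```python
# Data-parallel image analysis: one subtask spawns threads that share
# context but each threshold a different slice of the image rows.
from concurrent.futures import ThreadPoolExecutor

def threshold_rows(rows, cutoff=128):
    # One thread of execution: threshold its share of the image rows.
    return [[255 if px >= cutoff else 0 for px in row] for row in rows]

def parallel_threshold(image, n_threads=2):
    # Split the image into row bands and process the bands concurrently.
    step = max(1, len(image) // n_threads)
    bands = [image[i:i + step] for i in range(0, len(image), step)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        out = []
        for band in pool.map(threshold_rows, bands):
            out.extend(band)
        return out

img = [[10, 200], [130, 90]]
print(parallel_threshold(img))  # [[0, 255], [255, 0]]
```

An automatic parallelizer, as sketched in the abstract, would choose the band split and thread count from the available hardware rather than hard-coding them.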

  2. Technique for Determination of Rational Boundaries in Combining Construction and Installation Processes Based on Quantitative Estimation of Technological Connections

    NASA Astrophysics Data System (ADS)

    Gusev, E. V.; Mukhametzyanov, Z. R.; Razyapov, R. V.

    2017-11-01

The limitations of existing methods for determining which technologically interlinked construction processes and activities can be combined are considered under the modern conditions of constructing various facilities. The need to identify common parameters that characterize the nature of the interaction among all technologically related construction and installation processes and activities is shown. Construction and installation technologies for buildings and structures are investigated with the goal of determining a common parameter for evaluating the relationship between technologically interconnected processes and construction works. The result of this research is a quantitative evaluation of the interaction of construction and installation processes and activities: the minimum technologically necessary volume of a preceding process that allows one to plan and organize the execution of a subsequent, technologically interconnected process. This quantitative evaluation serves as the basis for calculating the optimum range over which processes and activities can be combined; the calculation method is based on graph theory. The authors applied a generic characterization parameter to reveal the technological links between construction and installation processes, and the proposed technique has adaptive properties that are key to its wide use in forming organizational decisions. The article describes the practical significance of the developed technique.

  3. In-Situ atomic force microscopic observation of ion beam bombarded plant cell envelopes

    NASA Astrophysics Data System (ADS)

    Sangyuenyongpipat, S.; Yu, L. D.; Brown, I. G.; Seprom, C.; Vilaithong, T.

    2007-04-01

A program in ion beam bioengineering has been established at Chiang Mai University (CMU), Thailand, and ion beam induced transfer of plasmid DNA molecules into bacterial cells (Escherichia coli) has been demonstrated. However, a good understanding of the fundamental physical processes involved is lacking. In parallel work, onion skin cells have been bombarded with Ar+ ions at an energy of 25 keV and a fluence of 1-2 × 10^15 ions/cm^2, revealing the formation of microcrater-like structures on the cell wall that could serve as channels for the transfer of large macromolecules into the cell interior. An in-situ atomic force microscope (AFM) system has been designed and installed in the CMU bio-implantation facility as a tool for the observation of these microcraters during ion beam bombardment. Here we describe some of the features of the in-situ AFM and outline some of the related work.

  4. Studies in optical parallel processing. [All optical and electro-optic approaches

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1978-01-01

Threshold and A/D devices for converting a gray-scale image into a binary one were investigated for all-optical and opto-electronic approaches to parallel processing. Integrated optical logic circuits (IOC) and optical parallel logic devices (OPAL) were studied as an approach to processing optical binary signals. In the IOC logic scheme, a single row of an optical image is coupled into the IOC substrate at a time through an array of optical fibers. Parallel processing is carried out on each image element of these rows in the IOC substrate, and the resulting output exits via a second array of optical fibers. The OPAL system for parallel processing, which uses a Fabry-Perot interferometer for image thresholding and analog-to-digital conversion, achieves a higher degree of parallel processing than is possible with IOC.

  5. Parallel workflow tools to facilitate human brain MRI post-processing

    PubMed Central

    Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang

    2015-01-01

    Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043

  6. Byers Auto Group: A Case Study Into The Economics, Zoning, and Overall Process of Installing Small Wind Turbines at Two Automotive Dealerships in Ohio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oteri, F.; Sinclair, K.

This paper presents a case study of the installation of a $600,000 small wind project, covering the installation process, the estimated annual energy production, and the percentage of energy needs met by the turbines.

  7. Byers Auto Group: A Case Study Into The Economics, Zoning, and Overall Process of Installing Small Wind Turbines at Two Automotive Dealerships in Ohio (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinclair, K.; Oteri, F.

This presentation provides the talking points for a case study on the installation of a $600,000 small wind project, covering the installation process, the estimated annual energy production, and the percentage of energy needs met by the turbines.

  8. Geometric parameters determination of the installation for oil-contaminated soils decontamination in Russia, the Siberian region and the Arctic zones climatic conditions with reagent encapsulating

    NASA Astrophysics Data System (ADS)

    Shtripling, L. O.; Kholkin, E. G.

    2018-01-01

The article presents a procedure for determining the basic geometric parameters of an installation for decontaminating oil-contaminated soils by the reagent-encapsulation method. The installation is intended for the rapid elimination of the consequences of emergencies involving oil spills and is adapted to winter conditions. In the installation, the thermal energy released by the exothermic chemical neutralization of oil-contaminated soil during decontamination is used to thaw subsequent frozen portions of contaminated soil. Compared with other units, this installation has an important advantage: when necessary (e.g., in winter), it uses the heat energy released at each stage of the decontamination process, whereas under normal conditions this heat is dissipated into the environment. In addition, briefly forcing carbon dioxide to a high concentration directly into the installation at the final stage of decontamination replaces the long process of microcapsule-shell formation and hardening that would otherwise occur in the open air under natural conditions.

  9. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
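
The block-size rule quoted in the abstract (total data divided by the number of parallel processes) can be sketched as follows. This is a minimal illustration, not the patented method: the function name and the surplus/deficit representation are assumptions, and a real implementation would perform the actual data exchange between processes.

```python
def plan_blocks(process_sizes):
    """Sketch of the dynamic block-size rule: each process should end up
    holding total_bytes / n_processes. Returns the block size and, per
    process, the surplus (positive: bytes to send away) or deficit
    (negative: bytes to receive) relative to that block size."""
    total = sum(process_sizes)
    n = len(process_sizes)
    block = total // n  # dynamically determined block size (assumes even split)
    return block, [size - block for size in process_sizes]

block, deltas = plan_blocks([700, 300, 500, 500])
# block == 500; deltas == [200, -200, 0, 0]
```

In a real system the positive and negative deltas would be matched up so each process exchanges exactly the amount needed to reach one full block before writing it to the file system.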

  10. Testing an Open Source installation and server provisioning tool for the INFN CNAF Tier1 Storage system

    NASA Astrophysics Data System (ADS)

    Pezzi, M.; Favaro, M.; Gregori, D.; Ricci, P. P.; Sapunenko, V.

    2014-06-01

    In large computing centers such as the INFN CNAF Tier1 [1], it is essential to be able to configure all machines automatically according to their role. For several years the Tier1 has used Quattor [2], a server provisioning tool that is still in production. Nevertheless, we have recently started a comparison study of other tools that provide specific server installation and configuration features and could offer a fully customizable alternative to Quattor. Our current choice is an integration of two tools: Cobbler [3] for the installation phase and Puppet [4] for server provisioning and management. To replicate and gradually improve on the current system, the tools should provide the following features: checks for storage-specific constraints, such as a kernel-module blacklist at boot time to avoid undesired SAN (Storage Area Network) access during disk partitioning; a simple and effective mechanism for kernel upgrade and downgrade; the ability to set the package provider using yum, rpm or apt; easy-to-use virtual machine installation support, including bonding and specific Ethernet configurations; and scalability for managing thousands of nodes and parallel installations. This paper describes the results of the comparison and the tests carried out to verify the requirements and the suitability of the new system in the INFN-T1 environment.

  11. MetaMeta: integrating metagenome analysis tools to improve taxonomic profiling.

    PubMed

    Piro, Vitor C; Matschkowski, Marcel; Renard, Bernhard Y

    2017-08-14

    Many metagenome analysis tools are presently available to classify sequences and profile environmental samples; taxonomic profiling and binning methods are commonly used for such tasks. Tools in these two categories rely on several techniques, e.g., read mapping, k-mer alignment, and composition analysis, and also vary in how they construct their reference sequence databases. In addition, different tools give good results on different datasets and configurations. All this variation makes it complicated for researchers to decide which methods to use, and installation, configuration and execution can also be difficult, especially when dealing with multiple datasets and tools. We propose MetaMeta: a pipeline to execute and integrate results from metagenome analysis tools. MetaMeta provides an easy workflow to run multiple tools on multiple samples, producing a single enhanced output profile for each sample. It includes database generation, pre-processing, execution, and integration steps, allowing easy execution and parallelization. The integration relies on the co-occurrence of organisms across methods as the main feature to improve community profiling while accounting for differences between their databases. In a controlled case with simulated and real data, we show that the integrated profiles of MetaMeta outperform the best single profile. Given the same input data, it provides more sensitive and reliable results, with the presence of each organism supported by several methods. MetaMeta uses Snakemake and has six pre-configured tools, all available at the BioConda channel for easy installation (conda install -c bioconda metameta). The MetaMeta pipeline is open-source and can be downloaded at: https://gitlab.com/rki_bioinformatics .
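
The co-occurrence integration described in the abstract can be sketched as follows. This is a hypothetical simplification (the function, the minimum-support rule and the averaging are assumptions, not MetaMeta's actual scoring): organisms reported by several tools are kept, those seen by only one are dropped.

```python
from collections import defaultdict

def integrate_profiles(profiles, min_support=2):
    """Hypothetical sketch of co-occurrence integration: keep organisms
    reported by at least `min_support` tools, averaging their abundances."""
    support = defaultdict(list)
    for tool_profile in profiles:
        for organism, abundance in tool_profile.items():
            support[organism].append(abundance)
    return {org: sum(a) / len(a)
            for org, a in support.items() if len(a) >= min_support}

merged = integrate_profiles([
    {"E. coli": 0.6, "B. subtilis": 0.4},   # tool 1
    {"E. coli": 0.5, "S. aureus": 0.5},     # tool 2
    {"E. coli": 0.7, "B. subtilis": 0.3},   # tool 3
])
# "E. coli" is supported by 3 tools, "B. subtilis" by 2; "S. aureus" is dropped
```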

  12. Efficient multitasking: parallel versus serial processing of multiple tasks

    PubMed Central

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling. PMID:26441742

  13. Efficient multitasking: parallel versus serial processing of multiple tasks.

    PubMed

    Fischer, Rico; Plessow, Franziska

    2015-01-01

    In the context of performance optimizations in multitasking, a central debate has unfolded in multitasking research around whether cognitive processes related to different tasks proceed only sequentially (one at a time), or can operate in parallel (simultaneously). This review features a discussion of theoretical considerations and empirical evidence regarding parallel versus serial task processing in multitasking. In addition, we highlight how methodological differences and theoretical conceptions determine the extent to which parallel processing in multitasking can be detected, to guide their employment in future research. Parallel and serial processing of multiple tasks are not mutually exclusive. Therefore, questions focusing exclusively on either task-processing mode are too simplified. We review empirical evidence and demonstrate that shifting between more parallel and more serial task processing critically depends on the conditions under which multiple tasks are performed. We conclude that efficient multitasking is reflected by the ability of individuals to adjust multitasking performance to environmental demands by flexibly shifting between different processing strategies of multiple task-component scheduling.

  14. High stability integrated Tri-axial fluxgate sensor with suspended technology

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Teng, Yuntian; Wang, Xiaomei; Fan, Xiaoyong; Wu, Qiong

    2017-04-01

    The relative geomagnetic recording of the Geomagnetic Network of China (GNC) was digitized and networked during the upgrades of the 9th and 10th five-year plans, achieving one-second data acquisition and storage. Currently, relative recording at geomagnetic observatories is generally done with two sets of the same type of instrument observing in parallel, which makes it possible to distinguish instrument failures from environmental interference and ensures the continuity and integrity of the observation data. The fluxgate magnetometer has become the mainstream instrument for relative geomagnetic recording because of its low noise, high sensitivity and fast response. However, analysis of several years of observation data has revealed inconsistencies between instruments of the same type at the same station. Extensive experiments have identified three main error sources: (1) instrument performance, since limitations of the manufacturing and assembly process make it difficult to guarantee the orthogonality of the sensor axes, in addition to scale factor, zero offset and temperature coefficient; (2) leveling error, introduced by the horizontal adjustment during initial installation and by pillar tilting over long-term observation; and (3) the observation environment, including temperature, humidity and the power supply system. The new fluxgate magnetometer uses a special non-magnetic gimbal (made of beryllium bronze) as a suspension: the fluxgate sensor is fixed on a suspended platform so that it automatically remains level. This design eliminates the leveling error introduced by the initial horizontal adjustment and by long-term pillar tilting. The signal-processing circuit board is fixed above the suspended platform at a distance chosen so that the static and dynamic magnetic fields produced by the board do not affect the sensor, while keeping the signal transmission cable short enough to avoid signal attenuation.

  15. User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.

    2000-01-01

    PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex sparse linear systems. PCSMS converts complex matrices into real matrices and uses real sparse direct solvers to factor and solve them; the solution vector is then converted back to complex numbers. Although this utility is written for the Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can easily be modified to work with any real sparse matrix solver. The User's Manual acquaints the user with the installation and operation of the code, and driver routines are provided to help users integrate the PCSMS routines into their own codes.
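
The complex-to-real conversion the abstract describes is commonly done with the standard real embedding: the system (A + iB)(x + iy) = (b + ic) becomes the real 2n x 2n system [[A, -B], [B, A]] [x; y] = [b; c]. The sketch below (dense, naive Gaussian elimination, purely for illustration; PCSMS itself uses sparse direct solvers) shows the round trip from complex to real and back.

```python
def solve_real(M, rhs):
    """Naive Gaussian elimination with partial pivoting (illustration only)."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def solve_complex(Ar, Ai, br, bi):
    """Solve (Ar + i*Ai) z = (br + i*bi) via the real embedding
    [[Ar, -Ai], [Ai, Ar]] [x; y] = [br; bi], then recombine z = x + i*y."""
    n = len(Ar)
    M = [Ar[r] + [-v for v in Ai[r]] for r in range(n)] + \
        [Ai[r] + Ar[r] for r in range(n)]
    sol = solve_real(M, br + bi)
    return [complex(sol[k], sol[n + k]) for k in range(n)]

# (1 + 1j) * z = 2  ->  z = 1 - 1j
z = solve_complex([[1.0]], [[1.0]], [2.0], [0.0])
```

The real system is four times larger in storage, which is the usual trade-off of this embedding versus a native complex solver.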

  16. Tentacle: distributed quantification of genes in metagenomes.

    PubMed

    Boulund, Fredrik; Sjögren, Anders; Kristiansson, Erik

    2015-01-01

    In metagenomics, microbial communities are sequenced at increasingly high resolution, generating datasets with billions of DNA fragments. Novel methods that can efficiently process the growing volumes of sequence data are necessary for the accurate analysis and interpretation of existing and upcoming metagenomes. Here we present Tentacle, which is a novel framework that uses distributed computational resources for gene quantification in metagenomes. Tentacle is implemented using a dynamic master-worker approach in which DNA fragments are streamed via a network and processed in parallel on worker nodes. Tentacle is modular, extensible, and comes with support for six commonly used sequence aligners. It is easy to adapt Tentacle to different applications in metagenomics and easy to integrate into existing workflows. Evaluations show that Tentacle scales very well with increasing computing resources. We illustrate the versatility of Tentacle on three different use cases. Tentacle is written for Linux in Python 2.7 and is published as open source under the GNU General Public License (v3). Documentation, tutorials, installation instructions, and the source code are freely available online at: http://bioinformatics.math.chalmers.se/tentacle.
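
The dynamic master-worker pattern described above can be sketched with a work queue. This toy version is an assumption-laden simplification (Tentacle streams fragments over a network and calls real aligners; here the "alignment" is a trivial substring match and all names are hypothetical), but the control flow is the same: the master streams fragments, workers quantify genes in parallel.

```python
import queue
import threading

def run_master_worker(fragments, n_workers=4):
    """Toy master-worker sketch: the master streams DNA fragments into a
    queue and workers count gene hits in parallel. The alignment step is
    replaced by a trivial substring match (assumption)."""
    genes = {"geneA": "ACGT", "geneB": "TTGG"}  # hypothetical reference motifs
    tasks = queue.Queue()
    counts = {g: 0 for g in genes}
    lock = threading.Lock()

    def worker():
        while True:
            frag = tasks.get()
            if frag is None:          # poison pill: master signals completion
                break
            for gene, motif in genes.items():
                if motif in frag:
                    with lock:
                        counts[gene] += 1

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for frag in fragments:            # master streams fragments to the workers
        tasks.put(frag)
    for _ in threads:
        tasks.put(None)
    for t in threads:
        t.join()
    return counts

counts = run_master_worker(["ACGTAA", "CCTTGG", "ACGTTGG", "GGGG"])
# "ACGT" occurs in fragments 1 and 3, "TTGG" in fragments 2 and 3
```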

  17. A small-angle x-ray scattering system with a vertical layout.

    PubMed

    Wang, Zhen; Chen, Xiaowei; Meng, Lingpu; Cui, Kunpeng; Wu, Lihui; Li, Liangbin

    2014-12-01

    A small-angle x-ray scattering (SAXS) system with a vertical layout (V-SAXS) has been designed and constructed for in situ detection of nanostructures; it is well suited to in situ studies of the self-assembly of nanoparticles at liquid interfaces and of polymer processing. A steel tower frame on a reinforced basement serves as the supporting skeleton for the scattering beam path and detector platform, giving the system high working stability and operating accuracy. A micro-focus x-ray source, combining a parabolic three-dimensional multilayer mirror with a scatterless collimation system, provides a highly parallel beam that gives access to very small scattering angles. With a sample-to-detector distance of 7 m, the largest measurable length scale is 420 nm in real space. The large sample zone makes it possible to install experimental setups such as a film-stretching machine, so the system can follow the evolution of a material's microstructure during processing. The capability of the V-SAXS for in situ study was tested with a drying experiment on a free latex droplet, which confirmed the initial design.

  18. A Microarray Tool Provides Pathway and GO Term Analysis.

    PubMed

    Koch, Martin; Royer, Hans-Dieter; Wiese, Michael

    2011-12-01

    Analysis of gene expression profiles is no longer exclusively a task for bioinformatics experts. However, obtaining statistically significant results is challenging and requires both biological knowledge and computational know-how. Here we present a novel, user-friendly microarray reporting tool called maRt. The software provides access to bioinformatic resources, such as gene ontology terms and biological pathways, via the DAVID and BioMart web services. Results are summarized in structured HTML reports, each presenting a different layer of information; in these reports, contents from diverse sources are integrated and interlinked. To speed up processing, maRt takes advantage of the multi-core technology of modern desktop computers through parallel processing. Since the software is built on an RCP infrastructure, it may serve as a starting point for developers aiming to integrate novel R-based applications. The installer, documentation and various tutorials are available under the LGPL license on the website of our institute, http://www.pharma.uni-bonn.de/www/mart. This software is free for academic use. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Real-time digital data-acquisition system for determining load characteristics. Volume 2: Operating, programming and maintenance instructions

    NASA Astrophysics Data System (ADS)

    Podesto, B.; Lapointe, A.; Larose, G.; Robichaud, Y.; Vaillancourt, C.

    1981-03-01

    This volume covers the design and construction of a Real-Time Digital Data Acquisition System (RTDDAS) to be used in substations for on-site recording and preprocessing of load response data. The gathered data can be partially processed on site to compute apparent, active and reactive powers, voltage and current rms values, and instantaneous values of phase voltages and currents. This on-site processing capability allows rapid monitoring of the field data to ensure that the test setup is suitable. Production analysis of the field data is performed off-line on a central computer from data recorded on IBM-compatible dual-density (800/1600) magnetic tape. Parallel channels of data can be recorded at a variable rate of 480 to 9000 samples per second per channel. The RTDDAS is housed in a 9.1 m (30 ft) trailer that is shielded from electromagnetic interference and protected from switching surges by isolators. Information pertaining to installation, software operation and maintenance is presented.
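
The on-site preprocessing the abstract lists (rms values and apparent, active and reactive power from sampled waveforms) follows standard formulas: P is the mean of v(t)i(t), S = Vrms * Irms, and Q = sqrt(S^2 - P^2). A minimal sketch (function name and sampling setup are illustrative, not from the RTDDAS):

```python
import math

def load_characteristics(v_samples, i_samples):
    """Rms values and apparent/active/reactive power from simultaneous
    voltage and current samples over a whole number of cycles."""
    n = len(v_samples)
    v_rms = math.sqrt(sum(v * v for v in v_samples) / n)
    i_rms = math.sqrt(sum(i * i for i in i_samples) / n)
    p = sum(v * i for v, i in zip(v_samples, i_samples)) / n  # active power
    s = v_rms * i_rms                                         # apparent power
    q = math.sqrt(max(s * s - p * p, 0.0))                    # reactive power
    return v_rms, i_rms, s, p, q

# One cycle sampled at 8 points, current lagging voltage by 90 degrees:
ts = [2 * math.pi * k / 8 for k in range(8)]
v = [math.sin(t) for t in ts]
i = [math.sin(t - math.pi / 2) for t in ts]
v_rms, i_rms, s, p, q = load_characteristics(v, i)
# purely reactive load: p is ~0 while s and q are ~0.5
```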

  20. First Observations with the New Dual Sphere Superconducting Gravimeter Osg-073 at Metsähovi, Finland

    NASA Astrophysics Data System (ADS)

    Virtanen, H.; Raja-Halli, A.; Bilker-Koivula, M.; Naranen, J.; Ruotsalainen, H. E. O.

    2014-12-01

    The new dual sphere superconducting gravimeter (SG) OSG-073 was installed in the Metsähovi Geodetic Observatory in February 2014. Its two gravity sensors are side by side, not one on top of another as in most earlier dual sensor installations. One sensor is the standard iGrav™ SG, with a lightweight sphere (5 grams) which is nearly drift-free. The second sensor uses a heavy 20-gram sphere which gives ultra low noise and a much higher quality factor Q. We present time domain observations of the first months, and estimate drift rates after the initial exponential drift. We have determined the transfer functions. Calibration factors were obtained using parallel registrations with the FG5X-221 absolute gravimeter of the FGI. We show selected free oscillation spectra from the SG, and seismic data obtained at Metsähovi with the Nanometrics Trillium 120P broadband seismometer of the Institute of Seismology (University of Helsinki). The noise level of the data is then compared with the New Low Noise Model NLNM. The results with the dual sphere SG can be compared with parallel observations with the SG T020. This 20-year old instrument is situated in the same room at a distance of 2 metres from the dual-sphere SG.

  1. Parallel Vortex Body Interaction Enabled by Active Flow Control

    NASA Astrophysics Data System (ADS)

    Weingaertner, Andre; Tewes, Philipp; Little, Jesse

    2017-11-01

    An experimental study was conducted to explore the flow physics of parallel vortex body interaction between two NACA 0012 airfoils. Experiments were carried out at chord Reynolds numbers of 740,000. Initially, the leading airfoil was characterized without the target one being installed. Results are in good agreement with thin airfoil theory and data provided in the literature. Afterward, the leading airfoil was fixed at 18° incidence and the target airfoil was installed 6 chord lengths downstream. Plasma actuation (ns-DBD), originating close to the leading edge, was used to control vortex shedding from the leading airfoil at various frequencies (0.04

  2. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868

  3. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
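
The scheduling idea in the abstract (assign each GPU as many A-scans as its memory and throughput allow) can be sketched with simple ceiling arithmetic. This is a hypothetical illustration: the function name and the memory capacity figure are assumptions, not values from the paper.

```python
def plan_ascan_batches(n_ascans, n_gpus, gpu_mem_ascans):
    """Split A-scans evenly across GPUs, then cap each GPU's batch at what
    its memory can hold; return the per-GPU share, the batch size, and the
    number of passes each GPU must run."""
    per_gpu = -(-n_ascans // n_gpus)          # ceiling division
    batch = min(per_gpu, gpu_mem_ascans)      # largest batch that fits in memory
    passes = -(-per_gpu // batch)             # passes needed to cover the share
    return per_gpu, batch, passes

# 1000x1000 A-scans on 4 GPUs, each holding (say) 200,000 A-scans at once:
per_gpu, batch, passes = plan_ascan_batches(1000 * 1000, 4, 200_000)
# per_gpu == 250000, batch == 200000, passes == 2
```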

  4. Study on Parallel 2-DOF Rotation Mechanism in Radar

    NASA Astrophysics Data System (ADS)

    Jiang, Ming; Hu, Xuelong; Liu, Lei; Yu, Yunfei

    The spherical parallel machine has become a focus of academic and industrial research in recent years, owing to its simple, economical manufacture and its structural compactness, which is especially suitable for applications where the spatial orientation changes. This paper reviews the state of its research and development at home and abroad. The newer machine (RGRR-II) can rotate 360° about the z axis and from -90° to +90° about the y1 axis. Its advantages include few moving parts (only 3), a large ratio of workspace to machine size, zero mechanical coupling and no singularities. Building a rotation machine around the spherical parallel 2-DOF rotation joint (RGRR-II) can realize hemispherical movement with no dead points and an extended range. A control card (PA8000NT Series CNC) is installed in the computer and runs the software that controls the radar's movement. The machine meets the needs of airborne and satellite radars, which require a large detection range, light weight and a compact structure.

  5. Design and development of split-parallel through-the road retrofit hybrid electric vehicle with in-wheel motors

    NASA Astrophysics Data System (ADS)

    Zulkifli, S. A.; Syaifuddin Mohd, M.; Maharun, M.; Bakar, N. S. A.; Idris, S.; Samsudin, S. H.; Firmansyah; Adz, J. J.; Misbahulmunir, M.; Abidin, E. Z. Z.; Syafiq Mohd, M.; Saad, N.; Aziz, A. R. A.

    2015-12-01

    One configuration of the hybrid electric vehicle (HEV) is the split-axle parallel hybrid, in which an internal combustion engine (ICE) and an electric motor provide propulsion power to different axles. A particular sub-type of the split-parallel hybrid does not have the electric motor installed on board the vehicle; instead, two electric motors are placed in the hubs of the non-driven wheels, called ‘hub motors’ or ‘in-wheel motors’ (IWM). Since propulsion power from the ICE and the IWMs is coupled through the vehicle itself, its wheels and the road on which it moves, this configuration is termed a ‘through-the-road’ (TTR) hybrid. The TTR configuration enables existing ICE-powered vehicles to be retrofitted into an HEV with minimal physical modification. This work describes the design of a retrofit-conversion TTR-IWM hybrid vehicle, its sub-systems and the development work. The operating modes and power flow of the TTR hybrid, its torque coupling and the resultant traction profiles are discussed.

  6. Simulated potential for enhanced performance of mechanically stacked hybrid III–V/Si tandem photovoltaic modules using DC–DC converters

    DOE PAGES

    MacAlpine, Sara; Bobela, David C.; Kurtz, Sarah; ...

    2017-10-01

    This work examines a tandem module design with GaInP2 mechanically stacked on top of crystalline Si, using a detailed photovoltaic (PV) system model to simulate four-terminal (4T) unconstrained and two-terminal voltage-matched (2T VM) parallel architectures. Module-level power electronics is proposed for the 2T VM module design to enhance its performance over the breadth of temperatures experienced by a typical PV installation. Annual, hourly simulations of various scenarios indicate that this design can reduce annual energy losses to ~0.5% relative to the 4T module configuration. Consideration is given to both performance and practical design for building or ground mount installations, emphasizing compatibility with existing standard Si modules.

  7. Simulated potential for enhanced performance of mechanically stacked hybrid III–V/Si tandem photovoltaic modules using DC–DC converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacAlpine, Sara; Bobela, David C.; Kurtz, Sarah

    This work examines a tandem module design with GaInP2 mechanically stacked on top of crystalline Si, using a detailed photovoltaic (PV) system model to simulate four-terminal (4T) unconstrained and two-terminal voltage-matched (2T VM) parallel architectures. Module-level power electronics is proposed for the 2T VM module design to enhance its performance over the breadth of temperatures experienced by a typical PV installation. Annual, hourly simulations of various scenarios indicate that this design can reduce annual energy losses to ~0.5% relative to the 4T module configuration. Consideration is given to both performance and practical design for building or ground mount installations, emphasizing compatibility with existing standard Si modules.

  8. Experimental investigation of the ORC system in a cogenerative domestic power plant with scroll expanders

    NASA Astrophysics Data System (ADS)

    Kaczmarczyk, Tomasz Z.; Ihnatowicz, Eugeniusz; Żywica, Grzegorz; Kiciński, Jan

    2015-11-01

    The paper presents the results of experimental investigations of an ORC system in which two scroll expanders were used as a source of electricity. The working fluid was HFE7100, a newly engineered fluid with unique heat-transfer and favourable environmental properties. The ORC system used three heat exchangers (evaporator, regenerator, condenser), and a droplet separator was installed upstream of the expanders. An innovative biomass boiler served as the heat source. Studies were carried out with the expanders working in series and in parallel. The paper presents the thermal and fluid-flow properties of the ORC installation for selected flow rates and different temperatures of the working medium, along with the output electrical power, operating speed and vibration characteristics of the scroll expanders.

  9. Simulated potential for enhanced performance of mechanically stacked hybrid III-V/Si tandem photovoltaic modules using DC-DC converters

    NASA Astrophysics Data System (ADS)

    MacAlpine, Sara; Bobela, David C.; Kurtz, Sarah; Lumb, Matthew P.; Schmieder, Kenneth J.; Moore, James E.; Walters, Robert J.; Alberi, Kirstin

    2017-10-01

    This work examines a tandem module design with GaInP2 mechanically stacked on top of crystalline Si, using a detailed photovoltaic (PV) system model to simulate four-terminal (4T) unconstrained and two-terminal voltage-matched (2T VM) parallel architectures. Module-level power electronics is proposed for the 2T VM module design to enhance its performance over the breadth of temperatures experienced by a typical PV installation. Annual, hourly simulations of various scenarios indicate that this design can reduce annual energy losses to ˜0.5% relative to the 4T module configuration. Consideration is given to both performance and practical design for building or ground mount installations, emphasizing compatibility with existing standard Si modules.

  10. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers, and harder still to also provide the reliability and ease of use required of commercial software intended for a production environment. As a result, parallel processing technology has seen very little application in commercial software, even though numerous computationally demanding programs would benefit significantly from it. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use, giving significant advantages over less powerful non-parallel entries in the market.
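
The "serial programming model over parallel computation" idea can be illustrated with a routine whose signature looks like an ordinary serial library call while the work is partitioned across threads internally. This sketch is hypothetical (DSSLIB is a Fortran-era commercial library; the function name and chunking are assumptions), but it shows the design principle: the caller sees no parallelism.

```python
from concurrent.futures import ThreadPoolExecutor

def dot(x, y, workers=4):
    """Serial-looking dot product whose partial sums run in parallel.
    The caller neither starts threads nor sees any side effects."""
    n = len(x)
    bounds = [(k * n // workers, (k + 1) * n // workers) for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(
            lambda b: sum(x[i] * y[i] for i in range(b[0], b[1])), bounds))
    return sum(partials)

result = dot(list(range(1000)), [2.0] * 1000)
# same answer a plain serial loop would give: 2 * (0 + 1 + ... + 999) = 999000.0
```

Because the parallelism is hidden behind a deterministic interface, the library can be swapped into existing serial code without changing its observable behavior.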

  11. Selection Process for New Windows | Efficient Windows Collaborative

    Science.gov Websites


  12. Selection Process for Replacement Windows | Efficient Windows Collaborative

    Science.gov Websites


  13. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    PubMed

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
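
The "detecting peaks in parallel through a parallel compact operation" mentioned in the abstract follows the classic data-parallel flag/scan/scatter pattern. The sketch below runs it serially in plain Python for illustration (an assumption about the pattern's structure, not NPE's actual GPU kernels): each step is independently parallelizable on a GPU.

```python
def detect_peaks(signal, threshold):
    """Flag local maxima above threshold, then compact their indices into a
    dense output array via an exclusive prefix scan (serial illustration of
    a data-parallel compact operation)."""
    n = len(signal)
    # 1) map: flag samples that are local maxima above threshold
    flags = [1 if 0 < i < n - 1
             and signal[i] > threshold
             and signal[i] >= signal[i - 1]
             and signal[i] > signal[i + 1] else 0
             for i in range(n)]
    # 2) exclusive prefix scan gives each flagged sample its output slot
    slots, running = [], 0
    for f in flags:
        slots.append(running)
        running += f
    # 3) scatter: compact the flagged indices into a dense result
    out = [0] * running
    for i, f in enumerate(flags):
        if f:
            out[slots[i]] = i
    return out

peaks = detect_peaks([0, 3, 1, 0, 5, 2, 0, 4, 0], threshold=2)
# peaks == [1, 4, 7]
```

On a GPU, steps 1 and 3 are embarrassingly parallel and step 2 is a standard parallel scan, which is what makes compaction attractive for thousand-channel recordings.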

  14. Anatomically constrained neural network models for the categorization of facial expression

    NASA Astrophysics Data System (ADS)

    McMenamin, Brenton W.; Assadi, Amir H.

    2004-12-01

    Facial expression recognition in humans is performed by the amygdala, which uses parallel processing streams to identify expressions quickly and accurately; a feedback mechanism may play a role in this process as well. Implementing a model with a similar parallel structure and feedback mechanism could improve current facial recognition algorithms, for which varied expressions are a source of error. An anatomically constrained artificial neural-network model was created that uses this parallel processing architecture and feedback to categorize facial expressions. The presence of a feedback mechanism was not found to significantly improve performance for models with parallel architecture. However, the use of parallel processing streams significantly improved accuracy over a similar network without parallel architecture. Further investigation is necessary to determine the benefits of using parallel streams and feedback mechanisms in more advanced object recognition tasks.

  16. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, D.B.

    1996-12-31

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.

  17. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, Dario B.

    1996-01-01

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.

  18. Super and parallel computers and their impact on civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamat, M.P.

    1986-01-01

    This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.

  19. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. Parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.

  20. Strain gage installation and survivability on geosynthetics used in flexible pavements

    NASA Astrophysics Data System (ADS)

    Brooks, Jeremy A.

    The use of foil-type strain gages on geosynthetics is poorly documented. In addition, very few individuals are versed in proper installation techniques or calibration methods, and because of the limited number of knowledgeable technicians there is no information regarding the susceptibility of these gages to errors in installation by inexperienced installers. Also lacking in the documentation related to the use of foil-type strain gages on geosynthetics is the survivability of the gages in field conditions. This research documented the procedures for installation, calibration, and survivability used by the project team to instrument a full-scale field installation in Marked Tree, AR. This research also addressed sensitivity to installation errors on both geotextile and geogrid. To document the process of gage installation, an experienced technician, Mr. Joe Ables, formerly of the USACE Waterways Experiment Station, was consulted. His techniques were combined with those found in related literature and those developed by the research team to develop processes that were adaptable to multiple gage geometries and parent geosynthetics. These processes were described and documented in a step-by-step manner with accompanying photographs, which should allow virtually anyone with basic electronics knowledge to install these gages properly. Calibration of the various geosynthetic/strain gage combinations was completed using wide-width tensile testing on multiple samples of each material. The tensile testing process was documented and analyzed using digital photography to measure strain on the strain gage itself. Calibration factors for each geosynthetic used in the full-scale field testing were developed. In addition, the process was thoroughly documented to allow future researchers to calibrate additional strain gage and geosynthetic combinations.
    The sensitivity of the strain gages to installation errors was analyzed using wide-width tensile testing and digital photography to determine the variability of the data collected from gages with noticeable installation errors as compared to properly installed gages. Induced errors varied based on the parent geosynthetic material, but included excessive and minimal waterproofing, gage rotation, gage shift, excessive and minimal adhesive, and excessive and minimal adhesive impregnation loads. The results of this work indicated that minor errors in geotextile gage installation that are noticeable and preventable by an experienced installer have no statistically significant effect on the data recorded during the life span of geotextile gages; however, the lifespan of the gage may be noticeably shortened by such errors. Geogrid gage installation errors were found to cause statistically significant changes in the data recorded from improper installations. The issue of gage survivability was analyzed using small-scale test sections instrumented and loaded similarly to field conditions anticipated during traditional roadway construction. Five methods of protection were tested for both geotextile and geogrid: a sand blanket, inversion, semi-hemispherical PVC sections, neoprene mats, and geosynthetic wick drain. Based on this testing, neoprene mats were selected to protect geotextile-installed gages, and wick drains were selected to protect geogrid-installed gages. These methods resulted in survivability rates of 73% and 100%, respectively, in the full-scale installation. This research and documentation may be used to train technicians to install and calibrate geosynthetic-mounted foil-type strain gages. In addition, technicians should be able to install gages in the field with a high probability of gage survivability using the protection methods recommended.

  1. Performance evaluation of canny edge detection on a tiled multicore architecture

    NASA Astrophysics Data System (ADS)

    Brethorst, Andrew Z.; Desai, Nehal; Enright, Douglas P.; Scrofano, Ronald

    2011-01-01

    In the last few years, a variety of multicore architectures have been used to parallelize image processing applications. In this paper, we focus on assessing the parallel speed-ups of different Canny edge detection parallelization strategies on the Tile64, a tiled multicore architecture developed by the Tilera Corporation. Included in these strategies are different ways Canny edge detection can be parallelized, as well as differences in data management. The two parallelization strategies examined were loop-level parallelism and domain decomposition. Loop-level parallelism is achieved through the use of OpenMP, and it is capable of parallelization across the range of values over which a loop iterates. Domain decomposition is the process of breaking down an image into subimages, where each subimage is processed independently, in parallel. The results of the two strategies show that, for the same number of threads, the programmer-implemented domain decomposition exhibits higher speed-ups than the compiler-managed loop-level parallelism implemented with OpenMP.
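
    The domain-decomposition strategy can be illustrated in a few lines. The following sketch (Python threads and a toy pointwise kernel, purely illustrative; the paper's implementation targets the Tile64 C toolchain) splits an image into row strips, processes the strips concurrently, and stitches the results back in order:

```python
from concurrent.futures import ThreadPoolExecutor

def binarize(strip, threshold):
    # Per-tile kernel: a pointwise operation needs no halo exchange.
    return [[1 if p > threshold else 0 for p in row] for row in strip]

def process_decomposed(image, threshold, n_workers=4):
    """Split an image into horizontal strips, process each strip
    independently in parallel, then stitch results back in order."""
    rows = len(image)
    step = max(1, rows // n_workers)
    strips = [image[i:i + step] for i in range(0, rows, step)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(lambda s: binarize(s, threshold), strips)
    out = []
    for r in results:
        out.extend(r)
    return out
```

    A pointwise kernel needs no communication between strips; a neighborhood operation such as the Canny gradient step would additionally require each strip to carry a one-row halo from its neighbors.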

  2. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.
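
    One of the listed operations, generation of a regular grid from a point cloud, parallelizes naturally because each point maps to a cell independently of every other point. A minimal serial sketch of the kernel (a hypothetical `rasterize` helper, not the Dephos implementation; on a GPU the per-point step runs concurrently with atomic per-cell updates):

```python
def rasterize(points, cell_size):
    """Bin (x, y, z) points into a regular grid, keeping the highest
    z per cell -- the per-point work is independent, which is what
    makes the operation suitable for GPU parallelization."""
    grid = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        # On a GPU this update would be an atomic max per cell.
        if key not in grid or z > grid[key]:
            grid[key] = z
    return grid
```

    The same map-points-to-cells structure underlies orthophoto generation and point-cloud coloring, which is why those processes also accelerated well under parallel computing.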

  3. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    USDA-ARS?s Scientific Manuscript database

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...

  4. Solar Ready Vets Curriculum Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalstrom, Tenley

    The 5-week SRV program includes four sets of program learning goals aligned around (1) the NABCEP Entry Level body of knowledge; (2) gaining hands-on experience with solar system site analysis, design, installation, commissioning, operation, maintenance and financial considerations; (3) safety issues unique to solar + OSHA 30; (4) transition planning and individual support of entry into the solar industry. These goals, and the learning objectives associated with each, are pursued in parallel during the course.

  5. Massively parallel information processing systems for space applications

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.

    1979-01-01

    NASA is developing massively parallel systems for ultra high speed processing of digital image data collected by satellite borne instrumentation. Such systems contain thousands of processing elements. Work is underway on the design and fabrication of the 'Massively Parallel Processor', a ground computer containing 16,384 processing elements arranged in a 128 x 128 array. This computer uses existing technology. Advanced work includes the development of semiconductor chips containing thousands of feedthrough paths. Massively parallel image analog to digital conversion technology is also being developed. The goal is to provide compact computers suitable for real-time onboard processing of images.

  6. Parallel log structured file system collective buffering to achieve a compact representation of scientific and/or dimensional data

    DOEpatents

    Grider, Gary A.; Poole, Stephen W.

    2015-09-01

    Collective buffering and data pattern solutions are provided for storage, retrieval, and/or analysis of data in a collective parallel processing environment. For example, a method can be provided for data storage in a collective parallel processing environment. The method comprises receiving data to be written for a plurality of collective processes within a collective parallel processing environment, extracting a data pattern for the data to be written for the plurality of collective processes, generating a representation describing the data pattern, and saving the data and the representation.
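
    The idea of extracting a data pattern and saving a compact representation can be sketched as follows (a toy, hypothetical `extract_pattern` helper, not the patented method): if every process writes a record of the same length at a uniform stride, the whole collective write collapses to a four-number descriptor:

```python
def extract_pattern(offsets, lengths):
    """Return a compact (start, stride, count, length) descriptor if
    the collective writes form a uniform strided pattern, else None."""
    if not offsets or len(offsets) != len(lengths):
        return None
    if len(set(lengths)) != 1:
        return None                      # mixed record sizes: no simple pattern
    if len(offsets) == 1:
        return (offsets[0], 0, 1, lengths[0])
    strides = [b - a for a, b in zip(offsets, offsets[1:])]
    if len(set(strides)) != 1:
        return None                      # irregular layout
    return (offsets[0], strides[0], len(offsets), lengths[0])
```

    Replaying the descriptor regenerates every (offset, length) pair, so the pattern plus the raw data is a complete, much smaller substitute for per-process metadata.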

  7. Integrated monitoring technologies for the management of a Soil-Aquifer-Treatment (SAT) system.

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Alexandros; Kallioras, Andreas; Kofakis, Petros; Bumberger, Jan; Schmidt, Felix; Athanasiou, Georgios; Uzunoglou, Nikolaos; Amditis, Angelos; Dietrich, Peter

    2016-04-01

    Artificial recharge of groundwater has an important role to play in water reuse, as treated wastewater effluent can be infiltrated into the ground for aquifer recharge. As the effluent moves through the soil and the aquifer, it undergoes significant quality improvements through physical, chemical, and biological processes in the underground environment. Collectively, these processes and the water quality improvement obtained are called soil-aquifer treatment (SAT) or geopurification. The pilot site of Lavrion Technological & Cultural Park (LTCP) of the National Technical University of Athens (NTUA) involves the employment of plot infiltration basins at experimental scale, which will use waters of impaired quality as a recharge source and hence act as a soil-aquifer-treatment (SAT) system. The LTCP site will be employed as a pilot SAT system complemented by new technological developments that will provide continuous monitoring of the quantitative and qualitative characteristics of infiltrating groundwater through all hydrologic zones (i.e. surface, unsaturated and saturated zones). This will be achieved by the development and installation of an integrated system of prototype sensing technologies, installed on-site and offering a continuous evaluation of the performance of the SAT system. An integrated evaluation of the performance of any operating SAT system should aim at parallel monitoring of all hydrologic zones, proving the sustainability of all involved water quality treatment processes within the unsaturated and saturated zones. Hence, a prototype system of Time and Frequency Domain Reflectometry (TDR and FDR) sensors has been developed and will be installed in order to achieve continuous quantitative monitoring of the unsaturated zone through the entire soil column, down to significant depths below the SAT basin.
    Additionally, the system contains two different radar-based sensing systems that will offer (i) identification of preferential flow effects of the TDR/FDR sensors and (ii) monitoring of the water table within the shallow karst aquifer layer. These techniques will offer continuous monitoring of infiltration rates and identify possible mechanical or biological clogging effects. The monitoring system will be connected to an ad-hoc wireless network for continuous data transfer within the SAT facilities. It is envisaged that the development and combined application of all the above technologies will provide an integrated monitoring platform for the evaluation of SAT system performance.

  8. schwimmbad: A uniform interface to parallel processing pools in Python

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.; Foreman-Mackey, Daniel

    2017-09-01

    Many scientific and computing problems require doing some calculation on all elements of some data set. If the calculations can be executed in parallel (i.e. without any communication between calculations), these problems are said to be perfectly parallel. On computers with multiple processing cores, these tasks can be distributed and executed in parallel to greatly improve performance. A common paradigm for handling these distributed computing problems is to use a processing "pool": the "tasks" (the data) are passed in bulk to the pool, and the pool handles distributing the tasks to a number of worker processes when available. schwimmbad provides a uniform interface to parallel processing pools and enables switching easily between local development (e.g., serial processing or with multiprocessing) and deployment on a cluster or supercomputer (via, e.g., MPI or JobLib).
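
    The pool paradigm can be mimicked with the standard library. The classes below are a hedged stand-in for the kind of uniform interface schwimmbad provides (schwimmbad itself ships pool classes such as SerialPool, MultiPool, and MPIPool behind a common `map` method); the point is that the calling code never changes when the backend does:

```python
from concurrent.futures import ThreadPoolExecutor

class SerialPool:
    """Degenerate pool: runs tasks one by one (local development)."""
    def map(self, func, tasks):
        return [func(t) for t in tasks]
    def close(self):
        pass

class ThreadPool:
    """Distributes tasks across worker threads (one possible backend)."""
    def __init__(self, processes=4):
        self._ex = ThreadPoolExecutor(max_workers=processes)
    def map(self, func, tasks):
        return list(self._ex.map(func, tasks))
    def close(self):
        self._ex.shutdown()

def choose_pool(parallel=False, processes=4):
    # Swap backends without touching the calling code -- the key idea.
    return ThreadPool(processes) if parallel else SerialPool()
```

    Switching from serial debugging to parallel execution means flipping one flag; schwimmbad generalizes the same switch to multiprocessing and MPI backends on clusters.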

  9. Parallel Signal Processing and System Simulation using aCe

    NASA Technical Reports Server (NTRS)

    Dorband, John E.; Aburdene, Maurice F.

    2003-01-01

    Recently, networked and cluster computation have become very popular for both signal processing and system simulation. A language that allows the programmer to explicitly express the computations that can be performed concurrently is ideally suited for parallel signal processing applications and system simulation. In addition, the new C-based parallel language aCe for architecture-adaptive programming allows programmers to implement algorithms and system simulation applications on parallel architectures, with the assurance that future parallel architectures will be able to run their applications with a minimum of modification. In this paper, we focus on some fundamental features of aCe and present a signal processing application (FFT).

  10. Parallel processing in finite element structural analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1987-01-01

    A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).

  11. Radiation shielding for gamma stereotactic radiosurgery units

    PubMed Central

    2007-01-01

    Shielding calculations for gamma stereotactic radiosurgery units are complicated by the fact that the radiation is highly anisotropic. Shielding design for these devices is unique. Although manufacturers will answer questions about the data that they provide for shielding evaluation, they will not perform calculations for customers. More than 237 such units are now installed in centers worldwide. Centers installing a gamma radiosurgery unit find themselves in the position of having to either invent or reinvent a method for performing shielding design. This paper introduces a rigorous and conservative method for barrier design for gamma stereotactic radiosurgery treatment rooms. This method should be useful to centers planning either to install a new unit or to replace an existing unit. The method described here is consistent with the principles outlined in Report No. 151 from the U.S. National Council on Radiation Protection and Measurements. In as little as 1 hour, a simple electronic spreadsheet can be set up, which will provide radiation levels on planes parallel to the barriers and 0.3 m outside the barriers. PACS numbers: 87.53.Ly, 87.56.By, 87.52.Tr
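
    Per grid point, such a spreadsheet reduces to an inverse-square falloff multiplied by a barrier transmission factor. A minimal sketch of that cell formula (illustrative only; an actual design must use the unit's anisotropic source data and the full NCRP Report No. 151 methodology):

```python
def dose_rate(source_output_at_1m, distance_m, barrier_cm, tvl_cm):
    """Broad-beam point-source estimate behind a shielding barrier:
    inverse-square falloff times an attenuation factor of one tenth
    per tenth-value layer (TVL) of barrier thickness.

    All inputs are illustrative placeholders, not NCRP 151 data."""
    transmission = 10.0 ** (-barrier_cm / tvl_cm)
    return source_output_at_1m / distance_m ** 2 * transmission
```

    A spreadsheet evaluates a formula like this once per grid point on each plane 0.3 m outside a barrier, with the distance and slant barrier thickness taken along the ray from the unit's focus to that point.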

  12. Connectionism, parallel constraint satisfaction processes, and gestalt principles: (re) introducing cognitive dynamics to social psychology.

    PubMed

    Read, S J; Vanman, E J; Miller, L C

    1997-01-01

    We argue that recent work in connectionist modeling, in particular the parallel constraint satisfaction processes that are central to many of these models, has great importance for understanding issues of both historical and current concern for social psychologists. We first provide a brief description of connectionist modeling, with particular emphasis on parallel constraint satisfaction processes. Second, we examine the tremendous similarities between parallel constraint satisfaction processes and the Gestalt principles that were the foundation for much of modern social psychology. We propose that parallel constraint satisfaction processes provide a computational implementation of the principles of Gestalt psychology that were central to the work of such seminal social psychologists as Asch, Festinger, Heider, and Lewin. Third, we then describe how parallel constraint satisfaction processes have been applied to three areas that were key to the beginnings of modern social psychology and remain central today: impression formation and causal reasoning, cognitive consistency (balance and cognitive dissonance), and goal-directed behavior. We conclude by discussing implications of parallel constraint satisfaction principles for a number of broader issues in social psychology, such as the dynamics of social thought and the integration of social information within the narrow time frame of social interaction.
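
    A parallel constraint satisfaction network can be sketched in a few lines: units pass activation over weighted links, and repeated simultaneous updates settle the whole network into the state that best satisfies the constraints. The toy relaxation below (a generic hedged sketch, not any specific published model) encodes a Heider-style balance triad:

```python
import math

def relax(weights, external, steps=50, rate=0.2):
    """Iteratively settle a small constraint network: every unit's
    activation moves toward tanh of its weighted input, so all
    constraints act simultaneously ('in parallel') at each step."""
    n = len(external)
    a = [0.0] * n
    for _ in range(steps):
        net = [sum(weights[i][j] * a[j] for j in range(n)) + external[i]
               for i in range(n)]
        a = [(1 - rate) * a[i] + rate * math.tanh(net[i]) for i in range(n)]
    return a

# A Heider balance triad: units 0 and 1 are positively linked (friends),
# units 1 and 2 negatively linked (foes); evidence pushes unit 0 positive.
W = [[0.0, 1.0, 0.0],
     [1.0, 0.0, -1.0],
     [0.0, -1.0, 0.0]]
activations = relax(W, external=[1.0, 0.0, 0.0])
```

    At the fixed point, unit 1 has aligned with its friend (unit 0) and opposed its foe (unit 2), i.e. the balanced configuration, reached by all constraints acting in parallel rather than by sequential inference.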

  13. Static analysis of the hull plate using the finite element method

    NASA Astrophysics Data System (ADS)

    Ion, A.

    2015-11-01

    This paper presents the static analysis for two levels of a container ship's construction: the first level is at the girder/hull plate, and the second level is conducted at the entire strength hull of the vessel. This article describes the work for the static analysis of a hull plate, using the software package ANSYS Mechanical 14.5. The program is run on a computer with four Intel Xeon X5260 processors at 3.33 GHz and 32 GB of installed memory. In terms of software, the shared-memory parallel version of ANSYS refers to running ANSYS across multiple cores on an SMP system; the distributed-memory parallel version of ANSYS (Distributed ANSYS) refers to running ANSYS across multiple processors on SMP or DMP systems.

  14. Using Parallel Processing for Problem Solving.

    DTIC Science & Technology

    1979-12-01

    Activities are the basic parallel processing primitive. Different goals of the system can be pursued in parallel by placing them in separate activities. Language primitives are provided for manipulating running activities. Viewpoints are a generalization of contexts.

  15. Real-time implementations of image segmentation algorithms on shared memory multicore architecture: a survey (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed

    2017-05-01

    Real-time processing is becoming more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis; as a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool and a very data-intensive task. To achieve acceleration and obtain real-time processing of watershed algorithms, parallel architectures and programming models for multicore computing have been developed. This paper focuses on a survey of approaches for the parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we give a comparison of various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on the performance of the parallel implementations. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.

  16. Image Processing Using a Parallel Architecture.

    DTIC Science & Technology

    1987-12-01

    ENG/87D-25. This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images. ... the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations. ... As a step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer. More important than ...

  17. Display device for indicating the value of a parameter in a process plant

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man-machine interface, and the plant-specific algorithms to proceed in parallel with the fabrication of the panels, the installation of the equipment, and the generic testing thereof.
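
    The sensor-validation step described here, combining redundant sensor readings into one representative value, is commonly done with a median-style select and outlier rejection. The sketch below is a generic hedged illustration of that idea, not the patented algorithm:

```python
def validate(readings, max_spread):
    """Reduce redundant sensor readings to one representative value.

    Generic redundant-sensor validation: take the median, discard
    channels deviating from it by more than max_spread (e.g. a failed
    sensor), and average the survivors. Not the patented algorithm.
    """
    ordered = sorted(readings)
    mid = len(ordered) // 2
    median = (ordered[mid] if len(ordered) % 2
              else 0.5 * (ordered[mid - 1] + ordered[mid]))
    good = [r for r in readings if abs(r - median) <= max_spread]
    return sum(good) / len(good)
```

    The operator then sees one validated value per parameter instead of one raw value per sensor, which is the workload reduction the abstract describes.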

  18. Indicator system for a process plant control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  19. Utility installation review system : implementation report.

    DOT National Transportation Integrated Search

    2009-03-01

    Each year, the Texas Department of Transportation (TxDOT) issues thousands of approvals that enable new utility installations to occupy the state right of way (ROW). The current utility installation review process relies on the physical delivery ...

  20. 78 FR 72016 - User Fees for Processing Installment Agreements and Offers in Compromise

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-02

    ... regulations affect taxpayers who wish to pay their federal tax liabilities through installment agreements and... to pay $43 for any new installment agreement, including a direct debit installment agreement. The... do not have the means to pay the user fee, even at the reduced rate. The commenter stated that low...

  1. Thiokol/Wasatch installation evaluation of the redesigned field joint protection system (concepts 1 and 3)

    NASA Technical Reports Server (NTRS)

    Cook, M.

    1989-01-01

    The procedures, performance, and results obtained from the Thiokol Corporation/Wasatch Redesigned Field Joint Protection System (FJPS) Installation Evaluation are documented. The purpose of the evaluation was to demonstrate and develop the procedures required to install two different concepts (referred to as Concepts 1 and 3) of the redesigned FJPS. The processing capability of each configuration was then evaluated and compared. The FJPS is installed on redesigned solid rocket motors (RSRM) to protect the field joints from rain intrusion and to maintain the joint temperature sensor measurement between 85 and 122 F while the boosters are on the launch pad. The FJPS is being redesigned to reduce installation timelines at KSC and to simplify or eliminate installation processing problems related to the present design of an EPDM moisture seal/extruded cork combination. Several installation techniques were evaluated, and a preferred method of application was developed for each concept. The installations were performed with the test article in the vertical (flight) position. Comparative timelines between the two concepts were also developed. An additional evaluation of the Concept 3 configuration was performed with the test article in the horizontal position, to simulate an overhead installation on a technical evaluation motor (TEM).

  2. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    PubMed

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
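    The load-balancing idea described above, cheap to compute yet effective, might look like the following sketch. This is not paraBTM's actual code; the length-proportional cost model and the toy entity counter are assumptions for illustration.

```python
# Sketch of low-cost load balancing for parallel text mining: mining cost
# is assumed to scale roughly with document length, so documents are
# greedily assigned (longest first) to the least-loaded worker before
# the parallel pass starts.
from multiprocessing import Pool

def balance(docs, n_workers):
    """Greedily assign docs (longest first) to n_workers bins."""
    bins = [[] for _ in range(n_workers)]
    loads = [0] * n_workers
    for doc in sorted(docs, key=len, reverse=True):
        i = loads.index(min(loads))  # least-loaded worker so far
        bins[i].append(doc)
        loads[i] += len(doc)
    return bins

def count_entities(docs):
    # Stand-in for a real NER task: count capitalized tokens per batch.
    return sum(tok[0].isupper() for doc in docs for tok in doc.split())

if __name__ == "__main__":
    docs = ["BRCA1 is a gene", "p53 pathway", "Aspirin inhibits COX", "mouse model"]
    with Pool(2) as pool:
        print(sum(pool.map(count_entities, balance(docs, 2))))
```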

  3. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called 'dataway processor' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.

  4. Search asymmetries: parallel processing of uncertain sensory information.

    PubMed

    Vincent, Benjamin T

    2011-08-01

    What is the mechanism underlying search phenomena such as search asymmetry? Two-stage models such as Feature Integration Theory and Guided Search propose parallel pre-attentive processing followed by serial post-attentive processing. They claim search asymmetry effects are indicative of finding pairs of features, one processed in parallel, the other in serial. An alternative proposal is that a 1-stage parallel process is responsible, and search asymmetries occur when one stimulus has greater internal uncertainty associated with it than another. While the latter account is simpler, only a few studies have set out to empirically test its quantitative predictions, and many researchers still subscribe to the 2-stage account. This paper examines three separate parallel models (Bayesian optimal observer, max rule, and a heuristic decision rule). All three parallel models can account for search asymmetry effects and I conclude that either people can optimally utilise the uncertain sensory data available to them, or are able to select heuristic decision rules which approximate optimal performance. Copyright © 2011 Elsevier Ltd. All rights reserved.
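    A one-stage parallel account such as the max rule can be simulated in a few lines. The toy observer model below is my own construction, not the paper's code: a search asymmetry emerges once one stimulus type carries more internal noise than the other.

```python
# Toy "max rule" observer: each of n_items yields a noisy internal
# response, and the observer reports "target present" if the maximum
# exceeds a criterion. Asymmetry arises from unequal internal noise.
import random

def max_rule_accuracy(n_items, signal, sigma_target, sigma_dist,
                      criterion, trials=20000, seed=1):
    rng = random.Random(seed)
    correct = 0
    for t in range(trials):
        present = t % 2 == 0  # alternate target-present/absent trials
        responses = [rng.gauss(0, sigma_dist) for _ in range(n_items - 1)]
        if present:
            responses.append(rng.gauss(signal, sigma_target))
        else:
            responses.append(rng.gauss(0, sigma_dist))
        said_present = max(responses) > criterion
        correct += said_present == present
    return correct / trials

# Searching for the noisier stimulus among clean distractors is harder:
easy = max_rule_accuracy(8, 1.5, 0.5, 0.5, 1.0)
hard = max_rule_accuracy(8, 1.5, 1.5, 0.5, 1.0)
print(easy, hard)
```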

  5. 77 FR 47573 - Approval and Promulgation of Implementation Plans; Mississippi; 110(a)(2)(E)(ii) Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... Mississippi Department of Environmental Quality (MDEQ), on July 13, 2012, for parallel processing. This... of Contents I. What is parallel processing? II. Background III. What elements are required under... Executive Order Reviews I. What is parallel processing? Consistent with EPA regulations found at 40 CFR Part...

  6. Double Take: Parallel Processing by the Cerebral Hemispheres Reduces Attentional Blink

    ERIC Educational Resources Information Center

    Scalf, Paige E.; Banich, Marie T.; Kramer, Arthur F.; Narechania, Kunjan; Simon, Clarissa D.

    2007-01-01

    Recent data have shown that parallel processing by the cerebral hemispheres can expand the capacity of visual working memory for spatial locations (J. F. Delvenne, 2005) and attentional tracking (G. A. Alvarez & P. Cavanagh, 2005). Evidence that parallel processing by the cerebral hemispheres can improve item identification has remained elusive.…

  7. Algorithms and software for solving finite element equations on serial and parallel architectures

    NASA Technical Reports Server (NTRS)

    Chu, Eleanor; George, Alan

    1988-01-01

    The primary objective was to compare the performance of state-of-the-art techniques for solving sparse systems with those currently available in the Computational Structural Mechanics (CSM) testbed. One of the first tasks was to become familiar with the structure of the testbed and to install some or all of the SPARSPAK package in it. A brief overview of the CSM Testbed software and its usage is presented. An overview of the sparse matrix techniques currently employed in the CSM Testbed is given. An interface, designed and implemented as a research tool for installing and appraising new matrix processors in the CSM Testbed, is described. The results of numerical experiments performed in solving a set of testbed demonstration problems using the processor SPK and other experimental processors are presented.

  8. Design Considerations | Efficient Windows Collaborative

    Science.gov Websites

    Foundry Foundry New Construction Windows Window Selection Tool Selection Process Design Guidance Installation Replacement Windows Window Selection Tool Assessing Options Selection Process Design Guidance Installation Understanding Windows Benefits Design Considerations Measuring Performance Performance Standards

  9. Gas Fills | Efficient Windows Collaborative

    Science.gov Websites


  10. Understanding Windows | Efficient Windows Collaborative

    Science.gov Websites


  11. Books & Publications | Efficient Windows Collaborative

    Science.gov Websites


  12. Efficient Windows Collaborative | Home

    Science.gov Websites


  13. Denver International Airport sensor processing and database

    DOT National Transportation Integrated Search

    2000-03-01

    Data processing and database design is described for an instrumentation system installed on runway 34R at Denver International Airport (DIA). Static (low-speed) and dynamic (high-speed) sensors are installed in the pavement. The static sensors includ...

  14. Resources | Efficient Windows Collaborative

    Science.gov Websites


  15. Provide Views | Efficient Windows Collaborative

    Science.gov Websites


  16. Links | Efficient Windows Collaborative

    Science.gov Websites


  17. Reducing Condensation | Efficient Windows Collaborative

    Science.gov Websites


  18. Reduced Fading | Efficient Windows Collaborative

    Science.gov Websites


  19. EWC Membership | Efficient Windows Collaborative

    Science.gov Websites


  20. Visible Transmittance | Efficient Windows Collaborative

    Science.gov Websites


  1. EWC Members | Efficient Windows Collaborative

    Science.gov Websites


  2. Financing & Incentives | Efficient Windows Collaborative

    Science.gov Websites


  3. Installation of surface-mounted flat-conductor cable

    NASA Technical Reports Server (NTRS)

    Carden, J. R.

    1976-01-01

    Guide describes the step-by-step process for installation of interior surface-mounted FCC used in commercial and residential buildings. Photographs illustrate how cable-riser and baseboard covers are installed, as well as receptacle assembly and receptacle-cover replacement.

  4. Utility installation review (UIR) system training materials.

    DOT National Transportation Integrated Search

    2008-10-01

    The Texas Department of Transportation (TxDOT) issues thousands of approvals every year that : enable new utility installations to occupy the state right-of-way (ROW). The utility installation : review process currently in place is manual, tedious, a...

  5. On the costs of parallel processing in dual-task performance: The case of lexical processing in word production.

    PubMed

    Paucke, Madlen; Oppermann, Frank; Koch, Iring; Jescheniak, Jörg D

    2015-12-01

    Previous dual-task picture-naming studies suggest that lexical processes require capacity-limited resources and prevent other tasks from being carried out in parallel. However, studies involving the processing of multiple pictures suggest that parallel lexical processing is possible. The present study investigated the specific costs that may arise when such parallel processing occurs. We used a novel dual-task paradigm by presenting 2 visual objects associated with different tasks and manipulating between-task similarity. With high similarity, a picture-naming task (T1) was combined with a phoneme-decision task (T2), so that lexical processes were shared across tasks. With low similarity, picture naming was combined with a size-decision T2 (nonshared lexical processes). In Experiment 1, we found that a manipulation of lexical processes (lexical frequency of the T1 object name) showed an additive propagation with low between-task similarity and an overadditive propagation with high between-task similarity. Experiment 2 replicated this differential forward propagation of the lexical effect and showed that it disappeared with longer stimulus onset asynchronies. Moreover, both experiments showed backward crosstalk, indexed as worse T1 performance with high between-task similarity compared with low similarity. Together, these findings suggest that conditions of high between-task similarity can lead to parallel lexical processing in both tasks, which, however, does not result in benefits but rather in extra performance costs. These costs can be attributed to crosstalk based on the dual-task binding problem arising from parallel processing. Hence, the present study reveals that capacity-limited lexical processing can run in parallel across dual tasks, but only at the expense of extraordinarily high costs. (c) 2015 APA, all rights reserved.

  6. Graphical Representation of Parallel Algorithmic Processes

    DTIC Science & Technology

    1990-12-01

    interface with the AAARF main process. The source code for the AAARF class-common library is in the common subdirectory and consists of the following files ... for public release; distribution unlimited AFIT/GCE/ENG/90D-07 Graphical Representation of Parallel Algorithmic Processes THESIS Presented to the ... goal of this study is to develop an algorithm animation facility for parallel processes executing on different architectures, from multiprocessor

  7. Evaluation of Apache Hadoop for parallel data analysis with ROOT

    NASA Astrophysics Data System (ADS)

    Lehrack, S.; Duckeck, G.; Ebke, J.

    2014-06-01

    The Apache Hadoop software is a Java-based framework for distributed processing of large data sets across clusters of computers, using the Hadoop file system (HDFS) for data storage and backup and MapReduce as a processing platform. Hadoop is primarily designed for processing large textual data sets which can be processed in arbitrary chunks, and must be adapted to the use case of processing binary data files which cannot be split automatically. However, Hadoop offers attractive features in terms of fault tolerance, task supervision and control, multi-user functionality, and job management. For this reason, we evaluated Apache Hadoop as an alternative approach to PROOF for ROOT data analysis. Two alternatives for distributing analysis data were discussed: either the data was stored in HDFS and processed with MapReduce, or the data was accessed via a standard Grid storage system (dCache Tier-2) and MapReduce was used only as an execution back-end. The focus of the measurements was, on the one hand, to safely store analysis data on HDFS with reasonable data rates and, on the other hand, to process data quickly and reliably with MapReduce. In the evaluation of HDFS, read/write data rates of the local Hadoop cluster were measured and compared to standard data rates of the local NFS installation. In the evaluation of MapReduce, realistic ROOT analyses were used and event rates were compared to PROOF.
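    The unsplittable-binary-file constraint mentioned above maps naturally onto whole-file map tasks whose partial results are merged in the reduce step. The following is a pure-Python sketch of that pattern, not Hadoop itself; the event data and the histogram task are invented for illustration.

```python
# MapReduce pattern for unsplittable files: each map task consumes one
# whole file of event values and emits a partial histogram; the reduce
# step merges the partial histograms into a total.
from collections import Counter
from functools import reduce

def map_file(events):
    """Map task: histogram one whole (unsplittable) file of event values."""
    return Counter(int(e) for e in events)

def reduce_partials(a, b):
    """Reduce step: merge two partial histograms."""
    return a + b

files = [[1.2, 1.9, 3.5], [3.1, 1.4], [2.7]]
total = reduce(reduce_partials, map(map_file, files), Counter())
print(dict(total))
```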

  8. Subsidence from an artificial permafrost warming experiment.

    NASA Astrophysics Data System (ADS)

    Gelvin, A.; Wagner, A. M.; Lindsey, N.; Dou, S.; Martin, E. R.; Ekblaw, I.; Ulrich, C.; James, S. R.; Freifeld, B. M.; Daley, T. M.; Saari, S.; Ajo Franklin, J. B.

    2017-12-01

    Using fiber optic sensing technologies (seismic, strain, and temperature), we installed a geophysical detection system to predict thaw subsidence in Fairbanks, Alaska, United States. Approximately 5 km of fiber optic cable was buried in shallow trenches (20 cm depth) in an area with discontinuous permafrost, where the top of the permafrost is approximately 4-4.5 m below the surface. Thaw subsidence was induced by 122 60-Watt vertical heaters installed over a 140 m2 area, while seismic, strain, and temperature were continuously monitored along the length of the fiber. Several vertical thermistor strings also recorded ground temperatures to a depth of 10 m, parallel to the fiber optic cable, to verify the measurements collected from it. GPS, traditional Electronic Distance Measurement (EDM), and LiDAR (Light Detection and Ranging) scanning were used to investigate the surface subsidence. The heaters operated for approximately three months starting in August 2016. During the heating process, the soil temperatures at the heater elements increased from 3.5 to 45 °C at a depth of 3-4 m. It took approximately 7 months for the temperature at the heater elements to recover to its initial value. The depth to the permafrost table increased by about 1 m during the heating process. By the end of the active heating, the surface had subsided approximately 8 cm in the heated section where permafrost was closest to the surface. This was conclusively confirmed with GPS, EDM, and LiDAR. An additional LiDAR survey was performed about seven months after the heaters were turned off (in May 2017). A total subsidence of approximately 20 cm was measured by the end of the passive heating process. This project successfully demonstrates a viable approach for simulating both deep permafrost thaw and the resulting surface subsidence.

  9. Characterization and Modeling of a Control Moment Gyroscope

    DTIC Science & Technology

    2015-03-26

    parallel, and angular directions [16]. The rotor is powered by a brushless DC motor rated to 557.9 mN-m (4.938 in-lbf) [4]. The motor has Hall effect ... mass balance installed on rotor housing. Gimbal Balancing Test Procedures. To evaluate the effectiveness of the mass balance, the gimbal was tested ... in which the rotor is running. The vehicle-level model test (Section 4.9) predicts the effects of CMG gear lash on overall vehicle performance. Gear

  10. Comparing a new laser strainmeter array with an adjacent, parallel running quartz tube strainmeter array.

    PubMed

    Kobe, Martin; Jahr, Thomas; Pöschel, Wolfgang; Kukowski, Nina

    2016-03-01

    In summer 2011, two new laser strainmeters, each about 26.6 m long, were installed in N-S and E-W directions parallel to an existing quartz tube strainmeter system at the Geodynamic Observatory Moxa, Thuringia/Germany. This kind of installation is unique in the world and allows, for the first time, the direct comparison of measurements of horizontal length changes with different types of strainmeters. For the comparison of both data sets, we used tidal analysis over three years, the strain signals resulting from drilling a shallow 100 m deep borehole on the grounds of the observatory, and long-period signals. The tidal strain amplitude factors of the laser strainmeters are found to be much closer to theoretical values (85%-105% N-S and 56%-92% E-W) than those of the quartz tube strainmeters. A first data analysis shows that the new laser strainmeters are more sensitive in the short-period range, with an improved signal-to-noise ratio, and distinctly more stable during long-term drifts of environmental parameters such as air pressure or groundwater level. We compared the signal amplitudes of both strainmeter systems at variable signal periods and found frequency-dependent amplitude differences. Confirmed by the tidal parameters, we now have a stable and high-resolution laser strainmeter system that serves as a calibration reference for quartz tube strainmeters.

  11. Benefits of Efficient Windows | Efficient Windows Collaborative

    Science.gov Websites


  12. Increased Light & View | Efficient Windows Collaborative

    Science.gov Websites


  13. Windows for New Construction | Efficient Windows Collaborative

    Science.gov Websites


  14. Performance Standards for Windows | Efficient Windows Collaborative

    Science.gov Websites


  15. Window Selection Tool | Efficient Windows Collaborative

    Science.gov Websites


  16. Air Leakage (AL) | Efficient Windows Collaborative

    Science.gov Websites


  17. State Fact Sheets | Efficient Windows Collaborative

    Science.gov Websites


  18. Fact Sheets & Publications | Efficient Windows Collaborative

    Science.gov Websites


  19. Condensation Resistance (CR) | Efficient Windows Collaborative

    Science.gov Websites


  20. Assessing Window Replacement Options | Efficient Windows Collaborative

    Science.gov Websites


  1. National Fenestration Rating Council (NFRC) | Efficient Windows

    Science.gov Websites


  2. Low Conductance Spacers | Efficient Windows Collaborative

    Science.gov Websites


  3. Energy & Cost Savings | Efficient Windows Collaborative

    Science.gov Websites


  4. U-Factor (U-value) | Efficient Windows Collaborative

    Science.gov Websites


  5. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are two significant factors in the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship that agrees closely with Amdahl's law, and the tendency of speedup versus layer count also keeps a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Experiments with another parallel algorithm based on data parallelism show that the pipeline parallel mode is more efficient. A final case study demonstrates the strong performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
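    The two scaling laws cited in the abstract can be made concrete with the standard formulas (generic textbook forms, not the paper's measured data):

```python
# Amdahl's law: fixed problem size, serial fraction (1-p) limits speedup.
# Gustafson's law: problem size scales with n, so speedup keeps growing.
def amdahl(p, n):
    """Speedup with parallel fraction p and n threads, fixed workload."""
    return 1.0 / ((1 - p) + p / n)

def gustafson(p, n):
    """Scaled speedup when the parallel part grows with n."""
    return (1 - p) + p * n

# With 90% parallelizable work, Amdahl saturates while Gustafson rises.
print(round(amdahl(0.9, 8), 2), round(gustafson(0.9, 8), 2))  # → 4.71 7.3
```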

  6. National Study of Word Processing Installations in Selected Business Organizations. A Report on the National Word Processing Research Study of Delta Pi Epsilon.

    ERIC Educational Resources Information Center

    Scriven, Jolene D.; And Others

    A study was conducted (1) to determine current practices in word processing installations in selected organizations throughout the United States, and (2) to ascertain anticipated future developments in word processing as well as to provide recommendations for educational institutions that prepare workers for business offices. Seven interview…

  7. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
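    The final step of the claim, dividing a collective operation's data communications among several endpoints of one task, can be pictured with a small sketch. The chunking scheme below is illustrative, not IBM's PAMI implementation.

```python
# Illustrative division of a collective operation's data among the
# endpoints belonging to one task, so transfers can proceed in parallel.
def divide_among_endpoints(data, n_endpoints):
    """Split data into nearly equal contiguous chunks, one per endpoint."""
    q, r = divmod(len(data), n_endpoints)
    chunks, start = [], 0
    for i in range(n_endpoints):
        end = start + q + (1 if i < r else 0)  # first r chunks get one extra
        chunks.append(data[start:end])
        start = end
    return chunks

print(divide_among_endpoints(list(range(10)), 3))  # → [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```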

  8. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
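    The task ratio the abstract introduces can be sketched as a simple quotient. This is a hedged illustration of the definition only; the numbers and any interpretation threshold are assumptions, not results from the paper.

    ```python
    # Task ratio: parallel task service demand divided by the mean service
    # demand of the nonparallel (owner) workstation processes.
    def task_ratio(parallel_task_demand, local_demands):
        return parallel_task_demand / (sum(local_demands) / len(local_demands))

    # Example (invented numbers): a 30 s parallel task on a workstation whose
    # owner processes average 2 s of service demand.
    r = task_ratio(30.0, [1.0, 2.0, 3.0])  # mean local demand is 2.0
    ```

    Intuitively, a larger task ratio means each parallel task is long relative to the owner's interactive work, so preemptions by owner processes cost proportionally less of the parallel job's progress.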

  9. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  10. The Dynamo package for tomography and subtomogram averaging: components for MATLAB, GPU computing and EC2 Amazon Web Services

    PubMed Central

    Castaño-Díez, Daniel

    2017-01-01

    Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and long-term software maintenance.
PMID:28580909

  11. The Dynamo package for tomography and subtomogram averaging: components for MATLAB, GPU computing and EC2 Amazon Web Services.

    PubMed

    Castaño-Díez, Daniel

    2017-06-01

    Dynamo is a package for the processing of tomographic data. As a tool for subtomogram averaging, it includes different alignment and classification strategies. Furthermore, its data-management module allows experiments to be organized in groups of tomograms, while offering specialized three-dimensional tomographic browsers that facilitate visualization, location of regions of interest, modelling and particle extraction in complex geometries. Here, a technical description of the package is presented, focusing on its diverse strategies for optimizing computing performance. Dynamo is built upon mbtools (middle layer toolbox), a general-purpose MATLAB library for object-oriented scientific programming specifically developed to underpin Dynamo but usable as an independent tool. Its structure intertwines a flexible MATLAB codebase with precompiled C++ functions that carry the burden of numerically intensive operations. The package can be delivered as a precompiled standalone ready for execution without a MATLAB license. Multicore parallelization on a single node is directly inherited from the high-level parallelization engine provided for MATLAB, automatically imparting a balanced workload among the threads in computationally intense tasks such as alignment and classification, but also in logistic-oriented tasks such as tomogram binning and particle extraction. Dynamo supports the use of graphical processing units (GPUs), yielding considerable speedup factors both for native Dynamo procedures (such as the numerically intensive subtomogram alignment) and procedures defined by the user through its MATLAB-based GPU library for three-dimensional operations. Cloud-based virtual computing environments supplied with a pre-installed version of Dynamo can be publicly accessed through the Amazon Elastic Compute Cloud (EC2), enabling users to rent GPU computing time on a pay-as-you-go basis, thus avoiding upfront investments in hardware and long-term software maintenance.

  12. Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers

    ERIC Educational Resources Information Center

    Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph

    2015-01-01

    In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…

  13. Structural considerations for solar installers : an approach for small, simplified solar installations or retrofits.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richards, Elizabeth H.; Schindel, Kay; Bosiljevac, Tom

    2011-12-01

    Structural Considerations for Solar Installers provides a comprehensive outline of structural considerations associated with simplified solar installations and recommends a set of best practices installers can follow when assessing such considerations. Information in the manual comes from engineering and solar experts as well as case studies. The objectives of the manual are to ensure safety and structural durability for rooftop solar installations and to potentially accelerate the permitting process by identifying and remedying structural issues prior to installation. The purpose of this document is to provide tools and guidelines for installers to help ensure that residential photovoltaic (PV) power systems are properly specified and installed with respect to the continuing structural integrity of the building.

  14. Design Guidance for New Windows | Efficient Windows Collaborative

    Science.gov Websites


  15. Design Guidance for Replacement Windows | Efficient Windows Collaborative

    Science.gov Websites


  16. Solar Heat Gain Coefficient (SHGC) | Efficient Windows Collaborative

    Science.gov Websites


  17. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. 
The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.

  18. Parallel processing via a dual olfactory pathway in the honeybee.

    PubMed

    Brill, Martin F; Rosenbaum, Tobias; Reus, Isabelle; Kleineidam, Christoph J; Nawrot, Martin P; Rössler, Wolfgang

    2013-02-06

    In their natural environment, animals face complex and highly dynamic olfactory input. Thus vertebrates as well as invertebrates require fast and reliable processing of olfactory information. Parallel processing has been shown to improve processing speed and power in other sensory systems and is characterized by extraction of different stimulus parameters along parallel sensory information streams. Honeybees possess an elaborate olfactory system with unique neuronal architecture: a dual olfactory pathway comprising a medial projection-neuron (PN) antennal lobe (AL) protocerebral output tract (m-APT) and a lateral PN AL output tract (l-APT) connecting the olfactory lobes with higher-order brain centers. We asked whether this neuronal architecture serves parallel processing and employed a novel technique for simultaneous multiunit recordings from both tracts. The results revealed response profiles from a high number of PNs of both tracts to floral, pheromonal, and biologically relevant odor mixtures tested over multiple trials. PNs from both tracts responded to all tested odors, but with different characteristics indicating parallel processing of similar odors. Both PN tracts were activated by widely overlapping response profiles, which is a requirement for parallel processing. The l-APT PNs had broad response profiles suggesting generalized coding properties, whereas the responses of m-APT PNs were comparatively weaker and less frequent, indicating higher odor specificity. Comparison of response latencies within and across tracts revealed odor-dependent latencies. We suggest that parallel processing via the honeybee dual olfactory pathway provides enhanced odor processing capabilities serving sophisticated odor perception and olfactory demands associated with a complex olfactory world of this social insect.

  19. Visual analysis of inter-process communication for large-scale parallel computing.

    PubMed

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes, which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.
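    One common way around the per-process-row scaling problem the abstract describes is to aggregate message counts into a coarse sender/receiver matrix, binning processes into groups so the view stays readable at large scale. The sketch below illustrates that general idea only; the `(src, dst)` event format is invented and is not the paper's trace format.

    ```python
    # Aggregate point-to-point messages into a (sender group, receiver group)
    # count matrix: n_bins x n_bins cells instead of one row per process.
    def comm_matrix(events, n_procs, n_bins):
        group = (n_procs + n_bins - 1) // n_bins  # processes per group
        m = [[0] * n_bins for _ in range(n_bins)]
        for src, dst in events:
            m[src // group][dst // group] += 1
        return m

    # Four processes binned into two groups of two.
    m = comm_matrix([(0, 1), (0, 3), (2, 3)], n_procs=4, n_bins=2)
    ```

    The matrix stays a fixed size no matter how many processes the trace contains, which is what lets a summarized view scale where a Gantt chart cannot.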

  20. Parallel processing for nonlinear dynamics simulations of structures including rotating bladed-disk assemblies

    NASA Technical Reports Server (NTRS)

    Hsieh, Shang-Hsien

    1993-01-01

    The principal objective of this research is to develop, test, and implement coarse-grained, parallel-processing strategies for nonlinear dynamic simulations of practical structural problems. There are contributions to four main areas: finite element modeling and analysis of rotational dynamics, numerical algorithms for parallel nonlinear solutions, automatic partitioning techniques to effect load-balancing among processors, and an integrated parallel analysis system.

  1. Reliability and performance experience with flat-plate photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1982-01-01

    Statistical models developed to define the most likely sources of photovoltaic (PV) array failures and the optimum method of allowing for the defects in order to achieve a 20 yr lifetime with acceptable performance degradation are summarized. Significant parameters were the cost of energy, annual power output, initial cost, replacement cost, rate of module replacement, the discount rate, and the plant lifetime. Acceptable degradation allocations were calculated to be 0.0001 cell failures/yr, 0.005 module failures/yr, 0.05 power loss/yr, a 0.01 rate of power loss/yr, and a 25 yr module wear-out length. Circuit redundancy techniques were determined to offset cell failures using fault tolerant designs such as series/parallel and bypass diode arrangements. Screening processes have been devised to eliminate cells that will crack in operation, and multiple electrical contacts at each cell compensate for the cells which escape the screening test and then crack when installed. The 20 yr array lifetime is expected to be achieved in the near-term.

  2. Summary Report of National Study of Word Processing Installations in Selected Business Organizations. A Summary of a Report on the National Word Processing Research Study of Delta Pi Epsilon.

    ERIC Educational Resources Information Center

    Scriven, Jolene D.; And Others

    A study sought to determine current practices in word processing installations located in selected organizations throughout the United States. A related problem was to ascertain anticipated future developments in word processing to provide information for educational institutions preparing workers for the business office. Six interview instruments…

  3. Portable long trace profiler: Concept and solution

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Takacs, Peter; Sostero, Giovanni; Cocco, Daniele

    2001-08-01

    Since the early development of the penta-prism long trace profiler (LTP) and the in situ LTP, and following the completion of the first in situ distortion profile measurements at Sincrotrone Trieste (ELETTRA) in Italy in 1995, a concept was developed for a compact, portable LTP with the following characteristics: easily installed on synchrotron radiation beam lines, easily carried to different laboratories around the world for measurements and calibration, convenient for use in evaluating the LTP as an in-process tool in the optical workshop, and convenient for temporary installation as required by other special applications. The initial design of a compact LTP optical head was made at ELETTRA in 1995. Since 1997 further efforts to reduce the optical head size and weight, and to improve measurement stability, have been made at Brookhaven National Laboratory. This article introduces the following solutions and accomplishments for the portable LTP: (1) a new design for a compact and very stable optical head, (2) the use of a small detector connected to a laptop computer directly via an enhanced parallel port, with no extra frame-grabber interface or control box, (3) a customized small mechanical slide that uses a compact motor with a connector-sized motor controller, and (4) the use of a laptop computer system. These solutions make the portable LTP able to be packed into two laptop-size cases: one for the computer and one for the rest of the system.

  4. Purple L1 Milestone Review Panel TotalView Debugger Functionality and Performance for ASC Purple

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, M

    2006-12-12

    ASC code teams require a robust software debugging tool to help developers quickly find bugs in their codes and get their codes running. Development debugging commonly runs up to 512 processes. Production jobs run up to full ASC Purple scale, and at times require introspection while running. Developers want a debugger that runs on all their development and production platforms and that works with all compilers and runtimes used with ASC codes. The TotalView Multiprocess Debugger made by Etnus was specified for ASC Purple to address this needed capability. The ASC Purple environment builds on the environment seen by TotalView on ASCI White. The debugger must now operate with the Power5 CPU, Federation switch, AIX 5.3 operating system including large pages, IBM compilers 7 and 9, POE 4.2 parallel environment, and rs6000 SLURM resource manager. Users require robust, basic debugger functionality with acceptable performance at development debugging scale. A TotalView installation must be provided at the beginning of the early user access period that meets these requirements. A functional enhancement, fast conditional data watchpoints, and a scalability enhancement, capability up to 8192 processes, are to be demonstrated.

  5. Optimizing RF gun cavity geometry within an automated injector design system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofler, Alicia; Evtushenko, Pavel

    2011-03-28

    RF guns play an integral role in the success of several light sources around the world, and properly designed and optimized cw superconducting RF (SRF) guns can provide a path to higher average brightness. As the need for these guns grows, it is important to have automated optimization software tools that vary the geometry of the gun cavity as part of the injector design process. This will allow designers to improve existing designs for present installations, extend the utility of these guns to other applications, and develop new designs. An evolutionary algorithm (EA) based system can provide this capability because EAs can search in parallel a large parameter space (often non-linear) and in a relatively short time identify promising regions of the space for more careful consideration. The injector designer can then evaluate more cavity design parameters during the injector optimization process against the beam performance requirements of the injector. This paper will describe an extension to the APISA software that allows the cavity geometry to be modified as part of the injector optimization and provide examples of its application to existing RF and SRF gun designs.
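    The evolutionary search the abstract describes can be sketched in miniature: score a population of candidate geometries, keep the best, and mutate them to explore the parameter space. Everything below is a generic illustration of the EA idea; the fitness function and parameter bounds are invented placeholders, not APISA's cavity model.

    ```python
    import random

    # Minimal elitist evolutionary algorithm: minimize `fitness` over a box.
    def evolve(fitness, bounds, pop_size=20, generations=30, seed=0):
        rng = random.Random(seed)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)                 # best candidates first
            survivors = pop[: pop_size // 2]      # keep the better half
            pop = survivors + [                   # refill by mutating survivors
                [min(max(x + rng.gauss(0, 0.1), lo), hi)
                 for x, (lo, hi) in zip(rng.choice(survivors), bounds)]
                for _ in range(pop_size - len(survivors))
            ]
        return min(pop, key=fitness)

    # Toy "cavity" objective with a known optimum at (1.0, -0.5).
    best = evolve(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 0.5) ** 2,
                  bounds=[(-2, 2), (-2, 2)])
    ```

    In an injector design setting, the fitness evaluation would be a beam-dynamics simulation of each candidate geometry; because each candidate is scored independently, the population evaluates naturally in parallel, which is the property the abstract highlights.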

  6. A small-angle x-ray scattering system with a vertical layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhen; Chen, Xiaowei; Meng, Lingpu

    A small-angle x-ray scattering (SAXS) system with a vertical layout (V-SAXS) has been designed and constructed for in situ detection of nanostructures, well suited for in situ study of the self-assembly of nanoparticles at liquid interfaces and of polymer processing. A steel-tower frame on a reinforced basement is built as the supporting skeleton for the scattering beam path and detector platform, ensuring the system a high working stability and a high operating accuracy. A micro-focus x-ray source combining a parabolic three-dimensional multi-layer mirror and a scatterless collimation system provides a highly parallel beam, which allows us to detect the very small angle range. With a sample-to-detector distance of 7 m, the largest measurable length scale is 420 nm in real space. With a large sample zone, it is possible to install different experimental setups such as a film stretching machine, which makes the system well suited to follow the microstructure evolution of materials during processing. The capability of the V-SAXS for in situ study is tested with a drying experiment on a free latex droplet, which confirms our initial design.

  7. Pressure Measurement Systems

    NASA Astrophysics Data System (ADS)

    1990-01-01

    System 8400 is an advanced system for measurement of gas and liquid pressure, along with a variety of other parameters, including voltage, frequency and digital inputs. System 8400 offers exceptionally high-speed data acquisition through parallel processing, and its modular design allows expansion from a relatively inexpensive entry-level system by the addition of modular Input Units that can be installed or removed in minutes. Douglas Juanarena was on the team of engineers that developed a new technology known as ESP (electronically scanned pressure). The Langley ESP measurement system was based on miniature integrated-circuit pressure-sensing transducers that communicated pressure information to a minicomputer. In 1977, Juanarena formed PSI to exploit the NASA technology. In 1978 he left Langley, obtained a NASA license for the technology, and introduced the first commercial product, the 780B pressure measurement system. PSI developed a pressure scanner for automation of industrial processes. Now in its second design generation, the DPT-6400 is capable of making 2,000 measurements a second and can be expanded to 64 channels by the addition of slave units. The new System 8400 represents PSI's bid to further exploit the 600 million U.S. industrial pressure measurement market. It is geared to provide a turnkey solution to physical measurement.

  8. Indicator system for advanced nuclear plant control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  9. Console for a nuclear control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  10. Alarm system for a nuclear control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1994-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  11. Advanced nuclear plant control complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  12. Advanced nuclear plant control room complex

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1993-01-01

    An advanced control room complex for a nuclear power plant, including a discrete indicator and alarm system (72) which is nuclear qualified for rapid response to changes in plant parameters and a component control system (64) which together provide a discrete monitoring and control capability at a panel (14-22, 26, 28) in the control room (10). A separate data processing system (70), which need not be nuclear qualified, provides integrated and overview information to the control room and to each panel, through CRTs (84) and a large, overhead integrated process status overview board (24). The discrete indicator and alarm system (72) and the data processing system (70) receive inputs from common plant sensors and validate the sensor outputs to arrive at a representative value of the parameter for use by the operator during both normal and accident conditions, thereby avoiding the need for him to assimilate data from each sensor individually. The integrated process status board (24) is at the apex of an information hierarchy that extends through four levels and provides access at each panel to the full display hierarchy. The control room panels are preferably of a modular construction, permitting the definition of inputs and outputs, the man machine interface, and the plant specific algorithms, to proceed in parallel with the fabrication of the panels, the installation of the equipment and the generic testing thereof.

  13. RDX/HMX Plant Design

    DTIC Science & Technology

    1981-05-01

coating process in Explosives Manufacturing Line 2. The end products of the initial design effort are process flow diagrams, piping and...instrumentation diagrams, motor control schedules, interlock logic diagrams, piping installation drawings, typical instrument installation details, equipment...structures, equipment, utilities, and process piping extending 1.5 m (5 ft) beyond the building or area were not included in the scope of work. Nitrolysis

  14. Leukemia-related mortality in towns lying in the vicinity of metal production and processing installations.

    PubMed

    García-Pérez, Javier; López-Cima, María Felicitas; Boldo, Elena; Fernández-Navarro, Pablo; Aragonés, Nuria; Pollán, Marina; Pérez-Gómez, Beatriz; López-Abente, Gonzalo

    2010-10-01

    Releases to the environment of toxic substances stemming from industrial metal production and processing installations can pose a health problem to populations in their vicinity. To investigate whether there might be excess leukemia-related mortality in populations residing in towns in the vicinity of Spanish metal industries included in the European Pollutant Emission Register. Ecologic study designed to examine mortality due to leukemia at a municipal level, during the period 1994-2003. Population exposure to pollution was estimated on the basis of distance from town of residence to pollution source. Using Poisson regression models, we analyzed: risk of dying from leukemia in a 5-kilometer zone around installations which had become operational prior to 1990; effect of pollution discharge route and type of industrial activity; and risk gradient within a 50-kilometer radius of such installations. Excess mortality (relative risk, 95% confidence interval) was detected in the vicinity of pre-1990 installations (1.07, 1.02-1.13 in men; 1.05, 1.00-1.11 in women), with this being more elevated in the case of installations that released pollution to air versus water. On stratifying by type of industrial activity, statistically significant associations were also observed among women residing in the vicinity of galvanizing installations (1.58, 1.09-2.29) and surface-treatment installations using an electrolytic or chemical process (1.34, 1.10-1.62), which released pollution to air. There was an effect whereby risk increased with proximity to certain installations. The results suggest an association between risk of dying due to leukemia and proximity to Spanish metal industries. Copyright 2010 Elsevier Ltd. All rights reserved.
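The relative risks quoted above (e.g., 1.07, 95% CI 1.02-1.13) can be illustrated with a minimal sketch of how a rate ratio and its log-normal confidence interval are derived. The death counts and person-years below are hypothetical, and the study itself used Poisson regression rather than this crude two-group comparison:

```python
import math

def relative_risk(obs_exposed, py_exposed, obs_ref, py_ref):
    """Relative risk (rate ratio) of mortality with a 95% CI.

    obs_*: observed death counts; py_*: person-years at risk.
    The CI uses the standard log-normal approximation for a rate ratio.
    """
    rr = (obs_exposed / py_exposed) / (obs_ref / py_ref)
    se_log = math.sqrt(1.0 / obs_exposed + 1.0 / obs_ref)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical counts: 240 deaths over 2.0M person-years near the
# installations vs 1000 deaths over 10.0M person-years in reference towns.
rr, lo, hi = relative_risk(240, 2.0e6, 1000, 10.0e6)
```

A confidence interval whose lower bound stays above 1.0, as in the excesses reported above, is what marks the association as statistically significant.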

  15. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study.

    PubMed

    Klingner, Carsten M; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI.
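The Bayesian model comparison described above selects between processing routes by comparing model evidences. A minimal sketch, assuming a flat prior over models and hypothetical log-evidence values (DCM itself uses a free-energy approximation to the log evidence):

```python
import math

def posterior_model_probs(log_evidences):
    """Posterior model probabilities from (approximate) log model
    evidences under a flat prior over models: a softmax of the log
    evidences, computed stably by subtracting the maximum first."""
    m = max(log_evidences)
    w = [math.exp(le - m) for le in log_evidences]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical log evidences: a parallel model (SI and SII both driven
# by the stimulus) vs a serial model (SII driven only via SI). A log
# evidence difference of 3 corresponds to a Bayes factor of about 20.
probs = posterior_model_probs([-1200.0, -1203.0])
```

With these numbers the parallel model would carry over 95% of the posterior probability, the kind of margin on which a "favored processing stream" conclusion rests.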

  16. The Processing of Somatosensory Information Shifts from an Early Parallel into a Serial Processing Mode: A Combined fMRI/MEG Study

    PubMed Central

    Klingner, Carsten M.; Brodoehl, Stefan; Huonker, Ralph; Witte, Otto W.

    2016-01-01

    The question regarding whether somatosensory inputs are processed in parallel or in series has not been clearly answered. Several studies that have applied dynamic causal modeling (DCM) to fMRI data have arrived at seemingly divergent conclusions. However, these divergent results could be explained by the hypothesis that the processing route of somatosensory information changes with time. Specifically, we suggest that somatosensory stimuli are processed in parallel only during the early stage, whereas the processing is later dominated by serial processing. This hypothesis was revisited in the present study based on fMRI analyses of tactile stimuli and the application of DCM to magnetoencephalographic (MEG) data collected during sustained (260 ms) tactile stimulation. Bayesian model comparisons were used to infer the processing stream. We demonstrated that the favored processing stream changes over time. We found that the neural activity elicited in the first 100 ms following somatosensory stimuli is best explained by models that support a parallel processing route, whereas a serial processing route is subsequently favored. These results suggest that the secondary somatosensory area (SII) receives information regarding a new stimulus in parallel with the primary somatosensory area (SI), whereas later processing in the SII is dominated by the preprocessed input from the SI. PMID:28066197

  17. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation techniques cannot meet the processing and storage requirements of massive remote sensing imagery. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process, building an inexpensive and efficient computer cluster that parallelizes the mean-shift segmentation algorithm under the MapReduce model. The approach not only preserves segmentation quality but also improves segmentation speed, better meeting real-time requirements. The MapReduce-based parallel mean-shift segmentation algorithm is thus of practical significance and value.
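The split into map and reduce phases can be sketched for a single mean-shift iteration. This toy version uses 1-D gray levels, a flat kernel, and in-process partitions standing in for the cluster nodes described above:

```python
from functools import reduce

def mean_shift_map(partition, x, bandwidth):
    """Map phase: partial kernel-weighted sums over one image
    partition, using a flat (top-hat) kernel of the given bandwidth."""
    s = n = 0.0
    for v in partition:
        if abs(v - x) <= bandwidth:
            s += v
            n += 1
    return s, n

def mean_shift_reduce(a, b):
    """Reduce phase: combine partial (sum, count) pairs."""
    return a[0] + b[0], a[1] + b[1]

def mean_shift_step(partitions, x, bandwidth):
    """One mean-shift iteration, MapReduce style: each partition is
    processed independently (map), partial sums are merged (reduce),
    and the new estimate is the kernel-weighted mean."""
    s, n = reduce(mean_shift_reduce,
                  (mean_shift_map(p, x, bandwidth) for p in partitions))
    return s / n if n else x

# Toy gray-level data split across two partitions; the estimate moves
# toward the local mode near 10 and away from the outliers near 50.
partitions = [[9, 10, 11, 50], [10, 12, 55]]
x1 = mean_shift_step(partitions, 11.0, 3.0)
```

Iterating `mean_shift_step` to convergence for each pixel's feature vector, with partitions distributed across mappers, is the essence of the parallelization; the 2-D spatial-plus-range kernel of real image segmentation follows the same pattern.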

  18. Workshop on Solid State Switches for Pulsed Power, held January 12-14, 1983 at Tamarron, Colorado

    DTIC Science & Technology

    1983-05-31

of its anticipated scalability. However, the projected performance of other types of discrete switches made their continued exploration and...linking of asynchronous AC power grids. Some present installations and projected increases are shown in Table 2. A new commercial power application...Average Power 62.5 KW 160 KW Device RBDT (RSR) T60R SCR 2N3873 Array, 6 Series 10 Parallel-20 Series Table 18. Applications of solid state pulse

  19. The Impact of City-level Permitting Processes on Residential Photovoltaic Installation Prices and Development Times: An Empirical Analysis of Solar Systems in California Cities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiser, Ryan; Dong, Changgui

Business process or “soft” costs account for well over 50% of the installed price of residential photovoltaic (PV) systems in the United States, so understanding these costs is crucial for identifying PV cost-reduction opportunities. Among these costs are those imposed by city-level permitting processes, which may add both expense and time to the PV development process. Building on previous research, this study evaluates the effect of city-level permitting processes on the installed price of residential PV systems and on the time required to develop and install those systems. The study uses a unique dataset from the U.S. Department of Energy’s Rooftop Solar Challenge Program, which includes city-level permitting process “scores,” plus data from the California Solar Initiative and the U.S. Census. Econometric methods are used to quantify the price and development-time effects of city-level permitting processes on more than 3,000 PV installations across 44 California cities in 2011. Results indicate that city-level permitting processes have a substantial and statistically significant effect on average installation prices and project development times. The results suggest that cities with the most favorable (i.e., highest-scoring) permitting practices can reduce average residential PV prices by $0.27–$0.77/W (4%–12% of median PV prices in California) compared with cities with the most onerous (i.e., lowest-scoring) permitting practices, depending on the regression model used. Though the empirical models for development times are less robust, results suggest that the most streamlined permitting practices may shorten development times by around 24 days on average (25% of the median development time). These findings illustrate the potential price and development-time benefits of streamlining local permitting procedures for PV systems.

  20. Automated System of Diagnostic Monitoring at Bureya HPP Hydraulic Engineering Installations: a New Level of Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musyurka, A. V., E-mail: musyurkaav@burges.rushydro.ru

This article presents the design, hardware, and software solutions developed and placed in service for the automated system of diagnostic monitoring (ASDM) of hydraulic engineering installations at the Bureya HPP, which assure a reliable process for monitoring those installations. Implementation of the project provides a timely solution to the problems addressed by the hydraulic engineering installation diagnostics section.

  1. The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit

    NASA Technical Reports Server (NTRS)

Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Mark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete

    1998-01-01

    Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.

  2. MetaQuant: a tool for the automatic quantification of GC/MS-based metabolome data.

    PubMed

    Bunk, Boyke; Kucklick, Martin; Jonas, Rochus; Münch, Richard; Schobert, Max; Jahn, Dieter; Hiller, Karsten

    2006-12-01

    MetaQuant is a Java-based program for the automatic and accurate quantification of GC/MS-based metabolome data. In contrast to other programs MetaQuant is able to quantify hundreds of substances simultaneously with minimal manual intervention. The integration of a self-acting calibration function allows the parallel and fast calibration for several metabolites simultaneously. Finally, MetaQuant is able to import GC/MS data in the common NetCDF format and to export the results of the quantification into Systems Biology Markup Language (SBML), Comma Separated Values (CSV) or Microsoft Excel (XLS) format. MetaQuant is written in Java and is available under an open source license. Precompiled packages for the installation on Windows or Linux operating systems are freely available for download. The source code as well as the installation packages are available at http://bioinformatics.org/metaquant
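MetaQuant's self-acting calibration is not specified in detail here, but per-metabolite calibration of GC/MS peak areas is conventionally a least-squares line fit that is then inverted to quantify unknown samples. A minimal sketch with hypothetical standards:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b: one metabolite's
    calibration curve of known concentrations vs measured peak areas."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def quantify(area, a, b):
    """Invert the calibration line to turn a measured peak area
    into a concentration estimate."""
    return (area - b) / a

# Hypothetical standards: concentrations (uM) vs integrated peak areas.
a, b = fit_line([1.0, 2.0, 5.0, 10.0], [110.0, 210.0, 510.0, 1010.0])
conc = quantify(460.0, a, b)
```

Repeating this fit for each of the hundreds of substances, with standards measured in one batch, is what a parallel calibration of many metabolites amounts to.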

  3. Ir-Catalyzed, Silyl-Directed, peri-Borylation of C-H Bonds in Fused Polycyclic Arenes and Heteroarenes.

    PubMed

    Su, Bo; Hartwig, John F

    2018-05-20

    peri-Disubstituted naphthalenes exhibit interesting physical properties and unique chemical reactivity, due to the parallel arrangement of the bonds to the two peri-disposed substituents. Regioselective installation of a functional group at the position peri to 1-substituted naphthalenes is challenging due to the steric interaction between the existing substituent and the position at which the second one would be installed. We report an iridium-catalyzed borylation of the C-H bond peri to a silyl group in naphthalenes and analogous polyaromatic hydrocarbons. The reaction occurs under mild conditions with wide functional group tolerance. The silyl group and the boryl group in the resulting products are precursors to a range of functional groups bound to the naphthalene ring through C-C, C-O, C-N, C-Br and C-Cl bonds. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Field observation of advance warning/advisory signage for passive railway crossings with restricted lateral sightline visibility: an experimental investigation.

    PubMed

    Ward, N J; Wilde, G J

    1995-04-01

    This study evaluated a newly proposed series of signs intended for passive crossings with restrictions to lateral sightline visibility. These signs provide advance warning of a crossing and the restriction to lateral visibility. In addition, the signs advise motorists to come to a complete stop before crossing. Motorist behaviour was examined before and after installation of these signs at a rural passive crossing. A second site was observed in parallel to control partially for any confounding effects. Results indicated that motorists reduced speed and searched approach quadrants longer at points in the approachway after installation of the signs. However, there was no reliable increase in the number of motorists coming to complete stop, engaging in search behaviours, or classified as safe. The results are discussed in terms of reasons for the lack of compliance with the sign advisory.

  5. Control system adds to precipitator efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurrole, G.

    1978-02-01

An electrostatic precipitator in use at Lion Oil Co., Martinez, Calif., in a fluid catalytic cracking and CO boiler application, was upgraded by mechanical sectionalization of the gas passage and a new electronic control system. The electrostatic precipitator is installed upstream of the CO boiler to handle gas flow up to 4.77 ft/sec, and pressure to 4.5 psi. The independent gas chambers in the electrostatic precipitator were divided by installing gas-tight partition walls to form a total of four electrostatic fields. The precipitator was also equipped with adjustable inlet gas flow-control baffles for even gas distribution. Rows of grounded collecting electrodes are parallel with the flow of gas. The emitting electrode system, powered by separate high-energy transformers for each collecting field, uses silicon-controlled rectifiers and analog electronic networks for rapid response to changing gas and dust conditions. Regulatory requirements call for efficient collection of catalyst fines with no more than 40 lb/hr escaping through the boiler stack. Currently, stack losses average about 38 lb/hr. The installation of two additional control systems with transformers and rectifiers should reduce stack losses to 34 lb/hr.

  6. SU-D-213-05: Design, Evaluation and First Applications of a Off-Site State-Of-The-Art 3D Dosimetry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malcolm, J; Mein, S; McNiven, A

    2015-06-15

Purpose: To design, construct and commission a prototype in-house three-dimensional (3D) dose verification system for stereotactic body radiotherapy (SBRT) verification at an off-site partner institution, and to investigate the potential of this system to achieve sufficient performance (1 mm resolution, 3% noise, within 3% of true dose reading) for SBRT verification. Methods: The system was designed using a parallel-ray geometry provided by precision telecentric lenses and a 630 nm LED light source. Using a radiochromic dosimeter, a 3D dosimetric comparison with our gold-standard system and treatment planning software (Eclipse) was done for a four-field box treatment, under a gamma passing criterion of 3%/3mm with a 10% dose threshold. Post off-site installation, deviations in the system’s dose readout performance were assessed by rescanning the four-field-box-irradiated dosimeter and using line profiles to compare on-site and off-site mean and noise levels in four distinct dose regions. As a final step, an end-to-end test of the system was completed at the off-site location, including CT simulation, irradiation of the dosimeter and a 3D dosimetric comparison of the planned (Pinnacle³) to delivered dose for a spinal SBRT treatment (12 Gy per fraction). Results: The noise level in the high- and medium-dose regions of the four-field box treatment was approximately 5% both pre- and post-installation, reflecting the reduction in positional uncertainty achieved by the new design. At 1 mm dose voxels, the gamma pass rates (3%/3mm) for our in-house gold-standard system and the off-site system were comparable at 95.8% and 93.2%, respectively. Conclusion: This work describes the end-to-end process and results of designing, installing, and commissioning a state-of-the-art 3D dosimetry system created for verification of advanced radiation treatments, including spinal radiosurgery.
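The 3%/3mm gamma criterion used above can be sketched in one dimension. This is a simplified global-normalization version with a discrete search over evaluated points, not the commissioning software itself:

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """1D gamma index per reference point (global normalization):
    gamma(r) = min over evaluated points e of
        sqrt((dose_diff / (dd * Dmax))**2 + (pos_diff / dta)**2),
    where dd is the dose criterion as a fraction of the maximum
    reference dose and dta is the distance-to-agreement in mm."""
    dmax = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g = min(math.sqrt(((ed - rd) / (dd * dmax)) ** 2
                          + ((ep - rp) / dta) ** 2)
                for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(g)
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1, the usual passing criterion."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)

# Identical dose profiles shifted by 1 mm: every point passes 3%/3mm,
# since the shift is well inside the 3 mm distance criterion.
pos = [0.0, 1.0, 2.0, 3.0, 4.0]
dose = [10.0, 50.0, 100.0, 50.0, 10.0]
g = gamma_1d(pos, dose, [p + 1.0 for p in pos], dose)
rate = pass_rate(g)
```

The quoted 95.8% and 93.2% pass rates are this statistic computed over a full 3D dose grid, with the additional dose-threshold mask excluding low-dose voxels.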

  7. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.
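The set-size effects mentioned above are conventionally summarized as a search slope (ms per item), with steep slopes traditionally read as evidence for serial scanning; the study's point is that this inference is unsafe without distribution-level tests. A minimal sketch of the slope statistic with hypothetical mean RTs:

```python
def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean RT (ms) against display set size:
    the classic ms/item index used to diagnose search efficiency."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    sxx = sum((x - mx) ** 2 for x in set_sizes)
    sxy = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    return sxy / sxx

# Hypothetical target-present means: a shallow ~5 ms/item function,
# a value usually read as efficient search despite a real set-size effect.
slope = search_slope([4, 8, 16], [520.0, 540.0, 580.0])
```

A significant but shallow slope like this is exactly the case where, per the abstract, cumulative distribution functions can still favor parallel over serial processing.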

  8. Identification, regression and validation of an image processing degradation model to assess the effects of aeromechanical turbulence due to installation aircraft

    NASA Astrophysics Data System (ADS)

    Miccoli, M.; Usai, A.; Tafuto, A.; Albertoni, A.; Togna, F.

    2016-10-01

The propagation environment around airborne platforms may significantly degrade the performance of electro-optical (EO) self-protection systems installed onboard. To ensure a sufficient level of protection, it is necessary to understand which sensor/effector installation positions guarantee that the aeromechanical turbulence generated by the engine exhausts and the rotor downwash does not interfere with the normal operation of the imaging systems. Since radiation propagation in turbulence is a process that is hard to predict, a high-level approach was proposed in which, instead of studying the medium under turbulence, the effects of turbulence on the imaging systems' processing are assessed by means of an equivalent statistical model representation, allowing the definition of a turbulence index that classifies different levels of turbulence intensity. On this basis, a general measurement methodology for the degradation of imaging system performance under turbulence conditions was developed. The analysis of the performance degradation started by evaluating the effects of turbulence with a given index on the image processing chain (i.e., thresholding, blob analysis). The processing-in-turbulence (PIT) index is then derived by combining the effects of the given turbulence on the different image processing primitive functions. By evaluating the corresponding PIT index for a sufficient number of testing directions, it is possible to map the performance degradation around the aircraft installation for a generic imaging system and to identify the best installation positions for the sensors/effectors composing the EO self-protection suite.

9. Methods for design and evaluation of parallel computing systems (The PISCES project)

    NASA Technical Reports Server (NTRS)

Pratt, Terrence W.; Wise, Robert; Haught, Mary Jo

    1989-01-01

    The PISCES project started in 1984 under the sponsorship of the NASA Computational Structural Mechanics (CSM) program. A PISCES 1 programming environment and parallel FORTRAN were implemented in 1984 for the DEC VAX (using UNIX processes to simulate parallel processes). This system was used for experimentation with parallel programs for scientific applications and AI (dynamic scene analysis) applications. PISCES 1 was ported to a network of Apollo workstations by N. Fitzgerald.

  10. Parallel computing in genomic research: advances and applications

    PubMed Central

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities. PMID:26604801
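The chunk-and-distribute pattern underlying most of the surveyed solutions can be sketched with a toy GC-content computation; a thread pool stands in here for the clusters, grids, and clouds the review discusses:

```python
from concurrent.futures import ThreadPoolExecutor

def gc_content(chunk):
    """Fraction of G/C bases in one sequence chunk."""
    return sum(b in "GCgc" for b in chunk) / len(chunk)

def parallel_gc(sequence, n_workers=4, chunk_size=1000):
    """Split a long sequence into chunks, score them concurrently,
    then combine the per-chunk results (length-weighted) into the
    whole-sequence GC fraction."""
    chunks = [sequence[i:i + chunk_size]
              for i in range(0, len(sequence), chunk_size)]
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        scores = list(ex.map(gc_content, chunks))
    total = sum(s * len(c) for s, c in zip(scores, chunks))
    return total / len(sequence)

gc = parallel_gc("ATGC" * 2500)  # 10 kb toy "read"
```

Real pipelines replace the worker function with an alignment or assembly step and the pool with a scheduler such as a workflow engine on an HPC cluster, but the decompose/score/recombine structure is the same.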

  11. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.
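The spatial translation described above, sliding bits to neighboring processing elements, can be sketched in software; the 4x4 "bit plane" below is a toy stand-in for the MPP's 128 x 128 array:

```python
def shift_plane(plane, dr, dc):
    """Translate a 2D bit plane by (dr, dc), the way the MPP slides
    bit slices to neighboring processing elements; bits shifted past
    the array edge are lost and vacated cells become 0."""
    rows, cols = len(plane), len(plane[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                out[rr][cc] = plane[r][c]
    return out

# Toy 4x4 bit plane with bits on the diagonal.
plane = [[1 if r == c else 0 for c in range(4)] for r in range(4)]
east = shift_plane(plane, 0, 1)  # each bit moves one PE to the right
```

In the hardware this happens for all 16,384 elements in one instruction cycle rather than in a double loop; the sketch only illustrates the data movement.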

  12. Parallel computing in genomic research: advances and applications.

    PubMed

    Ocaña, Kary; de Oliveira, Daniel

    2015-01-01

Today's genomic experiments have to process the so-called "biological big data" that is now reaching the size of Terabytes and Petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analyses of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic review of the literature surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that scientists can consider when running their genomic experiments so as to benefit from parallelism techniques and HPC capabilities.

  13. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not guarantee that real-time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.
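The finding that achieved parallelism is small is usually framed with Amdahl's law: if only a fraction of the match/inference work parallelizes, speedup saturates quickly no matter how many processors are added. A sketch with a hypothetical 40% parallel fraction:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup when only a fraction p of the
    work can be spread across n processors, the rest staying serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# A rule-matching phase that is only 40% parallelizable caps out fast:
s8 = amdahl_speedup(0.4, 8)   # ~1.54x on 8 processors
s_inf = 1.0 / (1.0 - 0.4)     # asymptotic limit of ~1.67x
```

This is why the survey's closing point matters: exploiting data and application parallelism is a way of raising the parallel fraction itself, not just the processor count.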

  14. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  15. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super scalability and portability of the approach is demonstrated on several parallel computers.

  16. Parallel implementation of all-digital timing recovery for high-speed and real-time optical coherent receivers.

    PubMed

    Zhou, Xian; Chen, Xue

    2011-05-09

Digital coherent receivers combine coherent detection with digital signal processing (DSP) to compensate for transmission impairments, and are therefore a promising candidate for future high-speed optical transmission systems. However, the maximum symbol rate supported by such real-time receivers is limited by the processing rate of the hardware. In order to cope with this difficulty, parallel processing algorithms are imperative. In this paper, we propose a novel parallel digital timing recovery loop (PDTRL) based on our previous work. Furthermore, to increase the dynamic dispersion tolerance range of receivers, we embed a parallel adaptive equalizer in the PDTRL. This parallel joint scheme (PJS) can be used to complete synchronization, equalization and polarization de-multiplexing simultaneously. Finally, we demonstrate that the PDTRL and PJS allow the hardware to process a 112 Gbit/s POLMUX-DQPSK signal while running at clock rates in the hundreds-of-MHz range. © 2011 Optical Society of America

  17. Spatially parallel processing of within-dimension conjunctions.

    PubMed

    Linnell, K J; Humphreys, G W

    2001-01-01

    Within-dimension conjunction search for red-green targets amongst red-blue, and blue-green, nontargets is extremely inefficient (Wolfe et al, 1990 Journal of Experimental Psychology: Human Perception and Performance 16 879-892). We tested whether pairs of red-green conjunction targets can nevertheless be processed spatially in parallel. Participants made speeded detection responses whenever a red-green target was present. Across trials where a second identical target was present, the distribution of detection times was compatible with the assumption that targets were processed in parallel (Miller, 1982 Cognitive Psychology 14 247-279). We show that this was not an artifact of response-competition or feature-based processing. We suggest that within-dimension conjunctions can be processed spatially in parallel. Visual search for such items may be inefficient owing to within-dimension grouping between items.

  18. Development of a SaaS application probe to the physical properties of the Earth's interior: An attempt at moving HPC to the cloud

    NASA Astrophysics Data System (ADS)

    Huang, Qian

    2014-09-01

    Scientific computing often requires a massive number of computers for performing large-scale simulations, and computing in mineral physics is no exception. To investigate the physical properties of minerals at extreme conditions, computational mineral physics uses parallel computing technology to speed up performance, processing a computational task on multiple computer resources simultaneously and thereby greatly reducing computation time. Traditionally, parallel computing has been addressed with High Performance Computing (HPC) solutions and installed facilities such as clusters and supercomputers. Today, cloud computing is growing tremendously. Infrastructure as a Service (IaaS), with its on-demand, pay-as-you-go model, creates a flexible and cost-effective means to access computing resources. In this paper, a feasibility report of HPC on a cloud infrastructure is presented. It is found that current cloud services in the IaaS layer still need better performance to be useful to research projects. On the other hand, Software as a Service (SaaS), another type of cloud computing, is introduced into an HPC system for computing in mineral physics, and an application based on it is developed. This paper gives an overall description of this SaaS application. The contribution can promote cloud application development in computational mineral physics and cross-disciplinary studies.

  19. Proper Installation Improves Carpet Life.

    ERIC Educational Resources Information Center

    Grogan, Ralph

    1998-01-01

    Explains how proper carpet installation can add to carpet life; includes tips to consider before signing a carpet-installation purchasing agreement that can make the new carpet a better investment. Topics cover how color selection lengthens appearance life, the need for moisture testing, the importance of carpet seams in the purchasing process,…

  20. Hadoop neural network for parallel and distributed feature selection.

    PubMed

    Hodge, Victoria J; O'Keefe, Simon; Austin, Jim

    2016-06-01

    In this paper, we introduce a theoretical basis for a Hadoop-based neural network for parallel and distributed feature selection in Big Data sets. It is underpinned by an associative-memory (binary) neural network which is highly amenable to parallel and distributed processing and fits the Hadoop paradigm. There are many feature selectors described in the literature, each with various strengths and weaknesses. We present the implementation details of five feature selection algorithms constructed using our artificial neural network framework embedded in Hadoop YARN. Hadoop allows parallel and distributed processing: each feature selector can be divided into subtasks, and the subtasks can then be processed in parallel. Multiple feature selectors can also be processed simultaneously (in parallel), allowing multiple feature selectors to be compared. We identify commonalities among the five feature selectors. All can be processed in the framework using a single representation, and the overall processing can be greatly reduced by processing the common aspects of the feature selectors only once and propagating these aspects across all five feature selectors as necessary. This allows the best feature selector, and the actual features to select, to be identified for large and high-dimensional data sets by exploiting the efficiency and flexibility of embedding the binary associative-memory neural network in Hadoop. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
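    The divide-and-score pattern described above, in which each feature selector splits into per-feature subtasks that run in parallel, can be illustrated outside Hadoop with a minimal Python sketch (the variance scorer and all names are illustrative assumptions, not the paper's associative-memory framework):

```python
from concurrent.futures import ThreadPoolExecutor

def variance_score(column):
    # Filter-style relevance score for one feature (higher = more spread).
    mean = sum(column) / len(column)
    return sum((v - mean) ** 2 for v in column) / len(column)

def select_features(columns, scorer, k, workers=4):
    # One subtask per feature: score every column in parallel, keep the top k.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(scorer, columns))
    ranked = sorted(range(len(columns)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]
```

    Running several scorers (several "feature selectors") through the same pool at once mirrors the paper's point that multiple selectors can also be compared in parallel.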

  1. The FOSS GIS Workbench on the GFZ Load Sharing Facility compute cluster

    NASA Astrophysics Data System (ADS)

    Löwe, P.; Klump, J.; Thaler, J.

    2012-04-01

    Compute clusters can be used as GIS workbenches; their wealth of resources allows us to take on geocomputation tasks that exceed the limitations of smaller systems. Harnessing these capabilities requires a Geographic Information System (GIS) able to utilize the available cluster configuration/architecture, with a sufficient degree of user friendliness to allow for wide application. In this paper we report on the first successful porting of GRASS GIS, the oldest and largest free and open source (FOSS) GIS project, onto a compute cluster using Platform Computing's Load Sharing Facility (LSF). In 2008, GRASS 6.3 was installed on the GFZ compute cluster, which at that time comprised 32 nodes. Interaction with the GIS was limited to the command-line interface, so further development was required to encapsulate the GRASS GIS business layer and facilitate its use by users not familiar with GRASS GIS. During the summer of 2011, multiple versions of GRASS GIS (6.4, 6.5, and 7.0) were installed on the upgraded GFZ compute cluster, now consisting of 234 nodes with 480 CPUs providing 3084 cores. The GFZ compute cluster currently offers 19 different processing queues with varying hardware capabilities and priorities, allowing for fine-grained scheduling and load balancing. After successful testing of core GIS functionalities, including the graphical user interface, mechanisms were developed to deploy scripted geocomputation tasks onto dedicated processing queues; these mechanisms are based on earlier work by Neteler et al. (2008). A first application of the new GIS functionality was the generation of maps of simulated tsunamis in the Mediterranean Sea for the Tsunami Atlas of the FP-7 TRIDEC Project (www.tridec-online.eu), for which up to 500 processing nodes were used in parallel. Further trials included the processing of geometrically complex problems requiring significant amounts of processing time. The GIS cluster successfully completed all these tasks, with processing times lasting up to a full 20 CPU days. The deployment of GRASS GIS on a compute cluster allows our users to tackle GIS tasks previously out of reach of single workstations. In addition, this GRASS GIS cluster implementation will be made available to other users at GFZ in the course of 2012. It will thus become a research utility in the sense of "Software as a Service" (SaaS) and can be seen as our first step towards building a GFZ corporate cloud service.
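    The deployment pattern described above, scripted geocomputation tasks dispatched to dedicated LSF processing queues, can be sketched in Python. The queue name and task script below are hypothetical; `-q`, `-n`, and `-o` are standard options of the LSF `bsub` submission command:

```python
def lsf_command(task, queue, cores=1, log="job.%J.out"):
    # Build one LSF submission line: -q selects the queue, -n the core
    # count, -o the output log (%J expands to the LSF job id).
    return f"bsub -q {queue} -n {cores} -o {log} {task}"

# One job per map tile, all dispatched to the same processing queue
# ("make_map.py" and "gis_queue" are invented names for illustration).
jobs = [lsf_command(f"python make_map.py --tile {t}", "gis_queue")
        for t in range(4)]
```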

  2. The Role of Fresh Water in Fish Processing in Antiquity

    NASA Astrophysics Data System (ADS)

    Sánchez López, Elena H.

    2018-04-01

    Water has been traditionally highlighted (together with fish and salt) as one of the essential elements in fish processing. Indeed, the need for large quantities of fresh water for the production of salted fish and fish sauces in Roman times is commonly asserted. This paper analyses water-related structures within Roman halieutic installations, arguing that their common presence in the best known fish processing installations in the Western Roman world should be taken as evidence of the use of fresh water during the production processes, even if its role in the activities carried out in those installations is not clear. In addition, the text proposes some first estimates on the amount of water that could be needed by those fish processing complexes for their functioning, concluding that water needs to be taken into account when reconstructing fish-salting recipes.

  3. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R

    Methods, apparatuses, and computer program products for endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (`PAMI`) of a parallel computer are provided. Embodiments include establishing by a parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry. Embodiments also include registering in each endpoint in the geometry a dispatch callback function for a collective operation and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
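    A toy Python sketch of the non-blocking collective pattern described above, in which endpoints contribute without blocking and a registered dispatch callback fires on completion. This illustrates the idea only and is not the PAMI API:

```python
import threading

class NonBlockingAllReduce:
    """Toy non-blocking collective: endpoints contribute values, and a
    registered dispatch callback fires once every endpoint has arrived."""
    def __init__(self, n_endpoints, op=sum, callback=None):
        self.n = n_endpoints
        self.op = op
        self.callback = callback
        self.values = []
        self.result = None
        self.lock = threading.Lock()

    def contribute(self, value):
        # Returns immediately: a caller never blocks on other endpoints.
        with self.lock:
            self.values.append(value)
            if len(self.values) == self.n:
                self.result = self.op(self.values)
                if self.callback:
                    self.callback(self.result)

    def test(self):
        # Non-blocking completion check (the "advance" role in PAMI terms).
        with self.lock:
            return self.result is not None
```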

  4. [CMACPAR: a modified parallel neuro-controller for control processes].

    PubMed

    Ramos, E; Surós, R

    1999-01-01

    CMACPAR is a parallel neurocontroller oriented to real-time systems such as process control. Its main characteristics are a fast learning algorithm, a reduced number of calculations, great generalization capacity, local learning, and intrinsic parallelism. This type of neurocontroller is used in real-time applications required by refineries, hydroelectric plants, factories, etc. In this work we present the analysis and parallel implementation of a modified scheme of the cerebellar model CMAC for n-dimensional space projection using a medium-granularity parallel neurocontroller. The proposed memory management allows for a significant reduction in training time and required memory size.
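    The CMAC scheme summarized above, with overlapping tilings hashed into a shared weight table and local learning over only the active cells, can be sketched in Python (tiling sizes, learning rate, and table size are illustrative choices, not the paper's modified scheme):

```python
class CMAC:
    """Minimal 1-D CMAC: overlapping tilings hash into a shared weight
    table; output is the sum of active weights, learning is local."""
    def __init__(self, n_tilings=8, tile_width=1.0, lr=0.2, table_size=512):
        self.n_tilings = n_tilings
        self.tile_width = tile_width
        self.lr = lr
        self.w = [0.0] * table_size

    def _active(self, x):
        # One active cell per tiling; tilings are offset copies of each other,
        # which gives nearby inputs overlapping cells (generalization).
        cells = []
        for i in range(self.n_tilings):
            offset = i * self.tile_width / self.n_tilings
            cells.append(hash((i, int((x + offset) // self.tile_width)))
                         % len(self.w))
        return cells

    def predict(self, x):
        return sum(self.w[c] for c in self._active(x))

    def train(self, x, target):
        # Local learning: spread the error over the active cells only.
        err = target - self.predict(x)
        for c in self._active(x):
            self.w[c] += self.lr * err / self.n_tilings
```

    Repeated training at one point converges geometrically, and nearby inputs inherit most of the learned value through shared tiles, which is the generalization property the abstract mentions.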

  5. Parallel-Processing Test Bed For Simulation Software

    NASA Technical Reports Server (NTRS)

    Blech, Richard; Cole, Gary; Townsend, Scott

    1996-01-01

    Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).

  6. Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures

    NASA Technical Reports Server (NTRS)

    Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta, J.; Gimenez, J.; Caubet, J.

    2003-01-01

    Parallel programming paradigms include process-level parallelism, thread-level parallelism, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architecture (SMA). The analysis uses the Paraver Performance Analysis System. The presentation includes diagrams of a flow of useful computations.
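    The multilevel idea, coarse process-level decomposition with finer thread-level parallelism inside each part, can be sketched in Python; here both levels are simulated with thread pools purely for illustration (real multilevel codes typically combine MPI processes with OpenMP threads):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

def multilevel_sum(data, n_outer=2, n_inner=2):
    # Outer level: coarse domain decomposition (the "process" analogue).
    chunks = [data[i::n_outer] for i in range(n_outer)]

    def node_sum(chunk):
        # Inner level: fine-grained threading within each outer part.
        sub = [chunk[i::n_inner] for i in range(n_inner)]
        with ThreadPoolExecutor(max_workers=n_inner) as inner:
            return sum(inner.map(partial_sum, sub))

    with ThreadPoolExecutor(max_workers=n_outer) as outer:
        return sum(outer.map(node_sum, chunks))
```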

  7. Automated Installation Verification of COMSOL via LiveLink for MATLAB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowell, Michael W

    Verifying that a local software installation performs as the developer intends is a potentially time-consuming but necessary step for nuclear safety-related codes. Automating this process not only saves time, but can increase reliability and scope of verification compared to ‘hand’ comparisons. While COMSOL does not include automatic installation verification as many commercial codes do, it does provide tools such as LiveLink™ for MATLAB® and the COMSOL API for use with Java® through which the user can automate the process. Here we present a successful automated verification example of a local COMSOL 5.0 installation for nuclear safety-related calculations at the Oak Ridge National Laboratory’s High Flux Isotope Reactor (HFIR).
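    The core of such automated verification, running benchmark cases and comparing results to stored reference values within a tolerance, can be sketched generically in Python (case names and values are made up; the actual HFIR verification runs through LiveLink for MATLAB):

```python
import math

def verify_installation(run_case, reference, rel_tol=1e-6):
    """Run each benchmark case through the installed solver and compare
    against stored reference values within a relative tolerance."""
    report = {}
    for name, expected in reference.items():
        report[name] = math.isclose(run_case(name), expected, rel_tol=rel_tol)
    return report

# Hypothetical reference set and a stand-in for the installed solver.
reference = {"conduction_1d": 350.0, "convection_2d": 412.5}
fake_solver = {"conduction_1d": 350.0000001, "convection_2d": 999.0}
report = verify_installation(lambda name: fake_solver[name], reference)
```

    A per-case pass/fail report like this is what an automated run would log; a single failing case flags the installation for manual review.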

  8. On the Optimality of Serial and Parallel Processing in the Psychological Refractory Period Paradigm: Effects of the Distribution of Stimulus Onset Asynchronies

    ERIC Educational Resources Information Center

    Miller, Jeff; Ulrich, Rolf; Rolke, Bettina

    2009-01-01

    Within the context of the psychological refractory period (PRP) paradigm, we developed a general theoretical framework for deciding when it is more efficient to process two tasks in serial and when it is more efficient to process them in parallel. This analysis suggests that a serial mode is more efficient than a parallel mode under a wide variety…

  9. The role of parallelism in the real-time processing of anaphora.

    PubMed

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P

    2012-06-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution.

  10. The role of parallelism in the real-time processing of anaphora

    PubMed Central

    Poirier, Josée; Walenski, Matthew; Shapiro, Lewis P.

    2012-01-01

    Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would incite the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. The speculation of a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution. PMID:23741080

  11. Algorithms and programming tools for image processing on the MPP

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1985-01-01

    Topics addressed include: data mapping and rotational algorithms for the Massively Parallel Processor (MPP); Parallel Pascal language; documentation for the Parallel Pascal Development system; and a description of the Parallel Pascal language used on the MPP.

  12. 46 CFR 170.270 - Door design, operation, installation, and testing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    46 CFR 170.270, Door design, operation, installation, and testing. (a) Each Class 1 door must have a quick action closing device... the welding process so that the door frame is not distorted. (e) For each watertight door which is in...

  13. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti; Kerami, D.

    2017-07-01

    Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum set of moment conditions in a GMM framework. However, this computation takes a very long time because the regularization parameter must be optimized. These calculations are usually processed sequentially, even though all modern computers now have hierarchical memory systems and hyperthreading technology that allow for parallel computing. This paper aims to speed up the C-GMM calculation by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are identified in the original C-GMM algorithm. Two parallel regions contribute significantly to reducing computational time: the outer loop and the inner loop. The parallel algorithm is then implemented with a standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
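    The winning outer-loop strategy, treating each candidate regularization parameter as an independent task, can be sketched in Python rather than OpenMP (the quadratic `objective` below is a stand-in for the C-GMM criterion, not the real estimator):

```python
from concurrent.futures import ThreadPoolExecutor

def objective(alpha):
    # Stand-in for the C-GMM criterion evaluated at one regularization
    # value; the real inner loop stays serial inside each task.
    return (alpha - 0.3) ** 2

def best_alpha(grid, workers=4):
    # Outer-loop parallelization: candidate alphas are independent tasks.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        values = list(pool.map(objective, grid))
    return min(zip(values, grid))[1]
```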

  14. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
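    The DHT underlying these algorithms has a well-known fast implementation via the FFT, which a short NumPy sketch can verify against the direct O(N^2) definition (this illustrates the 1-D transform only, not the paper's 2-D convolver banks):

```python
import numpy as np

def dht(x):
    # Fast real-valued DHT via the FFT: the cas kernel is cos + sin,
    # so for real input H = Re(FFT(x)) - Im(FFT(x)).
    F = np.fft.fft(x)
    return F.real - F.imag

def dht_naive(x):
    # Direct O(N^2) definition, used only to check the fast version.
    N = len(x)
    n = np.arange(N)
    arg = 2 * np.pi * np.outer(n, n) / N
    return (np.cos(arg) + np.sin(arg)) @ x
```

    The DHT is also an involution up to a factor of N, so applying it twice and dividing by N recovers the input, a convenient self-check.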

  15. Parallel adaptive wavelet collocation method for PDEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com; Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu; Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
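    The dynamic load-balancing step described above, reassigning trees so each process holds roughly the same number of grid points, resembles the classic greedy longest-processing-time scheduling heuristic, sketched here in Python (tree names and point counts are invented):

```python
import heapq

def repartition(tree_points, n_procs):
    # Greedy LPT heuristic: place the largest trees first, each onto the
    # currently least-loaded process (a min-heap keyed by load).
    loads = [(0, p) for p in range(n_procs)]
    heapq.heapify(loads)
    assignment = {}
    for tree, npts in sorted(tree_points.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(loads)
        assignment[tree] = p
        heapq.heappush(loads, (load + npts, p))
    return assignment
```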

  16. Adapting high-level language programs for parallel processing using data flow

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.

  17. Adaption of a parallel-path poly(tetrafluoroethylene) nebulizer to an evaporative light scattering detector: Optimization and application to studies of poly(dimethylsiloxane) oligomers as a model polymer.

    PubMed

    Durner, Bernhard; Ehmann, Thomas; Matysik, Frank-Michael

    2018-06-05

    The adaptation of a parallel-path poly(tetrafluoroethylene) (PTFE) ICP nebulizer to an evaporative light scattering detector (ELSD) was realized by substituting the originally installed concentric glass nebulizer of the ELSD. The performance of both nebulizers was compared with regard to nebulizer temperature, evaporator temperature, nebulizing-gas flow rate, and mobile-phase flow rate for different solvents, using caffeine and poly(dimethylsiloxane) (PDMS) as analytes. Both nebulizers showed similar performance, but the parallel-path PTFE nebulizer performed considerably better at low LC flow rates and its lifetime was substantially longer. In general, for both nebulizers the highest sensitivity was obtained by applying the lowest possible evaporator temperature in combination with the highest possible nebulizer temperature at preferably low gas flow rates. Besides the optimization of detector parameters, response factors for various PDMS oligomers were determined and the dependency of the detector signal on the molar mass of the analytes was studied. The significant improvement in long-term stability made the modified ELSD much more robust and saved time and money by reducing maintenance efforts. Thus, especially in polymer HPLC, with its complex matrix situation, the PTFE-based parallel-path nebulizer exhibits attractive characteristics for analytical studies of polymers. Copyright © 2018. Published by Elsevier B.V.

  18. STS-112 final main engine is installed after welding/polishing process

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. -- The last engine is installed in orbiter Atlantis after a welding and polishing process was undertaken on flow liners where cracks were detected. All engines were removed for inspection of flow liners. Atlantis will next fly on mission STS-112, scheduled for launch no earlier than Oct. 2.

  19. Hypergraph partitioning implementation for parallelizing matrix-vector multiplication using CUDA GPU-based parallel computing

    NASA Astrophysics Data System (ADS)

    Murni; Bustamam, A.; Ernastuti; Handhika, T.; Kerami, D.

    2017-07-01

    Calculation of matrix-vector multiplication in real-world problems often involves a large matrix of arbitrary size, so parallelization is needed to speed up a calculation that usually takes a long time. Graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and executed on the GPU (graphics processing unit).
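    The basic decomposition being accelerated can be shown with a row-partitioned matrix-vector product in Python; the row-wise form handles rectangular matrices of arbitrary size (a CPU thread sketch, not the paper's CUDA implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_matvec(A, x, workers=2):
    # 1-D row partitioning: each row's dot product is an independent task.
    # A may be rectangular, so no square/symmetric assumption is needed.
    def row_dot(row):
        return sum(a * b for a, b in zip(row, x))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(row_dot, A))
```

    Hypergraph partitioning refines this naive split by grouping rows so that the entries of x each task touches overlap as little as possible, reducing communication volume.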

  20. Applying Parallel Processing Techniques to Tether Dynamics Simulation

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl

    1996-01-01

    The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.

  1. NLSEmagic: Nonlinear Schrödinger equation multi-dimensional Matlab-based GPU-accelerated integrators using compact high-order schemes

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.

    2013-04-01

    We present a simple-to-use yet powerful code package called NLSEmagic to numerically integrate the nonlinear Schrödinger equation in one, two, and three dimensions. NLSEmagic is a high-order finite-difference code package which utilizes graphics processing unit (GPU) parallel architectures. The codes running on the GPU are many times faster than their serial counterparts, and are much cheaper to run than on standard parallel clusters. The codes are developed with usability and portability in mind, and therefore are written to interface with MATLAB utilizing custom GPU-enabled C codes with the MEX-compiler interface. The packages are freely distributed, including user manuals and set-up files. Catalogue identifier: AEOJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 124453 No. of bytes in distributed program, including test data, etc.: 4728604 Distribution format: tar.gz Programming language: C, CUDA, MATLAB. Computer: PC, MAC. Operating system: Windows, MacOS, Linux. Has the code been vectorized or parallelized?: Yes. Number of processors used: Single CPU, number of GPU processors dependent on chosen GPU card (max is currently 3072 cores on GeForce GTX 690). Supplementary material: Setup guide, Installation guide. RAM: Highly dependent on dimensionality and grid size. For typical medium-large problem size in three dimensions, 4GB is sufficient. Keywords: Nonlinear Schrödinger equation, GPU, high-order finite difference, Bose-Einstein condensates. Classification: 4.3, 7.7. Nature of problem: Integrate solutions of the time-dependent one-, two-, and three-dimensional cubic nonlinear Schrödinger equation.
Solution method: The integrators utilize a fully-explicit fourth-order Runge-Kutta scheme in time and both second- and fourth-order differencing in space. The integrators are written to run on NVIDIA GPUs and are interfaced with MATLAB including built-in visualization and analysis tools. Restrictions: The main restriction for the GPU integrators is the amount of RAM on the GPU as the code is currently only designed for running on a single GPU. Unusual features: Ability to visualize real-time simulations through the interaction of MATLAB and the compiled GPU integrators. Additional comments: Setup guide and Installation guide provided. Program has a dedicated web site at www.nlsemagic.com. Running time: A three-dimensional run with a grid dimension of 87×87×203 for 3360 time steps (100 non-dimensional time units) takes about one and a half minutes on a GeForce GTX 580 GPU card.
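    The time-stepping scheme named above, explicit fourth-order Runge-Kutta in time with second-order central differencing in space, can be sketched for the 1-D cubic NLSE in NumPy (grid, step sizes, and the periodic boundary choice are illustrative; NLSEmagic itself runs these loops in CUDA):

```python
import numpy as np

def nlse_rhs(u, dx):
    # Focusing cubic NLSE, i u_t + u_xx + |u|^2 u = 0, rewritten as
    # u_t = i (u_xx + |u|^2 u); periodic second-order central differences.
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return 1j * (lap + np.abs(u) ** 2 * u)

def rk4_step(u, dt, dx):
    # Classical explicit fourth-order Runge-Kutta time step.
    k1 = nlse_rhs(u, dx)
    k2 = nlse_rhs(u + 0.5 * dt * k1, dx)
    k3 = nlse_rhs(u + 0.5 * dt * k2, dx)
    k4 = nlse_rhs(u + dt * k3, dx)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

    A quick sanity check is conservation of mass (the integral of |u|²), which the NLSE preserves exactly and RK4 preserves to fourth-order accuracy per step.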

  2. Installation Mapping Enables Many Missions: The Benefits of and Barriers to Sharing Geospatial Data Assets

    DTIC Science & Technology

    2007-01-01

    software applications and rely on the installations to supply them with the basic I&E geospatial data sets for those applications. Such...spatial data in geospatially based tools to help track military supplies and materials all over the world. For instance, SDDCTEA developed IRRIS, a...regional offices or individual installations to supply the data and perform QA/QC in the process. The IVT program office worked with the installations and

  3. Parallel and serial grouping of image elements in visual perception.

    PubMed

    Houtkamp, Roos; Roelfsema, Pieter R

    2010-12-01

    The visual system groups image elements that belong to an object and segregates them from other objects and the background. Important cues for this grouping process are the Gestalt criteria, and most theories propose that these are applied in parallel across the visual scene. Here, we find that Gestalt grouping can indeed occur in parallel in some situations, but we demonstrate that there are also situations where Gestalt grouping becomes serial. We observe substantial time delays when image elements have to be grouped indirectly through a chain of local groupings. We call this chaining process incremental grouping and demonstrate that it can occur for only a single object at a time. We suggest that incremental grouping requires the gradual spread of object-based attention so that eventually all the object's parts become grouped explicitly by an attentional labeling process. Our findings inspire a new incremental grouping theory that relates the parallel, local grouping process to feedforward processing and the serial, incremental grouping process to recurrent processing in the visual cortex.

  4. Aging and feature search: the effect of search area.

    PubMed

    Burton-Danner, K; Owsley, C; Jackson, G R

    2001-01-01

    The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.

  5. Interdisciplinary Research and Phenomenology as Parallel Processes of Consciousness

    ERIC Educational Resources Information Center

    Arvidson, P. Sven

    2016-01-01

    There are significant parallels between interdisciplinarity and phenomenology. Interdisciplinary conscious processes involve identifying relevant disciplines, evaluating each disciplinary insight, and creating common ground. In an analogous way, phenomenology involves conscious processes of epoché, reduction, and eidetic variation. Each stresses…

  6. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.

    Processing data communications events in a parallel active messaging interface (PAMI) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
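
The wait/awaken cycle described in this record can be sketched with a condition variable. The class and method names below are illustrative stand-ins, not the real PAMI API:

```python
import threading
from collections import deque

class Context:
    """Toy sketch of a PAMI-like context: advance() puts the calling
    thread into a wait state when no events are pending, and post()
    awakens it (names are hypothetical, not the real PAMI interface)."""
    def __init__(self):
        self._events = deque()
        self._cond = threading.Condition()

    def post(self, event):
        with self._cond:
            self._events.append(event)
            self._cond.notify()              # awaken a waiting advance()

    def advance(self):
        with self._cond:
            while not self._events:          # no actionable events pending
                self._cond.wait()            # thread enters wait state
            return self._events.popleft()    # process the pending event

ctx = Context()
results = []
t = threading.Thread(target=lambda: results.append(ctx.advance()))
t.start()                 # thread blocks in advance() until an event arrives
ctx.post("recv-complete")
t.join()
print(results)  # ['recv-complete']
```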

  7. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  8. Parallel computation with the force

    NASA Technical Reports Server (NTRS)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures and performance implications.
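
A central Force construct is the barrier that synchronizes every process in the force before any proceeds. A rough analogue, using Python threads in place of the FORTRAN macros (the work division here is hypothetical):

```python
import threading

NPROCS = 4                       # size of the "force" is fixed at startup
barrier = threading.Barrier(NPROCS)
partial = [0] * NPROCS
total = [0]

def member(rank):
    # Each process of the force computes its share of the work...
    partial[rank] = sum(range(rank * 25, (rank + 1) * 25))
    barrier.wait()               # Force-style barrier: all arrive before any proceed
    if rank == 0:                # one process safely combines the results
        total[0] = sum(partial)

threads = [threading.Thread(target=member, args=(r,)) for r in range(NPROCS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total[0])  # sum(range(100)) = 4950
```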

  9. GASPRNG: GPU accelerated scalable parallel random number generator library

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: UTK license
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA
    Computer: Any PC or workstation with NVIDIA GPU (tested on Fermi GTX480, Tesla C1060, Tesla M2070)
    Operating system: Linux with CUDA version 4.0 or later; should also run on MacOS, Windows, or UNIX
    Has the code been vectorized or parallelized?: Yes, parallelized using MPI directives
    RAM: 512 MB to 732 MB main memory on host CPU (depending on the data type of random numbers) / 512 MB GPU global memory
    Classification: 4.13, 6.5
    Nature of problem: Many computational science applications are able to consume large numbers of random numbers. For example, Monte Carlo simulations are able to consume limitless random numbers for the computation as long as resources for the computing are supported. Moreover, parallel computational science applications require independent streams of random numbers to attain statistically significant results. The SPRNG library provides this capability, but at a significant computational cost. The GASPRNG library presented here accelerates the generators of independent streams of random numbers using graphical processing units (GPUs).
    Solution method: Multiple copies of random number generators in GPUs allow a computational science application to consume large numbers of random numbers from independent, parallel streams. GASPRNG is a random number generators library to allow a computational science application to employ multiple copies of random number generators to boost performance. Users can interface GASPRNG with software code executing on microprocessors and/or GPUs.
    Running time: The tests provided take a few minutes to run.
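
The key property this record emphasizes, independent and reproducible parallel streams, can be illustrated outside the library itself. This sketch uses numpy's SeedSequence purely as an analogy; it says nothing about GASPRNG's actual C/CUDA interface:

```python
import numpy as np

# Sketch of the property SPRNG/GASPRNG provide: many independent,
# reproducible pseudorandom streams derived from one root seed.
root = np.random.SeedSequence(2013)
streams = [np.random.default_rng(c) for c in root.spawn(4)]  # one per worker
draws = [rng.random(3) for rng in streams]

# Re-deriving the streams from the same root seed reproduces every draw:
again = [np.random.default_rng(c).random(3)
         for c in np.random.SeedSequence(2013).spawn(4)]
assert all(np.allclose(a, b) for a, b in zip(draws, again))
# ...while distinct streams produce different sequences:
assert not np.allclose(draws[0], draws[1])
```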

  10. Effects of ATC automation on precision approaches to closely spaced parallel runways

    NASA Technical Reports Server (NTRS)

    Slattery, R.; Lee, K.; Sanford, B.

    1995-01-01

    Improved navigational technology (such as the Microwave Landing System and the Global Positioning System) installed in modern aircraft will enable air traffic controllers to better utilize available airspace. Consequently, arrival traffic can fly approaches to parallel runways separated by smaller distances than are currently allowed. Previous simulation studies of advanced navigation approaches have found that controller workload is increased when there is a combination of aircraft that are capable of following advanced navigation routes and aircraft that are not. Research into Air Traffic Control automation at Ames Research Center has led to the development of the Center-TRACON Automation System (CTAS). The Final Approach Spacing Tool (FAST) is the component of the CTAS used in the TRACON area. The work in this paper examines, via simulation, the effects of FAST used for aircraft landing on closely spaced parallel runways. The simulation contained various combinations of aircraft, equipped and unequipped with advanced navigation systems. A set of simulations was run both manually and with an augmented set of FAST advisories to sequence aircraft, assign runways, and avoid conflicts. The results of the simulations are analyzed, measuring the airport throughput, aircraft delay, loss of separation, and controller workload.

  11. SiGN-SSM: open source parallel software for estimating gene networks with state space models.

    PubMed

    Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru

    2011-04-15

    SiGN-SSM is an open-source gene network estimation software able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), a statistical dynamic model suitable for analyzing short time and/or replicated time series gene expression profiles. SiGN-SSM implements a novel parameter constraint effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting the differentially regulated genes from time series expression profiles. SiGN-SSM is distributed under GNU Affero General Public Licence (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. The pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information of SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.
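
A state space model of the kind SiGN-SSM estimates pairs a latent state equation with an observation equation. A generic Kalman filter (not SiGN-SSM's EM-based estimator; dimensions and values below are illustrative) shows the basic machinery:

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Minimal Kalman filter for the linear-Gaussian state space model
    x[t] = F x[t-1] + w,  y[t] = H x[t] + v.  A generic SSM sketch,
    not SiGN-SSM's actual estimation procedure."""
    x, P = x0, P0
    filtered = []
    for yt in y:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new observation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (yt - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        filtered.append(x.copy())
    return np.array(filtered)

# 1-D example: noisy observations of a constant underlying state of 5.0
rng = np.random.default_rng(0)
y = [np.array([5.0 + rng.normal(scale=0.5)]) for _ in range(50)]
xs = kalman_filter(y, F=np.eye(1), H=np.eye(1),
                   Q=1e-4 * np.eye(1), R=0.25 * np.eye(1),
                   x0=np.zeros(1), P0=np.eye(1))
print(round(float(xs[-1][0]), 1))  # converges close to 5.0
```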

  12. Computer-Aided Parallelizer and Optimizer

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.
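
The variable classification CAPO performs (private, reduction, induction, shared) can be mimicked by hand. The sketch below shows a loop such a tool would annotate as a sum reduction, with the parallel combination done via a thread pool; the Fortran fragment in the comments is invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Serial loop a CAPO-style tool would annotate as a parallel reduction:
#   do i = 1, n          ! i -> private (loop index)
#     t = a(i) * a(i)    ! t -> private (defined before use each iteration)
#     s = s + t          ! s -> reduction(+)
#   end do
# i.e. !$OMP PARALLEL DO PRIVATE(i, t) REDUCTION(+:s)

def chunk_sum(chunk):
    s = 0.0
    for ai in chunk:       # 'ai' and 't' are private to this worker
        t = ai * ai
        s += t
    return s               # each worker's partial feeds the + reduction

a = list(range(1, 101))
chunks = [a[i::4] for i in range(4)]       # split work across 4 "threads"
with ThreadPoolExecutor(max_workers=4) as pool:
    s = sum(pool.map(chunk_sum, chunks))   # combine the partial sums
print(int(s))  # sum of squares 1..100 = 338350
```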

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meisner, L. L., E-mail: llm@ispms.tsc.ru; Meisner, S. N., E-mail: msn@ispms.tsc.ru; National Research Tomsk State University, Tomsk, 634050

    This work comprises a study of the influence of the pulse number of low-energy high-current electron beam (LEHCEB) exposure on the value and character of distribution of residual elastic stresses, texturing effects and the relationship between structural-phase states and physical and mechanical properties of the modified surface layers of TiNi alloy. LEHCEB processing of the surface of TiNi samples was carried out using a RITM-SP [3] installation. Energy density of the electron beam was constant at E_s = 3.9 ± 0.5 J/cm²; pulse duration was 2.8 ± 0.3 μs. The number of pulses in the series was varied (n = 2–128). It was shown that as a result of multiple LEHCEB processing of TiNi samples, a hierarchically organized multilayer structure is formed in the surface layer. The residual stress field of planar type is formed in the modified surface layer as follows: in the direction of the normal to the surface the strain component ε_⊥ < 0 (compressive strain), and in a direction parallel to the surface the strain component ε_|| > 0 (tensile strain). Texturing effects and the level of residual stresses after LEHCEB processing of TiNi samples with equal energy density of electron beam (~3.8 J/cm²) depend on the number of pulses and increase as n rises above 10.

  14. Next Generation Loading System for Detonators and Primers

    DTIC Science & Technology

    Designed, fabricated and installed next generation tooling to provide additional manufacturing capabilities for new detonators and other small... prototype munitions on automated, semi-automated and manual machines. Led design effort, procured and installed a primary explosive Drying Oven for a pilot... facility. Designed, fabricated and installed a Primary Explosives Waste Treatment System in a pilot environmental processing facility. Designed

  15. Field Trial of an Aerosol-Based Enclosure Sealing Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrington, Curtis; Springer, David

    2015-09-01

    This report presents the results from several demonstrations of a new method for sealing building envelope air leaks using an aerosol sealing process developed by the Western Cooling Efficiency Center at UC Davis. The process involves pressurizing a building while applying an aerosol sealant to the interior. As air escapes through leaks in the building envelope, the aerosol particles are transported to the leaks, where they collect and form a seal that blocks the leak. Standard blower door technology is used to facilitate the building pressurization, which allows the installer to track the sealing progress during the installation and automatically verify the final building tightness. Each aerosol envelope sealing installation was performed after drywall was installed and taped, and the process did not appear to interrupt the construction schedule or interfere with other trades working in the homes. The labor needed to physically seal bulk air leaks in typical construction will not be replaced by this technology.

  16. Powered orthosis and attachable power-assist device with Hydraulic Bilateral Servo System.

    PubMed

    Ohnishi, Kengo; Saito, Yukio; Oshima, Toru; Higashihara, Takanori

    2013-01-01

    This paper discusses the developments and control strategies of exoskeleton-type robot systems for the application of an upper limb powered orthosis and an attachable power-assist device for care-givers. A Hydraulic Bilateral Servo System, which consists of a computer-controlled motor, parallel-connected hydraulic actuators, position sensors, and pressure sensors, is installed in the system to derive the joint motion of the exoskeleton arm. The types of hydraulic component structure and the control strategy are discussed in relation to the design philosophy and target joint motions.

  17. Lattice QCD calculation using VPP500

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seyong; Ohta, Shigemi

    1995-02-01

    A new vector parallel supercomputer, Fujitsu VPP500, was installed at RIKEN earlier this year. It consists of 30 vector computers, each with 1.6 GFLOPS peak speed and 256 MB memory, connected by a crossbar switch with 400 MB/s peak data transfer rate each way between any pair of nodes. The authors developed a Fortran lattice QCD simulation code for it. It runs at about 1.1 GFLOPS sustained per node for Metropolis pure-gauge update, and about 0.8 GFLOPS sustained per node for conjugate gradient inversion of staggered fermion matrix.
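
The conjugate gradient inversion mentioned in this record is, stripped of the lattice-QCD specifics, the standard CG iteration for a symmetric positive-definite system. A generic sketch (the small test matrix is made up):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite system
    A x = b -- the generic form of the fermion-matrix inversion above."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

# Small SPD test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(np.allclose(A @ x, b))  # True
```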

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damiani, D.; Dubrovin, M.; Gaponenko, I.

    Psana (Photon Science Analysis) is a software package that is used to analyze data produced by the Linac Coherent Light Source X-ray free-electron laser at the SLAC National Accelerator Laboratory. The project began in 2011, is written primarily in C++ with some Python, and provides user interfaces in both C++ and Python. Most users use the Python interface. The same code can be run in real time while data are being taken as well as offline, executing on many nodes/cores using MPI for parallelization. It is publicly available and installable on the RHEL5/6/7 operating systems.

  19. Method for making a hot wire anemometer and product thereof

    NASA Technical Reports Server (NTRS)

    Milkulla, V. (Inventor)

    1977-01-01

    A hot wire anemometer probe is described that includes a ceramic body supporting two conductive rods parallel to each other. The body has a narrow edge surface from which the rods protrude. A probe wire is welded to the rods and extends along the edge surface. A ceramic adhesive is used to secure the probe wire to the surface so that the probe wire is rigid. A method for fabricating the probe is also described in which the body is molded and precisely shaped by machining techniques before the probe wires are installed.

  20. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Solomon, Jeffrey Michael (Inventor); Ghuman, Parminder Singh (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  1. Agile Datacube Analytics (not just) for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Merticariu, Vlad; Baumann, Peter

    2017-04-01

    Metadata are considered small, smart, and queryable; data, on the other hand, are known as big, clumsy, hard to analyze. Consequently, gridded data - such as images, image timeseries, and climate datacubes - are managed separately from the metadata, and with different, restricted retrieval capabilities. One reason for this silo approach is that databases, while good at tables, XML hierarchies, RDF graphs, etc., traditionally do not support multi-dimensional arrays well. This gap is being closed by Array Databases which extend the SQL paradigm of "any query, anytime" to NoSQL arrays. They introduce semantically rich modelling combined with declarative, high-level query languages on n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. This way, they offer new vistas in flexibility, scalability, performance, and data integration. In this respect, the forthcoming ISO SQL extension MDA ("Multi-dimensional Arrays") will be a game changer in Big Data Analytics. We introduce concepts and opportunities through the example of rasdaman ("raster data manager") which in fact has pioneered the field of Array Databases and forms the blueprint for ISO SQL/MDA and further Big Data standards, such as OGC WCPS for querying spatio-temporal Earth datacubes. With operational installations exceeding 140 TB, queries have been split across more than one thousand cloud nodes, using CPUs as well as GPUs. Installations can easily be mashed up securely, enabling large-scale location-transparent query processing in federations. Federation queries have been demonstrated live at EGU 2016 spanning Europe and Australia in the context of the intercontinental EarthServer initiative, visualized through NASA WorldWind.

  2. Agile Datacube Analytics (not just) for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2016-12-01

    Metadata are considered small, smart, and queryable; data, on the other hand, are known as big, clumsy, hard to analyze. Consequently, gridded data - such as images, image timeseries, and climate datacubes - are managed separately from the metadata, and with different, restricted retrieval capabilities. One reason for this silo approach is that databases, while good at tables, XML hierarchies, RDF graphs, etc., traditionally do not support multi-dimensional arrays well. This gap is being closed by Array Databases which extend the SQL paradigm of "any query, anytime" to NoSQL arrays. They introduce semantically rich modelling combined with declarative, high-level query languages on n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. This way, they offer new vistas in flexibility, scalability, performance, and data integration. In this respect, the forthcoming ISO SQL extension MDA ("Multi-dimensional Arrays") will be a game changer in Big Data Analytics. We introduce concepts and opportunities through the example of rasdaman ("raster data manager") which in fact has pioneered the field of Array Databases and forms the blueprint for ISO SQL/MDA and further Big Data standards, such as OGC WCPS for querying spatio-temporal Earth datacubes. With operational installations exceeding 140 TB, queries have been split across more than one thousand cloud nodes, using CPUs as well as GPUs. Installations can easily be mashed up securely, enabling large-scale location-transparent query processing in federations. Federation queries have been demonstrated live at EGU 2016 spanning Europe and Australia in the context of the intercontinental EarthServer initiative, visualized through NASA WorldWind.

  3. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio Convolvotrons (TM) by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak (TM), tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence into the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object oriented C++ program code.

  4. STS-112 final main engine is installed after welding/polishing process

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - Workers get ready to install the last engine in orbiter Atlantis after a welding and polishing process was undertaken on flow liners where cracks were detected. All engines were removed for inspection of flow liners. Atlantis will next fly on mission STS-112, scheduled for launch no earlier than Oct. 2.

  5. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  6. Parallelization of ARC3D with Computer-Aided Tools

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; Hribar, Michelle; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    A series of efforts have been devoted to investigating methods of porting and parallelizing applications quickly and efficiently for new architectures, such as the SGI Origin 2000 and Cray T3E. This report presents the parallelization of a CFD application, ARC3D, using the computer-aided tools CAPTools. Steps of parallelizing this code and requirements for achieving better performance are discussed. The generated parallel version has achieved reasonably good performance, for example a speedup of 30 on 36 Cray T3E processors. However, this performance could not be obtained without modification of the original serial code. It is suggested that in many cases improving serial code and performing necessary code transformations are important parts of the automated parallelization process, although user intervention in many of these parts is still necessary. Nevertheless, development and improvement of useful software tools, such as CAPTools, can help trim down many tedious parallelization details and improve the processing efficiency.

  7. Development of a parallel FE simulator for modeling the whole trans-scale failure process of rock from meso- to engineering-scale

    NASA Astrophysics Data System (ADS)

    Li, Gen; Tang, Chun-An; Liang, Zheng-Zhao

    2017-01-01

    Multi-scale high-resolution modeling of rock failure process is a powerful means in modern rock mechanics studies to reveal the complex failure mechanism and to evaluate engineering risks. However, multi-scale continuous modeling of rock, from deformation, damage to failure, has raised high requirements on the design, implementation scheme and computation capacity of the numerical software system. This study is aimed at developing the parallel finite element procedure, a parallel rock failure process analysis (RFPA) simulator that is capable of modeling the whole trans-scale failure process of rock. Based on the statistical meso-damage mechanical method, the RFPA simulator is able to construct heterogeneous rock models with multiple mechanical properties, deal with and represent the trans-scale propagation of cracks, in which the stress and strain fields are solved for the damage evolution analysis of a representative volume element by the parallel finite element method (FEM) solver. This paper describes the theoretical basis of the approach and provides the details of the parallel implementation on a Windows/Linux interactive platform. A numerical model is built to test the parallel performance of the FEM solver. Numerical simulations are then carried out on a laboratory-scale uniaxial compression test, and field-scale net fracture spacing and engineering-scale rock slope examples, respectively. The simulation results indicate that relatively high speedup and computation efficiency can be achieved by the parallel FEM solver with a reasonable boot process. In laboratory-scale simulation, the well-known physical phenomena, such as the macroscopic fracture pattern and stress-strain responses, can be reproduced. In field-scale simulation, the formation process of net fracture spacing from initiation, propagation to saturation can be revealed completely. In engineering-scale simulation, the whole progressive failure process of the rock slope can be well modeled.
It is shown that the parallel FE simulator developed in this study is an efficient tool for modeling the whole trans-scale failure process of rock from meso- to engineering-scale.
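
Statistical meso-damage models of the RFPA type typically draw element strengths from a Weibull distribution whose shape parameter acts as a homogeneity index. A sketch under that assumption (parameter names and values are illustrative, not RFPA's actual inputs):

```python
import numpy as np

def weibull_strengths(n_elements, scale, m, seed=0):
    """Assign element strengths from a Weibull distribution with shape m
    (homogeneity index) and characteristic strength `scale` -- the usual
    statistical recipe for heterogeneous rock models. Larger m gives a
    more uniform material. Names here are illustrative, not RFPA inputs."""
    rng = np.random.default_rng(seed)
    return scale * rng.weibull(m, size=n_elements)

strengths = weibull_strengths(10_000, scale=60.0, m=3.0)
# Elements are damaged where the local stress exceeds their strength:
stress = 40.0
damaged = strengths < stress
print(f"{damaged.mean():.0%} of elements damaged at {stress} MPa")
```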

  8. Measure Guideline: High Efficiency Natural Gas Furnaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brand, L.; Rose, W.

    2012-10-01

    This Measure Guideline covers installation of high-efficiency gas furnaces. Topics covered include when to install a high-efficiency gas furnace as a retrofit measure, how to identify and address risks, and the steps to be used in the selection and installation process. The guideline is written for Building America practitioners and HVAC contractors and installers. It includes a compilation of information provided by manufacturers, researchers, and the Department of Energy as well as recent research results from the Partnership for Advanced Residential Retrofit (PARR) Building America team.

  9. Measure Guideline. High Efficiency Natural Gas Furnaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brand, L.; Rose, W.

    2012-10-01

    This measure guideline covers installation of high-efficiency gas furnaces, including: when to install a high-efficiency gas furnace as a retrofit measure; how to identify and address risks; and the steps to be used in the selection and installation process. The guideline is written for Building America practitioners and HVAC contractors and installers. It includes a compilation of information provided by manufacturers, researchers, and the Department of Energy as well as recent research results from the Partnership for Advanced Residential Retrofit (PARR) Building America team.

  10. A scalable neuroinformatics data flow for electrophysiological signals using MapReduce.

    PubMed

    Jayapandian, Catherine; Wei, Annan; Ramesh, Priya; Zonjy, Bilal; Lhatoo, Samden D; Loparo, Kenneth; Zhang, Guo-Qiang; Sahoo, Satya S

    2015-01-01

    Data-driven neuroscience research is providing new insights into the progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated by sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research into serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow, which uses new data partitioning techniques to store and analyze electrophysiological signals in a distributed computing infrastructure. The Cloudwave data flow uses the MapReduce parallel programming model to implement an integrated signal data processing pipeline that scales with the large volume of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy-focused extensible data representation format called the Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow was evaluated using a 30-node cluster installed with the open-source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volumes of signal data by leveraging Hadoop DataNodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications.
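    The MapReduce pattern the abstract describes can be illustrated with a toy, single-process sketch: a map phase emits per-channel partial statistics for each signal segment, and a reduce phase merges them. This is plain Python standing in for Hadoop; the channel names and the "mean power" statistic are invented for illustration and are not Cloudwave's actual pipeline:

```python
from collections import defaultdict

# Toy signal segments: (channel, [samples]); in Hadoop these would be the
# partitioned input splits distributed across DataNodes.
segments = [
    ("C3", [1.0, -1.0, 2.0]),
    ("C4", [0.5, 0.5]),
    ("C3", [3.0]),
]

def map_phase(segment):
    """Emit (channel, (sum_of_squares, n_samples)) for one segment."""
    ch, samples = segment
    return ch, (sum(x * x for x in samples), len(samples))

def reduce_phase(pairs):
    """Merge partial sums per channel into mean signal power."""
    acc = defaultdict(lambda: [0.0, 0])
    for ch, (ss, n) in pairs:
        acc[ch][0] += ss
        acc[ch][1] += n
    return {ch: ss / n for ch, (ss, n) in acc.items()}

power = reduce_phase(map_phase(s) for s in segments)
# C3: (1 + 1 + 4 + 9) / 4 = 3.75 ; C4: (0.25 + 0.25) / 2 = 0.25
```

The key scalability property is that the map outputs are associative partial sums, so segments can be processed on any node in any order before the reduce.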

  11. A scalable neuroinformatics data flow for electrophysiological signals using MapReduce

    PubMed Central

    Jayapandian, Catherine; Wei, Annan; Ramesh, Priya; Zonjy, Bilal; Lhatoo, Samden D.; Loparo, Kenneth; Zhang, Guo-Qiang; Sahoo, Satya S.

    2015-01-01

    Data-driven neuroscience research is providing new insights into the progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated by sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research into serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow, which uses new data partitioning techniques to store and analyze electrophysiological signals in a distributed computing infrastructure. The Cloudwave data flow uses the MapReduce parallel programming model to implement an integrated signal data processing pipeline that scales with the large volume of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy-focused extensible data representation format called the Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow was evaluated using a 30-node cluster installed with the open-source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volumes of signal data by leveraging Hadoop DataNodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications. PMID:25852536

  12. Curious parallels and curious connections--phylogenetic thinking in biology and historical linguistics.

    PubMed

    Atkinson, Quentin D; Gray, Russell D

    2005-08-01

    In The Descent of Man (1871), Darwin observed "curious parallels" between the processes of biological and linguistic evolution. These parallels mean that evolutionary biologists and historical linguists seek answers to similar questions and face similar problems. As a result, the theory and methodology of the two disciplines have evolved in remarkably similar ways. In addition to Darwin's curious parallels of process, there are a number of equally curious parallels and connections between the development of methods in biology and historical linguistics. Here we briefly review the parallels between biological and linguistic evolution and contrast the historical development of phylogenetic methods in the two disciplines. We then look at a number of recent studies that have applied phylogenetic methods to language data and outline some current problems shared by the two fields.

  13. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
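    The interleaved-buffer scheme in the claim above can be illustrated with a toy sequential sketch: two buffers are copied into a shared interleaved buffer in chunks, and each "reduction core" then sums every other chunk. Plain Python loops stand in for the cores, and summation stands in for the reduction operation; chunk size and data are invented for illustration:

```python
# Two "reduction cores" each own an input buffer; interleave them chunkwise
# into shared memory, then each core reduces every other chunk.
CHUNK = 2
buf_a = [1, 2, 3, 4]      # reduction core 0's input buffer
buf_b = [10, 20, 30, 40]  # reduction core 1's input buffer

def chunks(buf, size):
    """Split a buffer into consecutive chunks of the given size."""
    return [buf[i:i + size] for i in range(0, len(buf), size)]

# Copy, in interleaved chunks, the two input buffers into one shared buffer.
interleaved = []
for ca, cb in zip(chunks(buf_a, CHUNK), chunks(buf_b, CHUNK)):
    interleaved.append(ca)
    interleaved.append(cb)
# interleaved == [[1, 2], [10, 20], [3, 4], [30, 40]]

# Core 0 reduces the even-indexed chunks, core 1 the odd-indexed ones;
# the two partial results are combined at the end.
core0 = sum(sum(c) for c in interleaved[0::2])  # 1 + 2 + 3 + 4 = 10
core1 = sum(sum(c) for c in interleaved[1::2])  # 10 + 20 + 30 + 40 = 100
total = core0 + core1
```

Interleaving lets both cores stream through shared memory concurrently without contending for the same cache lines, which is the point of the claimed layout.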

  14. Performing a local reduction operation on a parallel computer

    DOEpatents

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  15. Logistics Process Analysis Tool (LPAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2008-03-31

    LPAT is the integrated system combining the ANL-developed Enhanced Logistics Intra Theater Support Tool (ELIST), sponsored by SDDC-TEA, and the Fort Future Virtual Installation Tool, sponsored by CERL. The Fort Future Simulation Engine was an application written in the ANL Repast Simphony framework and was used as the basis for the Process Analysis Tool (PAT), which evolved into a stand-alone tool for detailed process analysis at a location. Combined with ELIST, an inter-installation logistics component was added to enable users to define large logistical agent-based models without having to program.

  16. Parallel pivoting combined with parallel reduction

    NASA Technical Reports Server (NTRS)

    Alaghband, Gita

    1987-01-01

    Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-ins, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of each pivot to minimize fill-in. The technique is not a preordering of the sparse matrix; it is applied dynamically as the decomposition proceeds.
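    The two ingredients named above have standard textbook definitions that can be sketched directly: the Markowitz number of a candidate pivot (i, j) is (r_i - 1)(c_j - 1), where r_i and c_j count nonzeros in its row and column, and two pivots are compatible (eliminable in parallel) when neither lies in the other's row or column pattern. A minimal dense-matrix sketch, not the paper's implementation:

```python
# Toy 3x3 unsymmetric sparse pattern stored densely for clarity.
A = [
    [4.0, 0.0, 1.0],
    [0.0, 5.0, 0.0],
    [2.0, 0.0, 6.0],
]

def markowitz(A, i, j):
    """Markowitz count (r_i - 1) * (c_j - 1): an upper bound on fill-in."""
    r = sum(1 for v in A[i] if v != 0.0)
    c = sum(1 for row in A if row[j] != 0.0)
    return (r - 1) * (c - 1)

def compatible(A, p, q):
    """Pivots (i, j) and (k, l) can be eliminated in parallel when
    A[i][l] == 0 and A[k][j] == 0 (disjoint row/column patterns)."""
    (i, j), (k, l) = p, q
    return i != k and j != l and A[i][l] == 0.0 and A[k][j] == 0.0

m = markowitz(A, 1, 1)               # singleton row and column: zero fill-in
ok = compatible(A, (0, 0), (1, 1))   # True: A[0][1] == 0 and A[1][0] == 0
```

Candidates with low Markowitz counts that are pairwise compatible form a set of pivots that one elimination step can process concurrently.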

  17. Improving operating room productivity via parallel anesthesia processing.

    PubMed

    Brown, Michael J; Subramanian, Arun; Curry, Timothy B; Kor, Daryl J; Moran, Steven L; Rohleder, Thomas R

    2014-01-01

    Parallel processing of regional anesthesia may improve operating room (OR) efficiency for patients undergoing upper extremity surgical procedures. The purpose of this paper is to evaluate whether performing regional anesthesia outside the OR, in parallel, increases total cases per day and improves efficiency and productivity. Data from all adult patients who underwent regional anesthesia as their primary anesthetic for upper extremity surgery over a one-year period were used to develop a simulation model. The model evaluated pure operating modes of regional anesthesia performed within the OR and outside the OR in a parallel manner. The scenarios were used to evaluate how many surgeries could be completed in a standard work day (555 minutes) and, assuming a standard three cases per day, what the predicted end-of-day overtime was. Modeling results show that parallel processing of regional anesthesia increases the average number of cases per day for all surgeons included in the study; the average increase was 0.42 surgeries per day. Where it was assumed that all surgeons would perform three cases per day, the number of days going to overtime was reduced by 43 percent with the parallel block, and overtime with parallel anesthesia was projected to be 40 minutes less per day per surgeon. Key limitations include the assumption that all cases used regional anesthesia in the comparisons; in practice, many days may include both regional and general anesthesia. Also, as a single-center case study, the research may have limited generalizability. Perioperative care providers should consider parallel administration of regional anesthesia where there is a desire to increase daily upper extremity surgical case capacity. Where there are sufficient resources for parallel anesthesia processing, efficiency and productivity can be significantly improved. Simulation modeling can be an effective tool to show the effects of practice change at a system-wide level.
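    The scheduling logic behind the result can be sketched with a deliberately simple deterministic model: in serial mode the regional block occupies the OR before each surgery, while in parallel mode each subsequent block overlaps the prior surgery, so only surgeries and turnovers occupy the room after the first block. All durations below are fabricated for illustration (the paper's model is a stochastic simulation of real case data):

```python
# Illustrative fixed durations in minutes; not from the study's data.
BLOCK, SURGERY, TURNOVER = 20, 120, 15

def day_length(n_cases, parallel):
    """Total OR-day minutes for n_cases under serial or parallel blocks."""
    if n_cases == 0:
        return 0
    if parallel:
        # Only the first block delays the room; later blocks are performed
        # outside the OR during the preceding surgery.
        return BLOCK + n_cases * SURGERY + (n_cases - 1) * TURNOVER
    # Serial: every block is performed inside the OR.
    return n_cases * (BLOCK + SURGERY) + (n_cases - 1) * TURNOVER

serial3 = day_length(3, parallel=False)   # 3 * 140 + 2 * 15 = 450
parallel3 = day_length(3, parallel=True)  # 20 + 360 + 30 = 410
saved = serial3 - parallel3               # 40 minutes
```

With these made-up durations the saving happens to be 40 minutes per three-case day, the same order of magnitude as the overtime reduction reported above; the real gain depends on block, surgery, and turnover distributions.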

  18. Parallel, Asynchronous Executive (PAX): System concepts, facilities, and architecture

    NASA Technical Reports Server (NTRS)

    Jones, W. H.

    1983-01-01

    The Parallel, Asynchronous Executive (PAX) is a software operating system simulation that allows many computers to work on a single problem at the same time. PAX is currently implemented on a UNIVAC 1100/42 computer system. Independent UNIVAC runstreams are used to simulate independent computers. Data are shared among independent UNIVAC runstreams through shared mass-storage files. PAX has achieved the following: (1) applied several computing processes simultaneously to a single, logically unified problem; (2) resolved most parallel processor conflicts by careful work assignment; (3) resolved by means of worker requests to PAX all conflicts not resolved by work assignment; (4) provided fault isolation and recovery mechanisms to meet the problems of an actual parallel, asynchronous processing machine. Additionally, one real-life problem has been constructed for the PAX environment. This is CASPER, a collection of aerodynamic and structural dynamic problem simulation routines. CASPER is not discussed in this report except to provide examples of parallel-processing techniques.

  19. Enhancements to TauDEM to support Rapid Watershed Delineation Services

    NASA Astrophysics Data System (ADS)

    Sazib, N. S.; Tarboton, D. G.

    2015-12-01

    Watersheds are widely recognized as the basic functional unit for water resources management studies and are important for a variety of problems in hydrology, ecology, and geomorphology. Nevertheless, delineating a watershed spread across a large region is still cumbersome due to the processing burden of working with large Digital Elevation Models. The Terrain Analysis Using Digital Elevation Models (TauDEM) software supports the delineation of watersheds and stream networks from within desktop Geographic Information Systems, and a rich set of watershed and stream network attributes can be computed. However, the TauDEM desktop tools have limitations: (1) they support only one raster format (TIFF), (2) they require installation of software for parallel processing, and (3) data have to be in a projected coordinate system. This paper presents enhancements to TauDEM developed to extend its generality and support web-based watershed delineation services. The enhancements include (1) reading and writing raster data with the open-source Geospatial Data Abstraction Library (GDAL), no longer limited to the TIFF format, and (2) support for both geographic and projected coordinates. To support web services for rapid watershed delineation, a procedure has been developed for subsetting the domain based on sub-catchments, with preprocessed data prepared and stored for each catchment. This allows the watershed delineation to function locally while extending to the full extent of watersheds using preprocessed information. Additional capabilities of the program include computation of average watershed properties and of geomorphic and channel network variables such as drainage density, shape factor, relief ratio and stream order. The updated version of TauDEM increases its practical applicability in terms of raster data type, size and coordinate system. The watershed delineation web service functionality is useful for web-based software-as-a-service deployments that alleviate the need for users to install and work with desktop GIS software.
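    Two of the geomorphic variables named in the abstract are simple ratios of delineation outputs: drainage density is total stream length divided by basin area, and relief ratio is basin relief divided by basin length. A minimal sketch with invented values (units in km and km^2), not TauDEM's code:

```python
def drainage_density(stream_lengths_km, area_km2):
    """Drainage density D = total stream length / basin area (km^-1)."""
    return sum(stream_lengths_km) / area_km2

def relief_ratio(relief_km, basin_length_km):
    """Relief ratio Rr = basin relief / basin length (dimensionless)."""
    return relief_km / basin_length_km

# Hypothetical delineation outputs for one sub-catchment.
D = drainage_density([3.2, 1.8, 5.0], 20.0)  # 10.0 / 20.0 = 0.5 km^-1
Rr = relief_ratio(0.6, 12.0)                 # 0.6 / 12.0 = 0.05
```

Because each ratio needs only per-catchment aggregates, these attributes can be precomputed and stored with the sub-catchment data described above.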

  20. Ecohydrologic coevolution in drylands: relative roles of vegetation, soil depth and runoff connectivity on ecosystem shifts.

    NASA Astrophysics Data System (ADS)

    Saco, P. M.; Moreno de las Heras, M.; Willgoose, G. R.

    2014-12-01


  1. Multi-Column Experimental Test Bed Using CaSDB MOF for Xe/Kr Separation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welty, Amy Keil; Greenhalgh, Mitchell Randy; Garn, Troy Gerry

    Processing of spent nuclear fuel produces off-gas from which several volatile radioactive components must be separated for further treatment or storage. As part of the Off-gas Sigma Team, parallel research at INL and PNNL has produced several promising sorbents for the selective capture of xenon and krypton from these off-gas streams. In order to design full-scale treatment systems, sorbents that are promising on a laboratory scale must be proven under process conditions to be considered for pilot and then full-scale use. To that end, a bench-scale multi-column system with capability to test multiple sorbents was designed and constructed at INL. This report details bench-scale testing of CaSDB MOF, produced at PNNL, and compares the results to those reported last year using INL engineered sorbents. Two multi-column tests were performed with the CaSDB MOF installed in the first column, followed by HZ-PAN installed in the second column. The CaSDB MOF column was placed in a Stirling cryocooler while the cryostat was employed for the HZ-PAN column. Test temperatures of 253 K and 191 K were selected for the first column while the second column was held at 191 K for both tests. Calibrated volume sample bombs were utilized for gas stream analyses. At the conclusion of each test, samples were collected from each column and analyzed for gas composition. While CaSDB MOF does appear to have good capacity for Xe, the short time to initial breakthrough would make design of a continuous adsorption/desorption cycle difficult, requiring either very large columns or a large number of smaller columns. Because of the tenacity with which Xe and Kr adhere to the material once adsorbed, this CaSDB MOF may be more suitable for use as a long-term storage solution. Further testing is recommended to determine if CaSDB MOF is suitable for this purpose.

  2. Impact of a wastewater treatment plant on microbial community composition and function in a hyporheic zone of a eutrophic river

    NASA Astrophysics Data System (ADS)

    Atashgahi, Siavash; Aydin, Rozelin; Dimitrov, Mauricio R.; Sipkema, Detmer; Hamonts, Kelly; Lahti, Leo; Maphosa, Farai; Kruse, Thomas; Saccenti, Edoardo; Springael, Dirk; Dejonghe, Winnie; Smidt, Hauke

    2015-11-01

    The impact of the installation of a technologically advanced wastewater treatment plant (WWTP) on the benthic microbial community of a vinyl chloride (VC) impacted eutrophic river was examined two years before, and three and four years after installation of the WWTP. Reduced dissolved organic carbon and increased dissolved oxygen concentrations in surface water and reduced total organic carbon and total nitrogen content in the sediment were recorded in the post-WWTP samples. Pyrosequencing of bacterial 16S rRNA gene fragments in sediment cores showed reduced relative abundance of heterotrophs and fermenters such as Chloroflexi and Firmicutes in more oxic and nutrient poor post-WWTP sediments. Similarly, quantitative PCR analysis showed 1-3 orders of magnitude reduction in phylogenetic and functional genes of sulphate reducers, denitrifiers, ammonium oxidizers, methanogens and VC-respiring Dehalococcoides mccartyi. In contrast, members of Proteobacteria adapted to nutrient-poor conditions were enriched in post-WWTP samples. This transition in the trophic state of the hyporheic sediments reduced but did not abolish the VC respiration potential in the post-WWTP sediments as an important hyporheic sediment function. Our results highlight effective nutrient load reduction and parallel microbial ecological state restoration of a human-stressed urban river as a result of installation of a WWTP.

  3. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    [Report documentation page garbled in extraction. Recoverable details: title: Parallel Algorithms for Image Analysis; report type: Technical; report number: TR-1180; author: Azriel Rosenfeld; grant: AFOSR-77-3271; keywords: image processing; image analysis; parallel processing; cellular computers.]

  4. Statistical process control charts for monitoring military injuries.

    PubMed

    Schuh, Anna; Canham-Chervak, Michelle; Jones, Bruce H

    2017-12-01

    An essential aspect of an injury prevention process is surveillance, which quantifies and documents injury rates in populations of interest and enables monitoring of injury frequencies, rates and trends. To drive progress towards injury reduction goals, additional tools are needed. Statistical process control charts, a methodology that has not been previously applied to Army injury monitoring, capitalise on existing medical surveillance data to provide information to leadership about injury trends necessary for prevention planning and evaluation. Statistical process control Shewhart u-charts were created for 49 US Army installations using quarterly injury medical encounter rates, 2007-2015, for active duty soldiers obtained from the Defense Medical Surveillance System. Injuries were defined according to established military injury surveillance recommendations. Charts display control limits three standard deviations (SDs) above and below an installation-specific historical average rate determined using 28 data points, 2007-2013. Charts are available in Army strategic management dashboards. From 2007 to 2015, Army injury rates ranged from 1254 to 1494 unique injuries per 1000 person-years. Installation injury rates ranged from 610 to 2312 injuries per 1000 person-years. Control charts identified four installations with injury rates exceeding the upper control limits at least once during 2014-2015, rates at three installations exceeded the lower control limit at least once and 42 installations had rates that fluctuated around the historical mean. Control charts can be used to drive progress towards injury reduction goals by indicating statistically significant increases and decreases in injury rates. Future applications to military subpopulations, other health outcome metrics and chart enhancements are suggested. Published by the BMJ Publishing Group Limited. 
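    The Shewhart u-chart described above has a standard construction: the centerline u-bar is the pooled baseline rate (injuries per unit of exposure), and each period's limits are u-bar plus or minus three standard deviations, 3*sqrt(u-bar/n_i), where n_i is that period's exposure. A minimal sketch with invented counts and person-time (not the Army surveillance data):

```python
import math

# Baseline quarters: injury counts and exposure in units of 1000 person-years.
# Values are illustrative only; the study pooled 28 quarters (2007-2013).
baseline_counts = [130, 142, 125, 138]
baseline_exposure = [0.10, 0.11, 0.10, 0.10]

# Pooled centerline: injuries per 1000 person-years.
u_bar = sum(baseline_counts) / sum(baseline_exposure)

def limits(n_i):
    """Three-sigma control limits for a period with exposure n_i."""
    half = 3.0 * math.sqrt(u_bar / n_i)
    return u_bar - half, u_bar + half

lcl, ucl = limits(0.10)
# A quarter whose rate exceeds ucl (or falls below lcl) signals a
# statistically significant shift rather than routine fluctuation.
```

Note how the limits widen for quarters with less person-time, so small installations need larger rate swings to trigger a signal.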

  5. Installing a Carrier Panel on Endeavor in OPF 2

    NASA Image and Video Library

    2007-01-19

    In Orbiter Processing Facility bay 2, technicians Jesus Rodrigues (left) and James Johnson install a leading edge subsystem carrier panel on the right wing of Endeavour. The orbiter is scheduled for mission STS-118, targeted for launch on June 28. The mission will be the 22nd flight to the International Space Station, carrying another starboard array, S5, for installation.

  6. Installing a Carrier Panel on Endeavor in OPF 2

    NASA Image and Video Library

    2007-01-19

    In Orbiter Processing Facility bay 2, technicians James Johnson (left) and Jesus Rodrigues install a leading edge subsystem carrier panel on the right wing of Endeavour. The orbiter is scheduled for mission STS-118, targeted for launch on June 28. The mission will be the 22nd flight to the International Space Station, carrying another starboard array, S5, for installation.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Overend, R.P.; Rivard, C.J.

    Gasification is being developed to enable a diverse range of biomass resources to meet modern secondary energy uses, especially in the electrical utility sector. Biological or anaerobic gasification in US landfills has resulted in the installation of almost 500 MW(e) of capacity and represents the largest scale application of gasification technology today. The development of integrated gasification combined cycle generation for coal technologies is being paralleled by bagasse and wood thermal gasification systems in Hawaii and Scandinavia, and will lead to significant deployment in the next decade as the current scale-up activities are commercialized. The advantages of highly reactive biomass over coal in the design of process units are being realized as new thermal gasifiers are being scaled up to produce medium-energy-content gas for conversion to synthetic natural gas and transportation fuels and to hydrogen for use in fuel cells. The advent of high solids anaerobic digestion reactors is leading to commercialization of controlled municipal solid waste biological gasification rather than landfill application. In both thermal and biological gasification, high rate process reactors are a necessary development for economic applications that address waste and residue management and the production and use of new crops for energy. The environmental contribution of biomass in reducing greenhouse gas emission will also be improved.

  8. Parallel computing method for simulating hydrological processes of large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most widely recognized global environmental problems. It has altered the spatial and temporal distribution of watershed hydrological processes, especially in the world's large rivers. Watershed hydrological simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a very large amount of computation, especially for large rivers, and therefore requires huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. Current parallel methods mostly parallelize in the space and time dimensions, calculating the natural features in order, based on a distributed hydrological model, by grid (unit, sub-basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the runoff characteristics in time and space of a distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility: it can make full use of computing and storage resources under the condition of limited computing resources, and computing efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.
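    The decomposition the abstract alludes to, independent sub-basins simulated concurrently and combined downstream, can be sketched with a toy model. Threads stand in for distributed computing units, and the linear runoff-coefficient model, sub-basin names, and rainfall series are all fabricated for illustration (the paper's model is a physically based distributed one):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sub-basins with rainfall series (mm per time step).
subbasins = {
    "upstream_a": [5.0, 3.0, 0.0],
    "upstream_b": [2.0, 2.0, 2.0],
    "midstream":  [0.0, 4.0, 1.0],
}

def simulate(rainfall, runoff_coeff=0.4):
    """Toy runoff model: a fixed fraction of rainfall becomes runoff."""
    return sum(r * runoff_coeff for r in rainfall)

# Each sub-basin is independent, so its simulation can run in parallel;
# partial results are then routed and combined at the outlet.
with ThreadPoolExecutor(max_workers=3) as pool:
    partial = dict(zip(subbasins, pool.map(simulate, subbasins.values())))

outlet = sum(partial.values())
# 3.2 + 2.4 + 2.0 = 7.6 mm of runoff volume-equivalent at the outlet
```

The upstream-to-downstream dependency in a real river network limits this parallelism to sub-basins at the same topological level, which is why the paper's scheduling of computation order matters.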

  9. Parallels between a Collaborative Research Process and the Middle Level Philosophy

    ERIC Educational Resources Information Center

    Dever, Robin; Ross, Diane; Miller, Jennifer; White, Paula; Jones, Karen

    2014-01-01

    The characteristics of the middle level philosophy as described in This We Believe closely parallel the collaborative research process. The journey of one research team is described in relationship to these characteristics. The collaborative process includes strengths such as professional relationships, professional development, courageous…

  10. Removal of suspended solids and turbidity from marble processing wastewaters by electrocoagulation: comparison of electrode materials and electrode connection systems.

    PubMed

    Solak, Murat; Kiliç, Mehmet; Hüseyin, Yazici; Sencan, Aziz

    2009-12-15

    In this study, removal of suspended solids (SS) and turbidity from marble processing wastewaters by the electrocoagulation (EC) process was investigated using aluminium (Al) and iron (Fe) electrodes run in serial and parallel connection systems. To remove these pollutants, an EC reactor containing monopolar electrodes (Al/Fe) in parallel and serial connection systems was utilized, and the operating parameters pH, current density, and electrolysis time were optimized for SS and turbidity removal. The EC process with monopolar Al electrodes in parallel and serial connections, carried out at the optimum conditions (pH 9, current density approximately 15 A/m(2), and electrolysis time 2 min), resulted in 100% SS removal. Removal efficiencies of the EC process for SS with monopolar Fe electrodes in parallel and serial connection were found to be 99.86% and 99.94%, respectively. For monopolar Fe electrodes, the optimum parameters for both connection types were a pH value of 8 and an electrolysis time of 2 min, while the optimum current density for Fe electrodes in serial and parallel connections was 10 and 20 A/m(2), respectively. Based on the results obtained, the EC process with each electrode type and connection was highly effective for the removal of SS and turbidity from marble processing wastewaters, and operating costs with monopolar Al electrodes in parallel connection were the lowest of all the configurations, including the serial connection and the Fe electrode configurations.

  11. Stress and decision making: neural correlates of the interaction between stress, executive functions, and decision making under risk.

    PubMed

    Gathmann, Bettina; Schulte, Frank P; Maderwald, Stefan; Pawlikowski, Mirko; Starcke, Katrin; Schäfer, Lena C; Schöler, Tobias; Wolf, Oliver T; Brand, Matthias

    2014-03-01

    Stress and additional load on the executive system, produced by a parallel working memory task, impair decision making under risk. However, the combination of stress and a parallel task seems to keep decision-making performance [e.g., operationalized by the Game of Dice Task (GDT)] from decreasing, probably through a switch from serial to parallel processing. The question remains how the brain manages such demanding decision-making situations. The current study used a 7-tesla magnetic resonance imaging (MRI) system to investigate the neural correlates underlying the interaction between stress (induced by the Trier Social Stress Test), risky decision making (GDT), and a parallel executive task (2-back task), in order to better understand those behavioral findings. On a behavioral level, stressed participants did not show significant differences in task performance. Interestingly, when comparing the stress group (SG) with the control group, the SG showed a greater increase in neural activation in the anterior prefrontal cortex when performing the 2-back task simultaneously with the GDT than when performing each task alone. This brain area is associated with parallel processing. Thus, the results suggest that in stressful dual-tasking situations, where a decision has to be made while working memory is demanded in parallel, stronger activation of a brain area associated with parallel processing takes place. The findings are in line with the idea that stress triggers a switch from serial to parallel processing in demanding dual-tasking situations.

  12. STS-112 final main engine is installed after welding/polishing process

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. -- Workers on the engine lift get ready to install the last engine in orbiter Atlantis after a welding and polishing process was undertaken on flow liners where cracks were detected. All engines were removed for inspection of flow liners. Atlantis will next fly on mission STS-112, scheduled for launch no earlier than Oct. 2.

  13. Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property

    DTIC Science & Technology

    2007-01-01

    substances released after 1986 and munitions released after 2002 are not eligible for DERP funds. These cleanups are generally referred to as non-DERP...relocating functions from one installation to...requirements during the process of property disposal and during the process of relocating functions from one installation to another. The National

  14. Study and development of an air conditioning system operating on a magnetic heat pump cycle (design and testing of flow directors)

    NASA Astrophysics Data System (ADS)

    Wang, Pao-Lien

    1992-09-01

    This report describes the fabrication, flow director design, fluid flow direction analysis, and flow director testing of a magnetic heat pump. The objectives of the project were: (1) to fabricate a demonstration magnetic heat pump prototype with flow directors installed; and (2) to analyze and test the flow directors and verify that the working fluid loops flow in the correct directions with minimal mixing. The prototype was fabricated and tested at the Development Testing Laboratory of Kennedy Space Center. The magnetic heat pump uses rare earth metal plates that rotate in and out of a magnetic field in a clear plastic housing, with water flowing through the rotor plates to provide temperature lift. Obtaining the proper water flow direction had been a problem. Flow directors were installed as flow barriers at the separating point of the two parallel loops. The function of the flow directors was shown to be excellent both analytically and experimentally.

  15. Study and development of an air conditioning system operating on a magnetic heat pump cycle (design and testing of flow directors)

    NASA Technical Reports Server (NTRS)

    Wang, Pao-Lien

    1992-01-01

    This report describes the fabrication, flow director design, fluid flow direction analysis, and flow director testing of a magnetic heat pump. The objectives of the project were: (1) to fabricate a demonstration magnetic heat pump prototype with flow directors installed; and (2) to analyze and test the flow directors and verify that the working fluid loops flow in the correct directions with minimal mixing. The prototype was fabricated and tested at the Development Testing Laboratory of Kennedy Space Center. The magnetic heat pump uses rare earth metal plates that rotate in and out of a magnetic field in a clear plastic housing, with water flowing through the rotor plates to provide temperature lift. Obtaining the proper water flow direction had been a problem. Flow directors were installed as flow barriers at the separating point of the two parallel loops. The function of the flow directors was shown to be excellent both analytically and experimentally.

  16. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper deals with the problem of insufficient productivity of existing computing hardware for large image processing, which does not meet modern requirements posed by resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. Development of a theory of parallel-hierarchical transformation made it possible to produce models for high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on the GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images allows real-time processing of dynamic images of various sizes.

  17. Improved analytical approximation to arbitrary l-state solutions of the Schrödinger equation for the hyperbolical potentials

    NASA Astrophysics Data System (ADS)

    Sánchez López, Elena H.

    2018-04-01

    Water has been traditionally highlighted (together with fish and salt) as one of the essential elements in fish processing. Indeed, the need for large quantities of fresh water for the production of salted fish and fish sauces in Roman times is commonly asserted. This paper analyses water-related structures within Roman halieutic installations, arguing that their common presence in the best known fish processing installations in the Western Roman world should be taken as evidence of the use of fresh water during the production processes, even if its role in the activities carried out in those installations is not clear. In addition, the text proposes some first estimates on the amount of water that could be needed by those fish processing complexes for their functioning, concluding that water needs to be taken into account when reconstructing fish-salting recipes.

  18. ANL statement of site strategy for computing workstations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenske, K.R.; Boxberger, L.M.; Amiot, L.W.

    1991-11-01

    This Statement of Site Strategy describes the procedure at Argonne National Laboratory for defining, acquiring, using, and evaluating scientific and office workstations and related equipment and software in accord with DOE Order 1360.1A (5-30-85), and Laboratory policy. It is Laboratory policy to promote the installation and use of computing workstations to improve productivity and communications for both programmatic and support personnel, to ensure that computing workstations acquisitions meet the expressed need in a cost-effective manner, and to ensure that acquisitions of computing workstations are in accord with Laboratory and DOE policies. The overall computing site strategy at ANL is to develop a hierarchy of integrated computing system resources to address the current and future computing needs of the laboratory. The major system components of this hierarchical strategy are: Supercomputers, Parallel computers, Centralized general purpose computers, Distributed multipurpose minicomputers, and Computing workstations and office automation support systems. Computing workstations include personal computers, scientific and engineering workstations, computer terminals, microcomputers, word processing and office automation electronic workstations, and associated software and peripheral devices costing less than $25,000 per item.

  19. Rubus: A compiler for seamless and extensible parallelism.

    PubMed

    Adnan, Muhammad; Aslam, Faisal; Nawaz, Zubair; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot efficiently utilize the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, code written in these languages is difficult to understand, debug, and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyzes and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without requiring a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times was achieved by Rubus compared to Java on a basic GPU having only 96 cores, whereas for a matrix multiplication benchmark an average execution speedup of 84 times was achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program.

  20. Rubus: A compiler for seamless and extensible parallelism

    PubMed Central

    Adnan, Muhammad; Aslam, Faisal; Sarwar, Syed Mansoor

    2017-01-01

    Nowadays, a typical processor may have multiple processing cores on a single chip. Furthermore, a special-purpose processing unit called the Graphics Processing Unit (GPU), originally designed for 2D/3D games, is now available for general-purpose use in computers and mobile devices. However, traditional programming languages, which were designed to work with machines having single-core CPUs, cannot efficiently utilize the parallelism available on multi-core processors. Therefore, to exploit the extraordinary processing power of multi-core processors, researchers are working on new tools and techniques to facilitate parallel programming. To this end, languages like CUDA and OpenCL have been introduced, which can be used to write code with parallelism. The main shortcoming of these languages is that the programmer needs to specify all the complex details manually in order to parallelize the code across multiple cores. Therefore, code written in these languages is difficult to understand, debug, and maintain. Furthermore, parallelizing legacy code can require rewriting a significant portion of it in CUDA or OpenCL, which can consume significant time and resources. Thus, the amount of parallelism achieved is proportional to the skills of the programmer and the time spent on code optimizations. This paper proposes a new open-source compiler, Rubus, to achieve seamless parallelism. The Rubus compiler relieves the programmer from manually specifying the low-level details. It analyzes and transforms a sequential program into a parallel program automatically, without any user intervention. This achieves massive speedup and better utilization of the underlying hardware without requiring a programmer's expertise in parallel programming. For five different benchmarks, an average speedup of 34.54 times was achieved by Rubus compared to Java on a basic GPU having only 96 cores, whereas for a matrix multiplication benchmark an average execution speedup of 84 times was achieved by Rubus on the same GPU. Moreover, Rubus achieves this performance without drastically increasing the memory footprint of a program. PMID:29211758

  1. Support for Debugging Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
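    The comparison at the heart of such a relative debugger can be illustrated with a minimal sketch (hypothetical names, not the tool's actual interface): values captured at matching checkpoints of the serial and parallel runs are compared until they first diverge.

```python
# Minimal sketch of the relative-debugging idea: compare reference (serial)
# and suspect (parallel) values at matching checkpoints and report the first
# divergence. Function name and tolerance are illustrative assumptions.

def first_divergence(serial, parallel, tol=1e-9):
    """Return (index, serial_value, parallel_value) of the first mismatch,
    or None if the two executions agree within tolerance."""
    for i, (s, p) in enumerate(zip(serial, parallel)):
        if abs(s - p) > tol:
            return i, s, p
    return None

# A correct parallelization reproduces the serial values...
assert first_divergence([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) is None
# ...while a bug (e.g. a wrongly distributed array) shows up at its first index.
assert first_divergence([1.0, 2.0, 3.0], [1.0, 2.5, 3.0])[0] == 1
```

    In the system described above this comparison is driven by the parallelization tool's own information about modified variables and array distributions, rather than by user-supplied checkpoints.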

  2. Relative Debugging of Automatically Parallelized Programs

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)

    2002-01-01

    We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.

  3. High speed infrared imaging system and method

    DOEpatents

    Zehnder, Alan T.; Rosakis, Ares J.; Ravichandran, G.

    2001-01-01

    A system and method for radiation detection with an increased frame rate. A semi-parallel processing configuration is used to process a row or column of pixels in a focal-plane array in parallel to achieve a processing rate up to and greater than 1 million frames per second.

  4. Idle waves in high-performance computing

    NASA Astrophysics Data System (ADS)

    Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre

    2015-01-01

    The vast majority of parallel scientific applications distributes computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through processes in scientific applications with a local information exchange between the two processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to remote data dependency. This study provides a description of the large number of processes in parallel scientific applications as a continuous medium. This work also is a step towards an understanding of how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.
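    A toy model (an illustration under simplified assumptions, not the paper's code) reproduces the described behavior: with nearest-neighbor data dependencies, a single injected delay travels through the ranks at about one rank per busy period, i.e. a phase velocity inversely proportional to the busy time.

```python
# Toy model of idle-wave propagation: each of P processes alternates a fixed
# compute (busy) phase with a wait for its left neighbor's previous
# iteration. A one-off delay injected at rank 0 then travels through the
# ranks at roughly one rank per busy period.

P, K = 8, 12
busy, delay = 1.0, 5.0
t = [[0.0] * P for _ in range(K + 1)]  # t[k][i]: time rank i finishes iter k

for k in range(1, K + 1):
    for i in range(P):
        dep = t[k - 1][(i - 1) % P]        # remote data dependency
        extra = delay if (k == 1 and i == 0) else 0.0
        t[k][i] = max(t[k - 1][i], dep) + busy + extra

# The furthest rank running late relative to the unperturbed schedule
# (k * busy) advances by one rank per iteration: a nondispersive wave with
# phase velocity ~ 1 / busy.
late = [max(i for i in range(P) if t[k][i] > k * busy + 1e-9) for k in range(2, 6)]
assert late == [1, 2, 3, 4]
```

    Doubling `busy` in this model halves how many ranks the front crosses per unit time, matching the inverse proportionality stated in the abstract.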

  5. The science of computing - Parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1985-01-01

    Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic components technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing also lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays. The physical limit implies that a 1 Gflop operational speed is the maximum for sequential processors. A computer recently introduced features a 'hypercube' architecture with 128 processors connected in networks at 5, 6 or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, COBOL, etc., rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms to exploit the new architectures.
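    The decomposition idea described above, splitting one computation into parts evaluated simultaneously and then combined, can be illustrated with a hedged sketch; threads are used here only to show the decomposition, since CPython threads do not provide true CPU parallelism for this workload.

```python
# Sketch of decomposing a reduction into independent chunks evaluated
# "simultaneously" and then combined. Illustrative only: real speedup
# requires hardware parallelism, which CPython threads do not provide here.
from concurrent.futures import ThreadPoolExecutor

def partial_dot(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

def parallel_dot(xs, ys, nworkers=4):
    n = len(xs)
    bounds = [(i * n // nworkers, (i + 1) * n // nworkers)
              for i in range(nworkers)]
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        parts = list(pool.map(
            lambda b: partial_dot(xs[b[0]:b[1]], ys[b[0]:b[1]]), bounds))
    return sum(parts)  # combine the simultaneously computed parts

xs = list(range(1000))
ys = [2] * 1000
assert parallel_dot(xs, ys) == partial_dot(xs, ys)  # same result, split work
```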

  6. Expressing Parallelism with ROOT

    NASA Astrophysics Data System (ADS)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.
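    As a point of reference for the MultiProc comparison mentioned above, the fork-join map/merge pattern of Python's multiprocessing Pool API looks as follows (a generic illustration, not ROOT code; multiprocessing.dummy mirrors the Pool API with threads, which keeps the sketch portable, and multiprocessing.Pool can be swapped in for process-level parallelism).

```python
# Generic fork-join pattern with the multiprocessing Pool API.
# multiprocessing.dummy provides the same interface backed by threads.
from multiprocessing.dummy import Pool

def process_chunk(chunk):
    # Stand-in for per-chunk event processing (hypothetical workload).
    return sum(x * x for x in chunk)

chunks = [range(0, 100), range(100, 200), range(200, 300)]
with Pool(3) as pool:
    partial_results = pool.map(process_chunk, chunks)   # map step
total = sum(partial_results)                            # reduce/merge step
assert total == sum(x * x for x in range(300))
```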

  7. Expressing Parallelism with ROOT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piparo, D.; Tejedor, E.; Guiraud, E.

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  8. Identical Aftershocks from the Main Rupture Zone 10 Months After the Mw=7.6 September 5, 2012, Nicoya, Costa Rica, Earthquake

    NASA Astrophysics Data System (ADS)

    Protti, M.; Alfaro-Diaz, R.; Brenn, G. R.; Fasola, S.; Murillo, A.; Marshall, J. S.; Gardner, T. W.

    2013-12-01

    Over a two-week period, as part of a Keck Geology Consortium summer research project, we installed a dense broadband seismic array directly over the rupture zone of the Nicoya, September 5th, 2012, Mw=7.6 earthquake. The network consisted of 5 Trillium compact seismometers and Taurus digitizers from Nanometrics, defining a triangular area of ~20 km per side. Also located within this area are 3 stations of the Nicoya permanent broadband network. One side of the triangular area, along the west coast of the Nicoya peninsula, is parallel to the trench, and the apex lies 15 km landward. The plate interface and rupture zone of the Nicoya 2012 earthquake are located 16 km below the trench-parallel side and 25 km below the apex of this triangular footprint. Station spacing ranged from 3 to 14 km. This dense array operated from July 2nd to July 17th, 2013. On June 23rd, eight days before we installed this array, an Mw=5.4 aftershock (one of only 5 aftershocks of the Nicoya Mw=7.6 earthquake with magnitudes above 5.0) occurred directly beneath the area of our temporary network. Preliminary analysis of the data shows that we recorded several identical aftershocks with magnitudes below 1.0 that locate some 18 km below our network. We will present detailed locations of these small aftershocks and their relationship with the June 23rd, 2013 aftershock and the September 5th, 2012, mainshock.

  9. Spatial Data Exploring by Satellite Image Distributed Processing

    NASA Astrophysics Data System (ADS)

    Mihon, V. D.; Colceriu, V.; Bektas, F.; Allenbach, K.; Gvilava, M.; Gorgan, D.

    2012-04-01

    Societal needs and environmental predictions encourage the development of applications oriented toward supervising and analyzing different Earth Science related phenomena. Satellite images can be explored to discover information concerning land cover, hydrology, air quality, and water and soil pollution. Spatial and environment-related data can be acquired by imagery classification, consisting of data mining throughout the multispectral bands. The process takes into account a large set of variables such as satellite image types (e.g. MODIS, Landsat), the particular geographic area, soil composition, vegetation cover, and generally the context (e.g. clouds, snow, and season). All these specific and variable conditions require flexible tools and applications to support an optimal search for appropriate solutions, as well as high-power computation resources. The research concerns experiments on solutions for flexible and visual description of satellite image processing over distributed infrastructures (e.g. Grid, Cloud, and GPU clusters). This presentation highlights the Grid-based implementation of the GreenLand application. The GreenLand application development is based on simple but powerful notions of mathematical operators and workflows that are used in distributed and parallel executions over the Grid infrastructure. Currently it is used in three major case studies concerning the Istanbul geographical area, the Rioni River in Georgia, and the Black Sea catchment region. The GreenLand application offers a friendly user interface for viewing and editing workflows and operators. The description involves the basic operators provided by the GRASS library [1] as well as many other image-related operators supported by the ESIP platform [2]. The processing workflows are represented as directed graphs, giving the user a fast and easy way to describe complex parallel algorithms without prior knowledge of any programming language or application commands. The application also requires no installation on the user's machine: it is a remote application accessed over the Internet. Currently the GreenLand application is available through the BSC-OS Portal provided by the enviroGRIDS FP7 project [3]. This presentation aims to highlight the challenges and issues of flexible description of Grid-based processing of satellite images, interoperability with other software platforms available in the portal, and the particular requirements of the Black Sea related use cases.
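    The workflow-as-directed-graph idea described above can be sketched as a minimal topological execution of hypothetical operators (illustrative names, not the GreenLand API); nodes whose inputs are ready could equally be dispatched in parallel.

```python
# Minimal sketch of executing a workflow expressed as a directed acyclic
# graph of operators. Operator names are hypothetical stand-ins.
from graphlib import TopologicalSorter

# edges: node -> set of nodes it depends on
workflow = {
    "load_band": set(),
    "mask_clouds": {"load_band"},
    "ndvi": {"load_band"},
    "classify": {"mask_clouds", "ndvi"},
}

# A valid execution order respects every data dependency; independent nodes
# (mask_clouds, ndvi) are the candidates for parallel dispatch.
order = list(TopologicalSorter(workflow).static_order())
assert order.index("load_band") < order.index("mask_clouds")
assert order.index("ndvi") < order.index("classify")
assert order[-1] == "classify"
```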

  10. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations using multiple storage devices is proposed. Problem areas are also identified and discussed.

  11. 40 CFR 429.21 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... all mechanical barking installations: There shall be no discharge of process wastewater pollutants... hydraulic barking installations: Subpart A Pollutant or pollutant property BPT effluent limitations Maximum...

  12. 40 CFR 429.21 - Effluent limitations representing the degree of effluent reduction attainable by the application...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... all mechanical barking installations: There shall be no discharge of process wastewater pollutants... hydraulic barking installations: Subpart A Pollutant or pollutant property BPT effluent limitations Maximum...

  13. 40 CFR 63.7188 - What are my monitoring installation, operation, and maintenance requirements?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Semiconductor Manufacturing Compliance Requirements § 63.7188 What are my monitoring installation, operation... emissions of your semiconductor process vent through a closed vent system to a control device, you must...

  14. 40 CFR 63.7188 - What are my monitoring installation, operation, and maintenance requirements?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Semiconductor Manufacturing Compliance Requirements § 63.7188 What are my monitoring installation, operation... emissions of your semiconductor process vent through a closed vent system to a control device, you must...

  15. Synchronization Of Parallel Discrete Event Simulations

    NASA Technical Reports Server (NTRS)

    Steinman, Jeffrey S.

    1992-01-01

    Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Algorithm processes events optimistically in time cycles adapting while simulation in progress. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
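    A greatly simplified, sequential illustration of the time-bucket idea (not the SPEEDES implementation) is the following: events are processed in time order until the next one would cross the "event horizon", the earliest timestamp generated during the current cycle, so the cycle length adapts to the simulation itself.

```python
# Simplified illustration of adaptive time-bucket synchronization.
# Events earlier than the horizon (earliest timestamp generated this cycle)
# are safe to commit; the cycle "breathes" with the simulation's behavior.
import heapq

def run_cycles(pending):
    """pending: list of (timestamp, spawn_delay); spawn_delay 0 = no new event."""
    heapq.heapify(pending)
    committed, cycles = [], 0
    while pending:
        cycles += 1
        horizon = float("inf")   # earliest timestamp generated this cycle
        while pending and pending[0][0] < horizon:
            ts, spawn_delay = heapq.heappop(pending)
            committed.append(ts)
            if spawn_delay:      # this event schedules a new one
                horizon = min(horizon, ts + spawn_delay)
                heapq.heappush(pending, (ts + spawn_delay, 0))
    return committed, cycles

# Event at t=0 spawns one at t=3, so the first cycle commits only t<3.
committed, cycles = run_cycles([(0, 3), (1, 0), (5, 0)])
assert (committed, cycles) == ([0, 1, 3, 5], 2)
```

    The real algorithm runs this scan optimistically and in parallel across processors, rolling back events that turn out to lie past the global horizon.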

  16. Knowledge representation into Ada parallel processing

    NASA Technical Reports Server (NTRS)

    Masotto, Tom; Babikyan, Carol; Harper, Richard

    1990-01-01

    The Knowledge Representation into Ada Parallel Processing project is a joint NASA and Air Force funded project to demonstrate the execution of intelligent systems in Ada on the Charles Stark Draper Laboratory fault-tolerant parallel processor (FTPP). Two applications were demonstrated - a portion of the adaptive tactical navigator and a real time controller. Both systems are implemented as Activation Framework Objects on the Activation Framework intelligent scheduling mechanism developed by Worcester Polytechnic Institute. The implementations, results of performance analyses showing speedup due to parallelism and initial efficiency improvements are detailed and further areas for performance improvements are suggested.

  17. Military Construction: Process and Outcomes

    DTIC Science & Technology

    2016-12-14

    the Army’s Assistant Chief of Staff for Installation Management (ACSIM), the service’s senior officer responsible for setting installations-related...with the governor as its commander in chief and the Adjutant General (TAG) as its senior military officer. Each National Guard is a joint organization...encompasses several steps: determination of need by the local installation commander and engineering office, vetting and prioritization of

  18. Floating Chip Mounting System Driven by Repulsive Force of Permanent Magnets for Multiple On-Site SPR Immunoassay Measurements

    PubMed Central

    Horiuchi, Tsutomu; Tobita, Tatsuya; Miura, Toru; Iwasaki, Yuzuru; Seyama, Michiko; Inoue, Suzuyo; Takahashi, Jun-ichi; Haga, Tsuneyuki; Tamechika, Emi

    2012-01-01

    We have developed a measurement chip installation/removal mechanism for a surface plasmon resonance (SPR) immunoassay analysis instrument designed for frequent testing, which requires a rapid and easy technique for changing chips. The key components of the mechanism are refractive index matching gel coated on the rear of the SPR chip and a float that presses the chip down. The refractive index matching gel made it possible to optically couple the chip and the prism of the SPR instrument easily via elastic deformation with no air bubbles. The float has an autonomous attitude control function that keeps the chip parallel in relation to the SPR instrument by employing the repulsive force of permanent magnets between the float and a float guide located in the SPR instrument. This function is realized by balancing the upward elastic force of the gel and the downward force of the float, which experiences a leveling force from the float guide. This system makes it possible to start an SPR measurement immediately after chip installation and to remove the chip immediately after the measurement with a simple and easy method that does not require any fine adjustment. Our sensor chip, which we installed using this mounting system, successfully performed an immunoassay measurement on a model antigen (spiked human-IgG) in a model real sample (non-homogenized milk) that included many kinds of interfering foreign substances without any sample pre-treatment. The ease of the chip installation/removal operation and simple measurement procedure are suitable for frequent on-site agricultural, environmental and medical testing. PMID:23202030

  19. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    PubMed

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings, there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
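    The per-channel parallel pattern described here can be sketched as follows (illustrative only, not the presented tool's API): each electrode channel is filtered and scanned for threshold crossings independently, so channels map naturally onto parallel workers.

```python
# Sketch of per-channel parallel pre-processing: a simple moving-average
# filter followed by threshold-crossing "spike" detection, mapped over
# channels. Data and parameters are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def detect_spikes(samples, window=3, threshold=2.0):
    """Smooth with a trailing moving average, return upward crossing indices."""
    smoothed = [sum(samples[max(0, i - window + 1):i + 1]) /
                (i - max(0, i - window + 1) + 1)
                for i in range(len(samples))]
    return [i for i in range(1, len(smoothed))
            if smoothed[i] >= threshold > smoothed[i - 1]]

channels = [
    [0, 0, 0, 6, 6, 6, 0, 0],   # one clear event
    [0, 0, 0, 0, 0, 0, 0, 0],   # silent channel
]
with ThreadPoolExecutor() as pool:
    spikes_per_channel = list(pool.map(detect_spikes, channels))
assert spikes_per_channel == [[3], []]
```

    Because channels share no state, the same map scales from threads to processes or GPU kernels, which is the structure the tool above exploits.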

  20. Surgical bedside master console for neurosurgical robotic system.

    PubMed

    Arata, Jumpei; Kenmotsu, Hajime; Takagi, Motoki; Hori, Tatsuya; Miyagi, Takahiro; Fujimoto, Hideo; Kajita, Yasukazu; Hayashi, Yuichiro; Chinzei, Kiyoyuki; Hashizume, Makoto

    2013-01-01

    We are currently developing a neurosurgical robotic system that facilitates access to residual tumors and improves brain tumor removal surgical outcomes. The system combines conventional and robotic surgery, allowing for a quick conversion between the procedures. This concept requires a new master console that can be positioned at the surgical bedside and be sterilized. The master console was developed using new technologies, such as a parallel mechanism and pneumatic sensors. The parallel mechanism is a purely passive 5-DOF (degrees of freedom) joystick based on the author's haptic research. The parallel mechanism enables motion input of conventional brain tumor removal surgery with a compact, intuitive interface that can be used in a conventional surgical environment. In addition, the pneumatic sensors implemented on the mechanism provide an intuitive interface and electrically isolate the tool parts from the mechanism so they can be easily sterilized. The 5-DOF parallel mechanism is compact (17 cm width, 19 cm depth, and 15 cm height), provides a 50 × 50 × 50 mm and 90° workspace and is highly backdrivable (0.27 N of resistance force representing the surgical motion). The evaluation tests revealed that the pneumatic sensors can properly measure the suction strength, grasping force, and hand contact. In addition, an installability test showed that the master console can be used in a conventional surgical environment. The proposed master console design was shown to be feasible for operative neurosurgery based on comprehensive testing. This master console is currently being tested for master-slave control with a surgical robotic system.

  1. Constituent order and semantic parallelism in online comprehension: eye-tracking evidence from German.

    PubMed

    Knoeferle, Pia; Crocker, Matthew W

    2009-12-01

    Reading times for the second conjunct of and-coordinated clauses are faster when the second conjunct parallels the first conjunct in its syntactic or semantic (animacy) structure than when its structure differs (Frazier, Munn, & Clifton, 2000; Frazier, Taft, Roeper, & Clifton, 1984). What remains unclear, however, is the time course of parallelism effects, their scope, and the kinds of linguistic information to which they are sensitive. Findings from the first two eye-tracking experiments revealed incremental constituent order parallelism across the board: both during structural disambiguation (Experiment 1) and in sentences with unambiguously case-marked constituent order (Experiment 2), as well as for both marked and unmarked constituent orders (Experiments 1 and 2). Findings from Experiment 3 revealed effects of both constituent order and subtle semantic (noun phrase similarity) parallelism. Together, our findings provide evidence for an across-the-board account of parallelism for processing and-coordinated clauses, in which both constituent order and semantic aspects of representations contribute towards incremental parallelism effects. We discuss our findings in the context of existing findings on parallelism and priming, as well as mechanisms of sentence processing.

  2. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time-consuming to solve fractional differential equations. The computational complexity of the two-dimensional time fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with a virtual boundary are designed for this parallel algorithm. The experimental results show that the parallel algorithm compares well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We do think that parallel computing technology will become a very basic method for computationally intensive fractional applications in the near future.
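    A task distribution model with a "virtual boundary" typically means that each process owns a contiguous block of grid rows plus ghost rows received from its neighbors at each step. This hypothetical row partitioner sketches that layout; it is an illustration of the general technique, not the paper's actual data layout.

```python
# Split n_rows of a 2-D grid across n_procs processes, recording for
# each block the ghost (virtual boundary) rows it must receive from
# its neighbors before an implicit sweep can proceed.
def partition_rows(n_rows, n_procs):
    base, extra = divmod(n_rows, n_procs)
    parts, start = [], 0
    for rank in range(n_procs):
        size = base + (1 if rank < extra else 0)
        stop = start + size
        parts.append({
            "rank": rank,
            "rows": (start, stop),                          # owned rows
            "ghost_lo": start - 1 if rank > 0 else None,    # from rank-1
            "ghost_hi": stop if rank < n_procs - 1 else None,  # from rank+1
        })
        start = stop
    return parts

parts = partition_rows(10, 3)   # 10 grid rows over 3 processes
```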

  3. Optimized Hypervisor Scheduler for Parallel Discrete Event Simulations on Virtual Machine Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S

    2013-01-01

    With the advent of virtual machine (VM)-based platforms for parallel computing, it is now possible to execute parallel discrete event simulations (PDES) over multiple virtual machines, in contrast to executing in native mode directly over hardware as has traditionally been done over the past decades. While mature VM-based parallel systems now offer new, compelling benefits such as serviceability, dynamic reconfigurability and overall cost effectiveness, the runtime performance of parallel applications can be significantly affected. In particular, most VM-based platforms are optimized for general workloads, but PDES execution exhibits unique dynamics significantly different from other workloads. Here we first present results from experiments that highlight the gross deterioration of the runtime performance of VM-based PDES simulations when executed using traditional VM schedulers, quantitatively showing the bad scaling properties of the scheduler as the number of VMs is increased. The mismatch is fundamental in nature in the sense that any fairness-based VM scheduler implementation would exhibit this mismatch with PDES runs. We also present a new scheduler optimized specifically for PDES applications, and describe its design and implementation. Experimental results obtained from running PDES benchmarks (PHOLD and vehicular traffic simulations) over VMs show over an order of magnitude improvement in the run time of the PDES-optimized scheduler relative to the regular VM scheduler, with over 20× reduction in run time of simulations using up to 64 VMs. The observations and results are timely in the context of emerging systems such as cloud platforms and VM-based high performance computing installations, highlighting to the community the need for PDES-specific support, and the feasibility of significantly reducing the runtime overhead for scalable PDES on VM platforms.

  4. Mathematical Abstraction: Constructing Concept of Parallel Coordinates

    NASA Astrophysics Data System (ADS)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2017-09-01

    Mathematical abstraction is an important process in teaching and learning mathematics, so pre-service mathematics teachers need to understand and experience this process. One of the theoretical-methodological frameworks for studying this process is Abstraction in Context (AiC). In this framework, the abstraction process comprises the observable epistemic actions Recognition, Building-With, Construction, and Consolidation, known as the RBC + C model. This study investigates and analyzes how pre-service mathematics teachers constructed and consolidated the concept of Parallel Coordinates in a group discussion. It uses the AiC framework to analyze the mathematical abstraction of a group of four pre-service teachers learning Parallel Coordinates concepts. The data were collected through video recording, students’ worksheets, a test, and field notes. The results show that the students’ prior knowledge of the Cartesian coordinate system played a significant role in the process of constructing the Parallel Coordinates concept as new knowledge. The consolidation process was influenced by the social interaction between group members. The abstraction processes taking place in this group were dominated by empirical abstraction, which emphasizes identifying characteristics of manipulated or imagined objects during the recognizing and building-with actions.

  5. Installing and Executing Information Object Analysis, Intent, Dissemination, and Enhancement (IOAIDE) and Its Dependencies

    DTIC Science & Technology

    2017-02-01

    [Figure-list fragments: Image Processing Web Server Administration; Fig. 18, Microsoft ASP.NET MVC 4 installation.] Algorithms are made into client applications that can be accessed from an image processing web service developed following Representational State Transfer (REST) standards by a mobile app, laptop PC, and other devices. Similarly, weather tweets can be accessed via the Weather Digest Web Service.

  6. Tunable color parallel tandem organic light emitting devices with carbon nanotube and metallic sheet interlayers

    NASA Astrophysics Data System (ADS)

    Oliva, Jorge; Papadimitratos, Alexios; Desirena, Haggeo; De la Rosa, Elder; Zakhidov, Anvar A.

    2015-11-01

    Parallel tandem organic light emitting devices (OLEDs) were fabricated with transparent multiwall carbon nanotube (MWCNT) sheets and thin metal films (Al, Ag) as interlayers. In the parallel monolithic tandem architecture, the MWCNT (or metallic film) interlayer is an active electrode which injects like charges into the subunits. In the common anode (C.A.) parallel tandems of this study, holes are injected into the top and bottom subunits from the common interlayer electrode, whereas in the common cathode (C.C.) configuration, electrons are injected into the top and bottom subunits. Both subunits of the tandem can thus be monolithically connected in an active structure in which each subunit can be electrically addressed separately. Our tandem OLEDs have a polymer emitter in the bottom subunit and a small-molecule emitter in the top subunit. We also compared the performance of the parallel tandem with that of an in-series tandem; the additional advantages of the parallel architecture over the in-series one were tunable chromaticity, lower-voltage operation, and higher brightness. Finally, we demonstrate that processing the MWCNT sheets as a common anode in parallel tandems is an easy and low-cost process, since their integration as electrodes in OLEDs is achieved by a simple dry lamination process.

  7. Relationship between mathematical abstraction in learning parallel coordinates concept and performance in learning analytic geometry of pre-service mathematics teachers: an investigation

    NASA Astrophysics Data System (ADS)

    Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.

    2018-05-01

    As one of the non-conventional mathematics concepts, Parallel Coordinates has the potential to be learned by pre-service mathematics teachers in order to give them experience in constructing richer schemes and carrying out the abstraction process. Unfortunately, research related to this issue is still limited. This study addresses the research question “to what extent does the abstraction process of pre-service mathematics teachers in learning the concept of Parallel Coordinates indicate their performance in learning Analytic Geometry”. It is a case study that is part of a larger study examining the mathematical abstraction of pre-service mathematics teachers learning a non-conventional mathematics concept. Descriptive statistics are used to analyze the scores from three different tests: Cartesian Coordinate, Parallel Coordinates, and Analytic Geometry. The participants consist of 45 pre-service mathematics teachers. The results show a linear association between the scores on Cartesian Coordinate and Parallel Coordinates. Higher levels of the abstraction process in learning Parallel Coordinates are also linearly associated with higher achievement in Analytic Geometry. These results indicate that the concept of Parallel Coordinates plays a significant role for pre-service mathematics teachers in learning Analytic Geometry.

  8. Smart home in a box: usability study for a large scale self-installation of smart home technologies.

    PubMed

    Hu, Yang; Tilke, Dominique; Adams, Taylor; Crandall, Aaron S; Cook, Diane J; Schmitter-Edgecombe, Maureen

    2016-07-01

    This study evaluates the ability of users to self-install a smart home in a box (SHiB) intended for use by a senior population. SHiB is a ubiquitous system, developed by the Washington State University Center for Advanced Studies in Adaptive Systems (CASAS). Participants in this study are from the greater Palouse region of Washington State; the 13 participants have an average age of 69.23 years. The SHiB package, which included several different types of components to collect and transmit sensor data, was given to participants to self-install. After installation of the SHiB, the participants were visited by researchers for a check of the installation. The researchers evaluated how well the sensors were installed and asked the resident questions about the installation process to help improve the SHiB design. The results indicate strengths and weaknesses of the SHiB design: indoor motion-tracking sensors were installed with a high success rate, while low success rates were found for the door sensors and for setting up the Internet server.

  9. Smart home in a box: usability study for a large scale self-installation of smart home technologies

    PubMed Central

    Hu, Yang; Tilke, Dominique; Adams, Taylor; Crandall, Aaron S.; Schmitter-Edgecombe, Maureen

    2017-01-01

    This study evaluates the ability of users to self-install a smart home in a box (SHiB) intended for use by a senior population. SHiB is a ubiquitous system, developed by the Washington State University Center for Advanced Studies in Adaptive Systems (CASAS). Participants in this study are from the greater Palouse region of Washington State; the 13 participants have an average age of 69.23 years. The SHiB package, which included several different types of components to collect and transmit sensor data, was given to participants to self-install. After installation of the SHiB, the participants were visited by researchers for a check of the installation. The researchers evaluated how well the sensors were installed and asked the resident questions about the installation process to help improve the SHiB design. The results indicate strengths and weaknesses of the SHiB design: indoor motion-tracking sensors were installed with a high success rate, while low success rates were found for the door sensors and for setting up the Internet server. PMID:28936390

  10. Cedar Project---Original goals and progress to date

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cybenko, G.; Kuck, D.; Padua, D.

    1990-11-28

    This work encompasses a broad attack on high speed parallel processing. Hardware, software, applications development, and performance evaluation and visualization as well as research topics are proposed. Our goal is to develop practical parallel processing for the 1990s.

  11. Fear Control and Danger Control: A Test of the Extended Parallel Process Model (EPPM).

    ERIC Educational Resources Information Center

    Witte, Kim

    1994-01-01

    Explores cognitive and emotional mechanisms underlying success and failure of fear appeals in context of AIDS prevention. Offers general support for Extended Parallel Process Model. Suggests that cognitions lead to fear appeal success (attitude, intention, or behavior changes) via danger control processes, whereas the emotion fear leads to fear…

  12. Processing communications events in parallel active messaging interface by awakening thread from wait state

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-22

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
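    The wait-and-awaken pattern in the claim above can be sketched with an ordinary condition variable: the advance function sleeps when no events are pending for its context and is awakened when one arrives. The class and method names below are illustrative only, not the patented PAMI implementation.

```python
# Minimal wait/awaken sketch: an "advance" thread blocks on a condition
# variable until an event is posted for its context, then processes it.
import threading

class Context:
    def __init__(self):
        self.events = []
        self.cond = threading.Condition()
        self.processed = []

    def post_event(self, ev):
        # Called by another thread when a communications event occurs.
        with self.cond:
            self.events.append(ev)
            self.cond.notify()          # awaken the waiting advance thread

    def advance(self):
        with self.cond:
            while not self.events:      # no actionable events pending:
                self.cond.wait()        # place thread into a wait state
            ev = self.events.pop(0)
        self.processed.append(ev)       # process the event outside the lock

ctx = Context()
worker = threading.Thread(target=ctx.advance)
worker.start()                          # advance finds no events and waits
ctx.post_event("recv-done")             # subsequent event awakens it
worker.join()
```

    The `while not self.events` loop guards against spurious wakeups, which is the standard discipline when sleeping on a condition variable.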

  13. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.

  14. JSD: Parallel Job Accounting on the IBM SP2

    NASA Technical Reports Server (NTRS)

    Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

    1995-01-01

    The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.

  15. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.
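    The core decoupling step, mapping a set of checkpoint files to named objects that a cloud object store can hold, can be sketched as follows. The key scheme and function name are assumptions for illustration, not PLFS's actual layout.

```python
# Toy file-to-object conversion: each checkpoint file path becomes an
# object key under a common prefix, with the file's bytes as the value.
def files_to_objects(files, prefix="checkpoint"):
    # files: {path: bytes} from the parallel computing system
    # returns: {object_key: bytes} ready for a cloud object store
    return {f"{prefix}/{path.strip('/')}": data
            for path, data in files.items()}

objs = files_to_objects({"/ckpt/rank0": b"a", "/ckpt/rank1": b"b"})
```

    In the patented scheme this conversion is done by log-structured file system middleware (optionally on a burst buffer node) rather than by a flat mapping like this, but the input/output shape is the same: many per-process files in, storable objects out.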

  16. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively low-speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).
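    The sub-sampling step that divides the broadband stream into parallel narrower-band streams can be illustrated with a simple polyphase split: every S-th sample, at S different offsets. This toy version deliberately omits the filtering and timing-correction stages described above.

```python
# Polyphase split of a sample stream into S parallel sub-streams, each
# running at 1/S of the input rate, plus the inverse interleave.
def split_streams(samples, n_streams):
    # Branch k receives samples k, k+S, k+2S, ...
    return [samples[k::n_streams] for k in range(n_streams)]

def merge_streams(streams):
    # Interleave the parallel branches back into one stream.
    out = []
    for i in range(max(len(s) for s in streams)):
        for s in streams:
            if i < len(s):
                out.append(s[i])
    return out

x = list(range(12))            # stand-in for broadband samples
branches = split_streams(x, 4) # four parallel quarter-rate streams
```

    Choosing the number of branches (the rate-reduction parameter) is exactly the trade-off the abstract describes: more branches mean slower, CMOS-friendly per-branch rates at the cost of more parallel hardware.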

  17. Developing software to use parallel processing effectively. Final report, June-December 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Center, J.

    1988-10-01

    This report describes the difficulties involved in writing efficient parallel programs and describes the hardware and software support currently available for generating software that utilizes parallel processing effectively. Historically, the processing rate of single-processor computers has increased by one order of magnitude every five years. However, this pace is slowing since electronic circuitry is coming up against physical barriers. Unfortunately, the complexity of engineering and research problems continues to require ever more processing power (far in excess of the maximum estimated 3 Gflops achievable by single-processor computers). For this reason, parallel-processing architectures are receiving considerable interest, since they offer high performance more cheaply than a single-processor supercomputer, such as the Cray.

  18. High-Performance Data Analysis Tools for Sun-Earth Connection Missions

    NASA Technical Reports Server (NTRS)

    Messmer, Peter

    2011-01-01

    The data analysis tool of choice for many Sun-Earth Connection missions is the Interactive Data Language (IDL) by ITT VIS. The increasing amount of data produced by these missions and the increasing complexity of image processing algorithms requires access to higher computing power. Parallel computing is a cost-effective way to increase the speed of computation, but algorithms oftentimes have to be modified to take advantage of parallel systems. Enhancing IDL to work on clusters gives scientists access to increased performance in a familiar programming environment. The goal of this project was to enable IDL applications to benefit from both computing clusters as well as graphics processing units (GPUs) for accelerating data analysis tasks. The tool suite developed in this project enables scientists now to solve demanding data analysis problems in IDL that previously required specialized software, and it allows them to be solved orders of magnitude faster than on conventional PCs. The tool suite consists of three components: (1) TaskDL, a software tool that simplifies the creation and management of task farms, collections of tasks that can be processed independently and require only small amounts of data communication; (2) mpiDL, a tool that allows IDL developers to use the Message Passing Interface (MPI) inside IDL for problems that require large amounts of data to be exchanged among multiple processors; and (3) GPULib, a tool that simplifies the use of GPUs as mathematical coprocessors from within IDL. mpiDL is unique in its support for the full MPI standard and its support of a broad range of MPI implementations. GPULib is unique in enabling users to take advantage of an inexpensive piece of hardware, possibly already installed in their computer, and achieve orders of magnitude faster execution time for numerically complex algorithms. TaskDL enables the simple setup and management of task farms on compute clusters. 
The products developed in this project have the potential to interact, so one can build a cluster of PCs, each equipped with a GPU, and use mpiDL to communicate between the nodes and GPULib to accelerate the computations on each node.
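    The task-farm pattern behind TaskDL, independent tasks with only small amounts of data communication handed out to a pool of workers, can be sketched generically as below. This is an illustration of the pattern, not TaskDL's or mpiDL's actual API.

```python
# Generic task farm: apply an independent task to each input using a
# pool of workers; results come back in input order.
from concurrent.futures import ThreadPoolExecutor

def task_farm(task, inputs, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(task, inputs))

# Example: summing independent "data frames" in parallel.
results = task_farm(lambda frame: sum(frame), [[1, 2], [3, 4], [5, 6]])
```

    The same division of labor holds at every scale in the tool suite: task farms when tasks are independent, message passing (mpiDL) when they must exchange large amounts of data, and GPU offload (GPULib) when a single task is numerically heavy.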

  19. NEPTUNE Canada Regional Cabled Observatory: Transforming Ocean Science

    NASA Astrophysics Data System (ADS)

    Best, M.; Barnes, C.; Bornhold, B.; Johnson, F.; Phibbs, P.; Pirenne, B.

    2008-12-01

    NEPTUNE Canada is installing a regional cabled ocean observatory across the northern Juan de Fuca Plate in the northeastern Pacific. When installation of the first suite of instruments and connectivity equipment is completed in 2009, this system will provide the continuous power and bandwidth to collect integrated data on physical, chemical, geological, and biological gradients at temporal resolutions relevant to the dynamics of the earth-ocean system. The building of this facility integrates hardware, software, and people networks. Hardware progress to date includes: installation of the 800 km powered fiber-optic backbone in the Fall of 2007; development of Nodes and Junction Boxes that are currently being manufactured; acquisition/development and testing of Instruments; development of mobile instrument platforms such as a) a Vertical Profiler which has completed FAT and will be delivered in the Fall of 2008 and b) a Crawler (University of Bremen) field tested in June 2008 for investigation of exposed hydrate deposits. An integrated test platform is being deployed on the operational VENUS observatory in September 2008, which includes a module developed by Ifremer. In parallel, software and hardware systems are built to acquire, archive, and deliver the continuous real-time data - already in operation for VENUS. A web environment to combine this data access with analysis and visualization, collaborative tools, interoperability, and instrument control is under construction. Finally, a network of scientists and technicians is contributing to the process in every phase. Initial experiments were planned through a series of workshops and international proposal competitions. At inshore Folger Passage, Barkley Sound, understanding controls on biological productivity will help evaluate the effects that marine processes have on fish and marine mammals. 
Experiments around Barkley Canyon will allow quantification of changes in biological and chemical activity associated with nutrient and cross-shelf sediment transport around the shelf/slope break and through the canyon to the deep sea. There and north along the mid-continental slope, exposed and shallowly buried gas hydrates allow monitoring of changes in their distribution, structure, and venting, particularly related to earthquakes, slope failures and regional plate motions. Circulation obviation retrofit kits (CORKs) at mid-plate ODP 1026-7 will monitor in real time changes in crustal temperature and pressure, particularly as they relate to events such as earthquakes, hydrothermal convection or regional plate strain. At Endeavour Ridge, complex interactions among volcanic, tectonic, hydrothermal and biological processes will be quantified at the western edge of the Juan de Fuca plate. Across the network, high resolution seismic information will elucidate tectonic processes such as earthquakes, and a tsunami system will allow determination of open ocean tsunami amplitude, propagation direction, and speed. The infrastructure has further capacity to allow experiments to expand from this initial suite. Further information and opportunities can be found at http://www.neptunecanada.ca. NEPTUNE Canada will transform our understanding of biological, chemical, physical, and geological processes across an entire tectonic plate from the shelf to the deep sea (17-2700m). Real-time continuous monitoring and archiving allows scientists to capture the temporal nature, characteristics, and linkages of these natural processes in a way never before possible.

  20. Regional-scale calculation of the LS factor using parallel processing

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementations of algorithms for computing the LS factor are becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the algorithms' characteristics, including a decomposition method for maintaining the integrity of the results, an optimized workflow that reduces the time taken to export unnecessary intermediate data, and a buffer-communication-computation strategy for improving communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
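    The local/global distinction matters because a local algorithm such as slope needs only a fixed neighborhood: each strip can be computed independently once it holds one buffer row from each neighbor, whereas flow accumulation has long-range dependence and needs coordination. A 1-D illustrative sketch of the local case follows; the elevation profile and function names are hypothetical.

```python
# Local algorithm on decomposed data: central-difference gradients over
# the interior cells of a strip padded with neighbor (buffer) cells.
def local_gradient(strip_with_halo):
    return [(strip_with_halo[i + 1] - strip_with_halo[i - 1]) / 2.0
            for i in range(1, len(strip_with_halo) - 1)]

elev = [0.0, 1.0, 2.0, 4.0, 7.0, 11.0]   # toy 1-D elevation profile
# Two strips, each padded with its neighbor's boundary cell, as a
# buffer exchange would provide:
strip_a = elev[0:4]   # interior cells 1..2, halo cells 0 and 3
strip_b = elev[2:6]   # interior cells 3..4, halo cells 2 and 5
grads = local_gradient(strip_a) + local_gradient(strip_b)
```

    Because the halo cells supply exactly the out-of-strip values the stencil needs, the decomposed result matches a serial computation over the full profile, which is the "integrity of the results" the decomposition method must preserve.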

  1. Compact holographic optical neural network system for real-time pattern recognition

    NASA Astrophysics Data System (ADS)

    Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.

    1996-08-01

    One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-, shift-, and scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.

  2. Parallel VLSI architecture emulation and the organization of APSA/MPP

    NASA Technical Reports Server (NTRS)

    Odonnell, John T.

    1987-01-01

    The Applicative Programming System Architecture (APSA) combines an applicative language interpreter with a novel parallel computer architecture that is well suited for Very Large Scale Integration (VLSI) implementation. The Massively Parallel Processor (MPP) can simulate VLSI circuits by allocating one processing element in its square array to an area on a square VLSI chip. As long as there are not too many long data paths, the MPP can simulate a VLSI clock cycle very rapidly. The APSA circuit contains a binary tree with a few long paths and many short ones. A skewed H-tree layout allows every processing element to simulate a leaf cell and up to four tree nodes, with no loss in parallelism. Emulation of a key APSA algorithm on the MPP resulted in performance 16,000 times faster than a VAX. This speed will make it possible for the APSA language interpreter to run fast enough to support research in parallel list processing algorithms.

  3. Metrology: Measurement Assurance Program Guidelines

    NASA Technical Reports Server (NTRS)

    Eicke, W. G.; Riley, J. P.; Riley, K. J.

    1995-01-01

    The 5300.4 series of NASA Handbooks for Reliability and Quality Assurance Programs have provisions for the establishment and utilization of a documented metrology system to control measurement processes and to provide objective evidence of quality conformance. The intent of these provisions is to assure consistency and conformance to specifications and tolerances of equipment, systems, materials, and processes procured and/or used by NASA, its international partners, contractors, subcontractors, and suppliers. This Measurement Assurance Program (MAP) guideline has the specific objectives to: (1) ensure the quality of measurements made within NASA programs; (2) establish realistic measurement process uncertainties; (3) maintain continuous control over the measurement processes; and (4) ensure measurement compatibility among NASA facilities. The publication addresses MAP methods as applied within and among NASA installations and serves as a guide to: control measurement processes at the local level (one facility); conduct measurement assurance programs in which a number of field installations are joint participants; and conduct measurement integrity (round robin) experiments in which a number of field installations participate to assess the overall quality of particular measurement processes at a point in time.

  4. Noise radiation directivity from a wind-tunnel inlet with inlet vanes and duct wall linings

    NASA Technical Reports Server (NTRS)

    Soderman, P. T.; Phillips, J. D.

    1986-01-01

    The acoustic radiation patterns from a 1/15th-scale model of the Ames 80- by 120-Ft Wind Tunnel test section and inlet have been measured with a noise source installed in the test section. Data were acquired without airflow in the duct. Sound-absorbent inlet vanes oriented parallel to each other, or splayed with a variable incidence relative to the duct long axis, were evaluated along with duct wall linings. Results show that splayed vanes tend to spread the sound to greater angles than those measured with the open inlet. Parallel vanes narrowed the high-frequency radiation pattern. Duct wall linings had a strong effect on acoustic directivity by attenuating wall reflections. Vane insertion loss was measured. Directivity results are compared with existing data from square ducts. Two prediction methods for duct radiation directivity are described: one is an empirical method based on the test data, and the other is an analytical method based on ray acoustics.

  5. Dynamic Simulation on the Installation Process of HGIS in Transformer Substation

    NASA Astrophysics Data System (ADS)

    Lin, Tao; Li, Shaohua; Wang, Hu; Che, Deyong; Qi, Guangcai; Yao, Jianfeng; Zhang, Qingzhe

    The technological requirements of Hybrid Gas Insulated Switchgear (HGIS) installation in a transformer substation are high, and the number of quality control points is large. Most engineers and technicians in construction enterprises are not familiar with HGIS equipment. To solve these problems, the HGIS equipment was modeled on the computer with SolidWorks software. The installation process for the civil foundation and the closed-type equipment was optimized dynamically with virtual assembly technology. Instructions and application notes were composited into an animation file, and the modeling and simulation techniques were organized and classified as well. The resulting visual dynamic simulation can guide the actual HGIS construction process to a certain degree and can promote reasonable construction planning and management. It can also improve the method and quality of staff training for electric power construction enterprises.

  6. Synthesis and Late-Stage Functionalization of Complex Molecules through C–H Fluorination and Nucleophilic Aromatic Substitution

    PubMed Central

    2015-01-01

    We report the late-stage functionalization of multisubstituted pyridines and diazines at the position α to nitrogen. By this process, a series of functional groups and substituents bound to the ring through nitrogen, oxygen, sulfur, or carbon are installed. This functionalization is accomplished by a combination of fluorination and nucleophilic aromatic substitution of the installed fluoride. A diverse array of functionalities can be installed because of the mild reaction conditions revealed for nucleophilic aromatic substitutions (SNAr) of the 2-fluoroheteroarenes. An evaluation of the rates for substitution versus the rates for competitive processes provides a framework for planning this functionalization sequence. This process is illustrated by the modification of a series of medicinally important compounds, as well as the increase in efficiency of synthesis of several existing pharmaceuticals. PMID:24918484

  7. Parallel evolution of image processing tools for multispectral imagery

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.

    2000-11-01

    We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process via multiple processors (a workstation cluster) and develop a model for prediction of run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI data covering the recent Cerro Grande fire at Los Alamos, NM, USA.

  8. Application of integration algorithms in a parallel processing environment for the simulation of jet engines

    NASA Technical Reports Server (NTRS)

    Krosel, S. M.; Milner, E. J.

    1982-01-01

    The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented, and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
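    As a concrete illustration of the class of methods studied, the sketch below applies a generic predictor-corrector integrator (a second-order Adams-Bashforth predictor with a trapezoidal corrector) to a small linear state model; the report's specific algorithms and their partitioning across processors are not reproduced here, and all names are illustrative.

```python
import numpy as np

def pc_integrate(f, x0, h, n_steps):
    """Generic predictor-corrector integration of dx/dt = f(t, x).
    The first step is bootstrapped with forward Euler."""
    x = np.asarray(x0, dtype=float)
    t = 0.0
    f_prev = f(t, x)
    x = x + h * f_prev                                    # Euler bootstrap
    t = h
    for _ in range(n_steps - 1):
        f_curr = f(t, x)
        x_pred = x + h * (1.5 * f_curr - 0.5 * f_prev)    # AB2 predictor
        x = x + 0.5 * h * (f_curr + f(t + h, x_pred))     # trapezoidal corrector
        f_prev = f_curr
        t += h
    return x

# a stable 2x2 linear system stands in for a linear engine model
A = np.array([[-1.0, 0.2], [0.0, -0.5]])
x_end = pc_integrate(lambda t, x: A @ x, [1.0, 1.0], h=0.01, n_steps=100)
```

    In a parallel setting the predictor and corrector evaluations for different state partitions can be assigned to different processors, which is where step size, processor count, and interprocessor communication interact with accuracy as the report describes.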

  9. Managing internode data communications for an uninitialized process in a parallel computer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  10. Managing internode data communications for an uninitialized process in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.
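    The claimed buffer-management scheme can be paraphrased in a short sketch. The class and method names below are invented for illustration only; the patent describes hardware MU memory and an application agent, not Python objects.

```python
from collections import deque

class MUBufferManager:
    """Sketch of the claim: a fixed-capacity MU message buffer for an
    uninitialized process, drained by an application agent into an
    unbounded temporary buffer in main memory whenever it fills up."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.mu_buffer = deque()     # fixed-size messaging-unit memory
        self.temp_buffer = []        # temporary buffer in main memory
        self.initialized = False

    def receive(self, message):
        if not self.initialized and len(self.mu_buffer) >= self.capacity:
            # the agent detects a full MU buffer before process
            # initialization and moves its contents to the temporary buffer
            self.temp_buffer.extend(self.mu_buffer)
            self.mu_buffer.clear()
        self.mu_buffer.append(message)

    def initialize_process(self):
        """On initialization the process consumes the temporary-buffer
        messages first, preserving arrival order."""
        self.initialized = True
        return self.temp_buffer + list(self.mu_buffer)
```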

  11. Parallel processing in the honeybee olfactory pathway: structure, function, and evolution.

    PubMed

    Rössler, Wolfgang; Brill, Martin F

    2013-11-01

    Animals face highly complex and dynamic olfactory stimuli in their natural environments, which require fast and reliable olfactory processing. Parallel processing is a common principle of sensory systems supporting this task, for example in visual and auditory systems, but its role in olfaction remained unclear. Studies in the honeybee focused on a dual olfactory pathway. Two sets of projection neurons connect glomeruli in two antennal-lobe hemilobes via lateral and medial tracts in opposite sequence with the mushroom bodies and lateral horn. Comparative studies suggest that this dual-tract circuit represents a unique adaptation in Hymenoptera. Imaging studies indicate that glomeruli in both hemilobes receive redundant sensory input. Recent simultaneous multi-unit recordings from projection neurons of both tracts revealed widely overlapping response profiles strongly indicating parallel olfactory processing. Whereas lateral-tract neurons respond fast with broad (generalistic) profiles, medial-tract neurons are odorant specific and respond slower. In analogy to "what" and "where" subsystems in visual pathways, this suggests two parallel olfactory subsystems providing "what" (quality) and "when" (temporal) information. Temporal response properties may support across-tract coincidence coding in higher centers. Parallel olfactory processing likely enhances perception of complex odorant mixtures to decode the diverse and dynamic olfactory world of a social insect.

  12. The 2nd Symposium on the Frontiers of Massively Parallel Computations

    NASA Technical Reports Server (NTRS)

    Mills, Ronnie (Editor)

    1988-01-01

    Programming languages, computer graphics, neural networks, massively parallel computers, SIMD architecture, algorithms, digital terrain models, sort computation, simulation of charged particle transport on the massively parallel processor and image processing are among the topics discussed.

  13. Installation of warm mix asphalt projects in Virginia.

    DOT National Transportation Integrated Search

    2007-01-01

    Several processes have been developed to reduce the mixing and compaction temperatures of hot mix asphalt (HMA) without sacrificing the quality of the resulting pavement. The purpose of this study was to evaluate the installation of warm mix asphalt ...

  14. A new beam emission polarimetry diagnostic for measuring the magnetic field line angle at the plasma edge of ASDEX Upgrade.

    PubMed

    Viezzer, E; Dux, R; Dunne, M G

    2016-11-01

    A new edge beam emission polarimetry diagnostic dedicated to the measurement of the magnetic field line angle has been installed on the ASDEX Upgrade tokamak. The new diagnostic relies on the motional Stark effect and is based on the simultaneous measurement of the polarization direction of the linearly polarized π (parallel to the electric field) and σ (perpendicular to the electric field) lines of the Balmer line Dα. The technical properties of the system are described. The calibration procedures are discussed and first measurements are presented.

  15. A new beam emission polarimetry diagnostic for measuring the magnetic field line angle at the plasma edge of ASDEX Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viezzer, E. (Department of Atomic, Molecular, and Nuclear Physics, University of Seville, Avda. Reina Mercedes, 41012 Seville; E-mail: eleonora.viezzer@ipp.mpg.de, eviezzer@us.es); Dux, R.

    2016-11-15

    A new edge beam emission polarimetry diagnostic dedicated to the measurement of the magnetic field line angle has been installed on the ASDEX Upgrade tokamak. The new diagnostic relies on the motional Stark effect and is based on the simultaneous measurement of the polarization direction of the linearly polarized π (parallel to the electric field) and σ (perpendicular to the electric field) lines of the Balmer line Dα. The technical properties of the system are described. The calibration procedures are discussed and first measurements are presented.

  16. Analysis, Verification, and Application of Equations and Procedures for Design of Exhaust-pipe Shrouds

    NASA Technical Reports Server (NTRS)

    Ellerbrock, Herman H.; Wcislo, Chester R.; Dexter, Howard E.

    1947-01-01

    Investigations were made to develop a simplified method for designing exhaust-pipe shrouds to provide desired or maximum cooling of exhaust installations. Analysis of heat exchange and pressure drop of an adequate exhaust-pipe shroud system requires equations for predicting design temperatures and pressure drop on cooling air side of system. Present experiments derive such equations for usual straight annular exhaust-pipe shroud systems for both parallel flow and counter flow. Equations and methods presented are believed to be applicable under certain conditions to the design of shrouds for tail pipes of jet engines.

  17. Informatics for RNA Sequencing: A Web Resource for Analysis on the Cloud

    PubMed Central

    Griffith, Malachi; Walker, Jason R.; Spies, Nicholas C.; Ainscough, Benjamin J.; Griffith, Obi L.

    2015-01-01

    Massively parallel RNA sequencing (RNA-seq) has rapidly become the assay of choice for interrogating RNA transcript abundance and diversity. This article provides a detailed introduction to fundamental RNA-seq molecular biology and informatics concepts. We make available open-access RNA-seq tutorials that cover cloud computing, tool installation, relevant file formats, reference genomes, transcriptome annotations, quality-control strategies, expression, differential expression, and alternative splicing analysis methods. These tutorials and additional training resources are accompanied by complete analysis pipelines and test datasets made available without encumbrance at www.rnaseq.wiki. PMID:26248053

  18. Electricity Generation Baseline Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Logan, Jeffrey; Marcy, Cara; McCall, James

    This report was developed by a team of national laboratory analysts over the period October 2015 to May 2016 and is part of a series of studies that provide background material to inform development of the second installment of the Quadrennial Energy Review (QER 1.2). The report focuses specifically on U.S. power sector generation. The report limits itself to the generation sector and does not address in detail parallel issues in electricity end use, transmission and distribution, markets and policy design, and other important segments. The report lists 15 key findings about energy system needs of the future.

  19. Rectifier cabinet static breaker

    DOEpatents

    Costantino, Jr, Roger A.; Gliebe, Ronald J.

    1992-09-01

    A rectifier cabinet static breaker replaces a blocking diode pair with an SCR, with a power transistor installed in parallel with the latch contactor to commutate the SCR to the off state. The SCR serves as a static breaker with fast turnoff capability, providing an alternative way of achieving reactor scram in addition to performing the function of the replaced blocking diodes. The control circuitry for the rectifier cabinet static breaker includes on-line test capability and an LED indicator light to denote successful test completion. Current-limit circuitry provides high-speed protection in the event of overload.

  20. Aerial ultrasound source with a circular vibrating plate attached to a rigid circumferential wall

    NASA Astrophysics Data System (ADS)

    Kuratomi, Ryo; Asami, Takuya; Miura, Hikaru

    2018-07-01

    We fabricated a transversely vibrating circular plate with a rigid wall integrated at its circumference; installing a wall such as a reflective plate on the rigid wall allows a strong sound wave field to be formed in the area enclosed by the vibrating plate and the rigid wall. The design method for the circular vibrating plate attached to a rigid circumferential wall is investigated, and a method of forming a strong standing-wave field in an enclosed area constructed with a vibrating plate, a cylindrical reflective plate, and a parallel reflective plate is developed.

  1. Special purpose computer system with highly parallel pipelines for flow visualization using holography technology

    NASA Astrophysics Data System (ADS)

    Masuda, Nobuyuki; Sugie, Takashige; Ito, Tomoyoshi; Tanaka, Shinjiro; Hamada, Yu; Satake, Shin-ichi; Kunugi, Tomoaki; Sato, Kazuho

    2010-12-01

    We have designed a PC cluster system with special-purpose computer boards for visualization of fluid flow using digital holographic particle tracking velocimetry (DHPTV). Each board carries a Field Programmable Gate Array (FPGA) chip in which a pipeline is installed for calculating the intensity of an object from a hologram by fast Fourier transform (FFT). This cluster system can create 1024 reconstructed images from a 1024×1024-grid hologram in 0.77 s. It is expected that this system will contribute to the analysis of fluid flow using DHPTV.
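    The numerical core of such a pipeline, reconstructing a hologram by Fourier methods, can be sketched in software. The snippet below uses the angular-spectrum method in NumPy purely as a stand-in for the FPGA pipeline; the parameter names and the specific propagation formula are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def angular_spectrum(hologram, wavelength, dx, z):
    """Propagate a recorded hologram a distance z by the angular-spectrum
    method: FFT, multiply by the free-space transfer function, inverse FFT."""
    n, m = hologram.shape
    fx = np.fft.fftfreq(m, d=dx)               # spatial frequencies (x)
    fy = np.fft.fftfreq(n, d=dx)               # spatial frequencies (y)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    prop = arg > 0                             # propagating frequencies only
    kz = 2.0 * np.pi * np.sqrt(np.where(prop, arg, 0.0))
    H = np.where(prop, np.exp(1j * kz * z), 0.0)   # evanescent terms removed
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```

    Repeating this propagation for many depth planes z yields the stack of reconstructed images from which particle positions are extracted in DHPTV.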

  2. Electron temperature and heat load measurements in the COMPASS divertor using the new system of probes

    NASA Astrophysics Data System (ADS)

    Adamek, J.; Seidl, J.; Horacek, J.; Komm, M.; Eich, T.; Panek, R.; Cavalier, J.; Devitre, A.; Peterka, M.; Vondracek, P.; Stöckel, J.; Sestak, D.; Grover, O.; Bilkova, P.; Böhm, P.; Varju, J.; Havranek, A.; Weinzettl, V.; Lovell, J.; Dimitrova, M.; Mitosinkova, K.; Dejarnac, R.; Hron, M.; The COMPASS Team; The EUROfusion MST1 Team

    2017-11-01

    A new system of probes was recently installed in the divertor of the COMPASS tokamak in order to investigate the ELM energy density with high spatial and temporal resolution. The new system consists of two arrays of rooftop-shaped Langmuir probes (LPs) used to measure the floating potential or the ion saturation current density and one array of ball-pen probes (BPPs) used to measure the plasma potential with a spatial resolution of ~3.5 mm. The combination of floating BPPs and LPs yields the electron temperature with microsecond temporal resolution. We report on the design of the new divertor probe arrays and first results of electron temperature profile measurements in ELMy H-mode and L-mode. We also present comparative measurements of the parallel heat flux using the new probe arrays and fast infrared thermography (IR) data during L-mode, with excellent agreement between both techniques using a heat power transmission coefficient γ = 7. The ELM energy density ε∥ was measured during a set of NBI-assisted ELMy H-mode discharges. The peak values of ε∥ were compared with those predicted by the model and with experimental data from JET, AUG and MAST, with good agreement.

  3. Novel approach for image skeleton and distance transformation parallel algorithms

    NASA Astrophysics Data System (ADS)

    Qing, Kent P.; Means, Robert W.

    1994-05-01

    Image Understanding is more important in medical imaging than ever, particularly where real-time automatic inspection, screening and classification systems are installed. Skeleton and distance transformations are among the common operations that extract useful information from binary images and aid in Image Understanding. The distance transformation describes the objects in an image by labeling every pixel in each object with the distance to its nearest boundary. The skeleton algorithm starts from the distance transformation and finds the set of pixels that have a locally maximum label. The distance algorithm has to scan the entire image several times, depending on the object width. For each pixel, the algorithm must access the neighboring pixels and find the maximum distance from the nearest boundary. It is a computationally and memory-access intensive procedure. In this paper, we propose a novel parallel approach to the distance transform and skeleton algorithms using the latest VLSI high-speed convolutional chips such as HNC's ViP. The algorithm speed is dependent on the object's width and takes (k + [(k-1)/3]) * 7 milliseconds for a 512 X 512 image, with k being the maximum distance of the largest object. All objects in the image will be skeletonized at the same time in parallel.
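    The serial baseline that such a parallel approach accelerates can be sketched as the classical two-pass distance transform followed by a local-maximum test. The sketch uses a city-block metric and invented function names for illustration; the VLSI convolutional implementation in the paper differs.

```python
import numpy as np

def distance_transform(img):
    """Two-pass city-block (L1) distance transform of a binary image:
    each object pixel receives the distance to the nearest background pixel
    (the image border itself is not treated as boundary here)."""
    d = np.where(img > 0, img.size, 0).astype(int)   # "infinity" on objects
    n, m = img.shape
    for i in range(n):                                # forward pass
        for j in range(m):
            if d[i, j]:
                if i > 0: d[i, j] = min(d[i, j], d[i - 1, j] + 1)
                if j > 0: d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(n - 1, -1, -1):                    # backward pass
        for j in range(m - 1, -1, -1):
            if i < n - 1: d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < m - 1: d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

def skeleton(d):
    """Object pixels whose distance label is locally maximal over the
    4-neighbourhood, per the paper's definition of the skeleton."""
    p = np.pad(d, 1)
    centre = p[1:-1, 1:-1]
    neighbours = np.stack([p[:-2, 1:-1], p[2:, 1:-1],
                           p[1:-1, :-2], p[1:-1, 2:]])
    return (centre > 0) & (centre >= neighbours.max(axis=0))
```

    The parallel version replaces the data-dependent raster scans with a fixed number of neighbourhood (convolution-like) updates, which is what maps onto the convolutional chip.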

  4. Big Data GPU-Driven Parallel Processing Spatial and Spatio-Temporal Clustering Algorithms

    NASA Astrophysics Data System (ADS)

    Konstantaras, Antonios; Skounakis, Emmanouil; Kilty, James-Alexander; Frantzeskakis, Theofanis; Maravelakis, Emmanuel

    2016-04-01

    Advances in graphics processing units' technology towards encompassing parallel architectures [1], comprised of thousands of cores and multiples of parallel threads, provide the foundation in terms of hardware for the rapid processing of various parallel applications regarding seismic big data analysis. Seismic data are normally stored as collections of vectors in massive matrices, growing rapidly in size as wider areas are covered, denser recording networks are being established and decades of data are being compiled together [2]. Yet, many processes regarding seismic data analysis are performed on each seismic event independently or as distinct tiles [3] of specific grouped seismic events within a much larger data set. Such processes, independent of one another, can be performed in parallel, narrowing down processing times drastically [1,3]. This research work presents the development and implementation of three parallel processing algorithms using Cuda C [4] for the investigation of potentially distinct seismic regions [5,6] present in the vicinity of the southern Hellenic seismic arc. The algorithms, programmed and executed in parallel for comparison, are: fuzzy k-means clustering with expert knowledge [7] in assigning the overall number of clusters; density-based clustering [8]; and a self-developed spatio-temporal clustering algorithm encompassing expert [9] and empirical knowledge [10] for the specific area under investigation. Indexing terms: GPU parallel programming, Cuda C, heterogeneous processing, distinct seismic regions, parallel clustering algorithms, spatio-temporal clustering. References: [1] Kirk, D. and Hwu, W.: 'Programming massively parallel processors - A hands-on approach', 2nd Edition, Morgan Kaufman Publisher, 2013 [2] Konstantaras, A., Valianatos, F., Varley, M.R. and Makris, J.P.: 'Soft-Computing Modelling of Seismicity in the Southern Hellenic Arc', Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008 [3] Papadakis, S. and Diamantaras, K.: 'Programming and architecture of parallel processing systems', 1st Edition, Eds. Kleidarithmos, 2011 [4] NVIDIA: 'NVIDIA CUDA C Programming Guide', version 5.0, NVIDIA (reference book) [5] Konstantaras, A.: 'Classification of Distinct Seismic Regions and Regional Temporal Modelling of Seismicity in the Vicinity of the Hellenic Seismic Arc', IEEE Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6 (4), pp. 1857-1863, 2013 [6] Konstantaras, A., Varley, M.R., Valianatos, F., Collins, G. and Holifield, P.: 'Recognition of electric earthquake precursors using neuro-fuzzy models: methodology and simulation results', Proc. IASTED International Conference on Signal Processing Pattern Recognition and Applications (SPPRA 2002), Crete, Greece, pp. 303-308, 2002 [7] Konstantaras, A., Katsifarakis, E., Maravelakis, E., Skounakis, E., Kokkinos, E. and Karapidakis, E.: 'Intelligent Spatial-Clustering of Seismicity in the Vicinity of the Hellenic Seismic Arc', Earth Science Research, vol. 1 (2), pp. 1-10, 2012 [8] Georgoulas, G., Konstantaras, A., Katsifarakis, E., Stylios, C.D., Maravelakis, E. and Vachtsevanos, G.: '"Seismic-Mass" Density-based Algorithm for Spatio-Temporal Clustering', Expert Systems with Applications, vol. 40 (10), pp. 4183-4189, 2013 [9] Konstantaras, A. J.: 'Expert knowledge-based algorithm for the dynamic discrimination of interactive natural clusters', Earth Science Informatics, 2015 (In Press, see: www.scopus.com) [10] Drakatos, G. and Latoussakis, J.: 'A catalog of aftershock sequences in Greece (1971-1997): Their spatial and temporal characteristics', Journal of Seismology, vol. 5, pp. 137-145, 2001

  5. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated, and the individual time points can be processed independently, which lends itself to trivial parallelization on a high-performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585

  6. An automated workflow for parallel processing of large multiview SPIM recordings.

    PubMed

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-04-01

    Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated, and the individual time points can be processed independently, which lends itself to trivial parallelization on a high-performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
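    The "trivially parallel" structure described above, independent jobs per time point, can be sketched as follows. A thread pool and a dummy job stand in for snakemake dispatching real registration and fusion tasks to cluster nodes; the function name is invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def process_timepoint(t):
    """Stand-in for one per-time-point job (registration, fusion, ...);
    in the real pipeline snakemake runs each such job as a separate
    cluster task once its upstream dependencies are satisfied."""
    return t, t * t   # dummy result keyed by time point

timepoints = list(range(8))
# time points have no dependencies on each other, so they can all run at once
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(process_timepoint, timepoints))
```

    The key design point is that only the *consecutive* steps within one time point carry dependencies; across time points the work is embarrassingly parallel.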

  7. Multiprocessor speed-up, Amdahl's Law, and the Activity Set Model of parallel program behavior

    NASA Technical Reports Server (NTRS)

    Gelenbe, Erol

    1988-01-01

    An important issue in the effective use of parallel processing is the estimation of the speed-up one may expect as a function of the number of processors used. Amdahl's Law has traditionally provided a guideline to this issue, although it appears excessively pessimistic in the light of recent experimental results. In this note, Amdahl's Law is amended by giving a greater importance to the capacity of a program to make effective use of parallel processing, but also recognizing the fact that imbalance of the workload of each processor is bound to occur. An activity set model of parallel program behavior is then introduced along with the corresponding parallelism index of a program, leading to upper and lower bounds to the speed-up.
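    The classical bound and an amended bound with a workload-imbalance factor can be written down directly. The imbalance parametrization below is an illustrative reading of the note, not its exact activity-set model.

```python
def amdahl(p, s):
    """Classical Amdahl bound for p processors and serial fraction s:
    S(p) = 1 / (s + (1 - s) / p)."""
    return 1.0 / (s + (1.0 - s) / p)

def bounded_speedup(p, s, beta):
    """Amended bound: the parallel phase finishes at the pace of the most
    loaded processor, whose share exceeds the ideal 1/p by an imbalance
    factor beta >= 1 (beta = 1 recovers Amdahl's Law)."""
    return 1.0 / (s + (1.0 - s) * beta / p)
```

    As p grows, both expressions saturate at 1/s, which is the pessimistic ceiling the note argues should be weighed against a program's actual capacity for parallel work.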

  8. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters.

    PubMed

    Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr

    2010-10-28

    Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4 but they require access to dedicated Linux computer clusters. Also no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina in bootable non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina jobs distribution and result analysis. When the virtual screening project finishes, an open-office spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All results files can automatically be recorded on an USB-flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypass the original operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users, with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can just be restarted to their original operating system. The originality of MOLA lies on the fact that, any platform-independent computer available can he added to the cluster, without ever using the computer hard-disk drive and without interfering with the installed operating system. 
With a cluster of 10 processors, and a potential maximum speed-up of 10×, the parallel algorithm of MOLA performed with a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.
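
    The quoted figures imply a parallel efficiency of roughly 86%. A minimal sketch of that arithmetic (the function names are ours, not MOLA's):

```python
def speedup(t_serial, t_parallel):
    """Parallel speed-up: ratio of serial to parallel wall-clock time."""
    return t_serial / t_parallel

def efficiency(s, n_procs):
    """Fraction of the ideal n-fold speed-up actually achieved."""
    return s / n_procs

# Figures reported for the 10-processor MOLA cluster:
for s in (8.64, 8.60):
    print(f"speed-up {s}x -> efficiency {efficiency(s, 10):.0%}")
```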

  9. Monitoring water cycle elements using GNSS geodetic receivers at the field research station Marquardt, Germany

    NASA Astrophysics Data System (ADS)

    Simeonov, Tzvetan; Vey, Sibylle; Alshawaf, Fadwa; Dick, Galina; Guerova, Guergana; Güntner, Andreas; Hohmann, Christian; Kunwar, Ajeet; Trost, Benjamin; Wickert, Jens

    2017-04-01

    Water storage variations in the atmosphere and in soils are among the most dynamic within the Earth's water cycle. The continuous measurement of water storage in these media with a high spatial and temporal resolution is a challenging task, not yet completely solved by various observation techniques. With the development of Global Navigation Satellite Systems (GNSS), a new approach for estimating water vapor in the atmosphere and, in parallel, soil moisture in the vicinity of GNSS ground stations was established in recent years, with several key advantages compared to traditional techniques. Regional and global GNSS networks are nowadays operationally used to provide Integrated Water Vapor (IWV) information with high temporal resolution above the individual stations. Corresponding data products are used to improve the day-to-day weather prediction of leading forecast centers. Selected stations from these networks can additionally be used to derive the soil moisture in the vicinity of the receivers. Such parallel measurement of IWV and soil moisture using a single measuring device provides a unique possibility to analyze water fluxes between the atmosphere and the land surface. We installed an advanced experimental GNSS setup for hydrology at the field research station of the Leibniz Institute for Agricultural Engineering and Bioeconomy in Marquardt, around 30 km west of Berlin, Germany. The setup includes several GNSS receivers, various Time Domain Reflectometry (TDR) sensors at different depths for soil moisture measurement, and a meteorological station. The setup was mainly installed to develop and improve GNSS-based techniques for soil moisture determination and to analyze GNSS IWV and soil moisture in parallel from a long-term perspective. We introduce initial results from more than two years of measurements. The comparison at station Marquardt shows good agreement (correlation 0.79) between the GNSS-derived soil moisture and the TDR measurements. 
A detailed study for several periods with different GNSS settings, vegetation and soil conditions in the vicinity of the station is presented with emphasis on the behavior of GNSS derived soil moisture, compared to TDR. Case studies of intense rainfall events and lasting dry periods show the interaction between the IWV and soil moisture.
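
    The reported agreement (correlation 0.79) is a Pearson correlation between the two soil-moisture time series; a minimal sketch of that computation on made-up values (the numbers below are illustrative, not Marquardt data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical daily volumetric soil-moisture values (m^3/m^3)
gnss = [0.21, 0.24, 0.30, 0.27, 0.19, 0.33]
tdr  = [0.20, 0.25, 0.29, 0.28, 0.21, 0.31]
print(f"r = {pearson(gnss, tdr):.2f}")
```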

  10. Research on moving object detection based on frog's eyes

    NASA Astrophysics Data System (ADS)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

    On the basis of the object-information processing mechanism of the frog's eye, this paper discusses a bionic detection technology suitable for object-information processing based on frog vision. First, a bionic detection theory imitating frog vision is established; it is a parallel processing mechanism that includes pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described that detects moving objects of a specific color and shape; experiments indicate that it can produce detection results even against a cluttered, interfering background. A moving-object detection electronic model imitating biological vision based on the frog's eye is established: in this system the analog video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. In the parallel processing stage, video information can be captured, processed, and displayed at the same time; information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can cover a larger visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm on this system indicate that it can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology imitating biological vision.
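
    A toy analogue of the motion-sensitivity stage, frame differencing with a threshold (this is our illustration, not the paper's FPGA/DSP pipeline):

```python
import numpy as np

def moving_mask(prev, curr, thresh=25):
    """Flag pixels whose brightness changed by more than `thresh`,
    a crude stand-in for motion sensitivity between two frames."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

# Two synthetic 8-bit frames: a bright 3x3 "object" moves one pixel right.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
prev[2:5, 2:5] = 200
curr[2:5, 3:6] = 200
mask = moving_mask(prev, curr)
print("changed pixels:", int(mask.sum()))
```

    Only the leading and trailing edges of the moving object register as change, which is why such differencing is often paired with an edge detector like Canny.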

  11. AZTEC. Parallel Iterative method Software for Solving Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, S.; Shadid, J.; Tuminaro, R.

    1995-07-01

    AZTEC is an iterative library that greatly simplifies the parallelization process when solving the linear system of equations Ax=b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. AZTEC is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems which require an efficiently utilized parallel processing system. A collection of data transformation tools is provided that allows easy creation of distributed sparse unstructured matrices for parallel solution.
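
    A minimal illustration of the iterative idea behind such libraries, using Jacobi iteration on a small system (our sketch; AZTEC itself provides Krylov solvers such as CG and GMRES for distributed sparse matrices):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Minimal Jacobi iteration for Ax = b; converges for
    diagonally dominant A."""
    D = np.diag(A)
    R = A - np.diagflat(D)           # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D      # update each unknown from the others
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
print("residual:", np.linalg.norm(A @ x - b))
```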

  12. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

    The goal of our project is to study the I/O characteristics of parallel applications used in Earth Science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2 and 3 of the typical RDC processing scenario including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.

  13. General view of the aft fuselage of the Orbiter Discovery ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    General view of the aft fuselage of the Orbiter Discovery looking forward showing Space Shuttle Main Engines (SSMEs) installed in positions one and three and an SSME in the process of being installed in position two. This photograph was taken in the Orbiter Processing Facility at the Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX

  14. PHOTOCITYTEX - A LIFE project on the air pollution treatment in European urban environments by means of photocatalytic textiles

    NASA Astrophysics Data System (ADS)

    Ródenas, Milagros; Fages, Eduardo; Fatarella, Enrico; Herrero, David; Castagnoli, Lidia; Borrás, Esther; Vera, Teresa; Gómez, Tatiana; Carreño, Javier; López, Ramón; Gimeno, Cristina; Catota, Marlon; Muñoz, Amalia

    2016-04-01

    In urban areas, air pollution from traffic is becoming a growing problem. In recent years the use of titanium dioxide (TiO2) based photocatalytic self-cleaning and de-polluting materials has been considered to remove these pollutants. TiO2 is now commercially available and used in construction materials or paints for environmental purposes. Further work, however, is still required to clarify the potential impacts from wider TiO2 use. Specific test conditions are required to provide objective and accurate knowledge. Under the LIFE PHOTOCITYTEX project, the effectiveness of using TiO2-based photocatalytic nanomaterials in building textiles as a way of improving the air quality in urban areas will be assessed. Moreover, information on secondary products formed during the tests will be obtained, yielding a better overall understanding of the whole process and its implications. For this purpose, a series of demonstrations are foreseen, comprising (1) lab tests and development of textile prototypes at lab scale, (2) larger-scale demonstration of the use of photocatalytic textiles in the depollution of urban environments employing the EUPHORE chambers to simulate a number of environmental conditions of various European cities, and (3) field demonstrations installing the photocatalytic textiles in two urban locations in Quart de Poblet, a tunnel and a school. A one-year extensive passive dosimetric campaign has already been carried out to characterize the selected urban sites before the installation of the photocatalytic textile prototypes, and a similar campaign after their installation is ongoing. Also, more comprehensive intensive active measurement campaigns have been conducted to account for winter and summer conditions. In parallel, lab tests have already been completed to determine optimal photocatalytic formulations on textiles, followed by experiments at EUPHORE. 
Information on the deployment of the campaigns is given together with laboratory conclusions and first verification on the photocatalytic textile effectiveness as observed in the field campaigns and at EUPHORE. A discussion on the impact of this depolluting solution on the air quality of urban environments is given.

  15. Efficient Parallel Levenberg-Marquardt Model Fitting towards Real-Time Automated Parametric Imaging Microscopy

    PubMed Central

    Zhu, Xiang; Zhang, Dianwen

    2013-01-01

    We present a fast, accurate and robust parallel Levenberg-Marquardt minimization optimizer, GPU-LMFit, which is implemented on a graphics processing unit for high-performance, scalable parallel model-fitting processing. GPU-LMFit can provide a dramatic speed-up in massive model fitting analyses to enable real-time automated pixel-wise parametric imaging microscopy. We demonstrate the performance of GPU-LMFit for applications in superresolution localization microscopy and fluorescence lifetime imaging microscopy. PMID:24130785
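
    For readers unfamiliar with the method, a compact CPU sketch of Levenberg-Marquardt fitting a single-exponential model (GPU-LMFit runs such fits per pixel in parallel on the GPU; this toy serial version and its model are ours):

```python
import numpy as np

def lm_fit(x, y, p0, n_iter=100):
    """Minimal Levenberg-Marquardt fit of y ~ a*exp(-b*x)."""
    a, b = p0
    lam = 1e-3                              # damping parameter
    for _ in range(n_iter):
        r = y - a * np.exp(-b * x)          # residuals
        # Jacobian of the model with respect to (a, b)
        J = np.column_stack([np.exp(-b * x), -a * x * np.exp(-b * x)])
        JTJ = J.T @ J
        step = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), J.T @ r)
        a_new, b_new = a + step[0], b + step[1]
        r_new = y - a_new * np.exp(-b_new * x)
        if r_new @ r_new < r @ r:           # accept step, relax damping
            a, b, lam = a_new, b_new, lam * 0.5
        else:                               # reject step, increase damping
            lam *= 2.0
    return a, b

x = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * x)                  # noise-free synthetic decay
a_fit, b_fit = lm_fit(x, y, p0=(1.0, 1.0))
print(a_fit, b_fit)
```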

  16. FPGA-Based Filterbank Implementation for Parallel Digital Signal Processing

    NASA Technical Reports Server (NTRS)

    Berner, Stephan; DeLeon, Phillip

    1999-01-01

    One approach to parallel digital signal processing decomposes a high bandwidth signal into multiple lower bandwidth (rate) signals by an analysis bank. After processing, the subband signals are recombined into a fullband output signal by a synthesis bank. This paper describes an implementation of the analysis and synthesis banks using (Field Programmable Gate Arrays) FPGAs.
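
    The analysis/synthesis idea can be illustrated with ideal brick-wall subband filters in the FFT domain (a numerical toy of ours, not the paper's FPGA filter design):

```python
import numpy as np

def analysis(x):
    """Split a real signal into low- and high-frequency subbands using
    ideal brick-wall filters applied to the FFT bins."""
    X = np.fft.fft(x)
    low = np.abs(np.fft.fftfreq(len(x))) < 0.25   # |f| below fs/4
    return np.fft.ifft(X * low).real, np.fft.ifft(X * ~low).real

def synthesis(lo, hi):
    """Recombine the subbands into the fullband output signal."""
    return lo + hi

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
lo, hi = analysis(x)
print("max reconstruction error:", np.max(np.abs(synthesis(lo, hi) - x)))
```

    Because the two frequency masks partition the spectrum, synthesis reconstructs the input to within floating-point error; practical filterbanks trade this ideal split for realizable filters and decimated subband rates.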

  17. Parallel Processing of the Target Language during Source Language Comprehension in Interpreting

    ERIC Educational Resources Information Center

    Dong, Yanping; Lin, Jiexuan

    2013-01-01

    Two experiments were conducted to test the hypothesis that the parallel processing of the target language (TL) during source language (SL) comprehension in interpreting may be influenced by two factors: (i) link strength from SL to TL, and (ii) the interpreter's cognitive resources supplement to TL processing during SL comprehension. The…

  18. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  19. A parallel implementation of an off-lattice individual-based model of multicellular populations

    NASA Astrophysics Data System (ADS)

    Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe

    2015-07-01

    As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
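
    The core of such a parallel algorithm, dividing the spatial domain between processes and identifying the halo cells near partition boundaries that must be communicated, can be sketched as follows (the strip decomposition and names are illustrative, not the paper's implementation):

```python
import numpy as np

def partition(cell_x, n_procs, halo=0.1, x_min=0.0, x_max=1.0):
    """Assign each cell to a strip of the 1-D domain and list the halo
    cells each process must receive from its neighbours."""
    edges = np.linspace(x_min, x_max, n_procs + 1)
    owned, halos = [], []
    for p in range(n_procs):
        lo, hi = edges[p], edges[p + 1]
        owned.append(np.where((cell_x >= lo) & (cell_x < hi))[0])
        in_halo = ((cell_x >= lo - halo) & (cell_x < lo)) | \
                  ((cell_x >= hi) & (cell_x < hi + halo))
        halos.append(np.where(in_halo)[0])
    return owned, halos

rng = np.random.default_rng(1)
xs = rng.uniform(0.0, 1.0, 1000)
owned, halos = partition(xs, n_procs=4)
print("cells per process:", [len(o) for o in owned])
```

    Dynamic load balancing, as tested in the paper, would amount to moving the strip edges so each process owns a similar number of cells.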

  20. The AIS-5000 parallel processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmitt, L.A.; Wilson, S.S.

    1988-05-01

    The AIS-5000 is a commercially available massively parallel processor which has been designed to operate in an industrial environment. It has fine-grained parallelism with up to 1024 processing elements arranged in a single-instruction multiple-data (SIMD) architecture. The processing elements are arranged in a one-dimensional chain that, for computer vision applications, can be as wide as the image itself. This architecture has superior cost/performance characteristics compared with two-dimensional mesh-connected systems. The design of the processing elements and their interconnections as well as the software used to program the system allow a wide variety of algorithms and applications to be implemented. In this paper, the overall architecture of the system is described. Various components of the system are discussed, including details of the processing elements, data I/O pathways and parallel memory organization. A virtual two-dimensional model for programming image-based algorithms for the system is presented. This model is supported by the AIS-5000 hardware and software and allows the system to be treated as a full-image-size, two-dimensional, mesh-connected parallel processor. Performance benchmarks are given for certain simple and complex functions.

  1. Progress in Unsteady Turbopump Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Chan, William; Kwak, Dochan; Williams, Robert

    2002-01-01

    This viewgraph presentation discusses unsteady flow simulations for a turbopump intended for a reusable launch vehicle (RLV). The simulation process makes use of computational grids and parallel processing. The architecture of the parallel computers used is discussed, as is the scripting of turbopump simulations.

  2. Parallel processing optimization strategy based on MapReduce model in cloud storage environment

    NASA Astrophysics Data System (ADS)

    Cui, Jianming; Liu, Jiayi; Li, Qiuyan

    2017-05-01

    Currently, a large number of documents in the cloud storage process are packaged only after all packets have been received. In this stored procedure, from the local transmitter to the server, packing and unpacking consume a lot of time, and the transmission efficiency is low as well. A new parallel processing algorithm is proposed to optimize the transmission mode: following the MapReduce model, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. Simulation experiments on the Hadoop cloud computing platform show that this algorithm can not only accelerate the file transfer rate, but also shorten the waiting time of the Reducer mechanism. It breaks through traditional sequential transmission constraints and reduces the storage coupling to improve the transmission efficiency.
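
    The Mapper/Reducer split can be illustrated with a small word-count example using Python's multiprocessing in place of MPI (our simplification of the scheme described, not the authors' code):

```python
from collections import Counter
from multiprocessing import Pool

def mapper(chunk):
    """Map phase: count words in one chunk independently."""
    return Counter(chunk.split())

def reducer(counters):
    """Reduce phase: merge the partial counts from all mappers."""
    total = Counter()
    for c in counters:
        total.update(c)
    return total

chunks = ["a b a", "b c", "a c c"]

if __name__ == "__main__":
    with Pool(2) as pool:
        partial = pool.map(mapper, chunks)   # mappers run in parallel
    print(reducer(partial))
```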

  3. 1060-nm VCSEL-based parallel-optical modules for optical interconnects

    NASA Astrophysics Data System (ADS)

    Nishimura, N.; Nagashima, K.; Kise, T.; Rizky, A. F.; Uemura, T.; Nekado, Y.; Ishikawa, Y.; Nasu, H.

    2015-03-01

    The capability of mounting a parallel-optical module onto a PCB through a solder-reflow process helps reduce the number of piece parts, simplifies the assembly process, and minimizes the footprint for both AOC and on-board applications. We introduce solder-reflow-capable parallel-optical modules employing 1060-nm InGaAs/GaAs VCSELs, which offer the advantages of wider modulation bandwidth, longer transmission distance, and higher reliability. We demonstrate 4-channel parallel optical link performance operated at a bit stream of 28 Gb/s 2^31-1 PRBS for each channel and transmitted through a 50-μm-core MMF beyond 500 m. We also introduce a new mounting technology for the parallel-optical module to maintain good coupling and robust electrical connection during the solder-reflow process between an optical module and a polymer-waveguide-embedded PCB.

  4. Potential climatic impacts and reliability of very large-scale wind farms

    NASA Astrophysics Data System (ADS)

    Wang, C.; Prinn, R. G.

    2010-02-01

    Meeting future world energy needs while addressing climate change requires large-scale deployment of low or zero greenhouse gas (GHG) emission technologies such as wind energy. The widespread availability of wind power has fueled substantial interest in this renewable energy source as one of the needed technologies. For very large-scale utilization of this resource, there are however potential environmental impacts, and also problems arising from its inherent intermittency, in addition to the present need to lower unit costs. To explore some of these issues, we use a three-dimensional climate model to simulate the potential climate effects associated with installation of wind-powered generators over vast areas of land or coastal ocean. Using wind turbines to meet 10% or more of global energy demand in 2100 could cause surface warming exceeding 1 °C over land installations. In contrast, surface cooling exceeding 1 °C is computed over ocean installations, but the validity of simulating the impacts of wind turbines by simply increasing the ocean surface drag needs further study. Significant warming or cooling remote from both the land and ocean installations, and alterations of the global distributions of rainfall and clouds also occur. These results are influenced by the competing effects of increases in roughness and decreases in wind speed on near-surface turbulent heat fluxes, the differing nature of land and ocean surface friction, and the dimensions of the installations parallel and perpendicular to the prevailing winds. These results are also dependent on the accuracy of the model used, and the realism of the methods applied to simulate wind turbines. Additional theory and new field observations will be required for their ultimate validation. 
Intermittency of wind power on daily, monthly and longer time scales as computed in these simulations and inferred from meteorological observations, poses a demand for one or more options to ensure reliability, including backup generation capacity, very long distance power transmission lines, and onsite energy storage, each with specific economic and/or technological challenges.

  5. Potential climatic impacts and reliability of very large-scale wind farms

    NASA Astrophysics Data System (ADS)

    Wang, C.; Prinn, R. G.

    2009-09-01

    Meeting future world energy needs while addressing climate change requires large-scale deployment of low or zero greenhouse gas (GHG) emission technologies such as wind energy. The widespread availability of wind power has fueled legitimate interest in this renewable energy source as one of the needed technologies. For very large-scale utilization of this resource, there are however potential environmental impacts, and also problems arising from its inherent intermittency, in addition to the present need to lower unit costs. To explore some of these issues, we use a three-dimensional climate model to simulate the potential climate effects associated with installation of wind-powered generators over vast areas of land or coastal ocean. Using wind turbines to meet 10% or more of global energy demand in 2100 could cause surface warming exceeding 1°C over land installations. In contrast, surface cooling exceeding 1°C is computed over ocean installations, but the validity of simulating the impacts of wind turbines by simply increasing the ocean surface drag needs further study. Significant warming or cooling remote from both the land and ocean installations, and alterations of the global distributions of rainfall and clouds also occur. These results are influenced by the competing effects of increases in roughness and decreases in wind speed on near-surface turbulent heat fluxes, the differing nature of land and ocean surface friction, and the dimensions of the installations parallel and perpendicular to the prevailing winds. These results are also dependent on the accuracy of the model used, and the realism of the methods applied to simulate wind turbines. Additional theory and new field observations will be required for their ultimate validation. 
Intermittency of wind power on daily, monthly and longer time scales as computed in these simulations and inferred from meteorological observations, poses a demand for one or more options to ensure reliability, including backup generation capacity, very long distance power transmission lines, and onsite energy storage, each with specific economic and/or technological challenges.

  6. CARRIER PREPARATION BUILDING MATERIALS HANDLING SYSTEM DESCRIPTION DOCUMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E.F. Loros

    2000-06-28

    The Carrier Preparation Building Materials Handling System receives rail and truck shipping casks from the Carrier/Cask Transport System, and inspects and prepares the shipping casks for return to the Carrier/Cask Transport System. Carrier preparation operations for carriers/casks received at the surface repository include performing a radiation survey of the carrier and cask, removing/retracting the personnel barrier, measuring the cask temperature, removing/retracting the impact limiters, removing the cask tie-downs (if any), and installing the cask trunnions (if any). The shipping operations for carriers/casks leaving the surface repository include removing the cask trunnions (if any), installing the cask tie-downs (if any), installing the impact limiters, performing a radiation survey of the cask, and installing the personnel barrier. There are four parallel carrier/cask preparation lines installed in the Carrier Preparation Building with two preparation bays in each line, each of which can accommodate carrier/cask shipping and receiving. The lines are operated concurrently to handle the waste shipping throughputs and to allow system maintenance operations. One remotely operated overhead bridge crane and one remotely operated manipulator are provided for each pair of carrier/cask preparation lines servicing four preparation bays. Remotely operated support equipment includes a manipulator and tooling and fixtures for removing and installing personnel barriers, impact limiters, cask trunnions, and cask tie-downs. Remote handling equipment is designed to facilitate maintenance, dose reduction, and replacement of interchangeable components where appropriate. Semi-automatic, manual, and backup control methods support normal, abnormal, and recovery operations. 
Laydown areas and equipment are included as required for transportation system components (e.g., personnel barriers and impact limiters), fixtures, and tooling to support abnormal and recovery operations. The Carrier Preparation Building Materials Handling System interfaces with the Cask/Carrier Transport System to move the carriers to and from the system. The Carrier Preparation Building System houses the equipment and provides the facility, utility, safety, communications, and auxiliary systems supporting operations and protecting personnel.

  7. [Parallelisms in the sound signals of domestic sheep and northern fur seals].

    PubMed

    Nikol'skiĭ, A A; Lisitsina, T Iu

    2011-01-01

    The parallelisms in the communicative behavior of domestic sheep and northern fur seals within a herd are accompanied by parallelisms in the parameters of a sound signal, the calling scream. This signal maintains ties between offspring and their mothers over long distances. The basis of the parallelisms is amplitude modulation at two levels: one a direct amplitude modulation of the carrier frequency, the other a modulation of the oscillation of the carrier frequency. Parallelisms in the signal's oscillatory process result in corresponding parallelisms in the structure of its frequency spectrum.

  8. An architecture for real-time vision processing

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong

    1994-01-01

    To study the feasibility of developing an architecture for real time vision processing, a task queue server and parallel algorithms for two vision operations were designed and implemented on an i860-based Mercury Computing System 860VS array processor. The proposed architecture treats each vision function as a task or set of tasks which may be recursively divided into subtasks and processed by multiple processors coordinated by a task queue server accessible by all processors. Each idle processor subsequently fetches a task and associated data from the task queue server for processing and posts the result to shared memory for later use. Load balancing can be carried out within the processing system without the requirement for a centralized controller. The author concludes that real time vision processing cannot be achieved without both sequential and parallel vision algorithms and a good parallel vision architecture.
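
    A minimal sketch of the task-queue-server pattern, idle workers fetching tasks and posting results to shared memory (illustrative only; the paper's system runs on an i860-based array processor, and the squaring "vision operation" below is a stand-in):

```python
import queue
import threading

def worker(tasks, results):
    """Idle worker loop: fetch a task from the shared queue, process it,
    and post the result to shared storage for later use."""
    while True:
        try:
            task_id, data = tasks.get_nowait()
        except queue.Empty:
            return                        # no tasks left: worker goes idle
        results[task_id] = data * data    # stand-in for a vision operation
        tasks.task_done()

tasks = queue.Queue()
for i in range(8):
    tasks.put((i, i))                     # enqueue 8 independent subtasks
results = {}
threads = [threading.Thread(target=worker, args=(tasks, results))
           for _ in range(4)]             # 4 "processors"
for t in threads: t.start()
for t in threads: t.join()
print(results)
```

    Load balancing falls out naturally: whichever worker finishes first simply fetches the next task, with no centralized controller assigning work.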

  9. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  10. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
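
    The end state of an allreduce, every participant holding the element-wise sum of all contributions, can be modelled with a short serial simulation (this models only the data outcome, not the patented logical-ring communication schedule):

```python
def ring_allreduce(contribs):
    """Simulated allreduce over a ring: each 'core' contributes a vector
    and every core ends up with the element-wise sum of all of them."""
    n = len(contribs)
    # Reduce phase: accumulate partial sums once around the ring.
    acc = list(contribs[0])
    for rank in range(1, n):
        acc = [a + c for a, c in zip(acc, contribs[rank])]
    # Broadcast phase: the completed result travels the ring again,
    # leaving a copy at every core.
    return [list(acc) for _ in range(n)]

contribs = [[1, 2], [3, 4], [5, 6]]   # one contribution vector per core
print(ring_allreduce(contribs))
```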

  11. Performance evaluation of buried pipe installation : LTRC research project capsule 08-6GT.

    DOT National Transportation Integrated Search

    2008-03-01

    The Louisiana Department of Transportation and Development (LADOTD) is in the process of revising the current specifications to obtain a more cost-efficient design and installation of buried pipes for highway infrastructure. It aims to de...

  12. Numerical modelling of processes that occur in the selective waste disassembly installation

    NASA Astrophysics Data System (ADS)

    Cherecheş, T.; Lixandru, P.; Dragnea, D.; Cherecheş, D. M.

    2017-08-01

    This paper is the result of attempts at a quantitative approach to some of the processes occurring in the installation for selective fragmentation with high-voltage pulses. A methodology has been formulated that customizes general methods for the problem of transient electric fields in mixed media. The electromagnetic processes inside the fragmentation installation, the initiation and formation of the discharge channels, and the thermodynamic and mechanical effects in the process vessel are complex, transient and very quick. One of the underlying principles of the fragmentation process is the differentiated reaction of materials in an electric field. Generally, three types of materials can be found together in the process vessel: dielectrics, metals, and electrolytes. The conductivity of dielectric materials is virtually zero. Metallic materials conduct very well through electronic conductivity. Electrolytes have a more modest conductivity, since they conduct through electrochemical processes. The electrical current, in this case, is the movement of ions whose sizes and masses differ from those of electrons. Here, the electric current includes displacements of ions and molecules, collisions and chemical reactions. Part of the electric field’s energy is absorbed by the electrolyte in the form of mechanical and chemical energy.

  13. 35 years of Ambient Noise: Can We Evidence Daily to Climatic Relative Velocity Changes?

    NASA Astrophysics Data System (ADS)

    Lecocq, T.; Pedersen, H.; Brenguier, F.; Stammler, K.

    2014-12-01

    The broadband Grafenberg array (Germany) was installed in 1976 and, thanks to visionary scientists and network maintainers, the continuously acquired data have been preserved to this day. Using state-of-the-art pre-processing and cross-correlation techniques, we are able to extract cross-correlation functions (CCF) between sensor pairs. It has been shown recently that, provided enough computation power is available, there is no need to define a reference CCF against which all days are compared. Instead, one can compare each day to every other day, computing the "all-doublet". The number of calculations becomes huge (N vs. ref = N calculations, N vs. N = N*N), but the inverted result is far more stable because of the N observations per day. This analysis was done with a parallelized version of MSNoise (http://msnoise.org), running on the VEGA cluster hosted at the Université Libre de Bruxelles (ULB, Belgium). Here, we present preliminary results of the analysis of two stations, GRA1 and GRA2, the first two stations installed in March 1976. The interferogram (the evolution of the CCF through time, see Figure) already shows interesting features in the shape of the ballistic waves, highly correlated with the seasons. A reasonably high correlation can still be seen outside the ballistic arrival, beyond ±5 s lag time. The lag times between 5 and 25 seconds are then used to compute dv/v using the all-doublet method. We expect to evidence daily to seasonal, or even longer-period, dv/v variations and/or noise-source position changes using this method. Once this is done for one sensor pair, the full data of the Grafenberg array will be used to enhance the resolution even further.
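
The all-doublet idea above, comparing each day's CCF to every other day's rather than to a single reference, can be sketched as follows. The synthetic daily CCFs and the zero-lag correlation similarity are illustrative simplifications; MSNoise itself measures dv/v with moving-window stretching or cross-spectral methods.

```python
import numpy as np

def all_doublet_similarity(ccfs):
    """Compare each daily CCF to every other day (the 'all-doublet' approach).

    ccfs: (n_days, n_lags) array of daily cross-correlation functions.
    Returns an (n_days, n_days) matrix of correlation coefficients.
    Simplification: real dv/v workflows use stretching or moving-window
    cross-spectral measurements, not a plain correlation coefficient.
    """
    norm = ccfs / np.linalg.norm(ccfs, axis=1, keepdims=True)
    return norm @ norm.T

# Thirty synthetic "days": a common waveform plus daily noise.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 10 * np.pi, 400))
days = np.array([base + 0.1 * rng.standard_normal(400) for _ in range(30)])
sim = all_doublet_similarity(days)
# N vs. reference would need 30 comparisons; all-doublet computes 30*30 = 900.
```

The payoff described in the abstract is that each day ends up with N independent observations, so the inverted dv/v series is far more stable than a single day-vs-reference measurement.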

  14. Creating a Parallel Version of VisIt for Microsoft Windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlock, B J; Biagas, K S; Rawson, P L

    2011-12-07

    VisIt is a popular, free, interactive parallel visualization and analysis tool for scientific data. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images or movies for presentations. VisIt was designed from the ground up to work on many scales of computers, from modest desktops up to massively parallel clusters. VisIt comprises a set of cooperating programs. All programs can be run locally or in client/server mode, in which some run locally and some run remotely on compute clusters. The VisIt program most able to harness today's computing power is the VisIt compute engine. The compute engine is responsible for reading simulation data from disk, processing it, and sending results or images back to the VisIt viewer program. In a parallel environment, the compute engine runs several processes that coordinate using the Message Passing Interface (MPI) library. Each MPI process reads some subset of the scientific data and filters the data in various ways to create useful visualizations. By using MPI, VisIt has been able to scale well into the thousands of processors on large computers such as dawn and graph at LLNL. The advent of multicore CPUs has made parallelism the 'new' way to achieve increasing performance. With today's computers having at least 2 cores, and in many cases 8 and beyond, it is more important than ever to deploy parallel software that can use that computing power not only on clusters but also on the desktop. We have created a parallel version of VisIt for Windows that uses Microsoft's MPI implementation (MSMPI) to process data in parallel on the Windows desktop as well as on a Windows HPC cluster running Microsoft Windows Server 2008. Initial desktop parallel support for Windows was deployed in VisIt 2.4.0. Windows HPC cluster support has been completed and will appear in the VisIt 2.5.0 release. We plan to continue supporting parallel VisIt on Windows so our users will be able to take full advantage of their multicore resources.
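
The "each MPI process reads some subset of the data" step can be sketched as a contiguous block partition of dataset domains across ranks. This is a minimal illustration in plain Python (no MPI needed to run it); VisIt's actual load balancing is more sophisticated.

```python
def domains_for_rank(n_domains, n_ranks, rank):
    """Contiguous block partition of dataset domains across MPI ranks,
    mirroring how a parallel compute engine assigns each process a subset
    of the data to read and filter. Simplified sketch: real engines also
    balance by domain size and I/O locality.
    """
    base, extra = divmod(n_domains, n_ranks)
    start = rank * base + min(rank, extra)
    size = base + (1 if rank < extra else 0)
    return list(range(start, start + size))

# 10 data domains split across 4 ranks: the first two ranks get one extra.
parts = [domains_for_rank(10, 4, r) for r in range(4)]
```

In an actual MPI program, `rank` would come from the communicator (e.g. `MPI_Comm_rank`), and each process would open only the files backing its own domain list.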

  15. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used; the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID. Each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only. That data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double-buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.
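
The per-channel parallel handling described above can be sketched as a routing step that groups codeblocks by channel ID and diverts channel 0 for critical-command handling. The `(channel_id, payload)` pair format and the constant name are illustrative, not the patent's actual data layout.

```python
from collections import defaultdict

CRITICAL_CHANNEL = 0  # hypothetical constant; channel 0 carries critical commands

def route_codeblocks(codeblocks):
    """Group codeblocks by channel ID so each channel's error correction
    can proceed in parallel; channel-0 blocks are diverted for special,
    software-independent handling. Each codeblock is an illustrative
    (channel_id, payload) pair.
    """
    critical, channels = [], defaultdict(list)
    for channel_id, payload in codeblocks:
        if channel_id == CRITICAL_CHANNEL:
            critical.append(payload)
        else:
            channels[channel_id].append(payload)
    return critical, dict(channels)

critical, channels = route_codeblocks([(1, "a"), (0, "RESET"), (1, "b"), (2, "c")])
```

Each per-channel list could then be handed to its own worker, which is what removes the wait on error-correction processing.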

  16. Large research infrastructure for Earth-Ocean Science: Challenges of multidisciplinary integration across hardware, software, and people networks

    NASA Astrophysics Data System (ADS)

    Best, M.; Barnes, C. R.; Johnson, F.; Pautet, L.; Pirenne, B.; Founding Scientists Of Neptune Canada

    2010-12-01

    NEPTUNE Canada is operating a regional cabled ocean observatory across the northern Juan de Fuca Plate in the northeastern Pacific. Installation of the first suite of instruments and connectivity equipment was completed in 2009, so this system now provides the continuous power and bandwidth to collect integrated data on physical, chemical, geological, and biological gradients at temporal resolutions relevant to the dynamics of the earth-ocean system. The building of this facility integrates hardware, software, and people networks. Hardware progress to date includes: installation of the 800km powered fiber-optic backbone in the Fall of 2007; development of Nodes and Junction Boxes; acquisition/development and testing of Instruments; development of mobile instrument platforms such as a) a Vertical Profiler and b) a Crawler (University of Bremen); and integration of over a thousand components into an operating subsea sensor system. Nodes, extension cables, junction boxes, and instruments were installed at 4 out of 5 locations in 2009; the fifth Node is instrumented in September 2010. In parallel, software and hardware systems are acquiring, archiving, and delivering the continuous real-time data through the internet to the world - already many terabytes of data. A web environment (Oceans 2.0) to combine this data access with analysis and visualization, collaborative tools, interoperability, and instrument control is being released. Finally, a network of scientists and technicians is contributing to the process in every phase, and data users already number in the thousands. Initial experiments were planned through a series of workshops and international proposal competitions. At inshore Folger Passage, Barkley Sound, understanding controls on biological productivity helps evaluate the effects that marine processes have on fish and marine mammals.
Experiments around Barkley Canyon allow quantification of changes in biological and chemical activity associated with nutrient and cross-shelf sediment transport around the shelf/slope break and through the canyon to the deep sea. There and north along the mid-continental slope, instruments on exposed and shallowly buried gas hydrates allow monitoring of changes in their distribution, structure, and venting, particularly related to earthquakes, slope failures and regional plate motions. Circulation obviation retrofit kits (CORKs) at mid-plate ODP 1026-7 monitor real-time changes in crustal temperature and pressure, particularly as they relate to events such as earthquakes, hydrothermal convection or regional plate strain. At Endeavour Ridge, complex interactions among volcanic, tectonic, hydrothermal and biological processes are quantified at the western edge of the Juan de Fuca plate. Across the network, high resolution seismic information elucidates tectonic processes such as earthquakes, and a tsunami system allows determination of open ocean tsunami amplitude, propagation direction, and speed. The infrastructure has further capacity for experiments to expand from this initial suite. Further information and opportunities can be found at http://www.neptunecanada.ca

  17. Closeup view of a Space Shuttle Main Engine (SSME) installed ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Close-up view of a Space Shuttle Main Engine (SSME) installed in position number one on the Orbiter Discovery. A ground-support mobile platform is in place below the engine to assist technicians with the installation of the engine. This photograph was taken in the Orbiter Processing Facility at the Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX

  18. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform.

    PubMed

    Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun

    2018-01-01

    The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up the system by approximately 3.4 times when processing large-scale datasets, which demonstrates the obvious superiority of our method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance.
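
The Otsu step that sets the Canny operator's dual threshold can be sketched on a gray-level histogram. Deriving the low threshold as half the Otsu value is a common heuristic assumed here, not necessarily the paper's exact rule.

```python
import numpy as np

def otsu_threshold(hist):
    """Otsu's method: pick the threshold maximizing between-class variance.

    hist: pixel counts for gray levels 0..len(hist)-1.
    """
    hist = np.asarray(hist, dtype=float)
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    w0 = np.cumsum(p)            # class-0 probability up to each level
    mu = np.cumsum(p * levels)   # cumulative mean
    mu_t = mu[-1]                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Bimodal toy histogram: a dark peak near level 2 and a bright peak near 11.
hist = [0, 5, 20, 5, 0, 0, 0, 0, 0, 0, 5, 20, 5, 0, 0, 0]
high = otsu_threshold(hist)
low = high // 2  # common low-threshold heuristic for Canny (assumption)
```

In the MapReduce layout described above, mappers would compute partial histograms per image split and a reducer would merge them before applying this threshold selection.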

  19. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
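
A minimal global-best PSO of the kind the paper applies to the BP network's initial weights and thresholds can be sketched as follows; the sphere objective and the parameter values (inertia 0.7, acceleration 1.5) are conventional illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=200, seed=1):
    """Minimal global-best particle swarm optimizer.

    Illustrative stand-in for using PSO to pick good initial BP-network
    weights: each particle would encode a flattened weight vector and f
    would be the network's training error.
    """
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia / cognitive / social weights
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()  # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

best, best_val = pso_minimize(lambda p: float((p ** 2).sum()), dim=3)
```

In the paper's MapReduce setting, the expensive part (evaluating f over the training data) is what gets distributed; the swarm update itself is cheap.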

  20. Spatial processing in the auditory cortex of the macaque monkey

    NASA Astrophysics Data System (ADS)

    Recanzone, Gregg H.

    2000-10-01

    The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

  1. Obsessive-compulsive tendencies are associated with a focused information processing strategy.

    PubMed

    Soref, Assaf; Dar, Reuven; Argov, Galit; Meiran, Nachshon

    2008-12-01

    The study examined the hypothesis that obsessive-compulsive (OC) tendencies are related to a reliance on a focused and serial rather than a parallel, speed-oriented information processing style. Ten students with high OC tendencies and 10 students with low OC tendencies performed the flanker task, in which they were required to quickly classify a briefly presented target letter (S or H) that was flanked by compatible (e.g., SSSSS) or incompatible (e.g., HHSHH) noise letters. Participants received 4 blocks of 100 trials each, two with 50% compatible trials and two with 80% compatible trials, and were informed of the probability of compatible trials before the beginning of each block. As predicted, high OC participants, compared to low OC participants, had slower overall reaction times (RT) and a lower tendency for parallel processing (defined as incompatible-trial RT minus compatible-trial RT). Low OC participants, more than high OC participants, tended to adjust their focused/parallel processing, including a shift towards parallel processing in blocks with 80% compatible trials and in trials following compatible trials. Implications of these results for the cognitive theory and therapy of OCD are discussed.
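
The paper's parallel-processing index is a simple reaction-time difference, sketched here on hypothetical trial data (the trial format is illustrative).

```python
def parallel_processing_index(trials):
    """Flanker-task compatibility effect: mean RT on incompatible trials
    minus mean RT on compatible trials, the index of parallel processing
    used in the study. `trials` is a list of (compatible: bool, rt_ms)
    pairs, an illustrative encoding of the trial log.
    """
    comp = [rt for compatible, rt in trials if compatible]
    incomp = [rt for compatible, rt in trials if not compatible]
    return sum(incomp) / len(incomp) - sum(comp) / len(comp)

# Hypothetical four-trial log: compatible trials answered ~80 ms faster.
idx = parallel_processing_index([(True, 420), (True, 430), (False, 500), (False, 510)])
```

A larger index means flanker letters influenced the response more, i.e. more parallel uptake of the display; the study found this index smaller in high-OC participants.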

  2. Next Generation Parallelization Systems for Processing and Control of PDS Image Node Assets

    NASA Astrophysics Data System (ADS)

    Verma, R.

    2017-06-01

    We present next-generation parallelization tools to help Planetary Data System (PDS) Imaging Node (IMG) better monitor, process, and control changes to nearly 650 million file assets and over a dozen machines on which they are referenced or stored.

  3. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution time of 500 realizations is reduced to 3% of that of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
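
The batch execution of independent realizations can be sketched with a worker pool. The toy realization function and the thread pool are stand-ins for MODFLOW runs distributed across cluster nodes by a framework such as the Java Parallel Processing Framework; the speedup comes purely from the realizations being independent.

```python
from concurrent.futures import ThreadPoolExecutor
import random
import statistics

def run_realization(seed):
    """One stochastic model run (stand-in for a seeded MODFLOW realization):
    here just a seeded random summary so the sketch stays self-contained."""
    rng = random.Random(seed)
    return statistics.fmean(rng.random() for _ in range(1000))

def run_batch(n_realizations, n_workers):
    """Farm independent realizations out to a worker pool, the same pattern
    a distributed framework uses to farm them out to cluster nodes."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(run_realization, range(n_realizations)))

results = run_batch(50, 5)
```

Seeding each realization by its index keeps the batch reproducible regardless of which worker executes it, which matters when runs are redistributed across nodes.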

  4. Hierarchical Parallelization of Gene Differential Association Analysis

    PubMed Central

    2011-01-01

    Background Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Results Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. Conclusions The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels. PMID:21936916

  5. Hierarchical parallelization of gene differential association analysis.

    PubMed

    Needham, Mark; Hu, Rui; Dwarkadas, Sandhya; Qiu, Xing

    2011-09-21

    Microarray gene differential expression analysis is a widely used technique that deals with high dimensional data and is computationally intensive for permutation-based procedures. Microarray gene differential association analysis is even more computationally demanding and must take advantage of multicore computing technology, which is the driving force behind increasing compute power in recent years. In this paper, we present a two-layer hierarchical parallel implementation of gene differential association analysis. It takes advantage of both fine- and coarse-grain (with granularity defined by the frequency of communication) parallelism in order to effectively leverage the non-uniform nature of parallel processing available in the cutting-edge systems of today. Our results show that this hierarchical strategy matches data sharing behavior to the properties of the underlying hardware, thereby reducing the memory and bandwidth needs of the application. The resulting improved efficiency reduces computation time and allows the gene differential association analysis code to scale its execution with the number of processors. The code and biological data used in this study are downloadable from http://www.urmc.rochester.edu/biostat/people/faculty/hu.cfm. The performance sweet spot occurs when using a number of threads per MPI process that allows the working sets of the corresponding MPI processes running on the multicore to fit within the machine cache. Hence, we suggest that practitioners follow this principle in selecting the appropriate number of MPI processes and threads within each MPI process for their cluster configurations. We believe that the principles of this hierarchical approach to parallelization can be utilized in the parallelization of other computationally demanding kernels.
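
The cache-fit guideline in the conclusion, choosing threads per MPI process so the co-resident working sets fit in cache, can be sketched as a small sizing calculation. The fixed per-process working set and the assumption that a node runs `cores / threads` MPI processes are simplifications.

```python
def threads_per_process(working_set_bytes, cache_bytes, cores_per_node):
    """Pick the smallest thread count per MPI process such that the combined
    working sets of the MPI processes sharing a node fit in cache, per the
    paper's guideline. Simplifying assumptions: each MPI process has a fixed
    working set, and a node runs cores_per_node // threads MPI processes.
    """
    for t in range(1, cores_per_node + 1):
        if cores_per_node % t:
            continue  # only consider thread counts that tile the node evenly
        n_procs = cores_per_node // t
        if n_procs * working_set_bytes <= cache_bytes:
            return t
    return cores_per_node

# 8 cores, 16 MB cache, 4 MB working set per process -> 4 processes x 2 threads.
t = threads_per_process(4 * 2**20, 16 * 2**20, 8)
```

Fewer MPI processes with more threads shrinks the total cache footprint (threads share one working set), which is exactly the trade-off behind the paper's "performance sweet spot".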

  6. Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo

    NASA Astrophysics Data System (ADS)

    Khosravi, Ebrahim

    1998-12-01

    This dissertation solves a fundamental problem of isolating the real roots of nonlinear systems of equations by a Monte Carlo method published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that the system is limited to a very small set of variables, which makes it infeasible for large systems of equations. A computational technique was also needed to investigate a methodology for preventing the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel version of the program in comparison to sequential processing are discussed. The message-passing model was used for this parallel processing, and it is presented and implemented on the Intel/860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high-energy physics experiment: the algorithm has been used to track neutrinos in the Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
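
The core idea, isolating real roots from function values alone via random sampling, can be sketched in one dimension: sample the interval at random and keep the sub-boxes in which the sampled function values change sign. This is only the flavor of the approach; the dissertation treats full nonlinear systems, and the interval bounds below are chosen so each root falls in a box interior.

```python
import random

def monte_carlo_root_boxes(f, lo, hi, n_samples=2000, n_boxes=32, seed=7):
    """Monte Carlo root isolation sketch: random samples of a continuous f
    are binned into sub-boxes; a box whose samples show both signs must
    contain a real root. Requires only function values, no derivatives.
    """
    rng = random.Random(seed)
    width = (hi - lo) / n_boxes
    signs = {}
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        box = min(int((x - lo) / width), n_boxes - 1)
        signs.setdefault(box, set()).add(f(x) > 0)
    return sorted(box for box, s in signs.items() if len(s) == 2)

# Roots of x^3 - x at -1, 0, 1; bounds picked so each root is box-interior.
boxes = monte_carlo_root_boxes(lambda x: x**3 - x, -1.55, 1.65)
```

The parallelization discussed in the abstract follows naturally: sample batches are independent, so they can be distributed across message-passing processes and the per-box sign sets merged afterwards.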

  7. Real-time POD-CFD Wind-Load Calculator for PV Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huayamave, Victor; Divo, Eduardo; Ceballos, Andres

    The primary objective of this project is to create an accurate web-based real-time wind-load calculator. This is of paramount importance for (1) the rapid and accurate assessment of the uplift and downforce loads on a PV mounting system, and (2) identifying viable solutions from available mounting systems, and therefore helping reduce the cost of mounting hardware and installation. Wind loading calculations for structures are currently performed according to the American Society of Civil Engineers/Structural Engineering Institute Standard ASCE/SEI 7; the values in this standard were calculated from simplified models that do not necessarily take into account relevant characteristics such as those from full 3D effects, end effects, turbulence generation and dissipation, as well as minor effects derived from shear forces on installation brackets and other accessories. This standard does not include provisions that address the special requirements of rooftop PV systems, and attempts to apply this standard may lead to significant design errors as wind loads are incorrectly estimated. Therefore, an accurate calculator would be of paramount importance for the preliminary assessment of the uplift and downforce loads on a PV mounting system, identifying viable solutions from available mounting systems, and therefore helping reduce the cost of the mounting system and installation. The challenge is that although a full-fledged three-dimensional computational fluid dynamics (CFD) analysis would properly and accurately capture the complete physical effects of air flow over PV systems, it would be impractical for this tool, which is intended to be a real-time web-based calculator. CFD routinely requires enormous computation times to arrive at solutions that can be deemed accurate and grid-independent, even on powerful and massively parallel computer platforms.
    This work is expected not only to accelerate solar deployment nationwide, but also to help reach the SunShot Initiative goals of reducing the total installed cost of solar energy systems by 75%. The largest percentage of the total installed cost of a solar energy system is associated with balance-of-system cost, with up to 40% going to “soft” costs, which include customer acquisition, financing, contracting, permitting, interconnection, inspection, installation, performance, operations, and maintenance. The calculator that is being developed will provide wind loads in real-time for any solar system design and suggest the proper installation configuration and hardware; therefore, it is anticipated to reduce system design, installation, and permitting costs.
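
The POD half of a POD-CFD reduced-order model, the step that makes real-time evaluation possible, can be sketched via an SVD of a snapshot matrix. The synthetic two-structure "flow field" below is an illustrative stand-in for precomputed CFD snapshots.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper orthogonal decomposition of a snapshot matrix via SVD:
    keep the fewest modes capturing the requested fraction of energy.

    snapshots: (n_points, n_snapshots) array of field snapshots.
    Returns (mean field, spatial modes, singular values).
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    u, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy)) + 1
    return mean, u[:, :k], s

# Synthetic snapshots: two coherent structures plus tiny noise -> 2 POD modes.
rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 40)
x = np.linspace(0, 1, 200)[:, None]
snaps = np.sin(np.pi * x) * np.cos(t) + 0.5 * np.sin(2 * np.pi * x) * np.sin(3 * t)
snaps = snaps + 1e-6 * rng.standard_normal(snaps.shape)
mean, modes, s = pod_basis(snaps, energy=0.99)
```

Once the offline CFD runs are compressed into a handful of modes like this, a web calculator only has to evaluate a few mode coefficients per query instead of re-running the CFD.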

  8. Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.

    Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scales to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, neighboring inner loops may exhibit different concurrency patterns (e.g., reduction vs. forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique being integrated into future compilers or optimization frameworks for autotuning.
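
The cooperative-thread grouping described above can be modeled with simple arithmetic: when hardware threads outnumber outer-loop iterations, each iteration gets a whole group of threads to mine inner-loop parallelism. This is a toy sizing model, not the paper's actual projection framework.

```python
def cooperative_groups(outer_iters, hw_threads):
    """Size the thread group cooperating on each outer-loop iteration.

    Returns (group_size, idle_threads). Simplified model: groups are
    uniform and leftover threads simply idle; a real framework would also
    model the inner loops' concurrency patterns and memory behavior.
    """
    if outer_iters >= hw_threads:
        return 1, 0  # enough outer-loop parallelism; one thread per iteration
    group = hw_threads // outer_iters  # threads cooperating per iteration
    return group, hw_threads - group * outer_iters

# A 24-iteration OpenMP-scale outer loop on GPU-scale thread counts:
g, idle = cooperative_groups(24, 1024)
```

Models like this make the core risk explicit: if the inner loops cannot keep 42 cooperating threads busy, the projected speedup evaporates, which is what the paper's framework evaluates before any GPU code is written.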

  9. Voltage dips at the terminals of wind power installations

    NASA Astrophysics Data System (ADS)

    Bollen, Math H. J.; Olguin, Gabriel; Martins, Marcia

    2005-07-01

    This article gives an overview of the kind of voltage dips that can be expected at the terminals of a wind power installation. The overview is based on the study of those dips at the terminals of industrial installations and provides a guideline for the testing of wind power installations against voltage dips. For voltage dips due to faults, a classification into different types is presented. Five types appear at the terminals of sensitive equipment and thus have to be included when testing the wind power installation against disturbances coming from the grid. A distinction is made between installations connected at transmission level and those connected at distribution level. For the latter the phase angle jump has to be considered. Dips due to other causes (motor, transformer and capacitor switching) are briefly discussed as well as the voltage recovery after a dip. Finally some thoughts are presented on the way in which voltage tolerance requirements should be part of the design process for wind power installations.

  10. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.
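
Rule evaluation in the ASOCS style, with problems specified as if-then Boolean conjunctions that all "fire" in parallel on the same input vector, can be sketched as follows; the rule encoding used here is an illustrative assumption, not the paper's hardware representation.

```python
def asocs_eval(rules, inputs):
    """Evaluate if-then rules given as Boolean conjunctions: every rule is
    checked against the same input vector (conceptually in parallel, as in
    an ASOCS combinational network), and the fired actions are collected.

    Rule format ((('a', True), ('b', False)), 'x') means: if a AND NOT b
    then emit 'x'. This encoding is illustrative.
    """
    fired = set()
    for conjunction, action in rules:
        if all(inputs[var] == val for var, val in conjunction):
            fired.add(action)
    return fired

rules = [((("a", True), ("b", False)), "x"),
         ((("b", True),), "y")]
out = asocs_eval(rules, {"a": True, "b": False})
```

In hardware, each rule would be a small combinational circuit evaluated asynchronously; adaptation (adding or revising rules) is the separate operational phase the abstract describes.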

  11. Process optimization using combinatorial design principles: parallel synthesis and design of experiment methods.

    PubMed

    Gooding, Owen W

    2004-06-01

    The use of parallel synthesis techniques with statistical design of experiment (DoE) methods is a powerful combination for the optimization of chemical processes. Advances in parallel synthesis equipment and easy to use software for statistical DoE have fueled a growing acceptance of these techniques in the pharmaceutical industry. As drug candidate structures become more complex at the same time that development timelines are compressed, these enabling technologies promise to become more important in the future.
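
A full-factorial design, the simplest DoE layout a parallel reactor block can execute in one run, can be generated directly; the factor names and levels below are hypothetical process parameters.

```python
from itertools import product

def full_factorial(factors):
    """Full-factorial design: one experiment per combination of factor
    levels. `factors` maps factor name -> list of levels. Returns a list
    of condition dicts, one per parallel reactor well. Real DoE practice
    often uses fractional designs to cut the run count; this is the
    brute-force baseline.
    """
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

design = full_factorial({"temp_C": [25, 60],
                         "solvent": ["THF", "DMF"],
                         "equiv": [1.0, 1.5]})
```

With 2 levels of 3 factors this yields 2^3 = 8 conditions, a natural fit for an 8-position parallel synthesis block; statistical analysis of the results then estimates main effects and interactions.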

  12. Options for Parallelizing a Planning and Scheduling Algorithm

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Estlin, Tara A.; Bornstein, Benjamin D.

    2011-01-01

    Space missions have a growing interest in putting multi-core processors onboard spacecraft. For many missions, limited processing power significantly slows operations. We investigate how continual planning and scheduling algorithms can exploit multi-core processing and outline different potential design decisions for a parallelized planning architecture. This organization of choices and challenges helps us with an initial design for parallelizing the CASPER planning system for a mesh multi-core processor. This work extends that presented at another workshop with some preliminary results.

  13. Airborne-Fiber Optics Manufacturing Technology, Aircraft Installation Processes.

    DTIC Science & Technology

    1980-08-19

    ...but the impact is minor. With simpler equipment and techniques there may be a long-term savings potential. Overall costs and benefits of... 4.0 ASSEMBLY OF FIBER OPTIC CABLES AND HARNESSES; 4.1 CABLE IDENTIFICATION (Marking); 4.1.1 Physically identify... AIRBORNE FIBER OPTICS MANUFACTURING TECHNOLOGY, Aircraft Installation Processes. G. Kosmos, 19 August 1980. Final Report: May 1978 - June 1980.

  14. Re-forming supercritical quasi-parallel shocks. I - One- and two-dimensional simulations

    NASA Technical Reports Server (NTRS)

    Thomas, V. A.; Winske, D.; Omidi, N.

    1990-01-01

The process of reforming supercritical quasi-parallel shocks is investigated using one-dimensional and two-dimensional hybrid (particle ion, massless fluid electron) simulations both of shocks and of simpler two-stream interactions. It is found that the supercritical quasi-parallel shock is not steady. Instead of a well-defined shock ramp between upstream and downstream states that remains at a fixed position in the flow, the ramp periodically steepens, broadens, and then reforms upstream of its former position. It is concluded that the wave generation process is localized at the shock ramp and that the reformation process proceeds in the absence of upstream perturbations intersecting the shock.

  15. Transport and installation of the Dark Energy Survey CCD imager

    NASA Astrophysics Data System (ADS)

    Derylo, Greg; Chi, Edward; Diehl, H. Thomas; Estrada, Juan; Flaugher, Brenna; Schultz, Ken

    2012-09-01

    The Dark Energy Survey CCD imager was constructed at the Fermi National Accelerator Laboratory and delivered to the Cerro Tololo Inter-American Observatory in Chile for installation onto the Blanco 4m telescope. Several efforts are described relating to preparation of the instrument for transport, development and testing of a shipping crate designed to minimize transportation loads transmitted to the camera, and inspection of the imager upon arrival at the observatory. Transportation loads were monitored and are described. For installation of the imager at the telescope prime focus, where it mates with its previously-installed optical corrector, specialized tooling was developed to safely lift, support, and position the vessel. The installation and removal processes were tested on the Telescope Simulator mockup at FNAL, thus minimizing technical and schedule risk for the work performed at CTIO. Final installation of the imager is scheduled for August 2012.

  16. Overview of the DART project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, K.R.; Hansen, F.R.; Napolitano, L.M.

    1992-01-01

DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language ("C" or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) groundstation.

  18. A Debugger for Computational Grid Applications

    NASA Technical Reports Server (NTRS)

    Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of a debugger for computational grid applications. Details are given on NAS parallel tools groups (including parallelization support tools, evaluation of various parallelization strategies, and distributed and aggregated computing), debugger dependencies, scalability, initial implementation, the process grid, and information on Globus.

  19. Psychodrama: A Creative Approach for Addressing Parallel Process in Group Supervision

    ERIC Educational Resources Information Center

    Hinkle, Michelle Gimenez

    2008-01-01

    This article provides a model for using psychodrama to address issues of parallel process during group supervision. Information on how to utilize the specific concepts and techniques of psychodrama in relation to group supervision is discussed. A case vignette of the model is provided.

  20. Telemetry downlink interfaces and level-zero processing

    NASA Technical Reports Server (NTRS)

    Horan, S.; Pfeiffer, J.; Taylor, J.

    1991-01-01

    The technical areas being investigated are as follows: (1) processing of space to ground data frames; (2) parallel architecture performance studies; and (3) parallel programming techniques. Additionally, the University administrative details and the technical liaison between New Mexico State University and Goddard Space Flight Center are addressed.

  1. Language Classification using N-grams Accelerated by FPGA-based Bloom Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, A; Gokhale, M

N-Gram (n-character sequences in text documents) counting is a well-established technique used in classifying the language of text in a document. In this paper, n-gram processing is accelerated through the use of reconfigurable hardware on the XtremeData XD1000 system. Our design employs parallelism at multiple levels, with parallel Bloom Filters accessing on-chip RAM, parallel language classifiers, and parallel document processing. In contrast to another hardware implementation (the HAIL algorithm) that uses off-chip SRAM for lookup, our highly scalable implementation uses only on-chip memory blocks. Our implementation of end-to-end language classification runs 85x faster than comparable software and 1.45x faster than the competing hardware design.
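A minimal software sketch of the technique (toy training corpora and an md5-based hash family; the paper's FPGA design uses parallel Bloom filters in on-chip RAM, so everything here is illustrative):

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash functions indexing an m-bit array."""
    def __init__(self, m=4096, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)
    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.md5(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m
    def add(self, item):
        for h in self._hashes(item):
            self.bits[h] = 1
    def __contains__(self, item):
        return all(self.bits[h] for h in self._hashes(item))

def ngrams(text, n=3):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# Train one filter per language on hypothetical sample text
samples = {"en": "the quick brown fox jumps over the lazy dog",
           "de": "der schnelle braune fuchs springt ueber den faulen hund"}
filters = {}
for lang, text in samples.items():
    bf = BloomFilter()
    for g in ngrams(text):
        bf.add(g)
    filters[lang] = bf

def classify(text):
    # Score each language by how many of the document's n-grams its filter contains
    scores = {lang: sum(g in bf for g in ngrams(text)) for lang, bf in filters.items()}
    return max(scores, key=scores.get)

print(classify("the lazy dog jumps"))
```

The hardware version evaluates the k hash probes and the per-language filters concurrently, which is where the reported speedup comes from.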

  2. Parallel processing implementation for the coupled transport of photons and electrons using OpenMP

    NASA Astrophysics Data System (ADS)

    Doerner, Edgardo

    2016-05-01

In this work the use of OpenMP to implement the parallel processing of the Monte Carlo (MC) simulation of the coupled transport of photons and electrons is presented. This implementation was carried out using a modified EGSnrc platform which enables the use of the Microsoft Visual Studio 2013 (VS2013) environment, together with the developing tools available in the Intel Parallel Studio XE 2015 (XE2015). The performance study of this new implementation was carried out in a desktop PC with a multi-core CPU, taking as a reference the performance of the original platform. The results were satisfactory, both in terms of scalability and parallelization efficiency.
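The parallelization pattern at work here — splitting independent particle histories across workers, each with its own RNG stream — can be sketched in Python's multiprocessing instead of OpenMP (the attenuation coefficient, slab depth, and history counts below are hypothetical, not EGSnrc parameters):

```python
import math
import random
from concurrent.futures import ProcessPoolExecutor

MU = 0.5      # hypothetical attenuation coefficient, 1/cm
DEPTH = 2.0   # hypothetical slab thickness, cm

def simulate_batch(seed, n):
    """Simulate n photon histories with an independent RNG stream;
    return how many photons traverse the slab without interacting."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n):
        path = -math.log(rng.random()) / MU   # sampled free path
        if path > DEPTH:
            transmitted += 1
    return transmitted

def transmission(n_workers=4, n_per_worker=50_000):
    # Each worker gets its own seed, mirroring per-thread RNG streams in OpenMP
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        counts = pool.map(simulate_batch, range(n_workers),
                          [n_per_worker] * n_workers)
    return sum(counts) / (n_workers * n_per_worker)

if __name__ == "__main__":
    print(round(transmission(), 3), round(math.exp(-MU * DEPTH), 3))
```

The Monte Carlo estimate should approach the analytic transmission exp(-mu * d); keeping the RNG streams independent per worker is the same correctness concern the OpenMP implementation faces.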

  3. Parallel Processing Strategies of the Primate Visual System

    PubMed Central

    Nassi, Jonathan J.; Callaway, Edward M.

    2009-01-01

Incoming sensory information is sent to the brain along modality-specific channels corresponding to the five senses. Each of these channels further parses the incoming signals into parallel streams to provide a compact, efficient input to the brain. Ultimately, these parallel input signals must be elaborated upon and integrated within the cortex to provide a unified and coherent percept. Recent studies in the primate visual cortex have greatly contributed to our understanding of how this goal is accomplished. Multiple strategies, including retinal tiling, hierarchical and parallel processing, and modularity, defined spatially and by cell type-specific connectivity, are all used by the visual system to recover the rich detail of our visual surroundings. PMID:19352403

  4. Design of high-performance parallelized gene predictors in MATLAB.

    PubMed

    Rivard, Sylvain Robert; Mailloux, Jean-Gabriel; Beguenane, Rachid; Bui, Hung Tien

    2012-04-10

    This paper proposes a method of implementing parallel gene prediction algorithms in MATLAB. The proposed designs are based on either Goertzel's algorithm or on FFTs and have been implemented using varying amounts of parallelism on a central processing unit (CPU) and on a graphics processing unit (GPU). Results show that an implementation using a straightforward approach can require over 4.5 h to process 15 million base pairs (bps) whereas a properly designed one could perform the same task in less than five minutes. In the best case, a GPU implementation can yield these results in 57 s. The present work shows how parallelism can be used in MATLAB for gene prediction in very large DNA sequences to produce results that are over 270 times faster than a conventional approach. This is significant as MATLAB is typically overlooked due to its apparent slow processing time even though it offers a convenient environment for bioinformatics. From a practical standpoint, this work proposes two strategies for accelerating genome data processing which rely on different parallelization mechanisms. Using a CPU, the work shows that direct access to the MEX function increases execution speed and that the PARFOR construct should be used in order to take full advantage of the parallelizable Goertzel implementation. When the target is a GPU, the work shows that data needs to be segmented into manageable sizes within the GFOR construct before processing in order to minimize execution time.
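The Goertzel recurrence behind one of the two designs can be sketched in Python (the paper's implementations are in MATLAB; the indicator sequence below is a toy, not real genomic data). Gene predictors exploit the period-3 spectral peak of coding DNA, which Goertzel's algorithm extracts at a single DFT bin without a full FFT:

```python
import math

def goertzel_power(x, k):
    """|X[k]|^2 of sequence x at DFT bin k via Goertzel's single-bin recurrence."""
    n = len(x)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for sample in x:
        # state update: s[n] = x[n] + coeff*s[n-1] - s[n-2]
        s_prev2, s_prev = s_prev, sample + coeff * s_prev - s_prev2
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2

# Period-3 indicator sequence for base 'A' in a toy repeat (hypothetical data)
dna = "ATG" * 20
x = [1.0 if b == "A" else 0.0 for b in dna]
print(goertzel_power(x, len(x) // 3))  # strong power at the period-3 bin
```

Because each window's Goertzel evaluation is independent, windows can be distributed across PARFOR workers (CPU) or batched for the GPU, which is the parallelism the paper measures.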

  5. Broadband Ground Motion Observation and Simulation for the 2016 Kumamoto Earthquake

    NASA Astrophysics Data System (ADS)

    Miyake, H.; Chimoto, K.; Yamanaka, H.; Tsuno, S.; Korenaga, M.; Yamada, N.; Matsushima, T.; Miyakawa, K.

    2016-12-01

During the 2016 Kumamoto earthquake, strong motion data were widely recorded by the permanent dense triggered strong motion network of K-NET/KiK-net and seismic intensity meters installed by local governments and JMA. Seismic intensities close to MMI 9-10 were recorded twice at the Mashiki town, and once at the Nishihara village and KiK-net Mashiki (KMMH16 ground surface). Near-fault records indicate extreme ground motion exceeding 400 cm/s in 5% pSv at a period of 1 s for the Mashiki town and 3-4 s for the Nishihara village. Fault-parallel velocity components are larger between the Mashiki town and the Nishihara village; on the other hand, fault-normal velocity components are larger inside the caldera of the Aso volcano. The former indicates rupture passed through along-strike stations, and the latter stations are located in the forward rupture direction (e.g., Miyatake, 1999). In addition to the permanent observation, temporary continuous strong motion stations were installed just after the earthquake in the Kumamoto city, Mashiki town, Nishihara village, Minami-Aso village, and Aso town (e.g., Chimoto et al., 2016; Tsuno et al., 2016; Yamanaka et al., 2016). This study estimates strong motion generation areas for the 2016 Kumamoto earthquake sequence using the empirical Green's function method, and then simulates broadband ground motions for both the permanent and temporary strong motion stations. Currently the target period range is between 0.1 s and 5-10 s due to the signal-to-noise ratio of the element earthquakes used for the empirical Green's functions. We also constrain the fault dimension parameter N to between 4 and 10 to avoid spectral sags and artificial periodicity. The simulated seismic intensities as well as fault-normal and fault-parallel velocity components will be discussed.

  6. Observing with HST V: Improvements to the Scheduling of HST Parallel Observations

    NASA Astrophysics Data System (ADS)

    Taylor, D. K.; Vanorsow, D.; Lucks, M.; Henry, R.; Ratnatunga, K.; Patterson, A.

    1994-12-01

    Recent improvements to the Hubble Space Telescope (HST) ground system have significantly increased the frequency of pure parallel observations, i.e. the simultaneous use of multiple HST instruments by different observers. Opportunities for parallel observations are limited by a variety of timing, hardware, and scientific constraints. Formerly, such opportunities were heuristically predicted prior to the construction of the primary schedule (or calendar), and lack of complete information resulted in high rates of scheduling failures and missed opportunities. In the current process the search for parallel opportunities is delayed until the primary schedule is complete, at which point new software tools are employed to identify places where parallel observations are supported. The result has been a considerable increase in parallel throughput. A new technique, known as ``parallel crafting,'' is currently under development to streamline further the parallel scheduling process. This radically new method will replace the standard exposure logsheet with a set of abstract rules from which observation parameters will be constructed ``on the fly'' to best match the constraints of the parallel opportunity. Currently, parallel observers must specify a huge (and highly redundant) set of exposure types in order to cover all possible types of parallel opportunities. Crafting rules permit the observer to express timing, filter, and splitting preferences in a far more succinct manner. The issue of coordinated parallel observations (same PI using different instruments simultaneously), long a troublesome aspect of the ground system, is also being addressed. For Cycle 5, the Phase II Proposal Instructions now have an exposure-level PAR WITH special requirement. While only the primary's alignment will be scheduled on the calendar, new commanding will provide for parallel exposures with both instruments.

  7. Parabolic Trough Collector Cost Update for the System Advisor Model (SAM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurup, Parthiv; Turchi, Craig S.

    2015-11-01

This report updates the baseline cost for parabolic trough solar fields in the United States within NREL's System Advisor Model (SAM). SAM, available at no cost at https://sam.nrel.gov/, is a performance and financial model designed to facilitate decision making for people involved in the renewable energy industry. SAM is the primary tool used by NREL and the U.S. Department of Energy (DOE) for estimating the performance and cost of concentrating solar power (CSP) technologies and projects. The study performed a bottom-up build and cost estimate for two state-of-the-art parabolic trough designs -- the SkyTrough and the Ultimate Trough. The SkyTrough analysis estimated the potential installed cost for a solar field of 1500 SCAs as $170/m² ± $6/m². The investigation found that SkyTrough installed costs were sensitive to factors such as raw aluminum alloy cost and production volume. For example, in the case of the SkyTrough, the installed cost would rise to nearly $210/m² if the aluminum alloy cost was $1.70/lb instead of $1.03/lb. Accordingly, one must be aware of fluctuations in the relevant commodities markets to track system cost over time. The estimated installed cost for the Ultimate Trough was only slightly higher at $178/m², which includes an assembly facility of $11.6 million amortized over the required production volume. Considering the size and overall cost of a 700 SCA Ultimate Trough solar field, two parallel production lines in a fully covered assembly facility, each with the specific torque box, module and mirror jigs, would be justified for a full CSP plant.

  8. Evaluation Metrics for the Paragon XP/S-15

    NASA Technical Reports Server (NTRS)

    Traversat, Bernard; McNab, David; Nitzberg, Bill; Fineberg, Sam; Blaylock, Bruce T. (Technical Monitor)

    1993-01-01

On February 17th 1993, the Numerical Aerodynamic Simulation (NAS) facility located at the NASA Ames Research Center installed a 224 node Intel Paragon XP/S-15 system. After its installation, the Paragon was found to be in a very immature state and was unable to support a NAS users' workload, composed of a wide range of development and production activities. As a first step towards addressing this problem, we implemented a set of metrics to objectively monitor the system as operating system and hardware upgrades were installed. The metrics were designed to measure four aspects of the system that we consider essential to support our workload: availability, utilization, functionality, and performance. This report presents the metrics collected from February 1993 to August 1993. Since its installation, the Paragon availability has improved from a low of 15% uptime to a high of 80%, while its utilization has remained low. Functionality and performance have improved from merely running one of the NAS Parallel Benchmarks to running all of them faster (between 1 and 2 times) than on the iPSC/860. In spite of the progress accomplished, fundamental limitations of the Paragon operating system are restricting the Paragon from supporting the NAS workload. The maximum operating system message passing (NORMA IPC) bandwidth was measured at 11 Mbytes/s, well below the peak hardware bandwidth (175 Mbytes/s), limiting overall virtual memory and Unix services (i.e. Disk and HiPPI I/O) performance. The high NX application message passing latency (184 microseconds), three times that of the iPSC/860, was found to significantly degrade performance of applications relying on small message sizes. The amount of memory available for an application was found to be approximately 10 Mbytes per node, indicating that the OS is taking more space than anticipated (6 Mbytes per node).

  9. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  10. Parallel design patterns for a low-power, software-defined compressed video encoder

    NASA Astrophysics Data System (ADS)

    Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar

    2011-06-01

Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low power, field upgradable implementation. Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allow the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.
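The dataflow style — stages connected by messages, with no explicit global synchronization — can be caricatured with chained Python generators (the stage bodies below are numeric stand-ins, not real encoder math):

```python
# Each stage consumes messages from upstream and yields downstream,
# like nodes in a hierarchical dataflow graph.
def transform(frames):
    for f in frames:
        yield [v * 2 for v in f]    # stand-in for transform/quantization

def entropy_code(frames):
    for f in frames:
        yield sum(f)                # stand-in for entropy coding

frames = [[1, 2], [3, 4]]
print(list(entropy_code(transform(frames))))
```

On a memory-network processor each stage would run on its own core(s) with hardware message channels; the generator chain only illustrates the declarative, synchronization-free composition.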

  11. KSC-05pd2488

    NASA Image and Video Library

    2005-11-10

KENNEDY SPACE CENTER, FLA. - In NASA Kennedy Space Center’s Orbiter Processing Facility Bay 3, a remote manipulator system, or space shuttle arm, previously installed on the orbiter Atlantis, is being installed in Discovery’s payload bay. The arms were switched because the arm that was installed on Atlantis has special instrumentation to gather loads data from the second return-to-flight mission, STS-121. Discovery is the designated orbiter to fly on STS-121, scheduled to launch no earlier than May 2006.

  12. KSC-05pd2489

    NASA Image and Video Library

    2005-11-10

KENNEDY SPACE CENTER, FLA. - In NASA Kennedy Space Center’s Orbiter Processing Facility Bay 3, a remote manipulator system, or space shuttle arm, previously installed on the orbiter Atlantis, is being installed in Discovery’s payload bay. The arms were switched because the arm that was installed on Atlantis has special instrumentation to gather loads data from the second return-to-flight mission, STS-121. Discovery is the designated orbiter to fly on STS-121, scheduled to launch no earlier than May 2006.

  13. KSC-05pd2491

    NASA Image and Video Library

    2005-11-10

KENNEDY SPACE CENTER, FLA. - In NASA Kennedy Space Center’s Orbiter Processing Facility Bay 3, technicians install a remote manipulator system, or space shuttle arm, previously installed on the orbiter Atlantis, in Discovery’s payload bay. The arms were switched because the arm that was installed on Atlantis has special instrumentation to gather loads data from the second return-to-flight mission, STS-121. Discovery is the designated orbiter to fly on STS-121, scheduled to launch no earlier than May 2006.

  14. KSC-05pd2490

    NASA Image and Video Library

    2005-11-10

KENNEDY SPACE CENTER, FLA. - In NASA Kennedy Space Center’s Orbiter Processing Facility Bay 3, technicians install a remote manipulator system, or space shuttle arm, previously installed on the orbiter Atlantis, in Discovery’s payload bay. The arms were switched because the arm that was installed on Atlantis has special instrumentation to gather loads data from the second return-to-flight mission, STS-121. Discovery is the designated orbiter to fly on STS-121, scheduled to launch no earlier than May 2006.

  15. The choice of primary energy source including PV installation for providing electric energy to a public utility building - a case study

    NASA Astrophysics Data System (ADS)

    Radomski, Bartosz; Ćwiek, Barbara; Mróz, Tomasz M.

    2017-11-01

The paper presents a multicriteria decision aid analysis of the choice of a PV installation providing electric energy to a public utility building. From the energy management point of view, electricity obtained from solar radiation has become a crucial renewable energy source. Application of PV installations may prove a profitable solution from an energy, economic and ecological point of view for both existing and newly erected buildings. Featured variants of PV installations have been assessed by a multicriteria analysis based on the ANP (Analytic Network Process) method. Technical, economic, energy and environmental criteria have been identified as the main decision criteria. The defined set of decision criteria has an open character and can be modified in the dialog process between the decision-maker and the expert - in the present case, an expert in planning the development of energy supply systems. The proposed approach has been used to evaluate three variants of PV installation acceptable for an existing educational building located in Poznań, Poland - the building of the Faculty of Chemical Technology, Poznań University of Technology. Multicriteria analysis based on the ANP method and the calculation software Super Decisions has proven to be an effective tool for energy planning, leading to the indication of the recommended variant of PV installation in existing and newly erected public buildings. Achieved results show prospects and possibilities of rational renewable energy usage as a complex solution for public utility buildings.
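As a simplified illustration of multicriteria ranking (the weights, scores, and variant names below are invented, and a plain weighted sum stands in for ANP, which additionally models interdependence among criteria):

```python
# Hypothetical criteria weights: technical, economic, energy, environmental
weights = {"technical": 0.25, "economic": 0.35, "energy": 0.25, "environmental": 0.15}

# Hypothetical normalized scores for three PV installation variants
variants = {
    "roof-mounted":   {"technical": 0.8, "economic": 0.6, "energy": 0.7, "environmental": 0.9},
    "facade-mounted": {"technical": 0.6, "economic": 0.5, "energy": 0.5, "environmental": 0.8},
    "carport":        {"technical": 0.7, "economic": 0.8, "energy": 0.6, "environmental": 0.7},
}

def rank(variants, weights):
    """Rank variants by weighted-sum score (a simplification of ANP)."""
    score = lambda v: sum(weights[c] * v[c] for c in weights)
    return sorted(variants, key=lambda name: score(variants[name]), reverse=True)

print(rank(variants, weights))
```

The open character of the criteria set mentioned above corresponds to simply editing the `weights` and `variants` dictionaries in this toy model.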

  16. A parallel computational model for GATE simulations.

    PubMed

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
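The decentralized-generation / centralized-coincidence idea can be sketched as follows (event times, detector IDs, and the coincidence window are hypothetical, and real GATE coincidence sorting is considerably more involved):

```python
import heapq

COINC_WINDOW = 4.5e-9   # hypothetical coincidence window, seconds

def find_coincidences(streams, window=COINC_WINDOW):
    """Merge per-worker time-sorted single-event streams (the centralized
    coordinator step) and pair events on different detectors whose
    timestamps fall within the coincidence window."""
    merged = list(heapq.merge(*streams))   # each stream: sorted (time, detector_id)
    pairs, i = [], 0
    while i + 1 < len(merged):
        t1, d1 = merged[i]
        t2, d2 = merged[i + 1]
        if t2 - t1 <= window and d1 != d2:
            pairs.append((d1, d2))
            i += 2          # consume both singles
        else:
            i += 1
    return pairs

# Toy streams, as if produced by two decentralized generator processes
s1 = [(1.0e-6, "A"), (3.0e-6, "A")]
s2 = [(1.000002e-6, "B"), (9.0e-6, "B")]
print(find_coincidences([s1, s2]))
```

The point of the paper's design is that only this merge/pairing step needs central coordination; event generation and singles processing stay fully parallel.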

  17. Parallel volume ray-casting for unstructured-grid data on distributed-memory architectures

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu

    1995-01-01

    As computing technology continues to advance, computational modeling of scientific and engineering problems produces data of increasing complexity: large in size and unstructured in shape. Volume visualization of such data is a challenging problem. This paper proposes a distributed parallel solution that makes ray-casting volume rendering of unstructured-grid data practical. Both the data and the rendering process are distributed among processors. At each processor, ray-casting of local data is performed independent of the other processors. The global image composing processes, which require inter-processor communication, are overlapped with the local ray-casting processes to achieve maximum parallel efficiency. This algorithm differs from previous ones in four ways: it is completely distributed, less view-dependent, reasonably scalable, and flexible. Without using dynamic load balancing, test results on the Intel Paragon using from two to 128 processors show, on average, about 60% parallel efficiency.
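The image composing step that must respect depth order can be illustrated with the standard front-to-back "over" operator (the segment colors and opacities below are made up; the paper's compositing is per-pixel and overlapped with ray-casting):

```python
def composite_over(segments):
    """Front-to-back 'over' compositing of per-processor ray segments.
    Each segment is (color, alpha) for the portion of the ray inside one
    processor's local data, ordered front to back along the ray."""
    color, alpha = 0.0, 0.0
    for c, a in segments:
        color += (1.0 - alpha) * a * c   # add what this segment contributes
        alpha += (1.0 - alpha) * a       # accumulate opacity
    return color, alpha

# Two partial results, as if rendered independently on two processors
front = (0.8, 0.5)   # local color/opacity of the segment nearer the viewer
back = (0.2, 1.0)    # opaque background segment
print(composite_over([front, back]))
```

Because the operator is associative, partial composites can be combined in a tree across processors, which is what lets the global composing overlap the local ray-casting.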

  18. Traditional Chinese medicine on the effects of low-intensity laser irradiation on cells

    NASA Astrophysics Data System (ADS)

    Liu, Timon C.; Duan, Rui; Li, Yan; Cai, Xiongwei

    2002-04-01

In a previous paper, process-specific times (PSTs) are defined by use of molecular reaction dynamics and the time quantum theory established by TCY Liu et al., and the changes of PSTs representing two weakly nonlinearly coupled bio-processes are shown to be parallel, which is called the time parallel principle (TPP). The PST of a physiological process (PP) is called physiological time (PT). After the PTs of two PPs are compared with their Yin-Yang property of traditional Chinese medicine (TCM), the PST model of Yin and Yang (YPTM) was put forward: for two related processes, the process of small PST is Yin, and the other process is Yang. The Yin-Yang parallel principle (YPP) was put forward in terms of YPTM and TPP, which is the fundamental principle of TCM. In this paper, we apply it to study TCM on the effects of low intensity laser on cells, and successfully explain the observed phenomena.

  19. Myria: Scalable Analytics as a Service

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
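Relational algebra with first-class iteration can be illustrated by a fixpoint query — transitive closure via semi-naive evaluation — sketched here in plain Python rather than MyriaL:

```python
def transitive_closure(edges):
    """Reachability as the fixpoint of R := R ∪ (R ⋈ E), evaluated
    semi-naively: each round joins only the newly discovered pairs."""
    reach = set(edges)
    frontier = set(edges)
    while frontier:
        new = {(a, d) for (a, b) in frontier for (c, d) in edges if b == c}
        frontier = new - reach   # keep only genuinely new facts
        reach |= frontier
    return reach

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(edges)))
```

In a system like Myria the join inside the loop is itself data-parallel (partitioned by join key), so the whole iterative program parallelizes without the user writing any synchronization.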

  20. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes due to systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods as well as a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software mpiWrapper has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
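The master/worker pattern the paper implements over MPI can be sketched with threads and a shared queue (the task payloads and worker function below are hypothetical stand-ins for the many invocations of a fast, non-parallel program):

```python
import queue
import threading

def run_task_farm(tasks, worker_fn, n_workers=4):
    """Task farm: workers pull subtasks from a shared queue until it is
    empty -- the same idea mpiWrapper realizes with MPI ranks."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()
            except queue.Empty:
                return                      # no more subtasks
            r = worker_fn(t)                # stand-in for launching one program run
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

print(sorted(run_task_farm(range(8), lambda i: i * i)))
```

The paper's two-thread design per node (one thread managing communication, one executing subtasks) serves the same purpose as the queue here: it keeps blocking MPI calls from deadlocking the worker.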

  1. Parallel image reconstruction for 3D positron emission tomography from incomplete 2D projection data

    NASA Astrophysics Data System (ADS)

    Guerrero, Thomas M.; Ricci, Anthony R.; Dahlbom, Magnus; Cherry, Simon R.; Hoffman, Edward T.

    1993-07-01

The problem of excessive computational time in 3D Positron Emission Tomography (3D PET) reconstruction is defined, and we present an approach for solving this problem through the construction of an inexpensive parallel processing system and the adoption of the FAVOR algorithm. Currently, the 3D reconstruction of the 610 images of a total body procedure would require 80 hours, and the 3D reconstruction of the 620 images of a dynamic study would require 110 hours. An inexpensive parallel processing system for 3D PET reconstruction is constructed from the integration of board-level products from multiple vendors. The system achieves its computational performance through the use of 6U VME four-i860 processor boards; processor boards from five manufacturers are discussed from our perspective. The new 3D PET reconstruction algorithm FAVOR (FAst VOlume Reconstructor), which promises a substantial speed improvement, is adopted. Preliminary results from parallelizing FAVOR are utilized in formulating architectural improvements for this problem. In summary, we are addressing the problem of excessive computational time in 3D PET image reconstruction through the construction of an inexpensive parallel processing system and the parallelization of a 3D reconstruction algorithm that uses the incomplete data set that is produced by current PET systems.

  2. A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database

    DOE PAGES

    Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; ...

    2013-01-01

Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes, which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process, thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n ghost layers, up to the point where the whole partitioned mesh is ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.
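
    The layered ghost-creation idea can be illustrated on a toy 1D mesh. This is a hedged sketch of the concept only (an owner lookup plus repeated adjacency expansion); it is not the FMDB algorithm and omits its neighborhood communication machinery:

```python
# Illustrative sketch: given a partitioned 1D element mesh, compute which
# remote elements each part must ghost for n layers. Adjacency here is the
# simple left/right neighborhood of integer element ids.
def ghost_layers(parts, n_layers):
    """parts: list of element-id lists, one per part; returns {part: ghost ids}."""
    owner = {e: p for p, elems in enumerate(parts) for e in elems}
    ghosts = {p: set() for p in range(len(parts))}
    for p, elems in enumerate(parts):
        frontier = set(elems)
        for _ in range(n_layers):
            # expand one layer by mesh adjacency, as layered ghost creation does
            frontier = {e + d for e in frontier for d in (-1, 1)} - set(elems)
            frontier &= set(owner)          # stay inside the global mesh
            ghosts[p] |= {e for e in frontier if owner[e] != p}
    return ghosts

print(ghost_layers([[0, 1, 2], [3, 4, 5]], 1))  # prints {0: {3}, 1: {2}}
```

    With a second layer, each part additionally ghosts its neighbor's neighbor, up to the point where the whole partitioned mesh would be ghosted, mirroring characteristic (3) above.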

  3. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346
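
    The operator-splitting update that such an implementation distributes across processes can be sketched serially for a 1D chain of subvolumes; the parameter values and the first-order decay reaction below are illustrative assumptions, not the published calcium model:

```python
# Minimal 1D operator-splitting sketch: each time step applies a diffusion
# operator, then a reaction operator, over a chain of subvolumes.
def step(u, dt, dx, D, k):
    # diffusion half-step: explicit finite difference, zero-flux update at ends
    lap = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        lap[i] = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
    u = [ui + dt * D * li for ui, li in zip(u, lap)]
    # reaction half-step: first-order decay A -> 0 in every subvolume
    return [ui * (1 - dt * k) for ui in u]

u = [0.0, 0.0, 100.0, 0.0, 0.0]     # initial pulse in the middle subvolume
for _ in range(10):
    u = step(u, dt=0.01, dx=1.0, D=1.0, k=0.5)
print(round(sum(u), 6))
```

    A parallel version assigns contiguous groups of subvolumes to MPI processes and exchanges only the boundary values needed by the diffusion operator each step.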

  5. Parallel Guessing: A Strategy for High-Speed Computation

    DTIC Science & Technology

    1984-09-19

    for using additional hardware to obtain higher processing speed). In this paper we argue that parallel guessing for image analysis is a useful...from a true solution, or the correctness of a guess, can be readily checked. We review image - analysis algorithms having a parallel guessing or

  6. 76 FR 2853 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Infrastructure State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-18

    ... technical analysis submitted for parallel-processing by DNREC on December 9, 2010, to address significant... technical analysis submitted by DNREC for parallel-processing on December 9, 2010, to satisfy the... consists of a technical analysis that provides detailed support for Delaware's position that it has...

  7. Tracking the Continuity of Language Comprehension: Computer Mouse Trajectories Suggest Parallel Syntactic Processing

    ERIC Educational Resources Information Center

    Farmer, Thomas A.; Cargill, Sarah A.; Hindy, Nicholas C.; Dale, Rick; Spivey, Michael J.

    2007-01-01

    Although several theories of online syntactic processing assume the parallel activation of multiple syntactic representations, evidence supporting simultaneous activation has been inconclusive. Here, the continuous and non-ballistic properties of computer mouse movements are exploited, by recording their streaming x, y coordinates to procure…

  8. Parallel and Serial Processes in Visual Search

    ERIC Educational Resources Information Center

    Thornton, Thomas L.; Gilden, David L.

    2007-01-01

    A long-standing issue in the study of how people acquire visual information centers around the scheduling and deployment of attentional resources: Is the process serial, or is it parallel? A substantial empirical effort has been dedicated to resolving this issue. However, the results remain largely inconclusive because the methodologies that have…

  9. Using Motivational Interviewing Techniques to Address Parallel Process in Supervision

    ERIC Educational Resources Information Center

    Giordano, Amanda; Clarke, Philip; Borders, L. DiAnne

    2013-01-01

    Supervision offers a distinct opportunity to experience the interconnection of counselor-client and counselor-supervisor interactions. One product of this network of interactions is parallel process, a phenomenon by which counselors unconsciously identify with their clients and subsequently present to their supervisors in a similar fashion…

  10. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

With increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further impede rapid modeling and analysis. Using the widely applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters of the parallel SWAT (P-SWAT), the number of processes and the corresponding percentage of work distributed to the master process, on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme between the master and slave processes. Although the computation time decreases with an increasing number of processes (from two to five), this enhancement diminishes because of the accompanying increase in message passing between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
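
    The two tuning parameters discussed above (the number of processes and the share of work kept by the master) can be sketched as a simple task-distribution routine; the function and its arguments are hypothetical illustrations, not P-SWAT's actual interface:

```python
# Illustrative master/slave work split: the master keeps a fixed fraction of
# the subbasin computations and the remainder is spread evenly over slaves.
def distribute(n_subbasins, n_procs, master_frac):
    """Return a per-process list of subbasin counts; index 0 is the master."""
    master_share = int(n_subbasins * master_frac)
    rest = n_subbasins - master_share
    slaves = n_procs - 1
    shares = [master_share] + [rest // slaves] * slaves
    for i in range(rest % slaves):          # spread any remainder over slaves
        shares[1 + i] += 1
    return shares

print(distribute(100, 5, 0.12))  # prints [12, 22, 22, 22, 22]
```

    Because the master also coordinates message passing, its fraction is typically tuned below an even share, which is the kind of trade-off the calibration in the study explores.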

  11. What Multilevel Parallel Programs do when you are not Watching: A Performance Analysis Case Study Comparing MPI/OpenMP, MLP, and Nested OpenMP

    NASA Technical Reports Server (NTRS)

    Jost, Gabriele; Labarta, Jesus; Gimenez, Judit

    2004-01-01

With the current trend in parallel computer architectures towards clusters of shared-memory symmetric multiprocessors, parallel programming techniques have evolved that support parallelism beyond a single level. When comparing the performance of applications based on different programming paradigms, it is important to differentiate between the influence of the programming model itself and other factors, such as implementation-specific behavior of the operating system (OS) or architectural issues. Rewriting a large scientific application to employ a new programming paradigm is usually a time-consuming and error-prone task. Before embarking on such an endeavor it is important to determine that there is really a gain that would not be possible with the current implementation. A detailed performance analysis is crucial to clarify these issues. The multilevel programming paradigms considered in this study are hybrid MPI/OpenMP, MLP, and nested OpenMP. The hybrid MPI/OpenMP approach is based on using MPI [7] for the coarse-grained parallelization and OpenMP [9] for fine-grained loop-level parallelism. The MPI programming paradigm assumes a private address space for each process. Data is transferred by explicitly exchanging messages via calls to the MPI library. This model was originally designed for distributed memory architectures but is also suitable for shared memory systems. The second paradigm under consideration is MLP, which was developed by Taft. The approach is similar to MPI/OpenMP, using a mix of coarse-grained process-level parallelization and loop-level OpenMP parallelization. As is the case with MPI, a private address space is assumed for each process. The MLP approach was developed for ccNUMA architectures and explicitly takes advantage of the availability of shared memory. A shared memory arena which is accessible by all processes is required. Communication is done by reading from and writing to the shared memory.
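
    The two-level decomposition common to these paradigms can be illustrated by mapping loop iterations to (process, thread) pairs: contiguous blocks per process at the coarse level, and a cyclic split across threads at the loop level. The mapping function is an illustrative sketch, not taken from the study:

```python
# Illustrative two-level decomposition in the spirit of hybrid MPI/OpenMP:
# the outer level splits the iteration space across processes, the inner
# level splits each process's block across its threads.
def hybrid_map(n, n_procs, n_threads):
    """Return {(proc, thread): [iteration indices]} for a loop of n iterations."""
    mapping = {}
    per_proc = (n + n_procs - 1) // n_procs
    for p in range(n_procs):
        block = range(p * per_proc, min((p + 1) * per_proc, n))
        # fine-grained level: cyclic split of the block across threads
        for j, i in enumerate(block):
            mapping.setdefault((p, j % n_threads), []).append(i)
    return mapping

print(hybrid_map(8, 2, 2))
```

    In a real hybrid code, the outer split would be MPI ranks exchanging messages and the inner split an OpenMP worksharing loop; the mapping itself is the same idea.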

  12. SCORPIO: A Scalable Two-Phase Parallel I/O Library With Application To A Large Scale Subsurface Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Sripathi, Vamsi; Mills, Richard T

    2013-01-01

Inefficient parallel I/O is known to be a major bottleneck among scientific applications employed on supercomputers as the number of processor cores grows into the thousands. Our prior experience indicated that parallel I/O libraries such as HDF5 that rely on MPI-IO do not scale well beyond 10K processor cores, especially on parallel file systems (like Lustre) with a single point of resource contention. Our previous optimization efforts for a massively parallel multi-phase and multi-component subsurface simulator (PFLOTRAN) led to a two-phase I/O approach at the application level, where a set of designated processes participate in the I/O process by splitting the I/O operation into a communication phase and a disk I/O phase. The designated I/O processes are created by splitting the MPI global communicator into multiple sub-communicators. The root process in each sub-communicator is responsible for performing the I/O operations for the entire group and then distributing the data to the rest of the group. This approach resulted in over 25X speedup in HDF I/O read performance and 3X speedup in write performance for PFLOTRAN at over 100K processor cores on the ORNL Jaguar supercomputer. This research describes the design and development of a general-purpose parallel I/O library, SCORPIO (SCalable block-ORiented Parallel I/O), that incorporates our optimized two-phase I/O approach. The library provides a simplified higher-level abstraction to the user, sitting atop existing parallel I/O libraries (such as HDF5), and implements optimized I/O access patterns that can scale to a larger number of processors. Performance results with standard benchmark problems and PFLOTRAN indicate that our library is able to maintain the same speedups as before, with the added flexibility of being applicable to a wider range of I/O-intensive applications.
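
    The communicator-splitting step can be sketched in plain Python: ranks are grouped by a color value (as with MPI's communicator split), and the lowest rank of each group acts as the designated I/O process that gathers its group's data before writing. The group size and function names are illustrative assumptions:

```python
# Sketch of the two-phase grouping described above, without MPI: ranks are
# grouped into sub-communicators by color, and each group's lowest rank is
# the designated I/O process that would gather data (communication phase)
# and issue one disk write for the whole group (I/O phase).
def two_phase_groups(n_ranks, group_size):
    """Return {color: (io_root_rank, member_ranks)} for all ranks."""
    groups = {}
    for rank in range(n_ranks):
        color = rank // group_size          # sub-communicator id for this rank
        groups.setdefault(color, []).append(rank)
    return {color: (members[0], members) for color, members in groups.items()}

print(two_phase_groups(8, 4))  # prints {0: (0, [0, 1, 2, 3]), 1: (4, [4, 5, 6, 7])}
```

    With MPI this corresponds to calling MPI_Comm_split with `color = rank // group_size`, then gathering to rank 0 of each sub-communicator, so the file system sees far fewer concurrent writers.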

  13. Visualization Co-Processing of a CFD Simulation

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    1999-01-01

OVERFLOW, a widely used CFD simulation code, is combined with a visualization system, pV3, to experiment with an environment for simulation/visualization co-processing on an SGI Origin 2000 (O2K) system. The shared memory version of the solver is used, with the O2K 'pfa' preprocessor invoked to automatically discover parallelism in the source code. No other explicit parallelism is enabled. To study the scaling and performance of the visualization co-processing system, sample runs are made with processor groups ranging from 1 to 254 processors. The data exchange between the visualization system and the simulation system is rapid enough for user interactivity when the problem size is small. This shared memory version of OVERFLOW, with minimal parallelization, does not scale well to an increasing number of available processors. The visualization task takes about 18 to 30% of the total processing time and does not appear to be a major contributor to the poor scaling. Improper load balancing and inter-processor communication overhead are contributors to this poor performance. Work is in progress aimed at obtaining improved parallel performance of the solver and removing the limitations of serial data transfer to pV3 by examining various parallelization/communication strategies, including the use of explicit message passing.

  14. PaFlexPepDock: parallel ab-initio docking of peptides onto their receptors with full flexibility based on Rosetta.

    PubMed

    Li, Haiou; Lu, Liyao; Chen, Rong; Quan, Lijun; Xia, Xiaoyan; Lü, Qiang

    2014-01-01

Structural information related to protein-peptide complexes can be very useful for novel drug discovery and design. The computational docking of protein and peptide can supplement the structural information available on protein-peptide interactions explored by experimental means. The protein-peptide docking described in this paper comprises three processes that occur in parallel: ab-initio peptide folding, peptide docking with its receptor, and refinement of some flexible areas of the receptor as the peptide approaches. Several existing methods have been used to sample the degrees of freedom in the three processes, which are usually triggered in an organized sequential scheme. In this paper, we propose a parallel approach that combines all three processes during the docking of a folding peptide with a flexible receptor. This approach mimics the actual protein-peptide docking process in a parallel way, and is expected to deliver better performance than sequential approaches. We used 22 unbound protein-peptide docking examples to evaluate our method. Our analysis of the results showed that the explicit refinement of the flexible areas of the receptor facilitated more accurate modeling of the interfaces of the complexes, while combining all of the moves in parallel helped construct energy funnels for prediction.

  15. Solvent extraction employing a static micromixer: a simple, robust and versatile technology for the microencapsulation of proteins.

    PubMed

    Freitas, S; Walz, A; Merkle, H P; Gander, B

    2003-01-01

The potential of a static micromixer for the production of protein-loaded biodegradable polymeric microspheres by a modified solvent extraction process was examined. The mixer consists of an array of microchannels; it features a simple set-up, occupies very little space, lacks moving parts and offers simple control of the microsphere size. Scale-up from lab bench to industrial production is easily feasible through parallel installation of a sufficient number of micromixers ('number-up'). Poly(lactic-co-glycolic acid) microspheres loaded with a model protein, bovine serum albumin (BSA), were prepared. The influence of various process and formulation parameters on the characteristics of the microspheres was examined, with special focus on particle size distribution. Microspheres with monomodal size distributions and mean diameters of 5-30 µm were produced with excellent reproducibility. Particle size distributions were largely unaffected by polymer solution concentration, polymer type and nominal BSA load, but depended on the polymer solvent. Moreover, particle mean diameters could be varied over a considerable range by modulating the flow rates of the mixed fluids. BSA encapsulation efficiencies were mostly in the region of 75-85%, and product yields ranged from 90 to 100%. Because of its simple set-up and its suitability for continuous production, static micromixing is suggested for the automated and aseptic production of protein-loaded microspheres.

  16. A heating experiment in the argillites in the Meuse/Haute-Marne underground research laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wileveau, Yannick; Su, Kun; Ghoreychi, Mehdi

    2007-07-01

A heating experiment named TER is being conducted with the objectives of identifying the thermal properties and enhancing the knowledge of THM processes in the Callovo-Oxfordian clay at the Meuse/Haute-Marne Underground Research Laboratory (France). The in situ experiment has been running since early 2006. The heater, 3 m in length, is designed to inject power into the undisturbed zone 6 m from the gallery wall. A heater packer is inflated in a metallic tubing. During the experiment, numerous sensors emplaced in the surrounding rock monitor the evolution of temperature, pore-water pressure and deformation. The models and numerical codes applied should be validated by comparing the modeling results with the measurements. In parallel, laboratory tests have been carried out in order to compare the results obtained at two different scales (centimetre up to metre scale). In this paper, we present a general description of the TER experiment, covering the installation of the heater equipment and the surrounding instrumentation. Details of the in situ measurements of temperature, pore-pressure and strain evolution are given for the several heating and cooling phases. The thermal conductivity and some predominant parameters in THM processes (such as the linear thermal expansion coefficient and permeability) will be discussed. (authors)

  17. Active learning in the space engineering education at Technical University of Madrid

    NASA Astrophysics Data System (ADS)

    Rodríguez, Jacobo; Laverón-Simavilla, Ana; Lapuerta, Victoria; Ezquerro Navarro, Jose Miguel; Cordero-Gracia, Marta

This work describes the innovative activities performed in the field of space education at the Technical University of Madrid (UPM), in collaboration with the center engaged by the European Space Agency (ESA) in Spain to support the operations of scientific experiments on board the International Space Station (E-USOC). These activities have been integrated into the last academic year of the Aerospace Engineering degree. A laboratory has been created where the students validate and integrate the subsystems of a microsatellite using demonstrator satellites. With the acquired skills, the students take part in a training process centered on Project-Based Learning, in which they work in groups to perform the conceptual design of a space mission, with each student responsible for the design of one satellite subsystem and one student responsible for the mission design. In parallel, the students train on a ground station, installed at the E-USOC building, which allows them to learn how to communicate with satellites, how to download telemetry and how to process the data. This also allows students to learn how the E-USOC works. Two surveys have been conducted to evaluate the impact of these techniques on the students' engineering skills and to gauge the students' satisfaction with these learning methodologies.

  18. Laser Therapy and Pain-Related Behavior after Injury of the Inferior Alveolar Nerve: Possible Involvement of Neurotrophins

    PubMed Central

    de Oliveira Martins, Daniel; Martinez dos Santos, Fabio; Evany de Oliveira, Mara; de Britto, Luiz R.G.; Benedito Dias Lemos, José

    2013-01-01

    Abstract Nerve-related complications have been frequently reported in dental procedures, and a very frequent type of occurrence involves the inferior alveolar nerve (IAN). The nerve injury in humans often results in persistent pain accompanied by allodynia and hyperalgesia. In this investigation, we used an experimental IAN injury in rats, which was induced by a Crile hemostatic clamp, to evaluate the effects of laser therapy on nerve repair. We also studied the nociceptive behavior (von Frey hair test) before and after the injury and the behavioral effects of treatment with laser therapy (emitting a wavelength of 904 nm, output power of 70 Wpk, a spot area of ∼0.1 cm2, frequency of 9500 Hz, pulse time 60 ns and an energy density of 6 J/cm2). As neurotrophins are essential for the process of nerve regeneration, we used immunoblotting techniques to preliminarily examine the effects of laser therapy on the expression of nerve growth factor (NGF) and brain-derived neurotrophic factor (BDNF). The injured animals treated with laser exhibited an improved nociceptive behavior. In irradiated animals, there was an enhanced expression of NGF (53%) and a decreased BDNF expression (40%) after laser therapy. These results indicate that BDNF plays a locally crucial role in pain-related behavior development after IAN injury, increasing after lesions (in parallel to the installation of pain behavior) and decreasing with laser therapy (in parallel to the improvement of pain behavior). On the other hand, NGF probably contributes to the repair of nerve tissue, in addition to improving the pain-related behavior. PMID:23190308

  19. Array Databases: Agile Analytics (not just) for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Baumann, P.; Misev, D.

    2015-12-01

Gridded data, such as images, image timeseries, and climate datacubes, today are managed separately from the metadata, and with different, restricted retrieval capabilities. While databases are good at metadata modelled in tables, XML hierarchies, or RDF graphs, they traditionally do not support multi-dimensional arrays. This gap is being closed by Array Databases, pioneered by the scalable rasdaman ("raster data manager") array engine. Its declarative query language, rasql, extends SQL with array operators which are optimized and parallelized on the server side. Installations can easily be mashed up securely, thereby enabling large-scale location-transparent query processing in federations. Domain experts value the integration with their commonly used tools, leading to a quick learning curve. Earth, Space, and Life sciences, but also Social sciences as well as business, have massive amounts of data and complex analysis challenges that are answered by rasdaman. As of today, rasdaman is mature and in operational use on hundreds of Terabytes of timeseries datacubes, with transparent query distribution across more than 1,000 nodes. Additionally, its concepts have shaped international Big Data standards in the field, including the forthcoming array extension to ISO SQL, many of which are meanwhile supported by both open-source and commercial systems. In the geo field, rasdaman is the reference implementation for the Open Geospatial Consortium (OGC) Big Data standard, WCS, now also under adoption by ISO. Further, rasdaman is in the final stage of OSGeo incubation. In this contribution we present array queries à la rasdaman, describe the architecture and the novel optimization and parallelization techniques introduced in 2015, and put this in the context of the intercontinental EarthServer initiative, which utilizes rasdaman for enabling agile analytics on Petascale datacubes.
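
    As a flavour of such declarative array processing, a hypothetical rasql query (the collection and band names are invented for illustration) averaging one band over a spatial subset of a datacube might look like:

```sql
-- hypothetical collection and band; avg_cells is a rasql aggregation operator
select avg_cells( c[100:199, 100:199].red )
from SatImagery as c
```

    The server evaluates the subsetting and aggregation close to the data, which is what allows the optimization, parallelization, and federation-wide distribution described above.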

  20. A Heuristic Evaluation of the Generalized Intelligent Framework for Tutoring (GIFT) Authoring Tools

    DTIC Science & Technology

    2016-03-01

    Software Is Difficult to Locate and Download from the GIFT Website 5 2.1.2 Issue: Unclear Process for Starting GIFT Software Installation 7 2.1.3 Issue...and change information availability as the user’s expertise in using the authoring tools grows. • Aesthetic and Minimalist Design: Dialogues should...public release; distribution is unlimited. 7 2.1.2 Issue: Unclear Process for Starting GIFT Software Installation Users may not understand how to
