Sample records for high quality parallel

  1. Parallel Reconstruction Using Null Operations (PRUNO)

    PubMed Central

    Zhang, Jian; Liu, Chunlei; Moseley, Michael E.

    2011-01-01

    A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated as linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated using singular value decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve for the missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than generalized autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, ultra-high-acceleration parallel imaging can be performed with decent image quality. For example, we have performed successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290
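    The two building blocks described above can be illustrated with a minimal numerical sketch: SVD-based calibration that extracts null-space operators from a calibration matrix built from the ACS data, and a conjugate-gradient solve for the missing k-space samples. This is an illustrative sketch only, not the authors' implementation; the calibration-matrix construction, the tolerance, the array shapes, and the helper names are assumptions.

```python
# Illustrative sketch of the two PRUNO building blocks (not the authors' code):
# (1) SVD-based calibration of null-space operators, (2) conjugate-gradient
# solve of the resulting linear system for the missing k-space samples.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def calibrate_null_operators(calib_matrix, rel_tol=1e-2):
    """Rows of Vh whose singular values are (near) zero span the null
    space of the calibration matrix built from the ACS data."""
    _, s, vh = np.linalg.svd(calib_matrix, full_matrices=True)
    sv = np.zeros(vh.shape[0]); sv[:s.size] = s
    return vh[sv < rel_tol * s.max()]            # null operators N

def fill_missing_samples(N, d_zero_filled, missing, n_iter=50):
    """Solve N d = 0 for the missing entries by CG on the normal
    equations, keeping the acquired samples fixed."""
    d = d_zero_filled.astype(complex).copy()
    idx = np.flatnonzero(missing)
    A = N[:, idx]                                # acts on unknowns only
    rhs = A.conj().T @ (-N @ d)
    AtA = LinearOperator((idx.size, idx.size), dtype=complex,
                         matvec=lambda x: A.conj().T @ (A @ x))
    x, _ = cg(AtA, rhs, maxiter=n_iter)
    d[idx] = x
    return d
```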

  2. Advanced fabrication of Si nanowire FET structures by means of a parallel approach.

    PubMed

    Li, J; Pud, S; Mayer, D; Vitusevich, S

    2014-07-11

    In this paper we present fabricated Si nanowires (NWs) of different dimensions with enhanced electrical characteristics. The parallel fabrication process is based on nanoimprint lithography using high-quality molds, which facilitates the realization of 50 nm-wide NW field-effect transistors (FETs). The imprint molds were fabricated using a wet chemical anisotropic etching process. The wet chemical etch results in well-defined vertical sidewalls with edge roughness (3σ) as small as 2 nm, which is about four times smaller than the roughness usually obtained for reactive-ion etching molds. The quality of the mold was studied using atomic force microscopy and scanning electron microscopy image data. The use of the high-quality mold leads to almost 100% yield during fabrication of Si NW FETs as well as to an exceptional quality of the surfaces of the devices produced. To characterize the Si NW FETs, we used noise spectroscopy as a powerful method for evaluating device performance and the reliability of structures with nanoscale dimensions. The Hooge parameter of the fabricated FET structures exhibits an average value of 1.6 × 10⁻³. This value reflects the high quality of Si NW FETs fabricated by means of a parallel approach that uses a nanoimprint mold and cost-efficient technology.
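    For context, the Hooge parameter quoted above is the dimensionless constant alpha_H in Hooge's empirical relation for 1/f noise (standard background, not restated in the paper), where S_V is the voltage noise power spectral density, V the bias voltage, N the number of charge carriers, and f the frequency:

        \[ \frac{S_V(f)}{V^2} \;=\; \frac{\alpha_H}{N\,f} \]

    A smaller alpha_H therefore indicates lower low-frequency noise for a given carrier number.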

  3. Renal magnetic resonance angiography at 3.0 Tesla using a 32-element phased-array coil system and parallel imaging in 2 directions.

    PubMed

    Fenchel, Michael; Nael, Kambiz; Deshpande, Vibhas S; Finn, J Paul; Kramer, Ulrich; Miller, Stephan; Ruehm, Stefan; Laub, Gerhard

    2006-09-01

    The aim of the present study was to assess the feasibility of renal magnetic resonance angiography at 3.0 T using a phased-array coil system with 32 coil elements. Specifically, high parallel imaging factors were used for increased spatial resolution and anatomic coverage of the whole abdomen. Signal-to-noise values and the g-factor distribution of the 32-element coil were examined in phantom studies for the magnetic resonance angiography (MRA) sequence. Eleven volunteers (6 men, median age of 30.0 years) were examined on a 3.0-T MR scanner (Magnetom Trio, Siemens Medical Solutions, Malvern, PA) using a 32-element phased-array coil (prototype from In vivo Corp.). Contrast-enhanced 3D-MRA (TR 2.95 milliseconds, TE 1.12 milliseconds, flip angle 25-30 degrees, bandwidth 650 Hz/pixel) was acquired with integrated generalized autocalibrating partially parallel acquisition (GRAPPA) in both the phase- and slice-encoding directions. Images were assessed by 2 independent observers with regard to image quality, noise, and presence of artifacts. Signal-to-noise levels of 22.2 +/- 22.0 and 57.9 +/- 49.0 were measured with (GRAPPA x6) and without parallel imaging, respectively. The mean g-factor of the 32-element coil for GRAPPA with an acceleration of 3 and 2 in the phase-encoding and slice-encoding direction, respectively, was 1.61. High image quality was found in 9 of 11 volunteers (2.6 +/- 0.8) with good overall interobserver agreement (kappa = 0.87). Relatively low image quality with higher noise levels was encountered in 2 volunteers. MRA at 3.0 T using a 32-element phased-array coil is feasible in healthy volunteers. High diagnostic image quality and extended anatomic coverage could be achieved with application of high parallel imaging factors.
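    The SNR cost of the acceleration reported above follows the standard parallel-imaging relation (general background, not a result of this study), where R is the total acceleration factor and g is the coil geometry factor:

        \[ \mathrm{SNR}_{\mathrm{accelerated}} \;=\; \frac{\mathrm{SNR}_{\mathrm{full}}}{g\,\sqrt{R}} \]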

  4. Optimizing a realistic large-scale frequency assignment problem using a new parallel evolutionary approach

    NASA Astrophysics Data System (ADS)

    Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.

    2011-08-01

    This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as the frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real-data FAP instances are very difficult to solve due to the NP-hard nature of the problem; therefore, an efficient parallel approach that makes the most of different evolutionary strategies is a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results show that the proposed approach obtains very high-quality solutions for the FAP and outperforms all previously published results.
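    As a rough illustration of a parallel multi-start search of the kind the article builds on (a hedged sketch only: the cost function, move operator, and island count below are toy placeholders, not the authors' hyper-heuristic or the real FAP model):

```python
# Toy sketch of island-style parallel search: several independently seeded
# heuristics run in parallel and the best frequency assignment wins.
import random
from concurrent.futures import ProcessPoolExecutor

N_TRX, N_FREQ = 200, 18            # transceivers, available frequencies

def cost(assign):                  # placeholder interference cost
    return sum(assign[i] == assign[i + 1] for i in range(len(assign) - 1))

def local_search(seed, iters=20000):
    rng = random.Random(seed)
    best = [rng.randrange(N_FREQ) for _ in range(N_TRX)]
    best_c = cost(best)
    for _ in range(iters):         # simple hill climbing as one "heuristic"
        cand = best[:]
        cand[rng.randrange(N_TRX)] = rng.randrange(N_FREQ)
        c = cost(cand)
        if c <= best_c:
            best, best_c = cand, c
    return best_c, best

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:   # islands evolve in parallel
        results = list(pool.map(local_search, range(8)))
    print("best interference cost:", min(results)[0])
```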

  5. Resolutions of the Coulomb operator: VIII. Parallel implementation using the modern programming language X10.

    PubMed

    Limpanuparb, Taweetham; Milthorpe, Josh; Rendell, Alistair P

    2014-10-30

    Use of the modern parallel programming language X10 for computing long-range Coulomb and exchange interactions is presented. By using X10, a partitioned global address space language with support for task parallelism and the explicit representation of data locality, the resolution of the Ewald operator can be parallelized in a straightforward manner, including the use of both intranode and internode parallelism. We evaluate four different schemes for dynamic load balancing of the integral calculation using X10's work-stealing runtime, and report performance results for long-range HF energy calculations of a large molecule with a high-quality basis set running on up to 1024 cores of a high-performance cluster machine. Copyright © 2014 Wiley Periodicals, Inc.
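    The dynamic load-balancing idea, letting idle workers pull integral blocks of very uneven cost from a shared pool (which X10's work-stealing runtime provides natively), can be mimicked with a shared task queue; this is only a Python analogy for illustration, not the X10 code:

```python
# Sketch of dynamic load balancing for tasks of uneven cost: workers pull
# "integral blocks" from a shared queue, an analogue of work stealing.
import math
import queue
import threading

tasks = queue.Queue()
for block in range(64):                       # blocks with very uneven cost
    tasks.put(block)

results, lock = [], threading.Lock()

def worker():
    while True:
        try:
            block = tasks.get_nowait()
        except queue.Empty:
            return
        val = sum(math.sin(i) for i in range((block % 7 + 1) * 50_000))
        with lock:
            results.append((block, val))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(len(results), "blocks done")
```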

  6. Vortex phase diagram of the layered superconductor Cu0.03TaS2 for H \\parallel c

    NASA Astrophysics Data System (ADS)

    Zhu, X. D.; Lu, J. C.; Sun, Y. P.; Pi, L.; Qu, Z.; Ling, L. S.; Yang, Z. R.; Zhang, Y. H.

    2010-12-01

    The magnetization and anisotropic electrical transport properties have been measured in high quality Cu0.03TaS2 single crystals. A pronounced peak effect has been observed, indicating that high quality and homogeneity are vital to the peak effect. A kink has been observed in the magnetic field, H, dependence of the in-plane resistivity ρab for H \parallel c, which corresponds to a transition from activated to diffusive behavior of the vortex liquid phase. In the diffusive regime of the vortex liquid phase, the in-plane resistivity ρab is proportional to H0.3, which does not follow the Bardeen-Stephen law for free flux flow. Finally, a simplified vortex phase diagram of Cu0.03TaS2 for H \parallel c is given.
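    For reference, the Bardeen-Stephen free-flux-flow law against which the measured ρab ∝ H^0.3 is contrasted predicts a linear field dependence (standard expression, with ρn the normal-state resistivity and Hc2 the upper critical field):

        \[ \rho_{f} \;\simeq\; \rho_{n}\,\frac{H}{H_{c2}} \]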

  7. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  8. Vortex phase diagram of the layered superconductor Cu0.03TaS2 for H is parallel to c.

    PubMed

    Zhu, X D; Lu, J C; Sun, Y P; Pi, L; Qu, Z; Ling, L S; Yang, Z R; Zhang, Y H

    2010-12-22

    The magnetization and anisotropic electrical transport properties have been measured in high quality Cu0.03TaS2 single crystals. A pronounced peak effect has been observed, indicating that high quality and homogeneity are vital to the peak effect. A kink has been observed in the magnetic field, H, dependence of the in-plane resistivity ρab for H is parallel to c, which corresponds to a transition from activated to diffusive behavior of the vortex liquid phase. In the diffusive regime of the vortex liquid phase, the in-plane resistivity ρab is proportional to H0.3, which does not follow the Bardeen-Stephen law for free flux flow. Finally, a simplified vortex phase diagram of Cu0.03TaS2 for H is parallel to c is given.

  9. Drug innovation, price controls, and parallel trade.

    PubMed

    Matteucci, Giorgio; Reverberi, Pierfrancesco

    2016-12-21

    We study the long-run welfare effects of parallel trade (PT) in pharmaceuticals. We develop a two-country model of PT with endogenous quality, where the pharmaceutical firm negotiates the price of the drug with the government in the foreign country. We show that, even though the foreign government does not consider global R&D costs, (the threat of) PT improves the quality of the drug as long as the foreign consumers' valuation of quality is high enough. We find that the firm's short-run profit may be higher when PT is allowed. Nonetheless, this is neither necessary nor sufficient for improving drug quality in the long run. We also show that improving drug quality is a sufficient condition for PT to increase global welfare. Finally, we show that, when PT is allowed, drug quality may be higher with than without price controls.

  10. HEVC real-time decoding

    NASA Astrophysics Data System (ADS)

    Bross, Benjamin; Alvarez-Mesa, Mauricio; George, Valeri; Chi, Chi Ching; Mayer, Tobias; Juurlink, Ben; Schierl, Thomas

    2013-09-01

    The new High Efficiency Video Coding Standard (HEVC) was finalized in January 2013. Compared to its predecessor H.264 / MPEG4-AVC, this new international standard is able to reduce the bitrate by 50% for the same subjective video quality. This paper investigates decoder optimizations that are needed to achieve HEVC real-time software decoding on a mobile processor. It is shown that HEVC real-time decoding up to high definition video is feasible using instruction extensions of the processor while decoding 4K ultra high definition video in real-time requires additional parallel processing. For parallel processing, a picture-level parallel approach has been chosen because it is generic and does not require bitstreams with special indication.

  11. Fast l₁-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime.

    PubMed

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-06-01

    We present l₁-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l₁-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l₁-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l₁-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
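    The sparsity step described above, joint (cross-channel) soft-thresholding in the wavelet domain, can be sketched as follows. This is a minimal illustration assuming the PyWavelets package; the SPIRiT consistency step, data fidelity, and sampling details are omitted, and the threshold value is a placeholder:

```python
# Minimal sketch of joint (cross-channel) soft-thresholding in the wavelet
# domain, the sparsity step of an l1-SPIRiT-style iteration (illustrative).
import numpy as np
import pywt

def joint_soft_threshold(images, wavelet="db4", level=3, lam=0.01):
    """images: (n_channels, ny, nx) complex coil images."""
    coeffs = [pywt.wavedec2(im, wavelet, level=level) for im in images]
    arrays, slices = zip(*[pywt.coeffs_to_array(c) for c in coeffs])
    stack = np.stack(arrays)                     # (n_channels, ...)
    # joint l1/l2 shrinkage: shrink the across-channel magnitude
    norm = np.sqrt((np.abs(stack) ** 2).sum(axis=0, keepdims=True))
    stack *= np.maximum(1.0 - lam / np.maximum(norm, 1e-12), 0.0)
    return np.array([
        pywt.waverec2(pywt.array_to_coeffs(a, slices[0], output_format="wavedec2"),
                      wavelet)
        for a in stack
    ])
```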

  12. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    PubMed Central

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the Wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via poisson-disc undersampling in the two phase-encoded directions. PMID:22345529

  13. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    NASA Astrophysics Data System (ADS)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and a partial separability (PS) model. In data acquisition, the k-space data are moderately randomly undersampled at the central k-space navigator locations, but highly undersampled in the outer k-space for each temporal frame. In reconstruction, the navigator data are reconstructed from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data are estimated, the partial separability model is used to obtain partial k-t data. Then a parallel imaging method is used to recover the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, even when the conventional PS method fails.
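    The structured low-rank completion step used for the navigator data can be illustrated with a generic sketch of iterative singular-value truncation (illustrative only; the actual structured matrix, sampling pattern, and PS model fitting in the paper are more involved):

```python
# Illustrative sketch of low-rank matrix completion by iterative singular
# value truncation: acquired entries are enforced, missing entries are
# filled in from a rank-truncated estimate (not the authors' algorithm).
import numpy as np

def lowrank_complete(data, mask, rank=8, n_iter=100):
    """data: (n_k, n_t) Casorati-style matrix with zeros where unsampled;
    mask: boolean array, True where samples were acquired."""
    est = data.copy()
    for _ in range(n_iter):
        u, s, vh = np.linalg.svd(est, full_matrices=False)
        s[rank:] = 0.0                       # hard rank truncation
        est = (u * s) @ vh
        est[mask] = data[mask]               # keep acquired samples
    return est
```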

  14. Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging

    PubMed Central

    Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.

    2014-01-01

    Purpose In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed up in computation time on the order of 3–16X for 32-channel data sets. Conclusion The proposed method enables high quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602

  15. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies as well as newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
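    For reference, an additive lagged Fibonacci generator follows the recurrence x_n = (x_{n-j} + x_{n-k}) mod 2^m, and a parallel version typically runs one independently seeded lag table per hardware unit. A small software sketch (the lag pair (5, 17) and the seeding scheme are illustrative choices, not necessarily those of the paper):

```python
# Sketch of an additive lagged Fibonacci generator (ALFG) with per-stream
# lag tables, the software analogue of independent parallel units on an FPGA.
import random

class ALFG:
    def __init__(self, seed, j=5, k=17, m=32):
        rng = random.Random(seed)             # fill the lag table
        self.state = [rng.getrandbits(m) | 1 for _ in range(k)]
        self.j, self.k, self.mask = j, k, (1 << m) - 1
        self.i = 0

    def next(self):
        j, k = self.j, self.k
        x = (self.state[(self.i - j) % k] + self.state[(self.i - k) % k]) & self.mask
        self.state[self.i % k] = x            # x_n = x_{n-j} + x_{n-k} mod 2^m
        self.i += 1
        return x

streams = [ALFG(seed=s) for s in range(4)]    # one generator per parallel unit
print([g.next() for g in streams])
```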

  16. Feasibility and its characteristics of CO2 laser micromachining-based PMMA anti-scattering grid estimated by MCNP code simulation.

    PubMed

    Bae, Jun Woo; Kim, Hee Reyoung

    2018-01-01

    Anti-scattering grids have been used to improve image quality. However, applying a commonly used linear or parallel grid can cause image distortion, and a focusing grid requires precise and therefore expensive fabrication technology. The aim of this work was to investigate whether a CO2 laser micromachining-based PMMA anti-scattering grid can improve grid performance, and thereby image quality, at a lower cost. The cross-sectional shape of CO2 laser-machined PMMA resembles the letter 'V'. Performance was characterized by the contrast improvement factor (CIF) and the Bucky factor. Four types of grid were tested: thin parallel, thick parallel, 'V'-type, and 'inverse V'-type. For a Bucky factor of 2.1, the CIF of both the 'V' and inverse 'V' grids was 1.53, while the thin and thick parallel types had values of 1.43 and 1.65, respectively. The 'V'-shape grid manufactured by CO2 laser micromachining showed a higher CIF than the parallel grid with the same shielding-material channel width. The 'V'-shape grid could thus replace the conventional parallel grid when it is difficult to fabricate a high-aspect-ratio grid.
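    The two figures of merit used above have standard definitions (general background, not restated in the abstract): the contrast improvement factor compares contrast with and without the grid, and the Bucky factor is the dose penalty paid for using it:

        \[ \mathrm{CIF} = \frac{C_{\text{with grid}}}{C_{\text{without grid}}}, \qquad B = \frac{\text{exposure required with grid}}{\text{exposure required without grid}} \]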

  17. Scalable Parallel Density-based Clustering and Applications

    NASA Astrophysics Data System (ADS)

    Patwary, Mostofa Ali

    2014-04-01

    Recently, density-based clustering algorithms (DBSCAN and OPTICS) have received significant attention from the scientific community due to their unique capability of discovering arbitrarily shaped clusters and eliminating noise data. These algorithms have several applications that require high performance computing, including finding halos and subhalos (clusters) in massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelization of these algorithms is extremely challenging, as they exhibit an inherently sequential data access order and unbalanced workloads, resulting in low parallel efficiency. To break the sequential data access and to achieve high parallelism, we develop new parallel algorithms, for both DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups up to 27.5 on 40 cores on a shared memory architecture and speedups up to 5,765 using 8,192 cores on a distributed memory architecture. In our experiments, we found that while achieving this scalability, our algorithms produce clustering results with quality comparable to the classical algorithms.
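    The connection mentioned above, that DBSCAN can be recast as finding connected components among core points, can be shown with a compact serial union-find sketch (this is only the graph formulation for illustration; the paper's contribution is the parallel, distributed version of this idea):

```python
# Serial sketch of DBSCAN expressed as connected components over core
# points via union-find; labels are component-root ids, -1 marks noise.
import numpy as np

def dbscan_via_components(points, eps=0.5, min_pts=5):
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    nbrs = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    core = np.array([len(nb) >= min_pts for nb in nbrs])

    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    for i in np.flatnonzero(core):            # connect core points within eps
        for j in nbrs[i]:
            if core[j]:
                parent[find(j)] = find(i)

    labels = np.full(n, -1)                   # attach border points, keep noise
    for i in range(n):
        core_nb = [j for j in nbrs[i] if core[j]]
        if core_nb:
            labels[i] = find(core_nb[0])
    return labels

pts = np.vstack([np.random.randn(60, 2), np.random.randn(60, 2) + 6.0])
print(np.unique(dbscan_via_components(pts, eps=0.7, min_pts=4)))
```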

  18. Mesh quality oriented 3D geometric vascular modeling based on parallel transport frame.

    PubMed

    Guo, Jixiang; Li, Shun; Chui, Yim Pan; Qin, Jing; Heng, Pheng Ann

    2013-08-01

    While a number of methods have been proposed to reconstruct geometrically and topologically accurate 3D vascular models from medical images, little attention has been paid to constantly maintain high mesh quality of these models during the reconstruction procedure, which is essential for many subsequent applications such as simulation-based surgical training and planning. We propose a set of methods to bridge this gap based on parallel transport frame. An improved bifurcation modeling method and two novel trifurcation modeling methods are developed based on 3D Bézier curve segments in order to ensure the continuous surface transition at furcations. In addition, a frame blending scheme is implemented to solve the twisting problem caused by frame mismatch of two successive furcations. A curvature based adaptive sampling scheme combined with a mesh quality guided frame tilting algorithm is developed to construct an evenly distributed, non-concave and self-intersection free surface mesh for vessels with distinct radius and high curvature. Extensive experiments demonstrate that our methodology can generate vascular models with better mesh quality than previous methods in terms of surface mesh quality criteria. Copyright © 2013 Elsevier Ltd. All rights reserved.
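    The parallel transport frame underlying the method can be computed by carrying a normal vector along the centerline and rotating it only by the minimal rotation between successive tangents, which avoids the twisting of Frenet frames. A small sketch (the Bézier furcation modeling, frame blending, and adaptive sampling from the paper are not included):

```python
# Sketch of a parallel transport frame along a discrete centerline: the
# normal is carried forward with the minimal rotation between successive
# tangents (Rodrigues' rotation), avoiding Frenet-frame twist.
import numpy as np

def parallel_transport_frames(centerline):
    pts = np.asarray(centerline, dtype=float)
    tangents = np.gradient(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    # initial normal: any unit vector orthogonal to the first tangent
    t0 = tangents[0]
    ref = np.array([0.0, 0.0, 1.0]) if abs(t0[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    n = np.cross(t0, ref); n /= np.linalg.norm(n)

    frames = [(t0, n, np.cross(t0, n))]
    for t_prev, t_next in zip(tangents[:-1], tangents[1:]):
        axis = np.cross(t_prev, t_next)
        s, c = np.linalg.norm(axis), np.dot(t_prev, t_next)
        if s > 1e-12:                         # rotate n by the angle between
            axis = axis / s                   # consecutive tangents
            n = (n * c + np.cross(axis, n) * s
                 + axis * np.dot(axis, n) * (1.0 - c))
        frames.append((t_next, n, np.cross(t_next, n)))
    return frames

u = np.linspace(0, 4 * np.pi, 200)
helix = np.c_[np.cos(u), np.sin(u), np.linspace(0, 2, 200)]
frames = parallel_transport_frames(helix)
print(frames[-1][1])                          # normal at the end of the curve
```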

  19. Design Method of Digital Optimal Control Scheme and Multiple Paralleled Bridge Type Current Amplifier for Generating Gradient Magnetic Fields in MRI Systems

    NASA Astrophysics Data System (ADS)

    Watanabe, Shuji; Takano, Hiroshi; Fukuda, Hiroya; Hiraki, Eiji; Nakaoka, Mutsuo

    This paper deals with a digital control scheme for a multiple-paralleled high-frequency switching current amplifier with a four-quadrant chopper for generating gradient magnetic fields in MRI (Magnetic Resonance Imaging) systems. In order to track highly precise current patterns in the gradient coils (GC), the proposed current amplifier cancels the switching current ripples in the GC against each other and uses optimally designed switching gate pulse patterns that are not influenced by the large filter current ripple amplitude. The optimal control implementation and linear control theory for GC current amplifiers complement each other and give excellent characteristics. The digital control system can be realized easily through a digital implementation on DSPs or microprocessors. Microprocessors operating in parallel realize a two-or-more-paralleled GC current-pattern tracking amplifier with the optimal control design, and excellent results are given for improving the image quality of MRI systems.

  20. Methods and systems for fabricating high quality superconducting tapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majkic, Goran; Selvamanickam, Venkat

    An MOCVD system fabricates high quality superconductor tapes with variable thicknesses. The MOCVD system can include a gas flow chamber between two parallel channels in a housing. A substrate tape is heated and then passed through the MOCVD housing such that the gas flow is perpendicular to the tape's surface. Precursors are injected into the gas flow for deposition on the substrate tape. In this way, superconductor tapes can be fabricated with variable thicknesses, uniform precursor deposition, and high critical current densities.

  1. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  2. Fast Whole-Engine Stirling Analysis

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako

    2006-01-01

    This presentation discusses the whole-engine simulation approach with respect to physical consistency, REV regenerator modeling, grid layering for smoothness and quality, conjugate heat transfer method adjustment, a high-speed low-cost parallel cluster, and debugging.

  3. Xyce parallel electronic simulator design.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines, and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus, and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians, and computer scientists. In addition to diversity of background, it is to be expected on long-term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document in one place a number of the software quality practices followed by the Xyce team. Also, it is hoped that this document will be a good source of information for new developers.

  4. An "artificial retina" processor for track reconstruction at the full LHC crossing rate

    NASA Astrophysics Data System (ADS)

    Abba, A.; Bedeschi, F.; Caponio, F.; Cenci, R.; Citterio, M.; Cusimano, A.; Fu, J.; Geraci, A.; Grizzuti, M.; Lusardi, N.; Marino, P.; Morello, M. J.; Neri, N.; Ninci, D.; Petruzzo, M.; Piucci, A.; Punzi, G.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.; Walsh, J.

    2016-07-01

    We present the latest results of an R&D study for a specialized processor capable of reconstructing, in a silicon pixel detector, high-quality tracks from high-energy collision events at 40 MHz. The processor applies a highly parallel pattern-recognition algorithm inspired by the quick detection of edges in the mammalian visual cortex. After a detailed study of a real-detector application, demonstrating that online reconstruction of offline-quality tracks is feasible at 40 MHz with sub-microsecond latency, we are implementing a prototype using common high-bandwidth FPGA devices.

  5. An "artificial retina" processor for track reconstruction at the full LHC crossing rate

    DOE PAGES

    Abba, A.; Bedeschi, F.; Caponio, F.; ...

    2015-10-23

    Here, we present the latest results of an R&D study for a specialized processor capable of reconstructing, in a silicon pixel detector, high-quality tracks from high-energy collision events at 40 MHz. The processor applies a highly parallel pattern-recognition algorithm inspired by the quick detection of edges in the mammalian visual cortex. After a detailed study of a real-detector application, demonstrating that online reconstruction of offline-quality tracks is feasible at 40 MHz with sub-microsecond latency, we are implementing a prototype using common high-bandwidth FPGA devices.

  6. Implementation of a Fully-Balanced Periodic Tridiagonal Solver on a Parallel Distributed Memory Architecture

    DTIC Science & Technology

    1994-05-01

    T. M. Eidson (High Technology Corporation, Hampton, VA) and G. Erlebacher (Institute for Computer Applications in Science and Engineering), Contract NAS1-19480, May 1994. [Scanned report cover; only a fragment of the abstract is legible:] "... developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular ..."

  7. A parallel algorithm for multi-level logic synthesis using the transduction method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lim, Chieng-Fai

    1991-01-01

    The Transduction Method has been shown to be a powerful tool in the optimization of multilevel networks. Many tools, such as the SYLON synthesis system (X90), (CM89), (LM90), have been developed based on this method. A parallel implementation of SYLON-XTRANS (XM89) on an eight-processor Encore Multimax shared-memory multiprocessor is presented. It minimizes multilevel networks consisting of simple gates through parallel pruning, gate substitution, gate merging, generalized gate substitution, and gate input reduction. This implementation, called Parallel TRANSduction (PTRANS), also uses partitioning to break large circuits up and performs inter- and intra-partition dynamic load balancing. With this, good speedups and high processor efficiencies are achievable without sacrificing the resulting circuit quality.

  8. Parallel processing architecture for H.264 deblocking filter on multi-core platforms

    NASA Astrophysics Data System (ADS)

    Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao

    2012-03-01

    Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low latency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in the H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder will be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit depths and richer chroma subsampling formats such as YUV 4:2:2 or 4:4:4. Low power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264 compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as processing independent macroblocks, sub-blocks, and pixel rows in parallel are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be instantiated multiple times, catering to different performance needs; the DFM serves the data required by the different numbers of DFUs and also manages all the neighboring data required for future processing by the DFUs. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.

  9. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the Cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code, but since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the parallelized software developer faces.

  10. The application of parallel wells to support the use of groundwater for sustainable irrigation

    NASA Astrophysics Data System (ADS)

    Suhardi

    2018-05-01

    The use of groundwater as a source of irrigation is one alternative for meeting the water needs of plants. Using groundwater for irrigation entails a high cost because the discharge that can be taken is limited. In addition, large-scale groundwater extraction can cause environmental damage and social conflict. To minimize costs, maintain environmental quality, and prevent social conflict, innovation is needed in the groundwater extraction system. The study was conducted on an innovation using parallel wells. Performance is measured by comparing parallel wells with a single well. The results showed that the use of parallel wells meets the water needs of rice plants and increases the pump discharge by up to 100%. In addition, parallel wells reduce the radius of influence of groundwater extraction compared to a single well, which helps prevent social conflict. Thus, the use of parallel wells can support the use of groundwater for sustainable irrigation.

  11. Effects of dietary quality on basal metabolic rate and internal morphology of European starlings (Sturnus vulgaris)

    USGS Publications Warehouse

    Geluso, Keith; Hayes, J.P.

    1999-01-01

    European starlings (Sturnus vulgaris) were fed either a low- or high-quality diet to test the effects of dietary quality on basal metabolic rate (BMR) and internal morphology. Basal metabolic rate did not differ significantly between the two dietary groups, but internal morphology differed greatly. Starlings fed the low-quality diet had heavier gastrointestinal tracts, gizzards, and livers. Starlings fed the high-quality diet had heavier breast muscles. Starlings on the low-quality diet maintained mass, while starlings on the high-quality diet gained mass. Dry matter digestibility and energy digestibility were lower for starlings fed the low-quality diet, and their food and water intake were greater than starlings on the high-quality diet. The lack of dietary effect on BMR may be the result of increased energy expenditure of digestive organs paralleling a reduction of energy expenditure of organs and tissues not related to digestion (i.e., skeletal muscle). This trade-off in energy allocation among organs suggests a mechanism by which organisms may alter BMR in response to a change in seasonal variation in food availability.

  12. Simultaneous fluoroscopic and nuclear imaging: impact of collimator choice on nuclear image quality.

    PubMed

    van der Velden, Sandra; Beijst, Casper; Viergever, Max A; de Jong, Hugo W A M

    2017-01-01

    X-ray-guided oncological interventions could benefit from the availability of simultaneously acquired nuclear images during the procedure. To this end, a real-time, hybrid fluoroscopic and nuclear imaging device, consisting of an X-ray c-arm combined with gamma imaging capability, is currently being developed (Beijst C, Elschot M, Viergever MA, de Jong HW. Radiol. 2015;278:232-238). The setup comprises four gamma cameras placed adjacent to the X-ray tube. The four camera views are used to reconstruct an intermediate three-dimensional image, which is subsequently converted to a virtual nuclear projection image that overlaps with the X-ray image. The purpose of the present simulation study is to evaluate the impact of gamma camera collimator choice (parallel hole versus pinhole) on the quality of the virtual nuclear image. Simulation studies were performed with a digital image quality phantom including realistic noise and resolution effects, with a dynamic frame acquisition time of 1 s and a total activity of 150 MBq. Projections were simulated for 3, 5, and 7 mm pinholes and for three parallel hole collimators (low-energy all-purpose (LEAP), low-energy high-resolution (LEHR) and low-energy ultra-high-resolution (LEUHR)). Intermediate reconstruction was performed with maximum likelihood expectation-maximization (MLEM) with point spread function (PSF) modeling. In the virtual projection derived therefrom, contrast, noise level, and detectability were determined and compared with the ideal projection, that is, as if a gamma camera were located at the position of the X-ray detector. Furthermore, image deformations and spatial resolution were quantified. Additionally, simultaneous fluoroscopic and nuclear images of a sphere phantom were acquired with a physical prototype system and compared with the simulations. For small hot spots, contrast is comparable for all simulated collimators. Noise levels are, however, 3 to 8 times higher in pinhole geometries than in parallel hole geometries. This results in higher contrast-to-noise ratios for parallel hole geometries. Smaller spheres can thus be detected with parallel hole collimators than with pinhole collimators (17 mm vs 28 mm). Pinhole geometries show larger image deformations than parallel hole geometries. Spatial resolution varied between 1.25 cm for the 3 mm pinhole and 4 cm for the LEAP collimator. The simulation method was successfully validated by the experiments with the physical prototype. A real-time hybrid fluoroscopic and nuclear imaging device is currently being developed. Image quality of nuclear images obtained with different collimators was compared in terms of contrast, noise, and detectability. Parallel hole collimators showed lower noise and better detectability than pinhole collimators. © 2016 American Association of Physicists in Medicine.

  13. Use of parallel computing in mass processing of laser data

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  14. Oriented modulation for watermarking in direct binary search halftone images.

    PubMed

    Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der

    2012-09-01

    In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding while maintaining high image quality. This technique is capable of embedding watermarks with pixel depths of up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel-oriented high-efficiency direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and a naïve Bayes classifier is used to analyze the extracted features and ultimately decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and achieves the processing efficiency and robustness required for adoption in printing applications.

  15. A template-based approach for parallel hexahedral two-refinement

    DOE PAGES

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    2016-10-17

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where the minimum mesh quality is greater than a scaled Jacobian of 0.3 prior to smoothing.
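    The quality threshold quoted (scaled Jacobian greater than 0.3) refers to the standard hexahedral element metric: at each corner, the determinant of the three unit edge vectors, with the element value taken as the minimum over the corners. A brief sketch of that computation (the vertex ordering and corner-to-edge table below follow one common convention and should be treated as assumptions):

```python
# Sketch of the minimum scaled Jacobian quality metric for a hexahedral
# element (vertex ordering: 0-3 bottom face CCW, 4-7 directly above).
import numpy as np

# at each corner: the three incident edges, ordered to give +1 on a unit cube
CORNER_EDGES = [(0, (1, 3, 4)), (1, (2, 0, 5)), (2, (3, 1, 6)), (3, (0, 2, 7)),
                (4, (7, 5, 0)), (5, (4, 6, 1)), (6, (5, 7, 2)), (7, (6, 4, 3))]

def min_scaled_jacobian(verts):
    verts = np.asarray(verts, dtype=float)      # shape (8, 3)
    worst = np.inf
    for c, (a, b, d) in CORNER_EDGES:
        e = np.array([verts[a] - verts[c], verts[b] - verts[c], verts[d] - verts[c]])
        sj = np.linalg.det(e) / np.prod(np.linalg.norm(e, axis=1))
        worst = min(worst, sj)
    return worst                                # 1.0 for a perfect cube

cube = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1)]
print(min_scaled_jacobian(cube))                # -> 1.0
```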

  16. A template-based approach for parallel hexahedral two-refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where the minimum mesh quality is greater than a scaled Jacobian of 0.3 prior to smoothing.

  17. Quantitative high-efficiency cadmium-zinc-telluride SPECT with dedicated parallel-hole collimation system in obese patients: results of a multi-center study.

    PubMed

    Nakazato, Ryo; Slomka, Piotr J; Fish, Mathews; Schwartz, Ronald G; Hayes, Sean W; Thomson, Louise E J; Friedman, John D; Lemley, Mark; Mackin, Maria L; Peterson, Benjamin; Schwartz, Arielle M; Doran, Jesse A; Germano, Guido; Berman, Daniel S

    2015-04-01

    Obesity is a common source of artifact on conventional SPECT myocardial perfusion imaging (MPI). We evaluated image quality and diagnostic performance of high-efficiency (HE) cadmium-zinc-telluride parallel-hole SPECT MPI for coronary artery disease (CAD) in obese patients. 118 consecutive obese patients at three centers (BMI 43.6 ± 8.9 kg·m⁻², range 35-79.7 kg·m⁻²) had upright/supine HE-SPECT and invasive coronary angiography > 6 months (n = 67) or low likelihood of CAD (n = 51). Stress quantitative total perfusion deficit (TPD) for upright (U-TPD), supine (S-TPD), and combined acquisitions (C-TPD) was assessed. Image quality (IQ; 5 = excellent; < 3 nondiagnostic) was compared among BMI 35-39.9 (n = 58), 40-44.9 (n = 24) and ≥45 (n = 36) groups. ROC curve areas for CAD detection (≥50% stenosis) for U-TPD, S-TPD, and C-TPD were 0.80, 0.80, and 0.87, respectively. Sensitivity/specificity was 82%/57% for U-TPD, 74%/71% for S-TPD, and 80%/82% for C-TPD. C-TPD had the highest specificity (P = .02). The C-TPD normalcy rate was higher than that of U-TPD (88% vs 75%, P = .02). Mean IQ was similar among the BMI 35-39.9, 40-44.9 and ≥45 groups [4.6 vs 4.4 vs 4.5, respectively (P = .6)]. No patient had a nondiagnostic stress scan. In obese patients, HE-SPECT MPI with dedicated parallel-hole collimation demonstrated high image quality, normalcy rate, and diagnostic accuracy for CAD by quantitative analysis of combined upright/supine acquisitions.

  18. Quantitative High-Efficiency Cadmium-Zinc-Telluride SPECT with Dedicated Parallel-Hole Collimation System in Obese Patients: Results of a Multi-Center Study

    PubMed Central

    Nakazato, Ryo; Slomka, Piotr J.; Fish, Mathews; Schwartz, Ronald G.; Hayes, Sean W.; Thomson, Louise E.J.; Friedman, John D.; Lemley, Mark; Mackin, Maria L.; Peterson, Benjamin; Schwartz, Arielle M.; Doran, Jesse A.; Germano, Guido; Berman, Daniel S.

    2014-01-01

    Background Obesity is a common source of artifact on conventional SPECT myocardial perfusion imaging (MPI). We evaluated image quality and diagnostic performance of high-efficiency (HE) cadmium-zinc-telluride (CZT) parallel-hole SPECT-MPI for coronary artery disease (CAD) in obese patients. Methods and Results 118 consecutive obese patients at 3 centers (BMI 43.6±8.9 kg/m2, range 35–79.7 kg/m2) had upright/supine HE-SPECT and ICA >6 months (n=67) or low-likelihood of CAD (n=51). Stress quantitative total perfusion deficit (TPD) for upright (U-TPD), supine (S-TPD) and combined acquisitions (C-TPD) was assessed. Image quality (IQ; 5=excellent; <3 nondiagnostic) was compared among BMI 35–39.9 (n=58), 40–44.9 (n=24) and ≥45 (n=36) groups. ROC-curve area for CAD detection (≥50% stenosis) for U-TPD, S-TPD, and C-TPD were 0.80, 0.80, and 0.87, respectively. Sensitivity/specificity was 82%/57% for U-TPD, 74%/71% for S-TPD, and 80%/82% for C-TPD. C-TPD had highest specificity (P=.02). C-TPD normalcy rate was higher than U-TPD (88% vs. 75%, P=.02). Mean IQ was similar among BMI 35–39.9, 40–44.9 and ≥45 groups [4.6 vs. 4.4 vs. 4.5, respectively (P=.6)]. No patient had a non-diagnostic stress scan. Conclusions In obese patients, HE-SPECT MPI with dedicated parallel-hole collimation demonstrated high image quality, normalcy rate, and diagnostic accuracy for CAD by quantitative analysis of combined upright/supine acquisitions. PMID:25388380

  19. Applications of ERTS-A Data Collection System (DCS) in the Arizona Regional Ecological Test Site (ARETS)

    NASA Technical Reports Server (NTRS)

    Schumann, H. H. (Principal Investigator)

    1972-01-01

    The author has identified the following significant results. Preliminary analysis of DCS data from the USGS Verde River streamflow measuring site indicates the DCS system is furnishing high quality data more frequently than had been expected. During the 43-day period between Nov. 3 and Dec. 15, 1972, 552 DCS transmissions were received during 193 data passes. The amount of data received far exceeded the single high quality transmission per 12-hour period expected from the DCS system. The digital-parallel ERTS-1 data has furnished sufficient information to accurately compute mean daily gage heights. These, in turn, are used to compute average daily streamflow rates during periods of stable or slowly changing flow conditions. The digital-parallel data has also furnished useful information during peak flow periods. However, the serial-digital DCS capability, currently under development for transmitting streamflow data, should provide data of greater utility for determining the times of flood peaks.

  20. Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology

    NASA Astrophysics Data System (ADS)

    Macioł, Piotr; Michalik, Kazimierz

    2016-10-01

    Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle against its wide application is its high computational demands. Among others, the parallelization of multiscale computations is a promising solution. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelization of multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on the fine-scale computations, and (ii) the decrease in computation quality enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of 'delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
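    The speed-up evaluation mentioned above rests on Amdahl's law: if a fraction p of the work (here, the fine-scale sub-model computations) can be spread over N processors while the remainder (the sequential macroscopic FEM sub-model) cannot, the overall speed-up is

        \[ S(N) \;=\; \frac{1}{(1 - p) + \dfrac{p}{N}} \]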

  1. High figure of merit ultra-compact 3-channel parallel-connected photonic crystal mini-hexagonal-H1 defect microcavity sensor array

    NASA Astrophysics Data System (ADS)

    Wang, Chunhong; Sun, Fujun; Fu, Zhongyuan; Ding, Zhaoxiang; Wang, Chao; Zhou, Jian; Wang, Jiawen; Tian, Huiping

    2017-08-01

    In this paper, a photonic crystal (PhC) butt-coupled mini-hexagonal-H1 defect (MHHD) microcavity sensor is proposed. The MHHD microcavity is designed by introducing six mini-holes into the initial H1 defect region. Further, based on a well-designed 1 × 3 PhC beam splitter and three optimal MHHD microcavity sensors with different lattice constants (a), a 3-channel parallel-connected PhC sensor array on monolithic silicon-on-insulator (SOI) is proposed. Finite-difference time-domain (FDTD) simulations are performed to demonstrate the high performance of our structures. The quality factor (Q) of our optimal MHHD microcavity reaches more than 7 × 10⁴, while the sensitivity (S) reaches up to 233 nm/RIU (RIU = refractive index unit). Thus, a figure of merit (FOM) > 10⁴ is obtained for the sensor, which is enhanced by two orders of magnitude compared to previous butt-coupled sensors [1-4]. As for the 3-channel parallel-connected PhC MHHD microcavity sensor array, the FOMs of the three independent MHHD microcavity sensors are 8071, 8250 and 8250, respectively. In addition, the total footprint of the proposed 3-channel parallel-connected PhC sensor array is an ultra-compact 12.5 μm × 31 μm (width × length). Therefore, the proposed high-FOM sensor array is an ideal platform for realizing ultra-compact, highly parallel refractive index (RI) sensing.
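    For resonant refractive-index sensors of this kind, the figure of merit is conventionally the sensitivity divided by the resonance linewidth, which can be written in terms of the quality factor (a standard definition; assuming operation near 1.55 μm, S = 233 nm/RIU and Q > 7 × 10⁴ indeed give FOM > 10⁴ under this definition):

        \[ \mathrm{FOM} \;=\; \frac{S}{\Delta\lambda_{\mathrm{FWHM}}} \;=\; \frac{S\,Q}{\lambda_{\mathrm{res}}} \]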

  2. High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology

    NASA Astrophysics Data System (ADS)

    Rajan, K.; Patnaik, L. M.; Ramakrishna, J.

    1997-08-01

    Algebraic Reconstruction Technique (ART) is an age-old method used for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating, in a few iterations, tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures than SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms. The modified algorithms are shown to give better quality pictures than the PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive. Not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate the possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for the high-speed execution of the 3-D reconstruction algorithm using commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs. We discuss and compare the performances of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon Graphics Indigo 2 workstation, and on an EH system. The results show that an EH(3,1) using DSP chips as PEs executes the modified PBR algorithm about 100 times faster than an IBM 6000 RISC workstation. We have also executed the algorithms on a 4-node IBM SP2 parallel computer; the execution time on an EH(3,1) is lower than that of the 4-node IBM SP2 system. The speed-up of an EH(3,1) system with eight PEs and one network controller is approximately 7.85.
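    For orientation, the sketch below shows a plain SIRT-style simultaneous update on a toy system; it is a simplified stand-in for the pixel-based update the paper modifies, not the authors' modified PBR algorithm.

    ```python
    import numpy as np

    # Simplified SIRT-type iteration: all rays are back-projected at once and
    # the image is corrected once per iteration, with row/column normalisation.

    def sirt(A, b, n_iter=50, relax=1.0):
        """A: projection (system) matrix, b: measured projections."""
        x = np.zeros(A.shape[1])
        row_sums = A.sum(axis=1); row_sums[row_sums == 0] = 1.0
        col_sums = A.sum(axis=0); col_sums[col_sums == 0] = 1.0
        for _ in range(n_iter):
            residual = (b - A @ x) / row_sums         # ray-wise normalisation
            x += relax * (A.T @ residual) / col_sums  # pixel-wise normalisation
        return x

    # toy 2-pixel / 3-ray example
    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    b = A @ np.array([2.0, 3.0])
    print(sirt(A, b).round(3))   # recovers approximately [2. 3.]
    ```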

  3. Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor

    NASA Astrophysics Data System (ADS)

    Nagy, J.; Kelly, K.

    2013-09-01

    Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that has been designed using high quality software engineering practices. The package is open source and portable to a variety of high-performance computing architectures.

  4. A parallel finite element procedure for contact-impact problems using edge-based smooth triangular element and GPU

    NASA Astrophysics Data System (ADS)

    Cai, Yong; Cui, Xiangyang; Li, Guangyao; Liu, Wenyang

    2018-04-01

    The edge-smooth finite element method (ES-FEM) can improve the computational accuracy of triangular shell elements and the mesh partition efficiency of complex models. In this paper, an approach is developed to perform explicit finite element simulations of contact-impact problems with a graphical processing unit (GPU) using a special edge-smooth triangular shell element based on ES-FEM. Of critical importance for this problem is achieving finer-grained parallelism to enable efficient data loading and to minimize communication between the device and host. Four kinds of parallel strategies are then developed to efficiently solve these ES-FEM based shell element formulas, and various optimization methods are adopted to ensure aligned memory access. Special focus is dedicated to developing an approach for the parallel construction of edge systems. A parallel hierarchy-territory contact-searching algorithm (HITA) and a parallel penalty function calculation method are embedded in this parallel explicit algorithm. Finally, the program flow is well designed, and a GPU-based simulation system is developed, using Nvidia's CUDA. Several numerical examples are presented to illustrate the high quality of the results obtained with the proposed methods. In addition, the GPU-based parallel computation is shown to significantly reduce the computing time.

  5. Why caution is recommended with post-hoc individual patient matching for estimation of treatment effect in parallel-group randomized controlled trials: the case of acute stroke trials.

    PubMed

    Jafari, Nahid; Hearne, John; Churilov, Leonid

    2013-11-10

    A post-hoc individual patient matching procedure was recently proposed within the context of parallel group randomized clinical trials (RCTs) as a method for estimating treatment effect. In this paper, we consider a post-hoc individual patient matching problem within a parallel group RCT as a multi-objective decision-making problem focussing on the trade-off between the quality of individual matches and the overall percentage of matching. Using acute stroke trials as a context, we utilize exact optimization and simulation techniques to investigate a complex relationship between the overall percentage of individual post-hoc matching, the size of the respective RCT, and the quality of matching on variables highly prognostic for a good functional outcome after stroke, as well as the dispersion in these variables. It is empirically confirmed that a high percentage of individual post-hoc matching can only be achieved when the differences in prognostic baseline variables between individually matched subjects within the same pair are sufficiently large and that the unmatched subjects are qualitatively different to the matched ones. It is concluded that the post-hoc individual matching as a technique for treatment effect estimation in parallel-group RCTs should be exercised with caution because of its propensity to introduce significant bias and reduce validity. If used with appropriate caution and thorough evaluation, this approach can complement other viable alternative approaches for estimating the treatment effect. Copyright © 2013 John Wiley & Sons, Ltd.

  6. Towards a five-minute comprehensive cardiac MR examination using highly accelerated parallel imaging with a 32-element coil array: feasibility and initial comparative evaluation.

    PubMed

    Xu, Jian; Kim, Daniel; Otazo, Ricardo; Srichai, Monvadi B; Lim, Ruth P; Axel, Leon; Mcgorty, Kelly Anne; Niendorf, Thoralf; Sodickson, Daniel K

    2013-07-01

    To evaluate the feasibility and perform initial comparative evaluations of a 5-minute comprehensive whole-heart magnetic resonance imaging (MRI) protocol with four image acquisition types: perfusion (PERF), function (CINE), coronary artery imaging (CAI), and late gadolinium enhancement (LGE). This study protocol was Health Insurance Portability and Accountability Act (HIPAA)-compliant and Institutional Review Board-approved. A 5-minute comprehensive whole-heart MRI examination protocol (Accelerated) using 6-8-fold-accelerated volumetric parallel imaging was incorporated into and compared with a standard 2D clinical routine protocol (Standard). Following informed consent, 20 patients were imaged with both protocols. Datasets were reviewed for image quality using a 5-point Likert scale (0 = non-diagnostic, 4 = excellent) in blinded fashion by two readers. Good image quality with full whole-heart coverage was achieved using the accelerated protocol, particularly for CAI, although significant degradations in quality, as compared with traditional lengthy examinations, were observed for the other image types. Mean total scan time was significantly lower for the Accelerated as compared to Standard protocol (1.82 ± 0.05 min vs. 28.99 ± 4.59 min, P < 0.05). Overall image quality for the Standard vs. Accelerated protocol was 3.67 ± 0.29 vs. 1.5 ± 0.51 (P < 0.005) for PERF, 3.48 ± 0.64 vs. 2.6 ± 0.68 (P < 0.005) for CINE, 2.35 ± 1.01 vs. 2.48 ± 0.68 (P = 0.75) for CAI, and 3.67 ± 0.42 vs. 2.67 ± 0.84 (P < 0.005) for LGE. Diagnostic image quality for Standard vs. Accelerated protocols was 20/20 (100%) vs. 10/20 (50%) for PERF, 20/20 (100%) vs. 18/20 (90%) for CINE, 18/20 (90%) vs. 18/20 (90%) for CAI, and 20/20 (100%) vs. 18/20 (90%) for LGE. This study demonstrates the technical feasibility and promising image quality of 5-minute comprehensive whole-heart cardiac examinations, with simplified scan prescription and high spatial and temporal resolution enabled by highly parallel imaging technology. The study also highlights technical hurdles that remain to be addressed. Although image quality remained diagnostic for most scan types, the reduced image quality of PERF, CINE, and LGE scans in the Accelerated protocol remains a concern. Copyright © 2012 Wiley Periodicals, Inc.

  7. New machining method of high precision infrared window part

    NASA Astrophysics Data System (ADS)

    Yang, Haicheng; Su, Ying; Xu, Zengqi; Guo, Rui; Li, Wenting; Zhang, Feng; Liu, Xuanmin

    2016-10-01

    The spherical shells of multifunctional photoelectric instruments are mostly designed with multiple optical channels to accommodate sensors operating in different bands, typically TV, laser and infrared channels. Without affecting the optical aperture diameter, wind resistance or aerodynamic performance of the optical system, the overall layout of the spherical shell is optimized to save space and reduce weight. Most of the optical windows are therefore special-shaped; each optical window directly participates in the high resolution imaging of the corresponding sensor system, and the optical axis parallelism of each sensor must meet an accuracy requirement of 0.05 mrad. The machining quality of the optical window parts therefore directly affects the pointing accuracy and interchangeability of the photoelectric system. Processing and testing of the TV and laser windows are very mature, whereas infrared window parts, because of the special nature of the material (transparent, with a high refractive index), present problems of imaging quality and of controlling the minimum focal length and second-level parallelism during processing. Based on years of practical experience, this paper focuses on how to control the surface figure and parallelism precision of infrared window parts during processing. The single-pass yield was increased from 40% to more than 95% and the processing efficiency was significantly enhanced, effectively solving a bottleneck in actual processing and production.

  8. Accelerating large-scale protein structure alignments with graphics processing units

    PubMed Central

    2012-01-01

    Background Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using massive parallel computing power of GPU. PMID:22357132

  9. Nonthermal plasma system for extending shelf life of raw broiler breast fillets

    USDA-ARS?s Scientific Manuscript database

    A nonthermal dielectric barrier discharge (DBD) plasma system was developed and enhanced to treat broiler breast fillets (BBF) in order to improve the microbial quality of the meat. The system consisted of a high-voltage source and two parallel, round-aluminum electrodes separated by three semi-rig...

  10. Unassigned MS/MS Spectra: Who Am I?

    PubMed

    Pathan, Mohashin; Samuel, Monisha; Keerthikumar, Shivakumar; Mathivanan, Suresh

    2017-01-01

    Recent advances in high resolution tandem mass spectrometry (MS) have resulted in the accumulation of high quality data. In parallel with these advances in instrumentation, bioinformatics software has been developed to analyze such quality datasets. In spite of these advances, data analysis in mass spectrometry still remains critical for protein identification. In addition, the complexity of the generated MS/MS spectra, the unpredictable nature of peptide fragmentation, sequence annotation errors, and posttranslational modifications have impeded the protein identification process. In a typical MS data analysis, about 60% of the MS/MS spectra remain unassigned. While some of these could be attributed to the low quality of the MS/MS spectra, a proportion can be classified as high quality. Further analysis may reveal how much of the unassigned MS spectra can be attributed to search space, sequence annotation errors, mutations, and/or posttranslational modifications. In this chapter, the tools used to identify proteins and ways to assign unassigned tandem MS spectra are discussed.

  11. LORAKS Makes Better SENSE: Phase-Constrained Partial Fourier SENSE Reconstruction without Phase Calibration

    PubMed Central

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.

    2016-01-01

    Purpose Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836

  12. Importance of methodology on (99m)technetium dimercapto-succinic acid scintigraphic image quality: imaging pilot study for RIVUR (Randomized Intervention for Children With Vesicoureteral Reflux) multicenter investigation.

    PubMed

    Ziessman, Harvey A; Majd, Massoud

    2009-07-01

    We reviewed our experience with (99m)technetium dimercapto-succinic acid scintigraphy obtained during an imaging pilot study for a multicenter investigation (Randomized Intervention for Children With Vesicoureteral Reflux) of the effectiveness of daily antimicrobial prophylaxis for preventing recurrent urinary tract infection and renal scarring. We analyzed imaging methodology and its relation to diagnostic image quality. (99m)Technetium dimercapto-succinic acid imaging guidelines were provided to participating sites. High-resolution planar imaging with parallel hole or pinhole collimation was required. Two core reviewers evaluated all submitted images. Analysis included appropriate views, presence or lack of patient motion, adequate magnification, sufficient counts and diagnostic image quality. Inter-reader agreement was evaluated. We evaluated 70 (99m)technetium dimercapto-succinic acid studies from 14 institutions. Variability was noted in methodology and image quality. Correlation (r value) between dose administered and patient age was 0.780. For parallel hole collimator imaging good correlation was noted between activity administered and counts (r = 0.800). For pinhole imaging the correlation was poor (r = 0.110). A total of 10 studies (17%) were rejected for quality issues of motion, kidney overlap, inadequate magnification, inadequate counts and poor quality images. The submitting institution was informed and provided with recommendations for improving quality, and resubmission of another study was required. Only 4 studies (6%) were judged differently by the 2 reviewers, and the differences were minor. Methodology and image quality for (99m)technetium dimercapto-succinic acid scintigraphy varied more than expected between institutions. The most common reason for poor image quality was inadequate count acquisition with insufficient attention to the tradeoff between administered dose, length of image acquisition, start time of imaging and resulting image quality. Inter-observer core reader agreement was high. The pilot study ensured good diagnostic quality standardized images for the Randomized Intervention for Children With Vesicoureteral Reflux investigation.

  13. A communication library for the parallelization of air quality models on structured grids

    NASA Astrophysics Data System (ADS)

    Miehe, Philipp; Sandu, Adrian; Carmichael, Gregory R.; Tang, Youhua; Dăescu, Dacian

    PAQMSG is an MPI-based, Fortran 90 communication library for the parallelization of air quality models (AQMs) on structured grids. It consists of distribution, gathering and repartitioning routines for different domain decompositions implementing a master-worker strategy. The library is architecture and application independent and includes optimization strategies for different architectures. This paper presents the library from a user perspective. Results are shown from the parallelization of STEM-III on Beowulf clusters. The PAQMSG library is available on the web. The communication routines are easy to use, and should allow for an immediate parallelization of existing AQMs. PAQMSG can also be used for constructing new models.
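    As a rough illustration of the master-worker distribution and gathering pattern (using mpi4py rather than the PAQMSG Fortran API, and a toy array in place of a real AQM domain):

    ```python
    # Run with e.g.: mpirun -n 4 python scatter_gather_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        field = np.arange(8 * size, dtype=float).reshape(size, 8)  # toy concentration field
        blocks = [field[i] for i in range(size)]                   # one slab per worker
    else:
        blocks = None

    local = comm.scatter(blocks, root=0)    # master distributes sub-domains
    local = local * 0.5                     # stand-in for the local transport/chemistry step
    gathered = comm.gather(local, root=0)   # master gathers updated sub-domains

    if rank == 0:
        print(np.vstack(gathered).shape)
    ```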

  14. Load Balancing Strategies for Multi-Block Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak; Lopez-Benitez, Noe; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The multi-block overset grid method is a powerful technique for high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process uses a grid system that discretizes the problem domain by using separately generated but overlapping structured grids that periodically update and exchange boundary information through interpolation. For efficient high performance computations of large-scale realistic applications using this methodology, the individual grids must be properly partitioned among the parallel processors. Overall performance, therefore, largely depends on the quality of load balancing. In this paper, we present three different load balancing strategies for overset grids and analyze their effects on the parallel efficiency of a Navier-Stokes CFD application running on an SGI Origin2000 machine.

  15. 3.0 Tesla high spatial resolution contrast-enhanced magnetic resonance angiography (CE-MRA) of the pulmonary circulation: initial experience with a 32-channel phased array coil using a high relaxivity contrast agent.

    PubMed

    Nael, Kambiz; Fenchel, Michael; Krishnam, Mayil; Finn, J Paul; Laub, Gerhard; Ruehm, Stefan G

    2007-06-01

    To evaluate the technical feasibility of high spatial resolution contrast-enhanced magnetic resonance angiography (CE-MRA) with highly accelerated parallel acquisition at 3.0 T using a 32-channel phased array coil, and a high relaxivity contrast agent. Ten adult healthy volunteers (5 men, 5 women, aged 21-66 years) underwent high spatial resolution CE-MRA of the pulmonary circulation. Imaging was performed at 3 T using a 32-channel phased array coil. After intravenous injection of 1 mL of gadobenate dimeglumine (Gd-BOPTA) at 1.5 mL/s, a timing bolus was used to measure the transit time from the arm vein to the main pulmonary artery. Subsequently, following intravenous injection of 0.1 mmol/kg of Gd-BOPTA at the same rate, isotropic high spatial resolution (1 x 1 x 1 mm3) CE-MRA data sets of the entire pulmonary circulation were acquired using a fast gradient-recalled echo sequence (TR/TE 3/1.2 milliseconds, FA 18 degrees) and highly accelerated parallel acquisition (GRAPPA x 6) during a 20-second breath hold. The presence of artifact, noise, and image quality of the pulmonary arterial segments were evaluated independently by 2 radiologists. Phantom measurements were performed to assess the signal-to-noise ratio (SNR). Statistical analysis of data was performed by using the Wilcoxon rank sum test and 2-sample Student t test. The interobserver variability was tested by kappa coefficient. All studies were of diagnostic quality as determined by both observers. The pulmonary arteries were routinely identified up to fifth-order branches, with definition in the diagnostic range and excellent interobserver agreement (kappa = 0.84, 95% confidence interval 0.77-0.90). Phantom measurements showed significantly lower SNR (P < 0.01) using GRAPPA (17.3 +/- 18.8) compared with measurements without parallel acquisition (58 +/- 49.4). The described 3 T CE-MRA protocol, in addition to the high T1 relaxivity of Gd-BOPTA, provides sufficient SNR to support highly accelerated parallel acquisition (GRAPPA x 6), resulting in acquisition of isotropic (1 x 1 x 1 mm3) voxels over the entire pulmonary circulation in 20 seconds.

  16. GPU-completeness: theory and implications

    NASA Astrophysics Data System (ADS)

    Lin, I.-Jong

    2011-01-01

    This paper formalizes a major insight into a class of algorithms that relate parallelism and performance. The purpose of this paper is to define a class of algorithms that trades off parallelism for quality of result (e.g. visual quality, compression rate), and we propose a similar method for algorithmic classification based on NP-Completeness techniques, applied toward parallel acceleration. We define this class of algorithm as "GPU-Complete" and postulate the necessary properties of algorithms for admission into this class. We also formally relate this algorithmic space to the space of imaging algorithms. This concept is based upon our experience in the print production area, where GPUs (Graphic Processing Units) have shown a substantial cost/performance advantage within the context of HP-delivered enterprise services and commercial printing infrastructure. While CPUs and GPUs are converging in their underlying hardware and functional blocks, their system behaviors are clearly distinct in many ways: memory system design, programming paradigms, and massively parallel SIMD architecture. There are applications that are clearly suited to each architecture: for CPU: language compilation, word processing, operating systems, and other applications that are highly sequential in nature; for GPU: video rendering, particle simulation, pixel color conversion, and other problems clearly amenable to massive parallelization. While GPUs are establishing themselves as a second, distinct computing architecture from CPUs, their end-to-end system cost/performance advantage in certain parts of computation informs the structure of algorithms and their efficient parallel implementations. While GPUs are merely one type of architecture for parallelization, we show that their introduction into the design space of printing systems demonstrates the trade-offs against competing multi-core, FPGA, and ASIC architectures. While each architecture has its own optimal application, we believe that the selection of architecture can be defined in terms of properties of GPU-Completeness. For a well-defined subset of algorithms, GPU-Completeness is intended to connect parallelism, algorithms and efficient architectures into a unified framework, showing that multiple layers of parallel implementation are guided by the same underlying trade-off.

  17. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    PubMed

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To improve on the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it has become a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is a promising solution, the most popular approach being the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method with the parallel computational framework, high quality solutions can be obtained within a relatively short time. This integrated approach is a promising way of inferring large networks.
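    A minimal sketch of the core idea, parallel evaluation of candidate parameter sets, is shown below; it uses Python's multiprocessing pool as a stand-in for the Hadoop MapReduce map step and a made-up fitness function, not the authors' GA-PSO implementation.

    ```python
    from multiprocessing import Pool
    import numpy as np

    def fitness(params):
        """Hypothetical fitness: squared error against a target expression profile."""
        target = np.array([1.0, 0.5, 0.25])
        simulated = params            # stand-in for simulating the gene network
        return float(np.sum((simulated - target) ** 2))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        population = [rng.random(3) for _ in range(16)]   # candidate parameter sets
        with Pool(processes=4) as pool:
            scores = pool.map(fitness, population)        # parallel "map" step
        best = population[int(np.argmin(scores))]         # "reduce": keep the best
        print(best, min(scores))
    ```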

  18. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    PubMed Central

    2014-01-01

    Background To improve on the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it has become a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, the mechanism of cloud computing is a promising solution, the most popular approach being the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method with the parallel computational framework, high quality solutions can be obtained within a relatively short time. This integrated approach is a promising way of inferring large networks. PMID:24428926

  19. National Centers for Environmental Prediction

    Science.gov Websites

    Parallel/experimental model forecast graphics, operational verification and diagnostics, parallel verification, and developmental air quality forecasts and verification.

  20. Computational strategy for the solution of large strain nonlinear problems using the Wilkins explicit finite-difference approach

    NASA Technical Reports Server (NTRS)

    Hofmann, R.

    1980-01-01

    The STEALTH code system, which solves large strain, nonlinear continuum mechanics problems, was rigorously structured in both overall design and programming standards. The design is based on the theoretical elements of analysis while the programming standards attempt to establish a parallelism between physical theory, programming structure, and documentation. These features have made it easy to maintain, modify, and transport the codes. It has also guaranteed users a high level of quality control and quality assurance.

  1. Improved parallel image reconstruction using feature refinement.

    PubMed

    Cheng, Jing; Jia, Sen; Ying, Leslie; Liu, Yuanyuan; Wang, Shanshan; Zhu, Yanjie; Li, Ye; Zou, Chao; Liu, Xin; Liang, Dong

    2018-07-01

    The aim of this study was to develop a novel feature refinement MR reconstruction method for highly undersampled multichannel acquisitions, to improve image quality and preserve more detailed information. The feature refinement technique, which uses a feature descriptor to pick up useful features from the residual image discarded by sparsity constraints, is applied to preserve the details of the image in compressed sensing and parallel imaging in MRI (CS-pMRI). A texture descriptor and a structure descriptor, which recognize different types of features, are required to form the feature descriptor. The feasibility of the feature refinement was validated using three different multicoil reconstruction methods on in vivo data. Experimental results show that reconstruction methods with feature refinement improve the quality of the reconstructed image and restore the image details more accurately than the original methods, which is also verified by lower values of the root mean square error and the high frequency error norm. A simple and effective way to preserve more useful detailed information in CS-pMRI is proposed. This technique can effectively improve the reconstruction quality and has superior performance in terms of detail preservation compared with the original version without feature refinement. Magn Reson Med 80:211-223, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
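    For reference, the two error metrics cited above are commonly computed as follows; the Laplacian-of-Gaussian width used for the high frequency error norm is an assumed setting, since the paper's exact filter parameters are not given here.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def rmse(recon, reference):
        """Root mean square error between reconstruction and reference."""
        return np.sqrt(np.mean(np.abs(recon - reference) ** 2))

    def hfen(recon, reference, sigma=1.5):
        """High frequency error norm via a Laplacian-of-Gaussian filter."""
        log_rec = gaussian_laplace(np.abs(recon), sigma)
        log_ref = gaussian_laplace(np.abs(reference), sigma)
        return np.linalg.norm(log_rec - log_ref) / np.linalg.norm(log_ref)

    ref = np.random.default_rng(1).random((64, 64))
    noisy = ref + 0.05 * np.random.default_rng(2).standard_normal((64, 64))
    print(round(rmse(noisy, ref), 4), round(hfen(noisy, ref), 4))
    ```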

  2. Streamlined approach to high-quality purification and identification of compound series using high-resolution MS and NMR.

    PubMed

    Mühlebach, Anneke; Adam, Joachim; Schön, Uwe

    2011-11-01

    Automated medicinal chemistry (parallel chemistry) has become an integral part of the drug-discovery process in almost every large pharmaceutical company. Parallel array synthesis of individual organic compounds has been used extensively to generate diverse structural libraries to support different phases of the drug-discovery process, such as hit-to-lead, lead finding, or lead optimization. In order to guarantee effective project support, efficiency in the production of compound libraries has been maximized. As a consequence, throughput in chromatographic purification and analysis has been adapted accordingly. As a recent trend, more laboratories are preparing smaller, yet more focused libraries with ever increasing demands on quality, i.e. optimal purity and unambiguous confirmation of identity. This paper presents an automated approach for combining effective purification and structural confirmation of a lead optimization library created by microwave-assisted organic synthesis. The results of complementary analytical techniques such as UHPLC-HRMS and NMR are not only considered individually but also merged for fast and easy decision making, providing optimal quality of the compound stock. In comparison with previous procedures, turnaround times are at least four times shorter, while compound consumption could be decreased more than threefold. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Simultaneous digital quantification and fluorescence-based size characterization of massively parallel sequencing libraries.

    PubMed

    Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H

    2013-08-01

    Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.
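    The quantification side of such an assay typically rests on Poisson statistics over the droplet partitions; a minimal sketch is given below, with the droplet volume (~0.85 nL) included only as an assumed example value.

    ```python
    import math

    # Standard ddPCR Poisson correction: mean copies per droplet
    # lambda = -ln(1 - p), where p is the fraction of positive droplets.

    def ddpcr_concentration(n_positive, n_total, droplet_volume_nl=0.85):
        p = n_positive / n_total
        lam = -math.log(1.0 - p)                   # mean copies per droplet
        return lam / (droplet_volume_nl * 1e-3)    # copies per microliter of reaction

    print(round(ddpcr_concentration(4000, 15000), 1))   # example droplet counts
    ```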

  4. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  5. Limited angle tomographic breast imaging: A comparison of parallel beam and pinhole collimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wessell, D.E.; Kadrmas, D.J.; Frey, E.C.

    1996-12-31

    Results from clinical trials have suggested no improvement in lesion detection with parallel hole SPECT scintimammography (SM) with Tc-99m over parallel hole planar SM. In this initial investigation, we have elucidated some of the unique requirements of SPECT SM. With these requirements in mind, we have begun to develop practical data acquisition and reconstruction strategies that can reduce image artifacts and improve image quality. In this paper we investigate limited angle orbits for both parallel hole and pinhole SPECT SM. Singular Value Decomposition (SVD) is used to analyze the artifacts associated with the limited angle orbits. Maximum likelihood expectation maximization (MLEM) reconstructions are then used to examine the effects of attenuation compensation on the quality of the reconstructed image. All simulations are performed using the 3D-MCAT breast phantom. The results of these simulation studies demonstrate that limited angle SPECT SM is feasible, that attenuation correction is needed for accurate reconstructions, and that pinhole SPECT SM may have an advantage over parallel hole SPECT SM in terms of improved image quality and reduced image artifacts.

  6. High Spatiotemporal Resolution Dynamic Contrast-Enhanced MR Enterography in Crohn Disease Terminal Ileitis Using Continuous Golden-Angle Radial Sampling, Compressed Sensing, and Parallel Imaging.

    PubMed

    Ream, Justin M; Doshi, Ankur; Lala, Shailee V; Kim, Sooah; Rusinek, Henry; Chandarana, Hersh

    2015-06-01

    The purpose of this article was to assess the feasibility of golden-angle radial acquisition with compressed sensing reconstruction (Golden-angle RAdial Sparse Parallel [GRASP]) for acquiring high temporal resolution data for pharmacokinetic modeling while maintaining high image quality in patients with Crohn disease terminal ileitis. Fourteen patients with biopsy-proven Crohn terminal ileitis were scanned using both contrast-enhanced GRASP and Cartesian breath-hold (volume-interpolated breath-hold examination [VIBE]) acquisitions. GRASP data were reconstructed with 2.4-second temporal resolution and fitted to the generalized kinetic model using an individualized arterial input function to derive the volume transfer coefficient (K(trans)) and interstitial volume (v(e)). Reconstructions, including data from the entire GRASP acquisition and Cartesian VIBE acquisitions, were rated for image quality, artifact, and detection of typical Crohn ileitis features. Inflamed loops of ileum had significantly higher K(trans) (3.36 ± 2.49 vs 0.86 ± 0.49 min(-1), p < 0.005) and v(e) (0.53 ± 0.15 vs 0.20 ± 0.11, p < 0.005) compared with normal bowel loops. There were no significant differences between GRASP and Cartesian VIBE for overall image quality (p = 0.180) or detection of Crohn ileitis features, although streak artifact was worse with the GRASP acquisition (p = 0.001). High temporal resolution data for pharmacokinetic modeling and high spatial resolution data for morphologic image analysis can be achieved in the same acquisition using GRASP.
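    The pharmacokinetic fitting referred to above uses the generalized kinetic (Tofts) model; the sketch below illustrates the forward model with a made-up arterial input function and the mean K(trans)/v(e) values quoted in the abstract, and is not the authors' fitting code.

    ```python
    import numpy as np

    # Tofts model: tissue concentration is the convolution of the arterial input
    # function Cp(t) with an exponential residue function set by Ktrans and ve.

    def tofts_tissue_curve(t, cp, ktrans, ve):
        """t in minutes; cp sampled on t."""
        dt = t[1] - t[0]
        residue = np.exp(-(ktrans / ve) * t)             # impulse residue function
        return ktrans * np.convolve(cp, residue)[:len(t)] * dt

    t = np.arange(0, 5, 2.4 / 60)                        # 2.4 s frames, as in GRASP
    cp = 5.0 * t * np.exp(-t / 0.5)                      # toy gamma-variate AIF (assumed)
    ct_inflamed = tofts_tissue_curve(t, cp, ktrans=3.36, ve=0.53)
    ct_normal = tofts_tissue_curve(t, cp, ktrans=0.86, ve=0.20)
    print(ct_inflamed.max() > ct_normal.max())           # inflamed bowel enhances more
    ```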

  7. ISDN Application in the Army Environment

    DTIC Science & Technology

    1992-02-01

    Signalling System Number 7 (SS7). SS7 is a packet switched signalling network operating in parallel with the traffic bearing network. The current, in...for example, require SS7. Further into the future, broadband ISDN (B-ISDN) is expected to provide high-quality, full-motion video, High Definition...smaller business offices, ISDN could be a viable alternative to private networks, especially when switches are connected through SS7. ISDN, in combination

  8. Massively parallel whole genome amplification for single-cell sequencing using droplet microfluidics.

    PubMed

    Hosokawa, Masahito; Nishikawa, Yohei; Kogawa, Masato; Takeyama, Haruko

    2017-07-12

    Massively parallel single-cell genome sequencing is required to further understand genetic diversities in complex biological systems. Whole genome amplification (WGA) is the first step for single-cell sequencing, but its throughput and accuracy are insufficient in conventional reaction platforms. Here, we introduce single droplet multiple displacement amplification (sd-MDA), a method that enables massively parallel amplification of single cell genomes while maintaining sequence accuracy and specificity. Tens of thousands of single cells are compartmentalized in millions of picoliter droplets and then subjected to lysis and WGA by passive droplet fusion in microfluidic channels. Because single cells are isolated in compartments, their genomes are amplified to saturation without contamination. This enables the high-throughput acquisition of contamination-free and cell specific sequence reads from single cells (21,000 single-cells/h), resulting in enhancement of the sequence data quality compared to conventional methods. This method allowed WGA of both single bacterial cells and human cancer cells. The obtained sequencing coverage rivals those of conventional techniques with superior sequence quality. In addition, we also demonstrate de novo assembly of uncultured soil bacteria and obtain draft genomes from single cell sequencing. This sd-MDA is promising for flexible and scalable use in single-cell sequencing.

  9. Single Breath-Hold Non-Contrast Thoracic MRA Using Highly-Accelerated Parallel Imaging With a 32-element Coil Array

    PubMed Central

    Xu, Jian; Mcgorty, Kelly Anne; Lim, Ruth. P.; Bruno, Mary; Babb, James S.; Srichai, Monvadi B.; Kim, Daniel; Sodickson, Daniel K.

    2011-01-01

    OBJECTIVE To evaluate the feasibility of performing single breath-hold 3D thoracic non-contrast magnetic resonance angiography (NC-MRA) using highly-accelerated parallel imaging. MATERIALS AND METHODS We developed a single breath-hold NC-MRA pulse sequence using a balanced steady state free precession (SSFP) readout and highly-accelerated parallel imaging. In 17 subjects, highly-accelerated non-contrast MRA was compared against electrocardiogram (ECG)-triggered contrast-enhanced MRA. Anonymized images were randomized for blinded review by two independent readers for image quality, artifact severity in 8 defined vessel segments, and aortic dimensions at 6 standard sites. NC-MRA and CE-MRA were compared in terms of these measures using paired sample t and Wilcoxon tests. RESULTS The overall image quality (3.21±0.68 for NC-MRA vs. 3.12±0.71 for CE-MRA) and artifact (2.87±1.01 for NC-MRA vs. 2.92±0.87 for CE-MRA) scores were not significantly different, but there were significant differences for the great vessel and coronary artery origins. NC-MRA demonstrated significantly lower aortic diameter measurements compared to CE-MRA; however, this difference reached a clinically relevant magnitude (>3 mm) in fewer than 12% of segments, most commonly at the sinotubular junction. Mean total scan time was significantly lower for NC-MRA compared to CE-MRA (18.2 ± 6.0 s vs. 28.1 ± 5.4 s, respectively; p < 0.05). CONCLUSION Single breath-hold NC-MRA is feasible and can be a useful alternative for evaluation and follow-up of thoracic aortic diseases.

  10. Update on Development of Mesh Generation Algorithms in MeshKit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Rajeev; Vanderzee, Evan; Mahadevan, Vijay

    2015-09-30

    MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA1 libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.

  11. Imaging resolution and properties analysis of super resolution microscopy with parallel detection under different noise, detector and image restoration conditions

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu

    2018-06-01

    Parallel detection, which can use the additional information of a pinhole plane image taken at every excitation scan position, could be an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and using different image restoration methods with parallel detection to quantitatively compare the imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment with Richardson-Lucy deconvolution and with maximum-likelihood estimation deconvolution. The results show that linear deconvolution offers high efficiency and the best performance under all conditions, and is therefore expected to be of use for routine biomedical research.
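    As a point of reference for one of the restoration options compared above, a bare-bones Richardson-Lucy deconvolution for a known PSF can be sketched as follows (a generic textbook implementation, not the pixel-reassignment pipeline used in the paper):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=30):
        """Multiplicative Richardson-Lucy update for a non-negative image."""
        estimate = np.full_like(image, image.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)   # avoid division by zero
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # toy example: blur a point source with a Gaussian PSF and recover it
    kernel_1d = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)
    psf = np.outer(kernel_1d, kernel_1d)
    psf /= psf.sum()
    truth = np.zeros((32, 32)); truth[16, 16] = 1.0
    blurred = fftconvolve(truth, psf, mode="same")
    restored = richardson_lucy(blurred, psf)
    print(np.unravel_index(restored.argmax(), restored.shape))   # -> (16, 16)
    ```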

  12. LORAKS makes better SENSE: Phase-constrained partial Fourier SENSE reconstruction without phase calibration.

    PubMed

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P

    2017-03-01

    Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely used calibrationless uniformly undersampled trajectories. Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. The SENSE-LORAKS framework provides promising new opportunities for highly accelerated MRI. Magn Reson Med 77:1021-1035, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.

  13. Line-Focused Optical Excitation of Parallel Acoustic Focused Sample Streams for High Volumetric and Analytical Rate Flow Cytometry.

    PubMed

    Kalb, Daniel M; Fencl, Frank A; Woods, Travis A; Swanson, August; Maestas, Gian C; Juárez, Jaime J; Edwards, Bruce S; Shreve, Andrew P; Graves, Steven W

    2017-09-19

    Flow cytometry provides highly sensitive multiparameter analysis of cells and particles but has been largely limited to the use of a single focused sample stream. This limits the analytical rate to ∼50K particles/s and the volumetric rate to ∼250 μL/min. Despite the analytical prowess of flow cytometry, there are applications where these rates are insufficient, such as rare cell analysis in high cellular backgrounds (e.g., circulating tumor cells and fetal cells in maternal blood), detection of cells/particles in large dilute samples (e.g., water quality, urine analysis), or high-throughput screening applications. Here we report a highly parallel acoustic flow cytometer that uses an acoustic standing wave to focus particles into 16 parallel analysis points across a 2.3 mm wide optical flow cell. A line-focused laser and wide-field collection optics are used to excite and collect the fluorescence emission of these parallel streams onto a high-speed camera for analysis. With this instrument format and fluorescent microsphere standards, we obtain analysis rates of 100K/s and flow rates of 10 mL/min, while maintaining optical performance comparable to that of a commercial flow cytometer. The results with our initial prototype instrument demonstrate that the integration of key parallelizable components, including the line-focused laser, particle focusing using multinode acoustic standing waves, and a spatially arrayed detector, can increase analytical and volumetric throughputs by orders of magnitude in a compact, simple, and cost-effective platform. Such instruments will be of great value to applications in need of high-throughput yet sensitive flow cytometry analysis.

  14. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers

    PubMed Central

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation. PMID:28239346

  15. Parallel STEPS: Large Scale Stochastic Spatial Reaction-Diffusion Simulation with High Performance Computers.

    PubMed

    Chen, Weiliang; De Schutter, Erik

    2017-01-01

    Stochastic, spatial reaction-diffusion simulations have been widely used in systems biology and computational neuroscience. However, the increasing scale and complexity of models and morphologies have exceeded the capacity of any serial implementation. This led to the development of parallel solutions that benefit from the boost in performance of modern supercomputers. In this paper, we describe an MPI-based, parallel operator-splitting implementation for stochastic spatial reaction-diffusion simulations with irregular tetrahedral meshes. The performance of our implementation is first examined and analyzed with simulations of a simple model. We then demonstrate its application to real-world research by simulating the reaction-diffusion components of a published calcium burst model in both Purkinje neuron sub-branch and full dendrite morphologies. Simulation results indicate that our implementation is capable of achieving super-linear speedup for balanced loading simulations with reasonable molecule density and mesh quality. In the best scenario, a parallel simulation with 2,000 processes runs more than 3,600 times faster than its serial SSA counterpart, and achieves more than 20-fold speedup relative to parallel simulation with 100 processes. In a more realistic scenario with dynamic calcium influx and data recording, the parallel simulation with 1,000 processes and no load balancing is still 500 times faster than the conventional serial SSA simulation.

  16. Narrow-band far-infrared interference filters with high-Tc superconducting reflectors

    NASA Astrophysics Data System (ADS)

    Schönberger, R.; Prückl, A.; Pechen, E. V.; Anzin, V. B.; Brunner, B.; Renk, K. F.

    1994-10-01

    We report on experiments showing that high-Tc superconductors are well suited for the construction of high-quality far-infrared Fabry-Perot interference filters in the terahertz frequency range. In an interference filter we use two plane-parallel MgO plates with YBa2Cu3O7 thin films as partly transparent reflectors on the adjacent surfaces. For the first-order main resonances, adjusted to frequencies around 2 THz, a quality factor of ≅200 and a peak transmissivity of ≈0.5 have been reached. Study of filters with YBa2Cu3O7 films of different thickness indicates the possibility of reaching still higher selectivity. An analysis of the filter characteristics delivered the dynamical conductivity of the high-Tc films.
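    For orientation, the ideal lossless-mirror Fabry-Perot relations connect the reported quality factor to an effective mirror reflectivity (Q ≈ m·F for the m-th order, with finesse F = π√R/(1-R)); the reflectivity values below are illustrative, and real YBa2Cu3O7 film reflectors also have absorption losses that this sketch ignores.

    ```python
    import numpy as np

    # Ideal (lossless-mirror) Fabry-Perot: finesse and quality factor vs. reflectivity.

    def finesse(reflectivity):
        return np.pi * np.sqrt(reflectivity) / (1.0 - reflectivity)

    def quality_factor(order, reflectivity):
        """Q roughly equals order * finesse for the order-th resonance."""
        return order * finesse(reflectivity)

    for r in (0.90, 0.95, 0.985):
        print(r, round(quality_factor(order=1, reflectivity=r), 1))
    # R close to 0.985 gives Q near 200 for the first order, in line with the text.
    ```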

  17. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) on Intel Xeon Phi processors

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-08-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined performance and energy improvement, the KNL platform was 37.5 % more efficient on power consumption compared with the CPU platform. The optimisations also enabled much further parallel scalability on both the CPU cluster and the KNL cluster scaled to 40 CPU nodes and 30 KNL nodes, with a parallel efficiency of 70.4 and 42.2 %, respectively.

  18. dfnWorks: A discrete fracture network framework for modeling subsurface flow and transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyman, Jeffrey D.; Karra, Satish; Makedonska, Nataliia

    DFNWORKS is a parallelized computational suite to generate three-dimensional discrete fracture networks (DFN) and simulate flow and transport. Developed at Los Alamos National Laboratory over the past five years, it has been used to study flow and transport in fractured media at scales ranging from millimeters to kilometers. The networks are created and meshed using DFNGEN, which combines FRAM (the feature rejection algorithm for meshing) methodology to stochastically generate three-dimensional DFNs with the LaGriT meshing toolbox to create a high-quality computational mesh representation. The representation produces a conforming Delaunay triangulation suitable for high performance computing finite volume solvers in an intrinsically parallel fashion. Flow through the network is simulated in dfnFlow, which utilizes the massively parallel subsurface flow and reactive transport finite volume code PFLOTRAN. A Lagrangian approach to simulating transport through the DFN is adopted within DFNTRANS to determine pathlines and solute transport through the DFN. Example applications of this suite in the areas of nuclear waste repository science, hydraulic fracturing and CO2 sequestration are also included.

  19. GPURFSCREEN: a GPU based virtual screening tool using random forest classifier.

    PubMed

    Jayaraj, P B; Ajay, Mathias K; Nufail, M; Gopakumar, G; Jaleel, U C A

    2016-01-01

    In-silico methods are an integral part of modern drug discovery paradigm. Virtual screening, an in-silico method, is used to refine data models and reduce the chemical space on which wet lab experiments need to be performed. Virtual screening of a ligand data model requires large scale computations, making it a highly time consuming task. This process can be speeded up by implementing parallelized algorithms on a Graphical Processing Unit (GPU). Random Forest is a robust classification algorithm that can be employed in the virtual screening. A ligand based virtual screening tool (GPURFSCREEN) that uses random forests on GPU systems has been proposed and evaluated in this paper. This tool produces optimized results at a lower execution time for large bioassay data sets. The quality of results produced by our tool on GPU is same as that on a regular serial environment. Considering the magnitude of data to be screened, the parallelized virtual screening has a significantly lower running time at high throughput. The proposed parallel tool outperforms its serial counterpart by successfully screening billions of molecules in training and prediction phases.
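
    As a rough, CPU-parallel analogue of the screening workflow described above (the paper's tool runs the random forest on a GPU), the sketch below trains a random forest on hypothetical fingerprint data with scikit-learn; all data and parameters are placeholders.

      # Train a random forest on binary ligand fingerprints and score new ligands.
      # n_jobs=-1 parallelises tree construction across CPU cores.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X = rng.integers(0, 2, size=(2000, 512))   # hypothetical binary fingerprints
      y = rng.integers(0, 2, size=2000)          # hypothetical active/inactive labels

      clf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)
      scores = clf.predict_proba(X[:5])[:, 1]    # screening scores for five ligands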

  1. Urban residential greenspace and mental health in youth: Different approaches to testing multiple pathways yield different conclusions.

    PubMed

    Dzhambov, Angel; Hartig, Terry; Markevych, Iana; Tilov, Boris; Dimitrova, Donka

    2018-01-01

    Urban greenspace can benefit mental health through multiple mechanisms. They may work together, but previous studies have treated them as independent. We aimed to compare single and parallel mediation models, which estimate the independent contributions of different paths, to several models that posit serial mediation components in the pathway from greenspace to mental health. We collected cross-sectional survey data from 399 participants (15-25 years of age) in the city of Plovdiv, Bulgaria. Objective "exposure" to urban residential greenspace was defined by the Normalized Difference Vegetation Index (NDVI), Soil Adjusted Vegetation Index, tree cover density within the 500-m buffer, and Euclidean distance to the nearest urban greenspace. Self-reported measures of availability, access, quality, and usage of greenspace were also used. Mental health was measured with the General Health Questionnaire. The following potential mediators were considered in single and parallel mediation models: restorative quality of the neighborhood, neighborhood social cohesion, commuting and leisure time physical activity, road traffic noise annoyance, and perceived air pollution. Four models were tested with the following serial mediation components: (1) restorative quality → social cohesion; (2) restorative quality → physical activity; (3) perceived traffic pollution → restorative quality; and (4) noise annoyance → physical activity. There was no direct association between objectively-measured greenspace and mental health. For the 500-m buffer, the tests of the single mediator models suggested that restorative quality mediated the relationship between NDVI and mental health. Tests of parallel mediation models did not find any significant indirect effects. In line with theory, tests of the serial mediation models showed that higher restorative quality was associated with more physical activity and more social cohesion, and in turn with better mental health. As for self-reported greenspace measures, single mediation through restorative quality was significant only for time in greenspace, and there was no mediation through restorative quality in the parallel mediation models; however, serial mediation through restorative quality and social cohesion/physical activity was indicated for all self-reported measures except for greenspace quality. Statistical models should adequately address the theoretically indicated interdependencies between mechanisms underlying the association between greenspace and mental health. If such causal relationships hold, testing mediators alone or in parallel may lead to incorrect inferences about the relative contribution of specific paths, and thus to inappropriate intervention strategies. Copyright © 2017 Elsevier Inc. All rights reserved.
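
    For readers unfamiliar with serial mediation, the sketch below estimates one serial indirect effect (greenspace -> restorative quality -> social cohesion -> mental health) as a product of OLS path coefficients on simulated data; the study itself used dedicated mediation models with covariates and bootstrapped intervals, so this is only a conceptual illustration with hypothetical variable names.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 399
      ndvi = rng.normal(size=n)                          # objective greenspace exposure
      restorative = 0.4 * ndvi + rng.normal(size=n)      # mediator 1
      cohesion = 0.3 * restorative + rng.normal(size=n)  # mediator 2
      mental_health = 0.5 * cohesion + rng.normal(size=n)

      # path a: greenspace -> restorative quality
      a = sm.OLS(restorative, sm.add_constant(ndvi)).fit().params[1]
      # path b: restorative quality -> social cohesion, controlling for greenspace
      b = sm.OLS(cohesion, sm.add_constant(np.column_stack([restorative, ndvi]))).fit().params[1]
      # path c: social cohesion -> mental health, controlling for earlier variables
      c = sm.OLS(mental_health, sm.add_constant(np.column_stack([cohesion, restorative, ndvi]))).fit().params[1]
      print("serial indirect effect:", a * b * c)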

  2. Comparison of the IAEA TRS-398 and AAPM TG-51 absorbed dose to water protocols in the dosimetry of high-energy photon and electron beams

    NASA Astrophysics Data System (ADS)

    Saiful Huq, M.; Andreo, Pedro; Song, Haijun

    2001-11-01

    The International Atomic Energy Agency (IAEA TRS-398) and the American Association of Physicists in Medicine (AAPM TG-51) have published new protocols for the calibration of radiotherapy beams. These protocols are based on the use of an ionization chamber calibrated in terms of absorbed dose to water in a standards laboratory's reference quality beam. This paper compares the recommendations of the two protocols in two ways: (i) by analysing in detail the differences in the basic data included in the two protocols for photon and electron beam dosimetry and (ii) by performing measurements in clinical photon and electron beams and determining the absorbed dose to water following the recommendations of the two protocols. Measurements were made with two Farmer-type ionization chambers and three plane-parallel ionization chamber types in 6, 18 and 25 MV photon beams and 6, 8, 10, 12, 15 and 18 MeV electron beams. The Farmer-type chambers used were NE 2571 and PTW 30001, and the plane-parallel chambers were a Scanditronix-Wellhöfer NACP and Roos, and a PTW Markus chamber. For photon beams, the measured ratios TG-51/TRS-398 of absorbed dose to water D_w ranged between 0.997 and 1.001, with a mean value of 0.999. The ratios for the beam quality correction factors k_Q were found to agree to within about +/-0.2% despite significant differences in the method of beam quality specification for photon beams and in the basic data entering into k_Q. For electron beams, dose measurements were made using direct N_D,w calibrations of cylindrical and plane-parallel chambers in a 60Co gamma-ray beam, as well as cross-calibrations of plane-parallel chambers in a high-energy electron beam. For the direct N_D,w calibrations the ratios TG-51/TRS-398 of absorbed dose to water D_w were found to lie between 0.994 and 1.018 depending upon the chamber and electron beam energy used, with mean values of 0.996, 1.006, and 1.017, respectively, for the cylindrical, well-guarded and not well-guarded plane-parallel chambers. The D_w ratios measured for the cross-calibration procedures varied between 0.993 and 0.997. The largest discrepancies for electron beams between the two protocols arise from the use of different data for the perturbation correction factors p_wall and p_dis of cylindrical and plane-parallel chambers, all in 60Co. A detailed analysis of the reasons for the discrepancies is made which includes comparing the formalisms, correction factors and the quantities in the two protocols.
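
    Both protocols determine dose from the same formalism, D_w = M * N_D,w * k_Q, so the measured ratios amount to comparing each protocol's k_Q data for the same corrected reading. A tiny sketch with placeholder numbers (not values from either protocol):

      def dose_to_water(M_reading, N_Dw, k_Q):
          # absorbed dose to water from a corrected chamber reading
          return M_reading * N_Dw * k_Q

      M = 1.0e-8     # hypothetical corrected reading (C)
      N_Dw = 5.4e7   # hypothetical calibration coefficient (Gy/C)
      ratio = dose_to_water(M, N_Dw, k_Q=0.992) / dose_to_water(M, N_Dw, k_Q=0.993)
      print(f"TG-51/TRS-398 dose ratio: {ratio:.4f}")

    Because M and N_D,w cancel in the ratio, the comparison reduces to the ratio of the two protocols' k_Q values.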

  3. A data distributed parallel algorithm for ray-traced volume rendering

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.

    1993-01-01

    This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5, and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolume concurrently. No communication between processing units is needed during this locally ray-tracing process. A subimage is generated by each processing unit and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
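
    A minimal sketch of the final compositing stage: each node contributes an RGBA subimage, and the full image is assembled with the "over" operator in a predetermined order. Shapes, ordering, and the premultiplied-alpha convention are assumptions for illustration.

      import numpy as np

      def composite_over(front, back):
          # 'over' operator for premultiplied RGBA images
          alpha_front = front[..., 3:4]
          return front + (1.0 - alpha_front) * back

      # hypothetical subimages, one per processing node, already sorted front to back
      subimages = [np.random.rand(256, 256, 4) for _ in range(8)]
      final = subimages[0]
      for img in subimages[1:]:
          final = composite_over(final, img)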

  4. An Analysis of Performance Enhancement Techniques for Overset Grid Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, J. J.; Biswas, R.; Potsdam, M.; Strawn, R. C.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The overset grid methodology has significantly reduced time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement techniques on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the role of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.

  5. WE-G-18A-04: 3D Dictionary Learning Based Statistical Iterative Reconstruction for Low-Dose Cone Beam CT Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, T; UT Southwestern Medical Center, Dallas, TX; Yan, H

    2014-06-15

    Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphic processing units (GPU), to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high quality volume image. During reconstruction, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find a sparse representation on this dictionary basis of each patch in the reconstructed image, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of the GPU. Evaluations are performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections is used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a subset of 121 projections. The image qualities under different resolutions in the z-direction, with or without statistical weighting, are also studied. Results: Compared to the TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, to remove more streaking artifacts, and is less susceptible to blocky artifacts. It is also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing. Using a high spatial resolution along the z direction helps improve the algorithm's robustness. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense the structural information while suppressing noise, and hence to achieve high quality reconstruction. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), China Scholarship Council.
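
    The per-patch sparse-coding step that the GPU kernels parallelise can be illustrated with scikit-learn's orthogonal matching pursuit; the random dictionary below merely stands in for the trained 256-atom, 3x3x3 dictionary, and this CPU version makes no claim about the paper's Cholesky-based implementation.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(2)
      dictionary = rng.normal(size=(27, 256))            # 256 atoms of 3x3x3 voxels
      dictionary /= np.linalg.norm(dictionary, axis=0)   # unit-norm atoms

      patch = rng.normal(size=27)                        # one flattened image patch
      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4, fit_intercept=False)
      omp.fit(dictionary, patch)
      regularised_patch = dictionary @ omp.coef_         # sparse approximation of the patch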

  6. High power parallel ultrashort pulse laser processing

    NASA Astrophysics Data System (ADS)

    Gillner, Arnold; Gretzki, Patrick; Büsing, Lasse

    2016-03-01

    The class of ultra-short-pulse (USP) laser sources is used whenever high-precision and high-quality material processing is demanded. These laser sources deliver pulse durations in the range of ps to fs and are characterized by high peak intensities, leading to direct vaporization of the material with minimal thermal damage. With the availability of industrial laser sources with an average power of up to 1000 W, the main challenge consists of effective energy distribution and disposition. Using lasers with high repetition rates in the MHz region can cause thermal issues like overheating, melt production and low ablation quality. In this paper, we will discuss different approaches to multibeam processing for the utilization of high pulse energies. The combination of diffractive optics and a conventional galvanometer scanner can be used for high-throughput laser ablation, but is limited in its optical quality. We will show which applications can benefit from this hybrid optic and which improvements in productivity are expected. In addition, the optical limitations of the system will be compiled in order to evaluate the suitability of this approach for any given application.

  7. Preparation of Protein Samples for NMR Structure, Function, and Small Molecule Screening Studies

    PubMed Central

    Acton, Thomas B.; Xiao, Rong; Anderson, Stephen; Aramini, James; Buchwald, William A.; Ciccosanti, Colleen; Conover, Ken; Everett, John; Hamilton, Keith; Huang, Yuanpeng Janet; Janjua, Haleema; Kornhaber, Gregory; Lau, Jessica; Lee, Dong Yup; Liu, Gaohua; Maglaqui, Melissa; Ma, Lichung; Mao, Lei; Patel, Dayaban; Rossi, Paolo; Sahdev, Seema; Shastry, Ritu; Swapna, G.V.T.; Tang, Yeufeng; Tong, Saichiu; Wang, Dongyan; Wang, Huang; Zhao, Li; Montelione, Gaetano T.

    2014-01-01

    In this chapter, we concentrate on the production of high quality protein samples for NMR studies. In particular, we provide an in-depth description of recent advances in the production of NMR samples and their synergistic use with recent advancements in NMR hardware. We describe the protein production platform of the Northeast Structural Genomics Consortium, and outline our high-throughput strategies for producing high quality protein samples for nuclear magnetic resonance (NMR) studies. Our strategy is based on the cloning, expression and purification of 6X-His-tagged proteins using T7-based Escherichia coli systems and isotope enrichment in minimal media. We describe 96-well ligation-independent cloning and analytical expression systems, parallel preparative scale fermentation, and high-throughput purification protocols. The 6X-His affinity tag allows for a similar two-step purification procedure implemented in a parallel high-throughput fashion that routinely results in purity levels sufficient for NMR studies (> 97% homogeneity). Using this platform, the protein open reading frames of over 17,500 different targeted proteins (or domains) have been cloned as over 28,000 constructs. Nearly 5,000 of these proteins have been purified to homogeneity in tens of milligram quantities (see Summary Statistics, http://nesg.org/statistics.html), resulting in more than 950 new protein structures, including more than 400 NMR structures, deposited in the Protein Data Bank. The Northeast Structural Genomics Consortium pipeline has been effective in producing protein samples of both prokaryotic and eukaryotic origin. Although this paper describes our entire pipeline for producing isotope-enriched protein samples, it focuses on the major updates introduced during the last 5 years (Phase 2 of the National Institute of General Medical Sciences Protein Structure Initiative). Our advanced automated and/or parallel cloning, expression, purification, and biophysical screening technologies are suitable for implementation in a large individual laboratory or by a small group of collaborating investigators for structural biology, functional proteomics, ligand screening and structural genomics research. PMID:21371586

  8. Modelling for Ship Design and Production

    DTIC Science & Technology

    1991-09-01

    the physical production process. The product has to be delivered within the chain of order processing. The process “ship production” is defined by the...environment is of increasing importance. Changing product types, complexity and parallelism of order processing, short throughput times and fixed due...specialized and high quality products under manufacturing conditions which ensure economic and effective order processing. Mapping these main

  9. 77 FR 47573 - Approval and Promulgation of Implementation Plans; Mississippi; 110(a)(2)(E)(ii) Infrastructure...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... Mississippi Department of Environmental Quality (MDEQ), on July 13, 2012, for parallel processing. This... of Contents I. What is parallel processing? II. Background III. What elements are required under... Executive Order Reviews I. What is parallel processing? Consistent with EPA regulations found at 40 CFR Part...

  10. Low heat transfer, high strength window materials

    DOEpatents

    Berlad, Abraham L.; Salzano, Francis J.; Batey, John E.

    1978-01-01

    A multi-pane window with improved insulating qualities; comprising a plurality of transparent or translucent panes held in an essentially parallel, spaced-apart relationship by a frame. Between at least one pair of panes is a convection defeating means comprising an array of parallel slats or cells so designed as to prevent convection currents from developing in the space between the two panes. The convection defeating structures may have reflective surfaces so as to improve the collection and transmittance of the incident radiant energy. These same means may be used to control (increase or decrease) the transmittance of solar energy as well as to decouple the radiative transfer between the interior surfaces of the transparent panes.

  11. MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control

    NASA Astrophysics Data System (ADS)

    Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming

    2017-09-01

    Increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches in the circumstances of dynamic production. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, quality-related data of large volume and variety generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, Bayesian networks of factors impacting quality are built based on prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the number of computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the Bayesian network method offers a whole new perspective on manufacturing quality control.
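
    A stand-in sketch of the map/reduce pattern (plain Python multiprocessing instead of Hadoop): mappers count factor/quality co-occurrences in their data chunk, and the reducer merges them into the sufficient statistics for a quality node's conditional probability table. Field names and the toy data are hypothetical, and the Bayesian-network learning itself is omitted.

      from collections import Counter
      from multiprocessing import Pool

      def map_counts(chunk):
          # chunk: list of (process_factor_level, quality_outcome) records
          return Counter(chunk)

      def reduce_counts(partials):
          total = Counter()
          for c in partials:
              total.update(c)
          return total

      if __name__ == "__main__":
          records = [("high_temp", "defect"), ("high_temp", "ok"), ("low_temp", "ok")] * 1000
          chunks = [records[i::4] for i in range(4)]          # split across 4 "mappers"
          with Pool(4) as pool:
              counts = reduce_counts(pool.map(map_counts, chunks))
          # counts can then be normalised into P(quality | factor) for the network node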

  12. Scalable Static and Dynamic Community Detection Using Grappolo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halappanavar, Mahantesh; Lu, Hao; Kalyanaraman, Anantharaman

    Graph clustering, popularly known as community detection, is a fundamental kernel for several applications of relevance to the Defense Advanced Research Projects Agency’s (DARPA) Hierarchical Identify Verify Exploit (HIVE) Program. Clusters or communities represent natural divisions within a network that are densely connected within a cluster and sparsely connected to the rest of the network. The need to compute clustering on large scale data necessitates the development of efficient algorithms that can exploit modern architectures that are fundamentally parallel in nature. However, due to their irregular and inherently sequential nature, many of the current algorithms for community detection are challenging to parallelize. In response to the HIVE Graph Challenge, we present several parallelization heuristics for fast community detection using the Louvain method as the serial template. We implement all the heuristics in a software library called Grappolo. Using the inputs from the HIVE Challenge, we demonstrate superior performance and high quality solutions based on four parallelization heuristics. We use Grappolo on static graphs as the first step towards community detection on streaming graphs.

  13. Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes, that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For experimental performance study, we have considered both a realistic mesh problem from NASA as well as synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.

  14. Parallel transmission RF pulse design for eddy current correction at ultra high field.

    PubMed

    Zheng, Hai; Zhao, Tiejun; Qian, Yongxian; Ibrahim, Tamer; Boada, Fernando

    2012-08-01

    Multidimensional spatially selective RF pulses have been used in MRI applications such as B₁ and B₀ inhomogeneity mitigation. However, the long pulse duration has limited their practical applications. Recently, theoretical and experimental studies have shown that parallel transmission can effectively shorten pulse duration without sacrificing the quality of the excitation pattern. Nonetheless, parallel transmission with accelerated pulses can be severely impeded by hardware and/or system imperfections. One such imperfection is the effect of the eddy current field. In this paper, we first show the effects of the eddy current field on the excitation pattern and then report an RF pulse design method to correct eddy current fields caused by the RF coil and the gradient system. Experimental results on a 7 T human eight-channel parallel transmit system show substantial improvements in excitation patterns with the use of eddy current correction. Moreover, the proposed model-based correction method not only demonstrates excitation patterns comparable to those of the trajectory measurement method, but also significantly improves time efficiency. Copyright © 2012. Published by Elsevier Inc.

  15. Embedded Implementation of VHR Satellite Image Segmentation

    PubMed Central

    Li, Chao; Balla-Arabé, Souleymane; Ginhac, Dominique; Yang, Fan

    2016-01-01

    Processing and analysis of Very High Resolution (VHR) satellite images provide a mass of crucial information, which can be used for urban planning, security issues or environmental monitoring. However, they are computationally expensive and, thus, time consuming, while some of the applications, such as natural disaster monitoring and prevention, require high efficiency performance. Fortunately, parallel computing techniques and embedded systems have made great progress in recent years, and a series of massively parallel image processing devices, such as digital signal processors or Field Programmable Gate Arrays (FPGAs), have been made available to engineers at a very convenient price and demonstrate significant advantages in terms of running cost, embeddability, power consumption, flexibility, etc. In this work, we designed a texture region segmentation method for very high resolution satellite images by using the level set algorithm and the multi-kernel theory in a high-abstraction C environment and realized its register-transfer level implementation with the help of a newly proposed high-level synthesis-based design flow. The evaluation experiments demonstrate that the proposed design can produce high quality image segmentation with a significant running-cost advantage. PMID:27240370

  16. Parallel design patterns for a low-power, software-defined compressed video encoder

    NASA Astrophysics Data System (ADS)

    Bruns, Michael W.; Hunt, Martin A.; Prasad, Durga; Gunupudi, Nageswara R.; Sonachalam, Sekar

    2011-06-01

    Video compression algorithms such as H.264 offer much potential for parallel processing that is not always exploited by the technology of a particular implementation. Consumer mobile encoding devices often achieve real-time performance and low power consumption through parallel processing in Application Specific Integrated Circuit (ASIC) technology, but many other applications require a software-defined encoder. High quality compression features needed for some applications such as 10-bit sample depth or 4:2:2 chroma format often go beyond the capability of a typical consumer electronics device. An application may also need to efficiently combine compression with other functions such as noise reduction, image stabilization, real time clocks, GPS data, mission/ESD/user data or software-defined radio in a low power, field upgradable implementation. Low power, software-defined encoders may be implemented using a massively parallel memory-network processor array with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. A dataflow programming methodology may be used to express all of the encoding processes including motion compensation, transform and quantization, and entropy coding. This is a declarative programming model in which the parallelism of the compression algorithm is expressed as a hierarchical graph of tasks with message communication. Data parallel and task parallel design patterns are supported without the need for explicit global synchronization control. An example is described of an H.264 encoder developed for a commercially available, massively parallel memory-network processor device.

  17. High-precision laser microcutting and laser microdrilling using diffractive beam-splitting and high-precision flexible beam alignment

    NASA Astrophysics Data System (ADS)

    Zibner, F.; Fornaroli, C.; Holtkamp, J.; Shachaf, Lior; Kaplan, Natan; Gillner, A.

    2017-08-01

    High-precision laser micro machining gains more importance in industrial applications every month. Optical systems like the helical optics offer the highest quality together with a controllable and adjustable drilling geometry, such as taper angle, aspect ratio and heat-affected zone. The helical optics is based on a rotating Dove prism which is mounted in a hollow shaft motor together with other optical elements like wedge prisms and plane plates. Although the achieved quality can be regarded as extremely high, the low process efficiency is a main reason why this manufacturing technology has only limited demand within the industrial market. The objective of the research studies presented in this paper is to dramatically increase process efficiency as well as process flexibility. During the last years, the average power of commercial ultra-short pulsed laser sources has increased significantly. The efficient utilization of the high average laser power in the field of material processing requires an effective distribution of the laser power onto the workpiece. One approach to increase the efficiency is the application of beam-splitting devices to enable parallel processing. Multi-beam processing is used to parallelize the fabrication of periodic structures, as most applications only require a fraction of the emitted ultra-short pulsed laser power. In order to achieve the highest flexibility in multi-beam processing, the single beams are diverted and re-guided in a way that enables processing with each partial beam on spatially separated samples or semi-finished parts.

  18. Beam quality corrections for parallel-plate ion chambers in electron reference dosimetry

    NASA Astrophysics Data System (ADS)

    Zink, K.; Wulff, J.

    2012-04-01

    Current dosimetry protocols (AAPM, IAEA, IPEM, DIN) recommend parallel-plate ionization chambers for dose measurements in clinical electron beams. This study presents detailed Monte Carlo simulations of beam quality correction factors for four different types of parallel-plate chambers: NACP-02, Markus, Advanced Markus and Roos. These chambers differ in constructive details which should have notable impact on the resulting perturbation corrections, hence on the beam quality corrections. The results reveal deviations to the recommended beam quality corrections given in the IAEA TRS-398 protocol in the range of 0%-2% depending on energy and chamber type. For well-guarded chambers, these deviations could be traced back to a non-unity and energy-dependent wall perturbation correction. In the case of the guardless Markus chamber, a nearly energy-independent beam quality correction is resulting as the effects of wall and cavity perturbation compensate each other. For this chamber, the deviations to the recommended values are the largest and may exceed 2%. From calculations of type-B uncertainties including effects due to uncertainties of the underlying cross-sectional data as well as uncertainties due to the chamber material composition and chamber geometry, the overall uncertainty of calculated beam quality correction factors was estimated to be <0.7%. Due to different chamber positioning recommendations given in the national and international dosimetry protocols, an additional uncertainty in the range of 0.2%-0.6% is present. According to the IAEA TRS-398 protocol, the uncertainty in clinical electron dosimetry using parallel-plate ion chambers is 1.7%. This study may help to reduce this uncertainty significantly.

  19. Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.

    PubMed

    Dematté, Lorenzo

    2012-01-01

    Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space is becoming more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transportation phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation could be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise of systems biology to understand a system as a whole, we need to scale up the size of models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high quality graphics output.
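
    The core per-molecule operation that the GPU version evaluates in parallel is the Brownian diffusion step; the vectorised numpy sketch below stands in for the CUDA kernel, with a hypothetical molecule count and a single diffusion coefficient.

      import numpy as np

      rng = np.random.default_rng(3)
      n_molecules, D, dt = 1_000_000, 1e-12, 1e-4   # count, diffusion coeff. (m^2/s), step (s)

      positions = rng.uniform(0.0, 1e-6, size=(n_molecules, 3))
      # Brownian displacement per axis has standard deviation sqrt(2*D*dt)
      positions += rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_molecules, 3))
      # unimolecular/bimolecular reactions and surface interactions would follow here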

  20. Array-based, parallel hierarchical mesh refinement algorithms for unstructured meshes

    DOE PAGES

    Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...

    2016-08-18

    In this paper, we describe an array-based hierarchical mesh refinement capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate the nested hierarchies from an initial coarse mesh that can be used for a variety of purposes such as in multigrid solvers/preconditioners, to do solution convergence and verification studies and to improve overall parallel efficiency by decreasing I/O bandwidth requirements (by loading smaller meshes and in memory refinement). We also describe a high-order boundary reconstruction capability that can be used to project the new points after refinement using high-order approximations instead of linear projection in order to minimize and provide more control on geometrical errors introduced by curved boundaries. The capability is developed under the parallel unstructured mesh framework "Mesh Oriented dAtaBase" (MOAB; Tautges et al. (2004)). We describe the underlying data structures and algorithms to generate such hierarchies in parallel and present numerical results for computational efficiency and effect on mesh quality. Furthermore, we also present results to demonstrate the applicability of the developed capability to study convergence properties of different point projection schemes for various mesh hierarchies and to a multigrid finite-element solver for elliptic problems.
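
    The elementary operation behind uniform refinement is splitting each element by its edge midpoints; the sketch below shows it for a single triangle (the array-based storage, parallel bookkeeping, and high-order boundary projection described above are not shown).

      import numpy as np

      def refine_triangle(v0, v1, v2):
          # edge midpoints split the parent into four congruent children
          m01, m12, m20 = (v0 + v1) / 2, (v1 + v2) / 2, (v2 + v0) / 2
          return [(v0, m01, m20), (m01, v1, m12), (m20, m12, v2), (m01, m12, m20)]

      tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
      children = refine_triangle(*tri)   # one level of the nested hierarchy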

  1. Parallel processing in the honeybee olfactory pathway: structure, function, and evolution.

    PubMed

    Rössler, Wolfgang; Brill, Martin F

    2013-11-01

    Animals face highly complex and dynamic olfactory stimuli in their natural environments, which require fast and reliable olfactory processing. Parallel processing is a common principle of sensory systems supporting this task, for example in visual and auditory systems, but its role in olfaction remained unclear. Studies in the honeybee focused on a dual olfactory pathway. Two sets of projection neurons connect glomeruli in two antennal-lobe hemilobes via lateral and medial tracts in opposite sequence with the mushroom bodies and lateral horn. Comparative studies suggest that this dual-tract circuit represents a unique adaptation in Hymenoptera. Imaging studies indicate that glomeruli in both hemilobes receive redundant sensory input. Recent simultaneous multi-unit recordings from projection neurons of both tracts revealed widely overlapping response profiles strongly indicating parallel olfactory processing. Whereas lateral-tract neurons respond fast with broad (generalistic) profiles, medial-tract neurons are odorant specific and respond slower. In analogy to "what-" and "where" subsystems in visual pathways, this suggests two parallel olfactory subsystems providing "what-" (quality) and "when" (temporal) information. Temporal response properties may support across-tract coincidence coding in higher centers. Parallel olfactory processing likely enhances perception of complex odorant mixtures to decode the diverse and dynamic olfactory world of a social insect.

  2. Optimization of a micro-scale, high throughput process development tool and the demonstration of comparable process performance and product quality with biopharmaceutical manufacturing processes.

    PubMed

    Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J

    2017-07-14

    In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available micro-liter sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench-scale and clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), comparable product quality results at all scales make this tool an appropriate scale model to enable purification and product quality comparisons of HTPD bioreactor conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  3. High-rate serial interconnections for embedded and distributed systems with power and resource constraints

    NASA Astrophysics Data System (ADS)

    Sheynin, Yuriy; Shutenko, Felix; Suvorova, Elena; Yablokov, Evgenej

    2008-04-01

    High rate interconnections are important subsystems in modern data processing and control systems of many classes. They are especially important in prospective embedded and on-board systems that used to be multicomponent systems with parallel or distributed architecture, [1]. Modular architecture systems of previous generations were based on parallel busses that were widely used and standardised: VME, PCI, CompactPCI, etc. Busses evolution went in improvement of bus protocol efficiency (burst transactions, split transactions, etc.) and increasing operation frequencies. However, due to multi-drop bus nature and multi-wire skew problems the parallel bussing speedup became more and more limited. For embedded and on-board systems additional reason for this trend was in weight, size and power constraints of an interconnection and its components. Parallel interfaces have become technologically more challenging as their respective clock frequencies have increased to keep pace with the bandwidth requirements of their attached storage devices. Since each interface uses a data clock to gate and validate the parallel data (which is normally 8 bits or 16 bits wide), the clock frequency need only be equivalent to the byte rate or word rate being transmitted. In other words, for a given transmission frequency, the wider the data bus, the slower the clock. As the clock frequency increases, more high frequency energy is available in each of the data lines, and a portion of this energy is dissipated in radiation. Each data line not only transmits this energy but also receives some from its neighbours. This form of mutual interference is commonly called "cross-talk," and the signal distortion it produces can become another major contributor to loss of data integrity unless compensated by appropriate cable designs. Other transmission problems such as frequency-dependent attenuation and signal reflections, while also applicable to serial interfaces, are more troublesome in parallel interfaces due to the number of additional cable conductors involved. In order to compensate for these drawbacks, higher quality cables, shorter cable runs and fewer devices on the bus have been the norm. Finally, the physical bulk of the parallel cables makes them more difficult to route inside an enclosure, hinders cooling airflow and is incompatible with the trend toward smaller form-factor devices. Parallel busses worked in systems during the past 20 years, but the accumulated problems dictate the need for change and the technology is available to spur the transition. The general trend in high-rate interconnections turned from parallel bussing to scalable interconnections with a network architecture and high-rate point-to-point links. Analysis showed that data links with serial information transfer could achieve higher throughput and efficiency and it was confirmed in various research and practical design. Serial interfaces offer an improvement over older parallel interfaces: better performance, better scalability, and also better reliability as the parallel interfaces are at their limits of speed with reliable data transfers and others. The trend was implemented in major standards' families evolution: e.g. from PCI/PCI-X parallel bussing to PCIExpress interconnection architecture with serial lines, from CompactPCI parallel bus to ATCA (Advanced Telecommunications Architecture) specification with serial links and network topologies of an interconnection, etc. 
    In this article we consider a general set of characteristics and features of serial interconnections and give a brief overview of serial interconnection specifications. In more detail, we present the SpaceWire interconnection technology. Having been developed for space on-board system applications, SpaceWire has important features and characteristics that make it a promising interconnection for a wide range of embedded systems.

  4. National Centers for Environmental Prediction

    Science.gov Websites

    Site navigation listing: operational and experimental model forecast graphics, operational and parallel verification/diagnostics, and developmental air quality forecasts and verification.

  5. Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling

    NASA Technical Reports Server (NTRS)

    Rios, Joseph Lucio; Ross, Kevin

    2009-01-01

    Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
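
    The parallel step of the decomposition can be pictured as below: with one flight per subproblem, each worker independently proposes that flight's best delay given the current dual prices from the master problem. The cost model, data, and process pool are placeholders, not the paper's formulation.

      from concurrent.futures import ProcessPoolExecutor

      def solve_subproblem(args):
          flight_id, candidate_delays, dual_price = args
          # toy reduced cost: quadratic delay penalty minus dual revenue
          best = min(candidate_delays, key=lambda d: 0.01 * d * d - dual_price * d)
          return flight_id, best

      if __name__ == "__main__":
          flights = [(i, range(0, 61, 5), 0.3) for i in range(1000)]   # hypothetical inputs
          with ProcessPoolExecutor() as pool:
              proposals = dict(pool.map(solve_subproblem, flights))
          # the master problem would combine these columns and update the dual prices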

  6. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  7. Random Number Generation for High Performance Computing

    DTIC Science & Technology

    2015-01-01

    number streams, a quality metric for the parallel random number streams. ... with each subtask executed by a separate thread or process (henceforth, process). Each process has ...

  8. [QUIPS: quality improvement in postoperative pain management].

    PubMed

    Meissner, Winfried

    2011-01-01

    Despite the availability of high-quality guidelines and advanced pain management techniques acute postoperative pain management is still far from being satisfactory. The QUIPS (Quality Improvement in Postoperative Pain Management) project aims to improve treatment quality by means of standardised data acquisition, analysis of quality and process indicators, and feedback and benchmarking. During a pilot phase funded by the German Ministry of Health (BMG), a total of 12,389 data sets were collected from six participating hospitals. Outcome improved in four of the six hospitals. Process indicators, such as routine pain documentation, were only poorly correlated with outcomes. To date, more than 130 German hospitals use QUIPS as a routine quality management tool. An EC-funded parallel project disseminates the concept internationally. QUIPS demonstrates that patient-reported outcomes in postoperative pain management can be benchmarked in routine clinical practice. Quality improvement initiatives should use outcome instead of structural and process parameters. The concept is transferable to other fields of medicine. Copyright © 2011. Published by Elsevier GmbH.

  9. MRI of the wrist at 7 tesla using an eight-channel array coil combined with parallel imaging: preliminary results.

    PubMed

    Chang, Gregory; Friedrich, Klaus M; Wang, Ligong; Vieira, Renata L R; Schweitzer, Mark E; Recht, Michael P; Wiggins, Graham C; Regatte, Ravinder R

    2010-03-01

    To determine the feasibility of performing MRI of the wrist at 7 Tesla (T) with parallel imaging and to evaluate how acceleration factors (AF) affect signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and image quality. This study had institutional review board approval. A four-transmit eight-receive channel array coil was constructed in-house. Nine healthy subjects were scanned on a 7T whole-body MR scanner. Coronal and axial images of cartilage and trabecular bone micro-architecture (3D-Fast Low Angle Shot (FLASH) with and without fat suppression, repetition time/echo time = 20 ms/4.5 ms, flip angle = 10 degrees, 0.169-0.195 x 0.169-0.195 mm, 0.5-1 mm slice thickness) were obtained with AF 1, 2, 3, 4. T1-weighted fast spin-echo (FSE), proton density-weighted FSE, and multiple-echo data image combination (MEDIC) sequences were also performed. SNR and CNR were measured. Three musculoskeletal radiologists rated image quality. Linear correlation analysis and paired t-tests were performed. At higher AF, SNR and CNR decreased linearly for cartilage, muscle, and trabecular bone (r < -0.98). At AF 4, reductions in SNR/CNR were: 52%/60% (cartilage), 72%/63% (muscle), 45%/50% (trabecular bone). Radiologists scored images with AF 1 and 2 as near-excellent, AF 3 as good-to-excellent (P = 0.075), and AF 4 as average-to-good (P = 0.11). It is feasible to perform high resolution 7T MRI of the wrist with parallel imaging. SNR and CNR decrease with higher AF, but image quality remains above-average.
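
    The SNR penalty of parallel imaging reported above follows the usual relation SNR_R = SNR_full / (g * sqrt(R)), with R the acceleration factor and g the coil geometry factor; the numbers below are only illustrative, not measured values from the study.

      import math

      def accelerated_snr(snr_full, R, g_factor):
          return snr_full / (g_factor * math.sqrt(R))

      snr_full = 100.0   # hypothetical fully sampled SNR
      for R, g in [(1, 1.0), (2, 1.05), (3, 1.2), (4, 1.4)]:
          print(f"AF {R}: SNR ~ {accelerated_snr(snr_full, R, g):.1f}")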

  10. In vivo verification of particle therapy: how Compton camera configurations affect 3D image quality

    NASA Astrophysics Data System (ADS)

    Mackin, D.; Draeger, E.; Peterson, S.; Polf, J.; Beddar, S.

    2017-05-01

    The steep dose gradients enabled by the Bragg peaks of particle therapy beams are a double-edged sword. They enable highly conformal dose distributions, but even small deviations from the planned beam range can cause overdosing of healthy tissue or under-dosing of the tumour. To reduce this risk, particle therapy treatment plans include margins large enough to account for all the sources of range uncertainty, which include patient setup errors, patient anatomy changes, and CT number to stopping power ratios. Any system that could verify the beam range in vivo would allow reduced margins and more conformal dose distributions. Toward our goal of developing such a system based on Compton camera (CC) imaging, we studied how three configurations (single camera, parallel opposed, and orthogonal) affect the quality of the 3D images. We found that single CC and parallel opposed configurations produced superior images in 2D. The increase in parallax produced by an orthogonal CC configuration was shown to be beneficial in producing artefact-free 3D images.

  11. Multi-GPU Acceleration of Branchless Distance Driven Projection and Backprojection for Clinical Helical CT.

    PubMed

    Mitra, Ayan; Politte, David G; Whiting, Bruce R; Williamson, Jeffrey F; O'Sullivan, Joseph A

    2017-01-01

    Model-based image reconstruction (MBIR) techniques have the potential to generate high quality images from noisy measurements and a small number of projections, which can reduce the x-ray dose to patients. These MBIR techniques rely on projection and backprojection to refine an image estimate. One of the widely used projector models for these modern MBIR techniques is branchless distance-driven (DD) projection and backprojection. While this method produces superior quality images, the computational cost of iterative updates keeps it from being ubiquitous in clinical applications. In this paper, we provide several new parallelization ideas for concurrent execution of the DD projectors in multi-GPU systems using CUDA programming tools. We have introduced some novel schemes for dividing the projection data and image voxels over multiple GPUs to avoid runtime overhead and inter-device synchronization issues. We have also reduced the complexity of the algorithm's overlap calculation by eliminating the common projection plane and directly projecting the detector boundaries onto image voxel boundaries. To reduce the time required for calculating the overlap between the detector edges and image voxel boundaries, we have proposed a pre-accumulation technique to accumulate image intensities in perpendicular 2D image slabs (from a 3D image) before projection and after backprojection to ensure our DD kernels run faster in parallel GPU threads. For the implementation of our iterative MBIR technique we use a parallel multi-GPU version of the alternating minimization (AM) algorithm with a penalized likelihood update. The time performance of our proposed reconstruction method with Siemens Sensation 16 patient scan data shows an average 24-times speedup using a single TITAN X GPU and a 74-times speedup using 3 TITAN X GPUs in parallel for combined projection and backprojection.

  12. Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gupta, Manish

    1992-01-01

    Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, that accepts Fortran 77 programs, and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of scientific application programs with regular computations.

  13. Adaptive conversion of a high-order mode beam into a near-diffraction-limited beam.

    PubMed

    Zhao, Haichuan; Wang, Xiaolin; Ma, Haotong; Zhou, Pu; Ma, Yanxing; Xu, Xiaojun; Zhao, Yijun

    2011-08-01

    We present a new method for efficiently transforming a high-order mode beam into a nearly Gaussian beam with much higher beam quality. The method is based on modulating the phases of the different lobes with a stochastic parallel gradient descent (SPGD) algorithm and coherently adding them after phase flattening. We demonstrate the method by transforming an LP11 mode into a nearly Gaussian beam. The experimental results reveal that the power in the diffraction-limited bucket in the far field is increased by more than a factor of 1.5.
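
    The phase-modulation loop can be illustrated with a generic stochastic parallel gradient descent sketch (the gain, perturbation size, and the toy two-lobe metric below are assumptions for illustration, not the experimental parameters):

    ```python
    # Minimal SPGD sketch: perturb all phases in parallel, measure the change in a
    # scalar metric (e.g. power in the diffraction-limited bucket), update.
    import numpy as np

    def spgd(metric, phases, gain=1.0, perturb=0.05, iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        phases = np.array(phases, dtype=float)
        for _ in range(iters):
            delta = perturb * rng.choice([-1.0, 1.0], size=phases.size)
            dJ = metric(phases + delta) - metric(phases - delta)
            phases += gain * dJ * delta      # all phases updated in parallel
        return phases

    def bucket_power(phases):
        # Toy metric: coherent sum of two equal-amplitude lobes, maximized when
        # their relative phase is flattened to zero (mod 2*pi).
        return abs(np.exp(1j * phases).sum()) ** 2

    result = spgd(bucket_power, phases=[0.0, 1.3])   # start deliberately mis-phased
    print(np.round(result % (2 * np.pi), 3))         # the two phases end up nearly equal
    ```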

  14. The artificial retina processor for track reconstruction at the LHC crossing rate

    DOE PAGES

    Abba, A.; Bedeschi, F.; Citterio, M.; ...

    2015-03-16

    We present results of an R&D study for a specialized processor capable of precisely reconstructing, in pixel detectors, hundreds of charged-particle tracks from high-energy collisions at a 40 MHz rate. We apply a highly parallel pattern-recognition algorithm, inspired by studies of how the brain processes visual images, and describe in detail an efficient hardware implementation in high-speed, high-bandwidth FPGA devices. This is the first detailed demonstration of reconstruction of offline-quality tracks at 40 MHz and makes the device suitable for processing Large Hadron Collider events at the full crossing frequency.

  15. Use of Whatman-41 filters in air quality sampling networks (with applications to elemental analysis)

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Sidik, S. M.; King, R. B.; Fordyce, J. S.; Burr, J. C.

    1974-01-01

    The operation of a 16-site parallel high volume air sampling network with glass fiber filters on one unit and Whatman-41 filters on the other is reported. The network data and data from several other experiments indicate that (1) sampler-to-sampler and filter-to-filter variabilities are small; (2) the hygroscopic affinity of Whatman-41 filters need not introduce errors; and (3) suspended particulate samples from glass fiber filters averaged slightly, but not statistically significantly, higher than those from Whatman-41 filters. The results obtained demonstrate the practicability of Whatman-41 filters for air quality monitoring and elemental analysis.

  16. MASQOT: a method for cDNA microarray spot quality control

    PubMed Central

    Bylesjö, Max; Eriksson, Daniel; Sjödin, Andreas; Sjöström, Michael; Jansson, Stefan; Antti, Henrik; Trygg, Johan

    2005-01-01

    Background: cDNA microarray technology has emerged as a major player in the parallel detection of biomolecules, but still suffers from fundamental technical problems. Identifying and removing unreliable data is crucial to avoid the risk of misleading analysis results. Visual assessment of spot quality is still a common procedure, despite the time-consuming work of manually inspecting spots in the range of hundreds of thousands or more. Results: A novel methodology for cDNA microarray spot quality control is outlined. Multivariate discriminant analysis was used to assess spot quality based on existing and novel descriptors. The presented methodology displays high reproducibility and was found superior in identifying unreliable data compared to other evaluated methodologies. Conclusion: The proposed methodology for cDNA microarray spot quality control generates non-discrete values of spot quality, which can be utilized as weights in subsequent analysis procedures as well as to discard spots of undesired quality using the suggested threshold values. The MASQOT approach provides a consistent assessment of spot quality and can be considered an alternative to the labor-intensive manual quality assessment process. PMID:16223442
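
    A hedged illustration of the kind of discriminant-based scoring described above (synthetic descriptors and scikit-learn standing in for the MASQOT implementation): a linear discriminant fit on per-spot quality descriptors yields a continuous quality score that can be used as a weight or thresholded.

    ```python
    # Illustrative spot-quality scoring with a linear discriminant; the descriptor
    # names and values are hypothetical.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    # Hypothetical descriptors per spot: signal-to-noise, circularity, bg variance
    good = rng.normal([8.0, 0.9, 0.1], 0.2, size=(200, 3))
    bad = rng.normal([3.0, 0.5, 0.6], 0.2, size=(200, 3))
    X = np.vstack([good, bad])
    y = np.array([1] * 200 + [0] * 200)

    lda = LinearDiscriminantAnalysis().fit(X, y)
    quality = lda.predict_proba(X)[:, 1]      # continuous quality score in [0, 1]

    weights = quality                          # use as weights downstream, or ...
    keep = quality > 0.5                       # ... apply a hard threshold
    print(keep.mean(), quality[:5].round(2))
    ```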

  17. Unstable Resonator Optical Parametric Oscillator Based on Quasi-Phase-Matched RbTiOAsO(4).

    PubMed

    Hansson, G; Karlsson, H; Laurell, F

    2001-10-20

    We demonstrate improved signal and idler-beam quality of a 3-mm-aperture quasi-phase-matched RbTiOAsO(4) optical parametric oscillator through use of a confocal unstable resonator as compared with a plane-parallel resonator. Both oscillators were singly resonant, and the periodically poled RbTiOAsO(4) crystal generated a signal at 1.56 μm and an idler at 3.33 μm when pumped at 1.064 μm. We compared the beam quality produced by the 1.2-magnification confocal unstable resonator with the beam quality produced by the plane-parallel resonator by measuring the signal and the idler beam M² value. We also investigated the effect of pump-beam intensity distribution by comparing the result of a Gaussian and a top-hat intensity profile pump beam. We generated a signal beam of M² approximately 7 and an idler beam of M² approximately 2.5 through use of an unstable resonator and a Gaussian intensity profile pump beam. This corresponds to an increase of a factor of approximately 2 in beam quality for the signal and a factor of 3 for the idler, compared with the beam quality of the plane-parallel resonator optical parametric oscillator.

  18. Balancing exploration, uncertainty and computational demands in many objective reservoir optimization

    NASA Astrophysics Data System (ADS)

    Zatarain Salazar, Jazmin; Reed, Patrick M.; Quinn, Julianne D.; Giuliani, Matteo; Castelletti, Andrea

    2017-11-01

    Reservoir operations are central to our ability to manage river basin systems serving conflicting multi-sectoral demands under increasingly uncertain futures. These challenges motivate the need for new solution strategies capable of effectively and efficiently discovering the multi-sectoral tradeoffs that are inherent to alternative reservoir operation policies. Evolutionary many-objective direct policy search (EMODPS) is gaining importance in this context due to its capability of addressing multiple objectives and its flexibility in incorporating multiple sources of uncertainties. This simulation-optimization framework has high potential for addressing the complexities of water resources management, and it can benefit from current advances in parallel computing and meta-heuristics. This study contributes a diagnostic assessment of state-of-the-art parallel strategies for the auto-adaptive Borg Multi Objective Evolutionary Algorithm (MOEA) to support EMODPS. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple sectoral demands from hydropower production, urban water supply, recreation and environmental flows need to be balanced. Using EMODPS with different parallel configurations of the Borg MOEA, we optimize operating policies over different size ensembles of synthetic streamflows and evaporation rates. As we increase the ensemble size, we increase the statistical fidelity of our objective function evaluations at the cost of higher computational demands. This study demonstrates how to overcome the mathematical and computational barriers associated with capturing uncertainties in stochastic multiobjective reservoir control optimization, where parallel algorithmic search serves to reduce the wall-clock time in discovering high quality representations of key operational tradeoffs. Our results show that emerging self-adaptive parallelization schemes exploiting cooperative search populations are crucial. Such strategies provide a promising new set of tools for effectively balancing exploration, uncertainty, and computational demands when using EMODPS.

  19. "One-Stop Shop": Free-Breathing Dynamic Contrast-Enhanced Magnetic Resonance Imaging of the Kidney Using Iterative Reconstruction and Continuous Golden-Angle Radial Sampling.

    PubMed

    Riffel, Philipp; Zoellner, Frank G; Budjan, Johannes; Grimm, Robert; Block, Tobias K; Schoenberg, Stefan O; Hausmann, Daniel

    2016-11-01

    The purpose of the present study was to evaluate a recently introduced technique for free-breathing dynamic contrast-enhanced renal magnetic resonance imaging (MRI) applying a combination of radial k-space sampling, parallel imaging, and compressed sensing. The technique allows retrospective reconstruction of 2 motion-suppressed sets of images from the same acquisition: one with lower temporal resolution but improved image quality for subjective image analysis, and one with high temporal resolution for quantitative perfusion analysis. In this study, 25 patients underwent a kidney examination, including a prototypical fat-suppressed, golden-angle radial stack-of-stars T1-weighted 3-dimensional spoiled gradient-echo examination (GRASP) performed after contrast agent administration during free breathing. Images were reconstructed at temporal resolutions of 55 spokes per frame (6.2 seconds) and 13 spokes per frame (1.5 seconds). The GRASP images were evaluated by 2 blinded radiologists. First, the reconstructions with low temporal resolution underwent subjective image analysis: the radiologists assessed the best arterial phase and the best renal phase and rated image quality score for each patient on a 5-point Likert-type scale. In addition, the diagnostic confidence was rated according to a 3-point Likert-type scale. Similarly, respiratory motion artifacts and streak artifacts were rated according to a 3-point Likert-type scale. Then, the reconstructions with high temporal resolution were analyzed with a voxel-by-voxel deconvolution approach to determine the renal plasma flow, and the results were compared with values reported in previous literature. Reader 1 and reader 2 rated the overall image quality score for the best arterial phase and the best renal phase with a median image quality score of 4 (good image quality) for both phases, respectively. A high diagnostic confidence (median score of 3) was observed. There were no respiratory motion artifacts in any of the patients. Streak artifacts were present in all of the patients, but did not compromise diagnostic image quality. The estimated renal plasma flow was slightly higher (295 ± 78 mL/100 mL per minute) than reported in previous MRI-based studies, but also closer to the physiologically expected value. Dynamic, motion-suppressed contrast-enhanced renal MRI can be performed in high diagnostic quality during free breathing using a combination of golden-angle radial sampling, parallel imaging, and compressed sensing. Both morphologic and quantitative functional information can be acquired within a single acquisition.
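
    The dual temporal resolutions follow directly from the spoke counts quoted above; a one-line arithmetic check (using only the numbers in this record):

    ```python
    # With golden-angle radial sampling, frame duration = spokes per frame x time
    # per spoke, so both reconstructions come from the same continuous acquisition.
    time_per_spoke = 6.2 / 55             # ~0.113 s per spoke (from the 55-spoke frames)
    print(round(13 * time_per_spoke, 2))  # ~1.47 s, matching the quoted 1.5 s frames
    ```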

  20. Data Acquisition and Linguistic Resources

    NASA Astrophysics Data System (ADS)

    Strassel, Stephanie; Christianson, Caitlin; McCary, John; Staderman, William; Olive, Joseph

    All human language technology demands substantial quantities of data for system training and development, plus stable benchmark data to measure ongoing progress. While creation of high quality linguistic resources is both costly and time consuming, such data has the potential to profoundly impact not just a single evaluation program but language technology research in general. GALE's challenging performance targets demand linguistic data on a scale and complexity never before encountered. Resources cover multiple languages (Arabic, Chinese, and English) and multiple genres -- both structured (newswire and broadcast news) and unstructured (web text, including blogs and newsgroups, and broadcast conversation). These resources include significant volumes of monolingual text and speech, parallel text, and transcribed audio combined with multiple layers of linguistic annotation, ranging from word aligned parallel text and Treebanks to rich semantic annotation.

  1. Inspection criteria ensure quality control of parallel gap soldering

    NASA Technical Reports Server (NTRS)

    Burka, J. A.

    1968-01-01

    Investigation of parallel gap soldering of electrical leads resulted in recommendation on material preparation, equipment, process control, and visual inspection criteria to ensure reliable solder joints. The recommendations will minimize problems in heat-dwell time, amount of solder, bridging conductors, and damage of circuitry.

  2. The effect of curve sawing two-sided cants from small diameter hardwood sawlogs on lumber and pallet part yields

    Treesearch

    Peter Hamner; Marshall S. White; Philip A. Araman

    2006-01-01

    Curve sawing is a primary log breakdown process that incorporates gang-saw technology to allow two-sided cants from logs with sweep to be cut parallel to the log surface or log axis. Since curve-sawn logs with sweep are cut along the grain, the potential for producing high quality straight-grain lumber and cants increases, and strength, stiffness, and dimensional...

  3. Pulsar Emission Geometry and Accelerating Field Strength

    NASA Technical Reports Server (NTRS)

    DeCesar, Megan E.; Harding, Alice K.; Miller, M. Coleman; Kalapotharakos, Constantinos; Parent, Damien

    2012-01-01

    The high-quality Fermi LAT observations of gamma-ray pulsars have opened a new window to understanding the generation mechanisms of high-energy emission from these systems. The high statistics allow for careful modeling of the light curve features as well as for phase resolved spectral modeling. We modeled the LAT light curves of the Vela and CTA 1 pulsars with simulated high-energy light curves generated from geometrical representations of the outer gap and slot gap emission models, within the vacuum retarded dipole and force-free fields. A Markov Chain Monte Carlo maximum likelihood method was used to explore the phase space of the magnetic inclination angle, viewing angle, maximum emission radius, and gap width. We also used the measured spectral cutoff energies to estimate the accelerating parallel electric field dependence on radius, under the assumptions that the high-energy emission is dominated by curvature radiation and the geometry (radius of emission and minimum radius of curvature of the magnetic field lines) is determined by the best fitting light curves for each model. We find that light curves from the vacuum field more closely match the observed light curves and multiwavelength constraints, and that the calculated parallel electric field can place additional constraints on the emission geometry.

  4. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
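
    For orientation, the core data structure behind such assemblers is the de Bruijn graph; a minimal single-machine sketch is shown below (illustrative only; GiGA itself distributes this construction over Hadoop and Giraph):

    ```python
    # Minimal de Bruijn graph sketch: nodes are (k-1)-mers, edges are k-mers, and
    # contigs come from walking unambiguous paths.
    from collections import defaultdict

    def de_bruijn(reads, k):
        graph = defaultdict(list)
        for read in reads:
            for i in range(len(read) - k + 1):
                kmer = read[i:i + k]
                graph[kmer[:-1]].append(kmer[1:])   # (k-1)-mer -> (k-1)-mer edge
        return graph

    reads = ["ACGTACGA", "GTACGATT"]
    graph = de_bruijn(reads, k=4)
    for node, succs in sorted(graph.items()):
        print(node, "->", succs)
    ```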

  5. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406

  6. An intelligent architecture based on Field Programmable Gate Arrays designed to detect moving objects by using Principal Component Analysis.

    PubMed

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices.
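
    A hedged NumPy stand-in for the eigenbackground-and-threshold pipeline described in the two records above (SVD replaces the FPGA's Jacobi diagonalization; the frame sizes, component count, and threshold are assumptions, not the published configuration):

    ```python
    # Illustrative PCA-based motion detection: build an eigenbackground from
    # training frames, reconstruct each new frame in that subspace, and threshold
    # the residual to flag moving pixels.
    import numpy as np

    def fit_background(frames, n_components=8):
        X = frames.reshape(frames.shape[0], -1).astype(float)
        mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, vt[:n_components]

    def moving_mask(frame, mean, basis, thresh=30.0):
        x = frame.reshape(-1).astype(float) - mean
        recon = basis.T @ (basis @ x)              # projection onto the subspace
        residual = np.abs(x - recon).reshape(frame.shape)
        return residual > thresh                   # True where motion is detected

    rng = np.random.default_rng(0)
    train = rng.normal(100, 2, size=(30, 48, 64))  # static background frames
    mean, basis = fit_background(train)
    test = train[0].copy()
    test[10:20, 20:30] += 80                       # synthetic moving object
    print(moving_mask(test, mean, basis).sum())    # roughly 100 flagged pixels
    ```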

  7. HARP: A Dynamic Inertial Spectral Partitioner

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Sohn, Andrew; Biswas, Rupak

    1997-01-01

    Partitioning unstructured graphs is central to the parallel solution of computational science and engineering problems. Spectral partitioners, such as recursive spectral bisection (RSB), have proven effective in generating high-quality partitions of realistically-sized meshes. The major problem which hindered their widespread use was their long execution times. This paper presents a new inertial spectral partitioner, called HARP. The main objective of the proposed approach is to quickly partition the meshes at runtime in a manner that works efficiently for real applications in the context of distributed-memory machines. The underlying principle of HARP is to find the eigenvectors of the unpartitioned vertices and then project them onto the eigenvectors of the original mesh. Results for various meshes ranging in size from 1000 to 100,000 vertices indicate that HARP can indeed partition meshes rapidly at runtime. Experimental results show that our largest mesh can be partitioned sequentially in only a few seconds on an SP2, which is several times faster than other spectral partitioners, while maintaining the solution quality of the proven RSB method. A parallel MPI version of HARP has also been implemented on the IBM SP2 and Cray T3E. Parallel HARP, running on 64 processors of the SP2 and T3E, can partition a mesh containing more than 100,000 vertices into 64 subgrids in about half a second. These results indicate that graph partitioning can now be truly embedded in dynamically-changing real-world applications.
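
    The spectral step underlying RSB-style partitioners can be sketched in a few lines (an illustration of recursive spectral bisection in general, not the HARP code): split the graph by the sign of the Fiedler vector of its Laplacian.

    ```python
    # One spectral bisection step on a toy graph: two 3-cliques joined by an edge.
    import numpy as np

    def spectral_bisect(adj):
        """adj: symmetric 0/1 adjacency matrix. Returns a boolean partition mask."""
        degree = np.diag(adj.sum(axis=1))
        laplacian = degree - adj
        eigvals, eigvecs = np.linalg.eigh(laplacian)
        fiedler = eigvecs[:, 1]            # eigenvector of second-smallest eigenvalue
        return fiedler >= np.median(fiedler)

    adj = np.zeros((6, 6), int)
    for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        adj[a, b] = adj[b, a] = 1
    print(spectral_bisect(adj))   # expected: one clique on each side of the cut
    ```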

  8. Methods for Ensuring High Quality of Coding of Cause of Death. The Mortality Register to Follow Southern Urals Populations Exposed to Radiation.

    PubMed

    Startsev, N; Dimov, P; Grosche, B; Tretyakov, F; Schüz, J; Akleyev, A

    2015-01-01

    To follow up populations exposed to several radiation accidents in the Southern Urals, a cause-of-death registry was established at the Urals Center capturing deaths in the Chelyabinsk, Kurgan and Sverdlovsk regions since 1950. When registering deaths over such a long time period, quality measures need to be in place to maintain quality and reduce the impact of individual coders as well as quality changes in death certificates. To ensure the uniformity of coding, a method for semi-automatic coding was developed, which is described here. Briefly, the method is based on a dynamic thesaurus, database-supported coding and parallel coding by two different individuals. A comparison of the proposed method for organizing the coding process with the common procedure of coding showed good agreement, with, at the end of the coding process, 70-90% agreement for the three-digit ICD-9 rubrics. The semi-automatic method ensures a sufficiently high quality of coding while at the same time providing an opportunity to reduce the labor intensity inherent in the creation of large-volume cause-of-death registries.

  9. Investigation of undersampling and reconstruction algorithm dependence on respiratory correlated 4D-MRI for online MR-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Mickevicius, Nikolai J.; Paulson, Eric S.

    2017-04-01

    The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-gRT. A 3D stack-of-stars, self-navigated, 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms, run on computing hardware more advanced than that typically found on product MRI scanners, can yield acquisition and reconstruction of high quality respiratory correlated 4D-MRI images in less than five minutes.

  10. Growth and Photovoltaic Properties of High-Quality GaAs Nanowires Prepared by the Two-Source CVD Method.

    PubMed

    Wang, Ying; Yang, Zaixing; Wu, Xiaofeng; Han, Ning; Liu, Hanyu; Wang, Shuobo; Li, Jun; Tse, WaiMan; Yip, SenPo; Chen, Yunfa; Ho, Johnny C

    2016-12-01

    Growing high-quality and low-cost GaAs nanowires (NWs) as well as fabricating high-performance NW solar cells by facile means is an important development towards cost-effective next-generation photovoltaics. In this work, highly crystalline, dense, and long GaAs NWs are successfully synthesized using a two-source method on non-crystalline SiO2 substrates by a simple solid-source chemical vapor deposition method. The high V/III ratio and precursor concentration enabled by this two-source configuration can significantly benefit the NW growth and suppress the crystal defect formation as compared with the conventional one-source system. Since fewer NW crystal defects mean fewer electrons are trapped by the surface oxides, the p-type conductivity is greatly enhanced, as revealed by the electrical characterization of fabricated NW devices. Furthermore, both individual single NWs and high-density NW parallel arrays achieved by contact printing can be effectively fabricated into Schottky barrier solar cells simply by employing asymmetric Ni-Al contacts, yielding an open-circuit voltage of ~0.3 V. All these results indicate the technological promise of these high-quality two-source grown GaAs NWs, especially for the realization of facile Schottky solar cells utilizing the asymmetric Ni-Al contact.

  11. Breaking Barriers to Low-Cost Modular Inverter Production & Use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogdan Borowy; Leo Casey; Jerry Foshage

    2005-05-31

    The goal of this cost share contract is to advance key technologies to reduce size, weight and cost while enhancing performance and reliability of the Modular Inverter Product for Distributed Energy Resources (DER). Efforts address technology development to meet technical needs of DER market protection, isolation, reliability, and quality. Program activities build on SatCon Technology Corporation inverter experience (e.g., AIPM, Starsine, PowerGate) for Photovoltaic, Fuel Cell, and Energy Storage applications. Efforts focused on four technical areas: capacitors, cooling, voltage sensing, and control of parallel inverters. Capacitor efforts developed a hybrid capacitor approach for conditioning SatCon's AIPM unit supply voltages by incorporating several types and sizes to store energy and filter at high, medium and low frequencies while minimizing parasitics (ESR and ESL). Cooling efforts converted the liquid cooled AIPM module to an air-cooled unit using augmented fin, impingement flow cooling. Voltage sensing efforts successfully modified the existing AIPM sensor board to allow several application-dependent configurations and to enable voltage sensor galvanic isolation. Parallel inverter control efforts realized a reliable technique to control individual inverters, connected in a parallel configuration, without a communication link. Individual inverter currents, AC and DC, were balanced in the paralleled modules by introducing a delay to the individual PWM gate pulses. The load current sharing is robust and independent of load types (i.e., linear and nonlinear, resistive and/or inductive). It is a simple yet powerful method for paralleling individual devices that dramatically improves reliability and fault tolerance of parallel inverter power systems. A patent application has been made based on this control technology.

  12. 76 FR 2853 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Infrastructure State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-18

    ... technical analysis submitted for parallel-processing by DNREC on December 9, 2010, to address significant... technical analysis submitted by DNREC for parallel-processing on December 9, 2010, to satisfy the... consists of a technical analysis that provides detailed support for Delaware's position that it has...

  13. National Centers for Environmental Prediction

    Science.gov Websites

    NCEP operational model forecast graphics and parallel/experimental model developmental air quality forecasts and verification. Parallel/experimental graphics verification (grid vs. obs) web page (NCEP experimental page, internal use only): interactive web page tool for

  14. Self-balanced modulation and magnetic rebalancing method for parallel multilevel inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hui; Shi, Yanjun

    A self-balanced modulation method and a closed-loop magnetic flux rebalancing control method for parallel multilevel inverters. The combination of the two methods provides for balancing of the magnetic flux of the inter-cell transformers (ICTs) of the parallel multilevel inverters without deteriorating the quality of the output voltage. In various embodiments a parallel multi-level inverter modulator is provided, including a multi-channel comparator to generate a multiplexed digitized ideal waveform for a parallel multi-level inverter and a finite state machine (FSM) module coupled to the parallel multi-channel comparator, the FSM module to receive the multiplexed digitized ideal waveform and to generate a pulse width modulated gate-drive signal for each switching device of the parallel multi-level inverter. The system and method provide for optimization of the output voltage spectrum without influencing the magnetic balancing.

  15. Optical registration of spaceborne low light remote sensing camera

    NASA Astrophysics Data System (ADS)

    Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long

    2018-02-01

    To meet the high-precision requirement for optical registration of a spaceborne low-light remote sensing camera, dual-channel optical registration of the CCD and EMCCD is achieved with a high-magnification optical registration system. This paper proposes a system-integration optical registration and registration-accuracy scheme for a spaceborne low-light remote sensing camera with short focal depth and wide field of view, and analyzes the parallel misalignment of the CCD and the accuracy of optical registration. Actual registration results show that the imaging is clear and that the MTF and registration accuracy meet requirements, providing an important guarantee for obtaining high-quality image data in orbit.

  16. Studying Air Quality with Data from the Internet.

    ERIC Educational Resources Information Center

    Salter, Leo; Parsons, Barbara

    2000-01-01

    Explains how the internet can be used between institutions for parallel research opportunities. Uses air quality data to examine the relationship between traffic flow and atmospheric particulate matter (PM) values. (Author/YDS)

  17. Parallel solution of the symmetric tridiagonal eigenproblem. Research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-10-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed-memory Multiple Instruction, Multiple Data multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hypercube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effect of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  18. Parallel solution of the symmetric tridiagonal eigenproblem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jessup, E.R.

    1989-01-01

    This thesis discusses methods for computing all eigenvalues and eigenvectors of a symmetric tridiagonal matrix on a distributed memory MIMD multiprocessor. Only those techniques having the potential for both high numerical accuracy and significant large-grained parallelism are investigated. These include the QL method or Cuppen's divide and conquer method based on rank-one updating to compute both eigenvalues and eigenvectors, bisection to determine eigenvalues, and inverse iteration to compute eigenvectors. To begin, the methods are compared with respect to computation time, communication time, parallel speedup, and accuracy. Experiments on an iPSC hyper-cube multiprocessor reveal that Cuppen's method is the most accurate approach, but bisection with inverse iteration is the fastest and most parallel. Because the accuracy of the latter combination is determined by the quality of the computed eigenvectors, the factors influencing the accuracy of inverse iteration are examined. This includes, in part, statistical analysis of the effects of a starting vector with random components. These results are used to develop an implementation of inverse iteration producing eigenvectors with lower residual error and better orthogonality than those generated by the EISPACK routine TINVIT. This thesis concludes with adaptations of methods for the symmetric tridiagonal eigenproblem to the related problem of computing the singular value decomposition (SVD) of a bidiagonal matrix.

  19. Recent Vertical External Cavity Surface Emitting Lasers (VECSELs) Developments for Sensor Applications (POSTPRINT)

    DTIC Science & Technology

    2013-02-01

    edge-emitting strained InxGa1−xSb/AlyGa1−ySb quantum well structures using solid-source molecular beam epitaxy (MBE) with varying barrier heights...intersubband quantum wells. The most common high-power edge-emitting semiconductor lasers suffer from poor beam quality, due primarily to the linewidth...reduces the power scalability of semiconductor lasers. In vertical cavity surface emitting lasers (VCSELs), light propagates parallel to the growth

  20. Tunneling magnetoresistance tuned by a vertical electric field in an AA-stacked graphene bilayer with double magnetic barriers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali, E-mail: wangdali@mail.ahnu.edu.cn; National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093; Jin, Guojun, E-mail: gjin@nju.edu.cn

    2013-12-21

    We investigate the effect of a vertical electric field on the electron tunneling and magnetoresistance in an AA-stacked graphene bilayer modulated by the double magnetic barriers with parallel or antiparallel configuration. The results show that the electronic transmission properties in the system are sensitive to the magnetic-barrier configuration and the bias voltage between the graphene layers. In particular, it is found that for the antiparallel configuration, within the low energy region, the blocking effect is more obvious compared with the case for the parallel configuration, and even there may exist a transmission spectrum gap which can be arbitrarily tuned by the field-induced interlayer bias voltage. We also demonstrate that the significant discrepancy between the conductance for both parallel and antiparallel configurations would result in a giant tunneling magnetoresistance ratio, and further the maximal magnetoresistance ratio can be strongly modified by the interlayer bias voltage. This leads to the possible realization of high-quality magnetic sensors controlled by a vertical electric field in the AA-stacked graphene bilayer.

  1. A 12-bit high-speed column-parallel two-step single-slope analog-to-digital converter (ADC) for CMOS image sensors.

    PubMed

    Lyu, Tao; Yao, Suying; Nie, Kaiming; Xu, Jiangtao

    2014-11-17

    A 12-bit high-speed column-parallel two-step single-slope (SS) analog-to-digital converter (ADC) for CMOS image sensors is proposed. The proposed ADC employs a single ramp voltage and multiple reference voltages, and the conversion is divided into a coarse phase and a fine phase to improve the conversion rate. An error calibration scheme is proposed to correct errors caused by offsets among the reference voltages. The digital-to-analog converter (DAC) used for the ramp generator is based on the split-capacitor array with an attenuation capacitor. Analysis of the DAC's linearity performance versus capacitor mismatch and parasitic capacitance is presented. A prototype 1024 × 32 Time Delay Integration (TDI) CMOS image sensor with the proposed ADC architecture has been fabricated in a standard 0.18 μm CMOS process. The proposed ADC has an average power consumption of 128 μW and a conversion rate 6 times higher than the conventional SS ADC. A high-quality image, captured at the line rate of 15.5 k lines/s, shows that the proposed ADC is suitable for high-speed CMOS image sensors.
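
    The speed advantage of splitting the conversion into coarse and fine phases can be seen with simple ramp-step counting (idealized arithmetic only; the 6x figure quoted above already reflects real-circuit overheads such as calibration and settling, and the bit split below is an assumption):

    ```python
    # Idealized step counts: a conventional N-bit single-slope ADC needs up to 2**N
    # ramp steps, while a coarse/fine split needs roughly 2**coarse + 2**fine steps.
    def ss_steps(bits):
        return 2 ** bits

    def two_step_ss_steps(coarse_bits, fine_bits):
        return 2 ** coarse_bits + 2 ** fine_bits

    conventional = ss_steps(12)                # 4096 steps
    two_step = two_step_ss_steps(6, 6)         # 128 steps
    print(conventional, two_step, conventional / two_step)  # 32x fewer steps, ideally
    ```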

  2. Comparison of laser regimes for stamp cleaning

    NASA Astrophysics Data System (ADS)

    Radvan, Roxana N.; Dan, Suzana; Popovici, Nicoleta; Striber, J.; Savastru, Dan; Savastru, Roxana

    2001-10-01

    This paper presents a comparative study of the laser cleaning regimes applied to colored substrates with various chromatic characteristics, including colored paper and printed paper with different dpi (dots per inch) values. Tests are done under a microscope with high-precision techniques, using a controlled Nd:YAG laser. The wavelength predominantly used in the experiments is the Nd:YAG fundamental (1064 nm). Parallel experiments at 532 nm were carried out for difficult cases, or when the results with 1064 nm were not satisfactory. The main part of the work presents some results on stamp cleaning. Experimental results indicate that cleaning efficiency is correlated with the color of the substrate, the age of the ink on the stamp, the color quality, and the paper quality.

  3. The relationship between consumer insight and provider-consumer agreement regarding consumer's quality of life.

    PubMed

    Hasson-Ohayon, Ilanit; Roe, David; Kravetz, Shlomo; Levy-Frank, Itamar; Meir, Taly

    2011-10-01

    This study examined the relationship between consumer insight and consumer-provider agreement regarding consumer-rated quality of life (QoL). Seventy mental health consumers and their 23 care providers filled out parallel questionnaires designed to measure consumer QoL. Consumers' insight was also assessed. For most QoL domains, agreement between consumers and providers was higher for persons with high insight. For the psychological well-being dimension, a negative correlation was found for persons with low insight, indicating disagreement between consumer and provider. These findings are discussed within the context of the literature on insight and agreement between consumer and provider as related to the therapeutic alliance.

  4. A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy.

    PubMed

    Miyashita, Kiyoteru; Oude Vrielink, Timo; Mylonas, George

    2018-05-01

    Endomicroscopy (EM) provides high resolution, non-invasive histological tissue information and can be used for scanning of large areas of tissue to assess cancerous and pre-cancerous lesions and their margins. However, current robotic solutions do not provide the accuracy and force sensitivity required to perform safe and accurate tissue scanning. A new surgical instrument has been developed that uses a cable-driven parallel mechanism (CPDM) to manipulate an EM probe. End-effector forces are determined by measuring the tensions in each cable. As a result, the instrument allows to accurately apply a contact force on a tissue, while at the same time offering high resolution and highly repeatable probe movement. 0.2 and 0.6 N force sensitivities were found for 1 and 2 DoF image acquisition methods, respectively. A back-stepping technique can be used when a higher force sensitivity is required for the acquisition of high quality tissue images. This method was successful in acquiring images on ex vivo liver tissue. The proposed approach offers high force sensitivity and precise control, which is essential for robotic EM. The technical benefits of the current system can also be used for other surgical robotic applications, including safe autonomous control, haptic feedback and palpation.
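
    The force-sensing principle, recovering the end-effector force from the measured cable tensions, can be sketched with basic statics (the anchor geometry and tension values below are hypothetical, not the instrument's calibration):

    ```python
    # For a cable-driven parallel mechanism, the net force on the end effector is
    # the sum of the cable tensions along their unit direction vectors.
    import numpy as np

    def end_effector_force(attach_points, effector_pos, tensions):
        """attach_points: (n, 3) fixed cable exit points; tensions: (n,) in newtons."""
        directions = attach_points - effector_pos
        units = directions / np.linalg.norm(directions, axis=1, keepdims=True)
        return units.T @ np.asarray(tensions)    # net force vector on the effector

    anchors = np.array([[1.0, 0, 0], [-1.0, 0, 0], [0, 1.0, 0], [0, -1.0, 0]])
    print(end_effector_force(anchors, np.zeros(3), [2.0, 1.8, 2.0, 1.8]))
    ```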

  5. Parallel Processing at the High School Level.

    ERIC Educational Resources Information Center

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  6. High-performance computing — an overview

    NASA Astrophysics Data System (ADS)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  7. Moose: An Open-Source Framework to Enable Rapid Development of Collaborative, Multi-Scale, Multi-Physics Simulation Tools

    NASA Astrophysics Data System (ADS)

    Slaughter, A. E.; Permann, C.; Peterson, J. W.; Gaston, D.; Andrs, D.; Miller, J.

    2014-12-01

    The Idaho National Laboratory (INL)-developed Multiphysics Object Oriented Simulation Environment (MOOSE; www.mooseframework.org), is an open-source, parallel computational framework for enabling the solution of complex, fully implicit multiphysics systems. MOOSE provides a set of computational tools that scientists and engineers can use to create sophisticated multiphysics simulations. Applications built using MOOSE have computed solutions for chemical reaction and transport equations, computational fluid dynamics, solid mechanics, heat conduction, mesoscale materials modeling, geomechanics, and others. To facilitate the coupling of diverse and highly-coupled physical systems, MOOSE employs the Jacobian-free Newton-Krylov (JFNK) method when solving the coupled nonlinear systems of equations arising in multiphysics applications. The MOOSE framework is written in C++, and leverages other high-quality, open-source scientific software packages such as LibMesh, Hypre, and PETSc. MOOSE uses a "hybrid parallel" model which combines both shared memory (thread-based) and distributed memory (MPI-based) parallelism to ensure efficient resource utilization on a wide range of computational hardware. MOOSE-based applications are inherently modular, which allows for simulation expansion (via coupling of additional physics modules) and the creation of multi-scale simulations. Any application developed with MOOSE supports running (in parallel) any other MOOSE-based application. Each application can be developed independently, yet easily communicate with other applications (e.g., conductivity in a slope-scale model could be a constant input, or a complete phase-field micro-structure simulation) without additional code being written. This method of development has proven effective at INL and expedites the development of sophisticated, sustainable, and collaborative simulation tools.

  8. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2015-12-01

    As NSF indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and geosciences. With the exponential growth of geodata, the challenge of scalable and high performance computing for big data analytics becomes urgent because many research activities are constrained by software or tools that cannot even complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) Architecture and Graphics Processing Units (GPUs), and advanced computing technologies provide promising solutions to employ massive parallelism and hardware resources to achieve scalability and high performance for data intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in a massively parallel computing environment to achieve the capability for scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise to achieve scalability and high performance by exploiting task and data levels of parallelism that are not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data as proved by our prior works, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.

  9. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, D.B.

    1996-12-31

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor to a plurality of slave processors to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor`s status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer, a digital signal processor, a parallel transfer controller, and two three-port memory devices. A communication switch within each node connects it to a fast parallel hardware channel through which all high density data arrives or leaves the node. 6 figs.

  10. Parallel processing data network of master and slave transputers controlled by a serial control network

    DOEpatents

    Crosetto, Dario B.

    1996-01-01

    The present device provides for a dynamically configurable communication network having a multi-processor parallel processing system having a serial communication network and a high speed parallel communication network. The serial communication network is used to disseminate commands from a master processor (100) to a plurality of slave processors (200) to effect communication protocol, to control transmission of high density data among nodes and to monitor each slave processor's status. The high speed parallel processing network is used to effect the transmission of high density data among nodes in the parallel processing system. Each node comprises a transputer (104), a digital signal processor (114), a parallel transfer controller (106), and two three-port memory devices. A communication switch (108) within each node (100) connects it to a fast parallel hardware channel (70) through which all high density data arrives or leaves the node.

  11. Waveguide device and method for making same

    DOEpatents

    Forman, Michael A [San Francisco, CA

    2007-08-14

    A monolithic micromachined waveguide device or devices with low-loss, high-power handling, and near-optical frequency ranges is set forth. The waveguide and integrated devices are capable of transmitting near-optical frequencies due to optical-quality sidewall roughness. The device or devices are fabricated in parallel, may be mass produced using a LIGA manufacturing process, and may include a passive component such as a diplexer and/or an active capping layer capable of particularized signal processing of the waveforms propagated by the waveguide.

  12. Monte Carlo calculations of electron beam quality conversion factors for several ion chamber types.

    PubMed

    Muir, B R; Rogers, D W O

    2014-11-01

    To provide a comprehensive investigation of electron beam reference dosimetry using Monte Carlo simulations of the response of 10 plane-parallel and 18 cylindrical ion chamber types. Specific emphasis is placed on the determination of the optimal shift of the chambers' effective point of measurement (EPOM) and beam quality conversion factors. The EGSnrc system is used for calculations of the absorbed dose to gas in ion chamber models and the absorbed dose to water as a function of depth in a water phantom on which cobalt-60 and several electron beam source models are incident. The optimal EPOM shifts of the ion chambers are determined by comparing calculations of R50 converted from I50 (calculated using ion chamber simulations in phantom) to R50 calculated using simulations of the absorbed dose to water vs depth in water. Beam quality conversion factors are determined as the calculated ratio of the absorbed dose to water to the absorbed dose to air in the ion chamber at the reference depth in a cobalt-60 beam to that in electron beams. For most plane-parallel chambers, the optimal EPOM shift is inside of the active cavity but different from the shift determined with water-equivalent scaling of the front window of the chamber. These optimal shifts for plane-parallel chambers also reduce the scatter of beam quality conversion factors, kQ, as a function of R50. The optimal shift of cylindrical chambers is found to be less than the 0.5 rcav recommended by current dosimetry protocols. In most cases, the values of the optimal shift are close to 0.3 rcav. Values of kecal are calculated and compared to those from the TG-51 protocol and differences are explained using accurate individual correction factors for a subset of ion chambers investigated. High-precision fits to beam quality conversion factors normalized to unity in a beam with R50 = 7.5 cm (k'Q) are provided. These factors avoid the use of gradient correction factors as used in the TG-51 protocol although a chamber dependent optimal shift in the EPOM is required when using plane-parallel chambers while no shift is needed with cylindrical chambers. The sensitivity of these results to parameters used to model the ion chambers is discussed and the uncertainty related to the practical use of these results is evaluated. These results will prove useful as electron beam reference dosimetry protocols are being updated. The analysis of this work indicates that cylindrical ion chambers may be appropriate for use in low-energy electron beams but measurements are required to characterize their use in these beams.

  13. A Structure-Based Distance Metric for High-Dimensional Space Exploration with Multi-Dimensional Scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla

    2014-03-01

    Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
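
    A hedged sketch of the general idea: compute a "structural" distance from the pattern a point's polyline traces across parallel-coordinate axes and feed the precomputed distances to MDS. This illustrates the concept only; it is not the paper's exact metric or its bi-scale variant.

    ```python
    # Structural distances from parallel-coordinate polyline patterns, then MDS on
    # the precomputed distance matrix.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))                       # 100 points in 8-D

    # Normalize each dimension to [0, 1], as on parallel-coordinate axes
    Xn = (X - X.min(axis=0)) / np.ptp(X, axis=0)
    patterns = np.diff(Xn, axis=1)                      # axis-to-axis "structure"

    D = squareform(pdist(patterns))                     # structural distance matrix
    embedding = MDS(n_components=2, dissimilarity="precomputed",
                    random_state=0).fit_transform(D)
    print(embedding.shape)                              # (100, 2) layout for plotting
    ```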

  14. Power transformation for enhancing responsiveness of quality of life questionnaire.

    PubMed

    Zhou, YanYan Ange

    2015-01-01

    We investigate the effect of a power transformation of raw scores on the responsiveness of a quality of life survey. The procedure maximizes the paired t-test value on the power transformed data to obtain an optimal power range. The parallel with the Box-Cox transformation is also investigated for the quality of life data.
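
    A minimal sketch of the described procedure on hypothetical pre/post scores: grid-search the power, transform both measurements, and keep the power that maximizes the paired t statistic.

    ```python
    # Grid search over powers to maximize the paired t statistic (responsiveness);
    # the score ranges here are invented for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    pre = rng.uniform(20, 80, size=60)                         # raw QoL scores before
    post = np.clip(pre + rng.normal(5, 8, size=60), 1, 100)    # scores after treatment

    powers = np.linspace(0.1, 3.0, 30)
    tvals = [stats.ttest_rel(post ** p, pre ** p).statistic for p in powers]
    best = powers[int(np.argmax(tvals))]
    print(round(best, 2), round(max(tvals), 2))
    ```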

  15. The diploid genome sequence of an Asian individual

    PubMed Central

    Wang, Jun; Wang, Wei; Li, Ruiqiang; Li, Yingrui; Tian, Geng; Goodman, Laurie; Fan, Wei; Zhang, Junqing; Li, Jun; Zhang, Juanbin; Guo, Yiran; Feng, Binxiao; Li, Heng; Lu, Yao; Fang, Xiaodong; Liang, Huiqing; Du, Zhenglin; Li, Dong; Zhao, Yiqing; Hu, Yujie; Yang, Zhenzhen; Zheng, Hancheng; Hellmann, Ines; Inouye, Michael; Pool, John; Yi, Xin; Zhao, Jing; Duan, Jinjie; Zhou, Yan; Qin, Junjie; Ma, Lijia; Li, Guoqing; Yang, Zhentao; Zhang, Guojie; Yang, Bin; Yu, Chang; Liang, Fang; Li, Wenjie; Li, Shaochuan; Li, Dawei; Ni, Peixiang; Ruan, Jue; Li, Qibin; Zhu, Hongmei; Liu, Dongyuan; Lu, Zhike; Li, Ning; Guo, Guangwu; Zhang, Jianguo; Ye, Jia; Fang, Lin; Hao, Qin; Chen, Quan; Liang, Yu; Su, Yeyang; san, A.; Ping, Cuo; Yang, Shuang; Chen, Fang; Li, Li; Zhou, Ke; Zheng, Hongkun; Ren, Yuanyuan; Yang, Ling; Gao, Yang; Yang, Guohua; Li, Zhuo; Feng, Xiaoli; Kristiansen, Karsten; Wong, Gane Ka-Shu; Nielsen, Rasmus; Durbin, Richard; Bolund, Lars; Zhang, Xiuqing; Li, Songgang; Yang, Huanming; Wang, Jian

    2009-01-01

    Here we present the first diploid genome sequence of an Asian individual. The genome was sequenced to 36-fold average coverage using massively parallel sequencing technology. We aligned the short reads onto the NCBI human reference genome to 99.97% coverage, and guided by the reference genome, we used uniquely mapped reads to assemble a high-quality consensus sequence for 92% of the Asian individual's genome. We identified approximately 3 million single-nucleotide polymorphisms (SNPs) inside this region, of which 13.6% were not in the dbSNP database. Genotyping analysis showed that SNP identification had high accuracy and consistency, indicating the high sequence quality of this assembly. We also carried out heterozygote phasing and haplotype prediction against HapMap CHB and JPT haplotypes (Chinese and Japanese, respectively), sequence comparison with the two available individual genomes (J. D. Watson and J. C. Venter), and structural variation identification. These variations were considered for their potential biological impact. Our sequence data and analyses demonstrate the potential usefulness of next-generation sequencing technologies for personal genomics. PMID:18987735

  16. Of Small Beauties and Large Beasts: The Quality of Distractors on Multiple-Choice Tests Is More Important than Their Quantity

    ERIC Educational Resources Information Center

    Papenberg, Martin; Musch, Jochen

    2017-01-01

    In multiple-choice tests, the quality of distractors may be more important than their number. We therefore examined the joint influence of distractor quality and quantity on test functioning by providing a sample of 5,793 participants with five parallel test sets consisting of items that differed in the number and quality of distractors.…

  17. Rapid indirect trajectory optimization on highly parallel computing architectures

    NASA Astrophysics Data System (ADS)

    Antony, Thomas

    Trajectory optimization is a field which can benefit greatly from the advantages offered by parallel computing. The current state-of-the-art in trajectory optimization focuses on the use of direct optimization methods, such as the pseudo-spectral method. These methods are favored due to their ease of implementation and large convergence regions, while indirect methods have largely been ignored in the literature in the past decade except for specific applications in astrodynamics. It has been shown that the shortcomings conventionally associated with indirect methods can be overcome by the use of a continuation method in which complex trajectory solutions are obtained by solving a sequence of progressively more difficult optimization problems. High performance computing hardware is trending towards more parallel architectures as opposed to powerful single-core processors. Graphics Processing Units (GPUs), which were originally developed for 3D graphics rendering, have gained popularity in the past decade as high-performance, programmable parallel processors. The Compute Unified Device Architecture (CUDA) framework, a parallel computing architecture and programming model developed by NVIDIA, is one of the most widely used platforms in GPU computing. GPUs have been applied to a wide range of fields that require the solution of complex, computationally demanding problems. A GPU-accelerated indirect trajectory optimization methodology which uses the multiple shooting method and continuation is developed using the CUDA platform. The various algorithmic optimizations used to exploit the parallelism inherent in the indirect shooting method are described. The resulting rapid optimal control framework enables the construction of high quality optimal trajectories that satisfy problem-specific constraints and fully satisfy the necessary conditions of optimality. The benefits of the framework are highlighted by construction of maximum terminal velocity trajectories for a hypothetical long range weapon system. The techniques used to construct an initial guess from an analytic near-ballistic trajectory and the methods used to formulate the necessary conditions of optimality in a manner that is transparent to the designer are discussed. Various hypothetical mission scenarios that enforce different combinations of initial, terminal, interior point and path constraints demonstrate the rapid construction of complex trajectories without requiring any a priori insight into the structure of the solutions. Trajectory problems of this kind were previously considered impractical to solve using indirect methods. The GPU-accelerated solver is found to be 2x-4x faster than MATLAB's bvp4c, even while running on GPU hardware that is five years behind the state-of-the-art.
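
    The continuation idea described above can be illustrated with a small, hedged sketch: a difficult two-point boundary-value problem (standing in for the Euler-Lagrange/costate equations of an indirect formulation) is solved by sweeping a parameter from an easy value to the target, warm-starting each solve with the previous solution. The Bratu-type problem, the parameter range, and the use of SciPy's solve_bvp are illustrative assumptions, not the thesis's GPU solver.

```python
# Minimal sketch (not the thesis code): solve a hard two-point boundary-value
# problem by numerical continuation, the same idea used to make indirect
# trajectory optimization converge from a simple starting problem.
import numpy as np
from scipy.integrate import solve_bvp

def make_rhs(lam):
    # Bratu-type problem y'' + lam * exp(y) = 0 as a stand-in for the
    # Euler-Lagrange/costate equations of an optimal-control problem.
    def rhs(x, y):
        return np.vstack([y[1], -lam * np.exp(y[0])])
    return rhs

def bc(ya, yb):
    return np.array([ya[0], yb[0]])        # y(0) = y(1) = 0

x = np.linspace(0.0, 1.0, 50)
y_guess = np.zeros((2, x.size))            # trivial guess for the easy problem

# Continuation: sweep the parameter from an easy value to the difficult target,
# reusing each converged solution as the initial guess for the next step.
for lam in np.linspace(0.1, 3.0, 10):
    sol = solve_bvp(make_rhs(lam), bc, x, y_guess)
    if not sol.success:
        raise RuntimeError(f"continuation stalled at lambda = {lam:.2f}")
    x, y_guess = sol.x, sol.y              # warm-start the next, harder problem

print("max |y| at final lambda:", float(np.max(np.abs(sol.y[0]))))
```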

  18. Resource Provisioning in SLA-Based Cluster Computing

    NASA Astrophysics Data System (ADS)

    Xiong, Kaiqi; Suh, Sang

    Cluster computing is excellent for parallel computation. It has become increasingly popular. In cluster computing, a service level agreement (SLA) is a set of quality of services (QoS) and a fee agreed between a customer and an application service provider. It plays an important role in an e-business application. An application service provider uses a set of cluster computing resources to support e-business applications subject to an SLA. In this paper, the QoS includes percentile response time and cluster utilization. We present an approach for resource provisioning in such an environment that minimizes the total cost of cluster computing resources used by an application service provider for an e-business application that often requires parallel computation for high service performance, availability, and reliability while satisfying a QoS and a fee negotiated between a customer and the application service provider. Simulation experiments demonstrate the applicability of the approach.
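
    As a rough illustration of percentile-based provisioning (not the paper's model), the sketch below picks the smallest number of identical nodes for which an M/M/c queueing approximation keeps the 95th-percentile waiting time under an SLA threshold. The arrival rate, service rate, and threshold are made-up numbers.

```python
# Minimal sketch (illustrative only): choose the smallest number of identical
# cluster nodes c such that an M/M/c queue meets a percentile waiting-time SLA,
# e.g. P(wait > t_sla) <= 5%.
import math

def erlang_c(c, a):
    """Probability an arriving job must wait (Erlang C), offered load a = lam/mu."""
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c) * c / (c - a)
    return top / (s + top)

def p_wait_exceeds(c, lam, mu, t):
    """P(waiting time > t) for an M/M/c queue with arrival rate lam, service rate mu."""
    a = lam / mu
    if c <= a:                      # unstable: the queue grows without bound
        return 1.0
    return erlang_c(c, a) * math.exp(-(c * mu - lam) * t)

def provision(lam, mu, t_sla, quantile=0.95):
    c = 1
    while p_wait_exceeds(c, lam, mu, t_sla) > 1.0 - quantile:
        c += 1
    return c

# Example: 40 requests/s, each node serves 5 requests/s, 95th-percentile
# waiting time must stay under 0.5 s.
print(provision(lam=40.0, mu=5.0, t_sla=0.5))
```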

  19. Performance Enhancement Strategies for Multi-Block Overset Grid CFD Applications

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak

    2003-01-01

    The overset grid methodology has significantly reduced the time-to-solution of high-fidelity computational fluid dynamics (CFD) simulations about complex aerospace configurations. The solution process resolves the geometrical complexity of the problem domain by using separately generated but overlapping structured discretization grids that periodically exchange information through interpolation. However, high performance computations of such large-scale realistic applications must be handled efficiently on state-of-the-art parallel supercomputers. This paper analyzes the effects of various performance enhancement strategies on the parallel efficiency of an overset grid Navier-Stokes CFD application running on an SGI Origin2000 machine. Specifically, the roles of asynchronous communication, grid splitting, and grid grouping strategies are presented and discussed. Details of a sophisticated graph partitioning technique for grid grouping are also provided. Results indicate that performance depends critically on the level of latency hiding and the quality of load balancing across the processors.

  20. Human settlement history between Sunda and Sahul: a focus on East Timor (Timor-Leste) and the Pleistocenic mtDNA diversity.

    PubMed

    Gomes, Sibylle M; Bodner, Martin; Souto, Luis; Zimmermann, Bettina; Huber, Gabriela; Strobl, Christina; Röck, Alexander W; Achilli, Alessandro; Olivieri, Anna; Torroni, Antonio; Côrte-Real, Francisco; Parson, Walther

    2015-02-14

    Distinct, partly competing, "waves" have been proposed to explain human migration in(to) today's Island Southeast Asia and Australia based on genetic (and other) evidence. The paucity of high quality and high resolution data has impeded insights so far. In this study, one of the first in a forensic environment, we used the Ion Torrent Personal Genome Machine (PGM) for generating complete mitogenome sequences via stand-alone massively parallel sequencing and describe a standard data validation practice. In this first representative investigation of the mitochondrial DNA (mtDNA) variation of the East Timor (Timor-Leste) population, including >300 individuals, we put special emphasis on the reconstruction of the initial settlement, in particular on the previously poorly resolved haplogroup P1, an indigenous lineage of the Southwest Pacific region. Our results suggest a colonization of southern Sahul (Australia) >37 kya, limited subsequent exchange, and a parallel incubation of initial settlers in northern Sahul (New Guinea) followed by westward migrations <28 kya. The temporal proximity and possible coincidence of these latter dispersals, which encompassed autochthonous haplogroups, with the postulated "later" events of (South) East Asian origin pinpoints a highly dynamic migratory phase.

  1. High efficiency integration of three-dimensional functional microdevices inside a microfluidic chip by using femtosecond laser multifoci parallel microfabrication

    NASA Astrophysics Data System (ADS)

    Xu, Bing; Du, Wen-Qiang; Li, Jia-Wen; Hu, Yan-Lei; Yang, Liang; Zhang, Chen-Chu; Li, Guo-Qiang; Lao, Zhao-Xin; Ni, Jin-Cheng; Chu, Jia-Ru; Wu, Dong; Liu, Su-Ling; Sugioka, Koji

    2016-01-01

    High efficiency fabrication and integration of three-dimensional (3D) functional devices in Lab-on-a-chip systems are crucial for microfluidic applications. Here, a spatial light modulator (SLM)-based multifoci parallel femtosecond laser scanning technology was proposed to integrate microstructures inside a given ‘Y’-shaped microchannel. The key novelty of our approach lies in rapidly integrating 3D microdevices inside a microchip for the first time, which significantly reduces the fabrication time. The high-quality integration of various 2D-3D microstructures was ensured by quantitatively optimizing the experimental conditions including prebaking time, laser power and developing time. To verify the designable and versatile capability of this method for integrating functional 3D microdevices in microchannels, a series of microfilters with adjustable pore sizes from 12.2 μm to 6.7 μm were fabricated to demonstrate selective filtering of polystyrene (PS) particles and cancer cells of different sizes. The filter can be cleaned by reversing the flow and reused many times. This technology will advance the fabrication technique of 3D integrated microfluidic and optofluidic chips.

  2. Computer-aided programming for message-passing system; Problems and a solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M.Y.; Gajski, D.D.

    1989-12-01

    As the number of processors and the complexity of problems to be solved increase, programming multiprocessing systems becomes more difficult and error-prone. Program development tools are necessary since programmers are not able to develop complex parallel programs efficiently. Parallel models of computation, parallelization problems, and tools for computer-aided programming (CAP) are discussed. As an example, a CAP tool that performs scheduling and inserts communication primitives automatically is described. It also generates the performance estimates and other program quality measures to help programmers in improving their algorithms and programs.

  3. Hypothesis driven drug design: improving quality and effectiveness of the design-make-test-analyse cycle.

    PubMed

    Plowright, Alleyn T; Johnstone, Craig; Kihlberg, Jan; Pettersson, Jonas; Robb, Graeme; Thompson, Richard A

    2012-01-01

    In drug discovery, the central process of constructing and testing hypotheses, carefully conducting experiments and analysing the associated data for new findings and information is known as the design-make-test-analyse cycle. Each step relies heavily on the inputs and outputs of the other three components. In this article we report our efforts to improve and integrate all parts to enable smooth and rapid flow of high quality ideas. Key improvements include enhancing multi-disciplinary input into 'Design', increasing the use of knowledge and reducing cycle times in 'Make', providing parallel sets of relevant data within ten working days in 'Test' and maximising the learning in 'Analyse'. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Signal-domain optimization metrics for MPRAGE RF pulse design in parallel transmission at 7 tesla.

    PubMed

    Gras, V; Vignaud, A; Mauconduit, F; Luong, M; Amadon, A; Le Bihan, D; Boulant, N

    2016-11-01

    Standard radiofrequency pulse design strategies focus on minimizing the deviation of the flip angle from a target value, which is sufficient but not necessary for signal homogeneity. An alternative approach, based directly on the signal, is proposed here for the MPRAGE sequence, and is developed in the parallel transmission framework with the use of the k_T-points parametrization. The flip angle-homogenizing and the proposed methods were investigated numerically under explicit power and specific absorption rate constraints and tested experimentally in vivo on a 7 T parallel transmission system enabling real time local specific absorption rate monitoring. Radiofrequency pulse performance was assessed by a careful analysis of the signal and contrast between white and gray matter. Despite a slight reduction of the flip angle uniformity, an improved signal and contrast homogeneity with a significant reduction of the specific absorption rate was achieved with the proposed metric in comparison with standard pulse designs. The proposed joint optimization of the inversion and excitation pulses enables significant reduction of the specific absorption rate in the MPRAGE sequence while preserving image quality. The work reported thus unveils a possible direction to increase the potential of ultra-high field MRI and parallel transmission. Magn Reson Med 76:1431-1442, 2016. © 2015 International Society for Magnetic Resonance in Medicine.

  5. TU-AB-202-05: GPU-Based 4D Deformable Image Registration Using Adaptive Tetrahedral Mesh Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Z; Zhuang, L; Gu, X

    Purpose: Deformable image registration (DIR) has been employed today as an automated and effective segmentation method to transfer tumor or organ contours from the planning image to daily images, instead of manual segmentation. However, the computational time and accuracy of current DIR approaches are still insufficient for online adaptive radiation therapy (ART), which requires real-time and high-quality image segmentation, especially for large datasets of 4D-CT images. The objective of this work is to propose a new DIR algorithm, with fast computational speed and high accuracy, by using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the adaptive tetrahedral mesh based on the image features of a reference phase of 4D-CT, so that the deformation can be well captured and accurately diffused from the mesh vertices to voxels of the image volume. Subsequently, the deformation vector fields (DVF) and other phases of 4D-CT can be obtained by matching each phase of the target 4D-CT images with the corresponding deformed reference phase. The proposed 4D DIR method is implemented on GPU, resulting in significantly increased computational efficiency due to its parallel computing ability. Results: A 4D NCAT digital phantom was used to test the efficiency and accuracy of our method. Both the image and DVF results show that the fine structures and shapes of lung are well preserved, and the tumor position is well captured, i.e., the 3D distance error is 1.14 mm. Compared to the previous voxel-based CPU implementation of DIR, such as demons, the proposed method is about 160x faster for registering a 10-phase 4D-CT with a phase dimension of 256×256×150. Conclusion: The proposed 4D DIR method uses feature-based mesh and GPU-based parallelism, which demonstrates the capability to compute both high-quality image and motion results, with significant improvement on the computational speed.

  6. On the need for quality assurance in superficial kilovoltage radiotherapy.

    PubMed

    Austerlitz, C; Mota, H; Gay, H; Campos, D; Allison, R; Sibata, C

    2008-01-01

    External auditing of the beam output and energy quality of four therapeutic X-ray machines was performed in three radiation oncology centres in northeastern Brazil. The output and half-value layers (HVLs) were determined using a parallel-plate ionisation chamber and high-purity aluminium foils, respectively. The obtained values of absorbed dose to water and energy quality were compared with those obtained by the respective institutions. The impact on the prescribed dose was analysed by determining the half-value depth (D(1/2)). The beam outputs presented percent differences ranging from -13 to +25%. The ratio between the HVL in use by the institution and the measurements obtained in this study ranged from 0.75 to 2.33. Such deviations in HVL result in percent differences in dose at D(1/2) ranging from -52 to +8%. It was concluded that dosimetric quality audit programmes in radiation therapy should be expanded to include dermatological radiation therapy and such audits should include HVL verification.

  7. How to Assess Quality of Research in Iran, From Input to Impact? Introduction of Peer-Based Research Evaluation Model in Iran.

    PubMed

    Ebadifar, Asghar; Baradaran Eftekhari, Monir; Owlia, Parviz; Habibi, Elham; Ghalenoee, Elham; Bagheri, Mohammad Reza; Falahat, Katayoun; Eltemasi, Masoumeh; Sobhani, Zahra; Akhondzadeh, Shahin

    2017-11-01

    Research evaluation is a systematic and objective process to measure the relevance, efficiency and effectiveness of research activities, and peer review is one of the most important tools for assessing the quality of research. The aim of this study was to introduce research evaluation indicators based on peer review. The study was implemented in 4 stages. A list of objective-oriented evaluation indicators was designed along 4 axes: governance and leadership, structure, knowledge production, and research impact. The top 10% of medical sciences research centers (RCs) were evaluated based on peer review. Adequate equipment and laboratory instruments, high-quality research publication and national or international cooperation were the main strengths of the medical sciences RCs, and the most important weaknesses included failure to adhere to strategic plans, parallel actions in similar fields, and problems in manpower recruitment and in knowledge translation & exchange (KTE) at the service-provider and policy-maker levels. Peer review evaluation can improve the quality of research.

  8. An extended algebraic reconstruction technique (E-ART) for dual spectral CT.

    PubMed

    Zhao, Yunsong; Zhao, Xing; Zhang, Peng

    2015-03-01

    Compared with standard computed tomography (CT), dual spectral CT (DSCT) has many advantages for object separation, contrast enhancement, artifact reduction, and material composition assessment. But it is generally difficult to reconstruct images from polychromatic projections acquired by DSCT, because of the nonlinear relation between the polychromatic projections and the images to be reconstructed. This paper first models the DSCT reconstruction problem as a nonlinear system problem and then extends the classic ART method to solve the nonlinear system. One feature of the proposed method is its flexibility: it fits any commonly used scanning configuration and does not require consistent rays for different X-ray spectra. Another feature of the proposed method is its high degree of parallelism, which means that the method is suitable for acceleration on GPUs (graphic processing units) or other parallel systems. The method is validated with numerical experiments on simulated noise-free and noisy data. High quality images are reconstructed with the proposed method from the polychromatic projections of DSCT. The reconstructed images are still satisfactory even if there are certain errors in the estimated X-ray spectra.
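
    For orientation, the sketch below shows the classical (linear, monochromatic) ART/Kaczmarz update for A x = p on a toy system; E-ART as described in the abstract generalizes this row-by-row update to the nonlinear polychromatic DSCT model, which is not reproduced here.

```python
# Minimal sketch: the classical ART (Kaczmarz) update for a linear system
# A x = p, which E-ART generalizes to the nonlinear polychromatic DSCT model.
import numpy as np

def art(A, p, n_sweeps=50, relax=0.5):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):               # one ray (row) at a time
            a_i = A[i]
            resid = p[i] - a_i @ x                # mismatch for this projection
            x += relax * resid / (a_i @ a_i) * a_i
    return x

# Toy 3-pixel "phantom" and an overdetermined system of ray sums.
rng = np.random.default_rng(0)
A = rng.random((6, 3))
x_true = np.array([1.0, 0.5, 2.0])
print(np.round(art(A, A @ x_true), 3))            # should approach x_true
```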

  9. Advancing Data assimilation for Baltic Monitoring and Forecasting Center: implementation and evaluation of HBM-PDAF system

    NASA Astrophysics Data System (ADS)

    Korabel, Vasily; She, Jun; Huess, Vibeke; Woge Nielsen, Jacob; Murawsky, Jens; Nerger, Lars

    2017-04-01

    The potential of an efficient data assimilation (DA) scheme to improve model forecast skill has been successfully demonstrated by many operational centres around the world. The Baltic-North Sea region is one of the most heavily monitored seas. Ferryboxes, buoys, ADCP moorings, shallow water Argo floats, and research vessels are providing more and more near-real-time observations. Coastal altimetry is now providing an increasing amount of high-resolution sea level observations, which will be significantly expanded by the launch of the SWOT satellite in the coming years. This will turn operational DA into a valuable tool for improving forecast quality in the region. This motivated us to focus on advancing DA for the Baltic Monitoring and Forecasting Centre (BAL MFC) in order to create a common framework for operational data assimilation in the Baltic Sea. We have implemented the HBM-PDAF system based on the Parallel Data Assimilation Framework (PDAF), a highly versatile and optimised parallel suite with a choice of sequential schemes originally developed at AWI, and the hydrodynamic HIROMB-BOOS Model (HBM). In the initial phase, only satellite Sea Surface Temperature (SST) Level 3 data have been assimilated. Several related aspects are discussed, including improvements of the forecast quality for both surface and subsurface fields, the estimation of ensemble-based forecast error covariance, as well as possibilities of assimilating new types of observations, such as in-situ salinity and temperature profiles, coastal altimetry, and ice concentration.

  10. Construction of human antibody gene libraries and selection of antibodies by phage display.

    PubMed

    Frenzel, André; Kügler, Jonas; Wilke, Sonja; Schirrmann, Thomas; Hust, Michael

    2014-01-01

    Antibody phage display is the most commonly used in vitro selection technology and has yielded thousands of useful antibodies for research, diagnostics, and therapy. The prerequisite for successful generation and development of human recombinant antibodies using phage display is the construction of a high-quality antibody gene library. Here, we describe the methods for the construction of human immune and naive scFv gene libraries. The success also depends on the panning strategy for the selection of binders from these libraries. In this article, we describe a panning strategy that is high-throughput compatible and allows parallel selection in microtiter plates.

  11. Quality of Education, Comparability, and Assessment Choice in Developing Countries

    ERIC Educational Resources Information Center

    Wagner, Daniel A.

    2010-01-01

    Over the past decade, international development agencies have begun to emphasize the improvement of the quality (rather than simply quantity) of education in developing countries. This new focus has been paralleled by a significant increase in the use of educational assessments as a way to measure gains and losses in quality. As this interest in…

  12. Determination of Fermi contour and spin polarization of ν = 3 2 composite fermions via ballistic commensurability measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamburov, D.; Mueed, M. A.; Jo, I.

    2014-12-01

    We report ballistic transport commensurability minima in the magnetoresistance of ν = 3/2 composite fermions (CFs). The CFs are formed in high-quality two-dimensional electron systems confined to wide GaAs quantum wells and subjected to an in-plane, unidirectional periodic potential modulation. We observe a slight asymmetry of the CF commensurability positions with respect to ν = 3/2, which we explain quantitatively by comparing three CF density models and concluding that the ν = 3/2 CFs are likely formed by the minority carriers in the upper-energy spin state of the lowest Landau level. Our data also allow us to probe the shape and size of the CF Fermi contour. At a fixed electron density of ≈1.8 × 10^11 cm^-2, as the quantum well width increases from 30 to 60 nm, the CFs show increasing spin polarization. We attribute this to the enhancement of the Zeeman energy relative to the Coulomb energy in wider wells, where the latter is softened because of the larger electron layer thickness. The application of an additional parallel magnetic field (B∥) leads to a significant distortion of the CF Fermi contour as B∥ couples to the CFs' out-of-plane orbital motion. The distortion is much more severe compared to the ν = 1/2 CF case at comparable B∥. Moreover, the applied B∥ further spin-polarizes the ν = 3/2 CFs, as deduced from the positions of the commensurability minima.

  13. Comparison of capacitive and radio frequency resonator sensors for monitoring parallelized droplet microfluidic production.

    PubMed

    Conchouso, David; McKerricher, Garret; Arevalo, Arpys; Castro, David; Shamim, Atif; Foulds, Ian G

    2016-08-16

    Scaled-up production of microfluidic droplets, through the parallelization of hundreds of droplet generators, has received a lot of attention to bring novel multiphase microfluidics research to industrial applications. However, apart from droplet generation, other significant challenges relevant to this goal have never been discussed. Examples include monitoring systems, high-throughput processing of droplets and quality control procedures, among others. In this paper, we present and compare capacitive and radio frequency (RF) resonator sensors as two candidates that can measure the dielectric properties of emulsions in microfluidic channels. By placing several of these sensors in a parallelization device, the stability of the droplet generation at different locations can be compared, and potential malfunctions can be detected. This strategy enables for the first time the monitoring of scaled-up microfluidic droplet production. Both sensors were prototyped and characterized using emulsions with droplets of 100-150 μm in diameter, which were generated in parallelization devices at water-in-oil volume fractions (φ) between 11.1% and 33.3%. Using these sensors, we were able to accurately measure increments as small as 2.4% in the water volume fraction of the emulsions. Although both methods rely on the dielectric properties of the emulsions, the main advantage of the RF resonator sensors is the fact that they can be designed to resonate at multiple frequencies of the broadband transmission line. Consequently, with careful design, two or more sensors can be parallelized and read out by a single signal. Finally, a comparison between these sensors based on their sensitivity, readout cost and simplicity, and design flexibility is also discussed.
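
    A hedged illustration of the underlying measurement principle: if the effective permittivity of a water-in-oil emulsion is known, a dielectric mixing rule can be inverted to estimate the water volume fraction. The Maxwell Garnett rule and the permittivity values below are assumptions for illustration only; the sensors in the paper are characterized empirically.

```python
# Minimal sketch (illustrative only): invert the Maxwell Garnett mixing rule to
# estimate the water volume fraction phi of a water-in-oil emulsion from the
# effective permittivity a dielectric sensor reports. Permittivity values are
# assumed, not taken from the paper.
def water_fraction(eps_eff, eps_oil=2.5, eps_water=80.0):
    # Maxwell Garnett for spherical inclusions:
    # (eps_eff - eps_h)/(eps_eff + 2 eps_h) = phi * (eps_i - eps_h)/(eps_i + 2 eps_h)
    lhs = (eps_eff - eps_oil) / (eps_eff + 2.0 * eps_oil)
    scale = (eps_water + 2.0 * eps_oil) / (eps_water - eps_oil)
    return lhs * scale

for eps in (3.0, 3.5, 4.0):
    print(f"eps_eff = {eps:0.1f}  ->  phi = {water_fraction(eps):0.3f}")
```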

  14. An efficient and scalable analysis framework for variant extraction and refinement from population-scale DNA sequence data.

    PubMed

    Jun, Goo; Wing, Mary Kate; Abecasis, Gonçalo R; Kang, Hyun Min

    2015-06-01

    The analysis of next-generation sequencing data is computationally and statistically challenging because of the massive volume of data and imperfect data quality. We present GotCloud, a pipeline for efficiently detecting and genotyping high-quality variants from large-scale sequencing data. GotCloud automates sequence alignment, sample-level quality control, variant calling, filtering of likely artifacts using machine-learning techniques, and genotype refinement using haplotype information. The pipeline can process thousands of samples in parallel and requires less computational resources than current alternatives. Experiments with whole-genome and exome-targeted sequence data generated by the 1000 Genomes Project show that the pipeline provides effective filtering against false positive variants and high power to detect true variants. Our pipeline has already contributed to variant detection and genotyping in several large-scale sequencing projects, including the 1000 Genomes Project and the NHLBI Exome Sequencing Project. We hope it will now prove useful to many medical sequencing studies. © 2015 Jun et al.; Published by Cold Spring Harbor Laboratory Press.

  15. Parallel Distributed Processing and Lexical-Semantic Effects in Visual Word Recognition: Are a Few Stages Necessary?

    ERIC Educational Resources Information Center

    Borowsky, Ron; Besner, Derek

    2006-01-01

    D. C. Plaut and J. R. Booth presented a parallel distributed processing model that purports to simulate human lexical decision performance. This model (and D. C. Plaut, 1995) offers a single mechanism account of the pattern of factor effects on reaction time (RT) between semantic priming, word frequency, and stimulus quality without requiring a…

  16. Software Issues at the User Interface

    DTIC Science & Technology

    1991-05-01

    We review software issues that are critical to the successful integration of parallel computers into mainstream scientific computing. Clearly a compiler is the most important software tool available to a ... The development of an optimizing compiler of this quality, addressing communication instructions as well as computational instructions, is a major ...

  17. 3D multiphysics modeling of superconducting cavities with a massively parallel simulation suite

    NASA Astrophysics Data System (ADS)

    Kononenko, Oleksiy; Adolphsen, Chris; Li, Zenghai; Ng, Cho-Kuen; Rivetta, Claudio

    2017-10-01

    Radiofrequency cavities based on superconducting technology are widely used in particle accelerators for various applications. The cavities usually have high quality factors and hence narrow bandwidths, so the field stability is sensitive to detuning from the Lorentz force and external loads, including vibrations and helium pressure variations. If not properly controlled, the detuning can result in a serious performance degradation of a superconducting accelerator, so an understanding of the underlying detuning mechanisms can be very helpful. Recent advances in the simulation suite ace3p have enabled realistic multiphysics characterization of such complex accelerator systems on supercomputers. In this paper, we present the new capabilities in ace3p for large-scale 3D multiphysics modeling of superconducting cavities, in particular, a parallel eigensolver for determining mechanical resonances, a parallel harmonic response solver to calculate the response of a cavity to external vibrations, and a numerical procedure to decompose mechanical loads, such as from the Lorentz force or piezoactuators, into the corresponding mechanical modes. These capabilities have been used to do an extensive rf-mechanical analysis of dressed TESLA-type superconducting cavities. The simulation results and their implications for the operational stability of the Linac Coherent Light Source-II are discussed.
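
    The modal-decomposition step mentioned above can be sketched on a toy system: solve the generalized eigenproblem K φ = ω² M φ and project a load vector onto the M-orthonormal modes. The matrices below are random stand-ins, not ACE3P data structures, and the dense solver only illustrates the idea the parallel eigensolver implements at scale.

```python
# Minimal sketch of the modal-decomposition step: solve the (small, dense)
# generalized eigenproblem K phi = omega^2 M phi and express a static load f
# as a sum of mechanical modes. The matrices are toy stand-ins.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 6
M = np.diag(rng.uniform(1.0, 2.0, n))                  # lumped mass matrix
K0 = rng.standard_normal((n, n))
K = K0 @ K0.T + n * np.eye(n)                          # SPD stiffness matrix

omega2, Phi = eigh(K, M)                               # modes, M-orthonormal columns
f = rng.standard_normal(n)                             # e.g. a Lorentz-force load

q = Phi.T @ f                                          # modal participation of the load
u = Phi @ (q / omega2)                                 # static response, mode by mode

print("mechanical frequencies:", np.round(np.sqrt(omega2), 2))
print("residual |K u - f|:", float(np.linalg.norm(K @ u - f)))
```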

  18. Depth-varying azimuthal anisotropy in the Tohoku subduction channel

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Zhao, Dapeng

    2017-09-01

    We determine a detailed 3-D model of azimuthal anisotropy tomography of the Tohoku subduction zone from the Japan Trench outer-rise to the back-arc near the Japan Sea coast, using a large number of high-quality P and S wave arrival-time data of local earthquakes recorded by the dense seismic network on the Japan Islands. Depth-varying seismic azimuthal anisotropy is revealed in the Tohoku subduction channel. The shallow portion of the Tohoku megathrust zone (<30 km depth) generally exhibits trench-normal fast-velocity directions (FVDs) except for the source area of the 2011 Tohoku-oki earthquake (Mw 9.0) where the FVD is nearly trench-parallel, whereas the deeper portion of the megathrust zone (at depths of ∼30-50 km) mainly exhibits trench-parallel FVDs. Trench-normal FVDs are revealed in the mantle wedge beneath the volcanic front and the back-arc. The Pacific plate mainly exhibits trench-parallel FVDs, except for the top portion of the subducting Pacific slab where visible trench-normal FVDs are revealed. A qualitative tectonic model is proposed to interpret such anisotropic features, suggesting transposition of earlier fabrics in the oceanic lithosphere into subduction-induced new structures in the subduction channel.

  19. Parallelizing serial code for a distributed processing environment with an application to high frequency electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Work, Paul R.

    1991-12-01

    This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.

  20. High energy micro electron beam generation using chirped laser pulse in the presence of an axial magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akou, H., E-mail: h.akou@nit.ac.ir; Hamedi, M.

    2015-10-15

    In this paper, the generation of a high-quality, high-energy micro electron beam in vacuum by a chirped Gaussian laser pulse in the presence of an axial magnetic field is numerically investigated. The features of the energy and angular spectra, emittances, and position distribution of the electron beam are compared in two cases, i.e., in the presence and in the absence of an external magnetic field. The electron beam is accelerated to higher energy and has a better spatial distribution in the presence of the magnetic field. The presence of an axial magnetic field improves the electron beam's spatial quality as well as its energy gain by keeping the electron motion parallel to the direction of propagation for longer distances. It has been found that a 64 μm electron bunch with an initial energy of about an MeV becomes a 20 μm electron beam with a high energy of the order of GeV after interacting with a laser pulse in the presence of an external magnetic field.

  1. The language parallel Pascal and other aspects of the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  2. Continuous quality improvement interventions to improve long-term outcomes of antiretroviral therapy in women who initiated therapy during pregnancy or breastfeeding in the Democratic Republic of Congo: design of an open-label, parallel, group randomized trial.

    PubMed

    Yotebieng, Marcel; Behets, Frieda; Kawende, Bienvenu; Ravelomanana, Noro Lantoniaina Rosa; Tabala, Martine; Okitolonda, Emile W

    2017-04-26

    Despite the rapid adoption of the World Health Organization's 2013 guidelines, children continue to be infected with HIV perinatally because of sub-optimal adherence to the continuum of HIV care in maternal and child health (MCH) clinics. To achieve the UNAIDS goal of eliminating mother-to-child HIV transmission, multiple, adaptive interventions need to be implemented to improve adherence to the HIV continuum. The aim of this open label, parallel, group randomized trial is to evaluate the effectiveness of Continuous Quality Improvement (CQI) interventions implemented at facility and health district levels to improve retention in care and virological suppression through 24 months postpartum among pregnant and breastfeeding women receiving ART in MCH clinics in Kinshasa, Democratic Republic of Congo. Prior to randomization, the current monitoring and evaluation system will be strengthened to enable collection of high quality individual patient-level data necessary for timely indicators production and program outcomes monitoring to inform CQI interventions. Following randomization, in health districts randomized to CQI, quality improvement (QI) teams will be established at the district level and at MCH clinics level. For 18 months, QI teams will be brought together quarterly to identify key bottlenecks in the care delivery system using data from the monitoring system, develop an action plan to address those bottlenecks, and implement the action plan at the level of their district or clinics. If proven to be effective, CQI as designed here, could be scaled up rapidly in resource-scarce settings to accelerate progress towards the goal of an AIDS free generation. The protocol was retrospectively registered on February 7, 2017. ClinicalTrials.gov Identifier: NCT03048669 .

  3. The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Zhou, Liqing

    2015-12-01

    With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation technology cannot meet the requirements of massive image processing and storage. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds a cheap and efficient computer cluster that uses parallel processing to implement MeanShift segmentation of remote sensing images based on the MapReduce model. This not only ensures segmentation quality but also improves segmentation speed and better meets real-time requirements. The MapReduce-based parallel MeanShift segmentation algorithm is therefore of practical significance and value.
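
    A minimal sketch of the map/reduce decomposition, under simplifying assumptions: each "map" task runs a mean-shift filter on one image tile (here on intensity values only, ignoring spatial kernels and tile borders), and the "reduce" step stitches the tiles back together. Python multiprocessing stands in for the Hadoop/MapReduce cluster used in the paper.

```python
# Simplified sketch: the map step mean-shift filters one image tile, the reduce
# step stitches tiles back together. A real deployment would ship the same map
# function to MapReduce workers instead of a local process pool.
import numpy as np
from multiprocessing import Pool

def mean_shift_tile(args):
    tile_id, tile, bandwidth, n_iter = args
    flat = tile.reshape(-1).astype(float)
    out = flat.copy()
    for _ in range(n_iter):
        for i, v in enumerate(out):
            w = np.abs(flat - v) <= bandwidth      # flat (uniform) kernel
            out[i] = flat[w].mean()                # shift toward the local mean
    return tile_id, out.reshape(tile.shape)

def segment(image, bandwidth=10.0, n_iter=5, n_workers=4):
    tiles = np.array_split(image, n_workers, axis=0)           # "map" inputs
    jobs = [(i, t, bandwidth, n_iter) for i, t in enumerate(tiles)]
    with Pool(n_workers) as pool:
        parts = dict(pool.map(mean_shift_tile, jobs))           # map phase
    return np.vstack([parts[i] for i in range(len(tiles))])     # reduce phase

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.where(rng.random((64, 64)) < 0.5, 40, 200) + rng.normal(0, 5, (64, 64))
    print(np.unique(np.round(segment(img))).size, "distinct filtered levels")
```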

  4. INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Larkman, David J.; Nunes, Rita G.

    2007-04-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.
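
    As a toy illustration of one of the reviewed algorithms, the sketch below performs Cartesian SENSE unfolding at acceleration R = 2 in one dimension: each aliased pixel is a coil-weighted superposition of two object pixels half a field of view apart, so unfolding reduces to a tiny least-squares solve per location. The coil sensitivity maps and the 1-D "object" are made up.

```python
# Toy sketch of Cartesian SENSE unfolding at acceleration R = 2: each pixel in
# the aliased image is the coil-weighted sum of two true pixels half a
# field-of-view apart, so unfolding is a small least-squares solve per location.
import numpy as np

n_coils, N = 4, 128
y = np.linspace(-1, 1, N)
coils = np.stack([np.exp(-(y - c)**2) for c in (-0.9, -0.3, 0.3, 0.9)])  # (coils, N)

truth = np.zeros(N)
truth[30:50], truth[80:110] = 1.0, 0.5                   # 1-D "object"

# R = 2 undersampling folds pixel j onto pixel j + N/2.
half = N // 2
aliased = np.stack([c[:half] * truth[:half] + c[half:] * truth[half:] for c in coils])

recon = np.zeros(N)
for j in range(half):
    S = np.stack([coils[:, j], coils[:, j + half]], axis=1)   # (coils, 2) encoding
    rho, *_ = np.linalg.lstsq(S, aliased[:, j], rcond=None)   # SENSE unfold
    recon[j], recon[j + half] = rho
print("max reconstruction error:", float(np.abs(recon - truth).max()))
```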

  5. Extending Automatic Parallelization to Optimize High-Level Abstractions for Multicore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, C; Quinlan, D J; Willcock, J J

    2008-12-12

    Automatic introduction of OpenMP for sequential applications has attracted significant attention recently because of the proliferation of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high-level abstractions, such as STL containers and complex user-defined types, are largely ignored due to the lack of research compilers that are readily able to recognize high-level object-oriented abstractions and leverage their associated semantics. In this paper, we automatically parallelize C++ applications using ROSE, a multiple-language source-to-source compiler infrastructure which preserves the high-level abstractions and gives us access to their semantics. Several representative parallelization candidate kernels are used to explore semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses. Those kernels include an array-based computation loop, a loop with task-level parallelism, and a domain-specific tree traversal. Our work extends the applicability of automatic parallelization to modern applications using high-level abstractions and exposes more opportunities to take advantage of multicore processors.

  6. Concentric Rings K-Space Trajectory for Hyperpolarized 13C MR Spectroscopic Imaging

    PubMed Central

    Jiang, Wenwen; Lustig, Michael; Larson, Peder E.Z.

    2014-01-01

    Purpose To develop a robust and rapid imaging technique for hyperpolarized 13C MR Spectroscopic Imaging (MRSI) and investigate its performance. Methods A concentric rings readout trajectory with constant angular velocity is proposed for hyperpolarized 13C spectroscopic imaging and its properties are analyzed. Quantitative analyses of design tradeoffs are presented for several imaging scenarios. The first applications of concentric rings to 13C phantoms and in vivo animal hyperpolarized 13C MRSI studies were performed to demonstrate the feasibility of the proposed method. Finally, a parallel imaging accelerated concentric rings study is presented. Results The concentric rings MRSI trajectory has the advantage of acquisition time savings compared to echo-planar spectroscopic imaging (EPSI). It provides sufficient spectral bandwidth with relatively high SNR efficiency compared to EPSI and spiral techniques. Phantom and in vivo animal studies showed good image quality with half the scan time and reduced pulsatile flow artifacts compared to EPSI. Parallel imaging accelerated concentric rings showed advantages over Cartesian sampling in g-factor simulations and demonstrated aliasing-free image quality in a hyperpolarized 13C in vivo study. Conclusion The concentric rings trajectory is a robust and rapid imaging technique that fits very well with the speed, bandwidth, and resolution requirements of hyperpolarized 13C MRSI.
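
    A minimal sketch of the sampling pattern itself (not the full sequence or reconstruction): k-space coordinates for a concentric-rings readout traversed at constant angular velocity, with uniformly spaced ring radii. The number of rings, samples per ring, and kmax are illustrative values, not the paper's protocol.

```python
# Minimal sketch: k-space sample locations for a concentric-rings readout with
# constant angular velocity (equal samples per ring, so outer rings are sampled
# more sparsely along the circumference). Ring counts and kmax are illustrative.
import numpy as np

def concentric_rings(n_rings=16, samples_per_ring=256, kmax=0.5):
    radii = (np.arange(n_rings) + 0.5) / n_rings * kmax      # uniform radial spacing
    theta = 2 * np.pi * np.arange(samples_per_ring) / samples_per_ring
    kx = np.outer(radii, np.cos(theta))                       # shape (rings, samples)
    ky = np.outer(radii, np.sin(theta))
    return kx, ky

kx, ky = concentric_rings()
print(kx.shape, "samples; outermost radius =", float(np.hypot(kx, ky).max()))
```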

  7. A novel approach combining self-organizing map and parallel factor analysis for monitoring water quality of watersheds under non-point source pollution

    PubMed Central

    Zhang, Yixiang; Liang, Xinqiang; Wang, Zhibo; Xu, Lixian

    2015-01-01

    The high content of organic matter in the downstream reaches of watersheds underscores the severity of non-point source (NPS) pollution. The major objectives of this study were to characterize and quantify dissolved organic matter (DOM) in watersheds affected by NPS pollution, and to apply self-organizing map (SOM) and parallel factor analysis (PARAFAC) to assess fluorescence properties as proxy indicators for NPS pollution and for labor-intensive routine water quality indicators. Water from upstream and downstream sites was sampled to measure dissolved organic carbon (DOC) concentrations and excitation-emission matrices (EEMs). Five fluorescence components were modeled with PARAFAC. The regression analysis between PARAFAC intensities (Fmax) and raw EEM measurements indicated that several raw fluorescence measurements at target excitation-emission wavelength regions could provide similar DOM information to massive EEM measurements combined with PARAFAC. Regression analysis between DOC concentration and raw EEM measurements suggested that some regions in the raw EEM could be used as surrogates for labor-intensive routine indicators. SOM can be used to visualize the occurrence of pollution. The relationship between DOC concentration and PARAFAC components analyzed with SOM suggested that PARAFAC component 2 might be the major part of bulk DOC and could be recognized as a proxy indicator to predict the DOC concentration.

  8. The effect of earthquake on architecture geometry with non-parallel system irregularity configuration

    NASA Astrophysics Data System (ADS)

    Teddy, Livian; Hardiman, Gagoek; Nuroji; Tudjono, Sri

    2017-12-01

    Indonesia is an area prone to earthquakes that may cause casualties and damage to buildings. Fatalities and injuries are largely caused not by the earthquake itself, but by building collapse. The collapse of a building results from its behaviour under the earthquake and depends on many factors, such as architectural design, the geometric configuration of structural elements in the horizontal and vertical planes, earthquake zone, geographical location (distance to the earthquake center), soil type, material quality, and construction quality. One of the geometric configurations that may lead to building collapse is the irregular configuration of a non-parallel system. In accordance with FEMA-451B, a non-parallel system irregularity exists if the vertical lateral force-resisting elements are neither parallel to nor symmetric with the main orthogonal axes of the earthquake-resisting system. Such a configuration may lead to torsion, diagonal translation and local damage to buildings. This does not mean that non-parallel irregular configurations should not appear in architectural design; however, the designer must understand the consequences of earthquake behaviour for buildings with such a configuration. The objective of the present research is to identify earthquake behaviour in architectural geometry with an irregular configuration of a non-parallel system. The research was quantitative, using a simulation-based experimental method. It comprised 5 models, whose architectural and structural data were input and analyzed using the software SAP2000 to evaluate performance and ETABS 2015 to determine the eccentricity that occurred. The output of the software analysis was tabulated, graphed, compared and analyzed against relevant theories. In strong earthquake zones, buildings that wholly form an irregular configuration of a non-parallel system should be avoided. If it is inevitable to design a building with parts containing an irregular configuration of a non-parallel system, it should be made more rigid by forming a triangular module and using the formula. Good collaboration is needed between architects and structural experts in creating earthquake architecture.

  9. Research of the effectiveness of parallel multithreaded realizations of interpolation methods for scaling raster images

    NASA Astrophysics Data System (ADS)

    Vnukov, A. A.; Shershnev, M. B.

    2018-01-01

    The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of the algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of the computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program for scaling images by the different methods; a comparison of the scaling quality of the different methods is also given.
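
    A hedged sketch of the kind of implementation studied: bilinear interpolation (the abstract does not name the three methods, so this choice is an assumption), with the output rows split across a process pool so the degree of parallelism can be varied and timed. The original work is a Windows GUI application; only a numerical core is sketched here.

```python
# Minimal sketch: bilinear image scaling with the output rows split across a
# process pool, so the degree of parallelism can be varied and timed.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def scale_block(args):
    src, out_rows, out_h, out_w = args
    h, w = src.shape
    block = np.empty((len(out_rows), out_w))
    for bi, r in enumerate(out_rows):
        y = r * (h - 1) / (out_h - 1)                 # source row coordinate
        y0, fy = int(y), y - int(y)
        y1 = min(y0 + 1, h - 1)
        for c in range(out_w):
            x = c * (w - 1) / (out_w - 1)             # source column coordinate
            x0, fx = int(x), x - int(x)
            x1 = min(x0 + 1, w - 1)
            top = src[y0, x0] * (1 - fx) + src[y0, x1] * fx
            bot = src[y1, x0] * (1 - fx) + src[y1, x1] * fx
            block[bi, c] = top * (1 - fy) + bot * fy  # bilinear blend
    return block

def scale(src, out_h, out_w, workers=4):
    rows = np.array_split(np.arange(out_h), workers)       # one row block per worker
    jobs = [(src, r, out_h, out_w) for r in rows]
    with ProcessPoolExecutor(max_workers=workers) as ex:
        blocks = list(ex.map(scale_block, jobs))
    return np.vstack(blocks)

if __name__ == "__main__":
    img = np.arange(64 * 64, dtype=float).reshape(64, 64)
    print(scale(img, 200, 200, workers=4).shape)
```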

  10. Efficient computation of hashes

    NASA Astrophysics Data System (ADS)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.

    2014-06-01

    The sequential computation of hashes at the core of many distributed storage systems and found, for example, in grid services can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgard engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
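
    A minimal sketch of a parallel hash-tree (Merkle) mode: leaves are hashed independently, and therefore concurrently, then combined pairwise up to a single root. Python's hashlib.sha3_256 (the standardized Keccak) stands in for the paper's parallel Keccak prototype; the chunk size and worker count are arbitrary.

```python
# Minimal sketch of a binary hash-tree (Merkle) mode: leaves are hashed
# independently -- and therefore in parallel -- and combined pairwise up to a
# single root. sha3_256 stands in for the parallel Keccak implementation.
import hashlib
from concurrent.futures import ThreadPoolExecutor

def h(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def merkle_root(chunks, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as ex:
        level = list(ex.map(h, chunks))               # leaf hashes, in parallel
    while len(level) > 1:
        if len(level) % 2:                            # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

data = b"x" * 1_000_000
chunks = [data[i:i + 65536] for i in range(0, len(data), 65536)]
print(merkle_root(chunks).hex())
```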

  11. Parallel and Distributed Systems for Probabilistic Reasoning

    DTIC Science & Technology

    2012-12-01

    work at CMU I had the opportunity to work with Andreas Krause on Gaussian process models for signal quality estimation in wireless sensor networks ...we reviewed the natural parallelization of the belief propagation algorithm using the synchronous schedule and demonstrated both theoretically and...problem is that the power-law sparsity structure, commonly found in graphs derived from natural phenomena (e.g., social networks and the web

  12. Robot-assisted ultrasound imaging: overview and development of a parallel telerobotic system.

    PubMed

    Monfaredi, Reza; Wilson, Emmanuel; Azizi Koutenaei, Bamshad; Labrecque, Brendan; Leroy, Kristen; Goldie, James; Louis, Eric; Swerdlow, Daniel; Cleary, Kevin

    2015-02-01

    Ultrasound imaging is frequently used in medicine. The quality of ultrasound images is often dependent on the skill of the sonographer. Several researchers have proposed robotic systems to aid in ultrasound image acquisition. In this paper we first provide a short overview of robot-assisted ultrasound (US) imaging. We categorize robot-assisted US imaging systems into three approaches: autonomous US imaging, teleoperated US imaging, and human-robot cooperation. For each approach several systems are introduced and briefly discussed. We then describe a compact six-degree-of-freedom parallel-mechanism telerobotic system for ultrasound imaging developed by our research team. The long-term goal of this work is to enable remote ultrasound scanning through teleoperation. This parallel mechanism allows for both translation and rotation of an ultrasound probe mounted on the top plate, along with force control. Our experimental results confirmed good mechanical system performance with a positioning error of < 1 mm. Phantom experiments by a radiologist showed promising results with good image quality.

  13. Multiple plant hormones and cell wall metabolism regulate apple fruit maturation patterns and texture attributes

    USDA-ARS?s Scientific Manuscript database

    Molecular events regulating apple fruit ripening and sensory quality are largely unknown. Such knowledge is essential for genomic-assisted apple breeding and postharvest quality management. In this study, a parallel transcriptome profile analysis, scanning electron microscopic (SEM) examination and...

  14. Quality control/quality assurance testing for joint density and segregation of asphalt mixtures : tech transfer summary.

    DOT National Transportation Integrated Search

    2013-04-01

    A longitudinal joint is the interface between two adjacent and parallel hot-mix asphalt (HMA) mats. Inadequate joint construction can lead to a location where water can penetrate the pavement layers and reduce the structural support of the underlying...

  15. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    ERIC Educational Resources Information Center

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  16. Bilingual parallel programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  17. [Sampling in quality control of medicinal materials-A case of Epimedium].

    PubMed

    Wang, Chuanyi; Cao, Jinyi; Liang, Yun; Huang, Wenhua; Guo, Baolin

    2009-04-01

    To investigate the effect of the number of individuals sampled on the assay results of medicinal materials, Epimedium pubescens and E. brevicornu were used as samples. Six sampling levels were formulated: 1 individual and mixtures of 5, 10, 20, 30, and 50 individuals, each level with 3 parallel samples and the 1-individual level with 5 parallel samples. The contents of epimedin C and icariin, and the peak areas of epimedin A, epimedin B, rhamnosyl icarisid II and icarisid II in all samples were analyzed by HPLC. The degree of variation differed with species and chemical constituent, but for a given constituent the RSD and the deviation from the true value decreased as the number of individuals increased. More than 10 individuals should be sampled for quality control of Epimedium, and 50 or more individuals would better represent the quality of the medicinal material.

  18. Fast I/O for Massively Parallel Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew T.

    1996-01-01

    The two primary goals for this report were the design, construction and modeling of parallel disk arrays for scientific visualization and animation, and a study of the I/O requirements of highly parallel applications. In addition, further work was done on the parallel display systems required to project and animate the very high-resolution frames resulting from our supercomputing simulations in ocean circulation and compressible gas dynamics.

  19. Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi

    1989-01-01

    Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing to perform for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers. Several issues are addressed in parallel architectures and parallel algorithms for integrated vision systems.

  20. Parallel Implementation of a High Order Implicit Collocation Method for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Halem, Milton (Technical Monitor)

    2000-01-01

    We combine a high order compact finite difference approximation and collocation techniques to numerically solve the two-dimensional heat equation. The resulting method is implicit and can be parallelized with a strategy that allows parallelization across both time and space. We compare the parallel implementation of the new method with a classical implicit method, namely the Crank-Nicolson method, where the parallelization is done across space only. Numerical experiments are carried out on the SGI Origin 2000.
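
    For reference, the sketch below implements the classical Crank-Nicolson baseline (in one dimension, for brevity) that the new collocation method is compared against; the high-order compact scheme and the time-space parallelization themselves are not reproduced here.

```python
# Minimal sketch of the classical Crank-Nicolson baseline for the 1-D heat
# equation u_t = alpha * u_xx with homogeneous Dirichlet boundary conditions.
import numpy as np
from scipy.sparse import identity, diags
from scipy.sparse.linalg import splu

n, alpha, L, T = 99, 1.0, 1.0, 0.1            # interior points, diffusivity, domain, time
dx = L / (n + 1)
dt = T / 200
r = alpha * dt / (2 * dx**2)

lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))     # 1-D Laplacian stencil
A = identity(n) - r * lap                                    # (I - r L) u^{n+1}
B = identity(n) + r * lap                                    # = (I + r L) u^{n}
solve = splu(A.tocsc()).solve                                # factor once, reuse each step

x = np.linspace(dx, L - dx, n)
u = np.sin(np.pi * x)                                        # initial condition
for _ in range(200):
    u = solve(B @ u)

exact = np.exp(-alpha * np.pi**2 * T) * np.sin(np.pi * x)
print("max error vs. exact solution:", float(np.abs(u - exact).max()))
```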

  1. Adapting high-level language programs for parallel processing using data flow

    NASA Technical Reports Server (NTRS)

    Standley, Hilda M.

    1988-01-01

    EASY-FLOW, a very high-level data flow language, is introduced for the purpose of adapting programs written in a conventional high-level language to a parallel environment. The level of parallelism provided is of the large-grained variety in which parallel activities take place between subprograms or processes. A program written in EASY-FLOW is a set of subprogram calls as units, structured by iteration, branching, and distribution constructs. A data flow graph may be deduced from an EASY-FLOW program.

  2. Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He

    1997-01-01

    Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin 2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan was to 1) develop highly accurate parallel numerical algorithms, 2) conduct preliminary testing to verify the effectiveness and potential of these algorithms, and 3) incorporate the newly developed algorithms into actual simulation packages. This work plan has been achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm and Reduced Parallel Diagonal Dominant (RPDD) algorithm have been carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues has been parallelized and implemented based on data dependency analysis. These achievements are addressed in detail in the paper.

  3. Coaching in the AP Classroom

    ERIC Educational Resources Information Center

    Fornaciari, Jim

    2013-01-01

    Many parallels exist between quality coaches and quality classroom teachers--especially AP teachers, who often feel the pressure to produce positive test results. Having developed a series of techniques and strategies for building a team-oriented winning culture on the field, Jim Fornaciari writes about how he adapted those methods to work in the…

  4. Quality Content in Distance Education

    ERIC Educational Resources Information Center

    Yildiz, Ezgi Pelin; Isman, Aytekin

    2016-01-01

    In parallel with technological advances in today's world of education activities can be conducted without the constraints of time and space. One of the most important of these activities is distance education. The success of the distance education is possible with content quality. The proliferation of e-learning environment has brought a need for…

  5. 15 CFR 922.132 - Prohibited or otherwise regulated activities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 35.92222 N latitude parallel (coastal reference point: Beach access stairway at south Sand Dollar... other matter that subsequently enters the Sanctuary and injures a Sanctuary resource or quality, except... and qualities. The prohibitions in paragraphs (a)(2) through (12) of this section do not apply to...

  6. Design and Performance of a 1 ms High-Speed Vision Chip with 3D-Stacked 140 GOPS Column-Parallel PEs †.

    PubMed

    Nose, Atsushi; Yamazaki, Tomohiro; Katayama, Hironobu; Uehara, Shuji; Kobayashi, Masatsugu; Shida, Sayaka; Odahara, Masaki; Takamiya, Kenichi; Matsumoto, Shizunori; Miyashita, Leo; Watanabe, Yoshihiro; Izawa, Takashi; Muramatsu, Yoshinori; Nitta, Yoshikazu; Ishikawa, Masatoshi

    2018-04-24

    We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip comprises a 1/3.2-inch, 1.27 Mpixel, 500 fps (0.31 Mpixel, 1000 fps, 2 × 2 binning) vision chip with 3D-stacked column-parallel Analog-to-Digital Converters (ADCs) and 140 Giga Operation per Second (GOPS) programmable Single Instruction Multiple Data (SIMD) column-parallel PEs for new sensing applications. The 3D-stacked structure and column parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning.

  7. High-contrast imaging in the cloud with klipReduce and Findr

    NASA Astrophysics Data System (ADS)

    Haug-Baltzell, Asher; Males, Jared R.; Morzinski, Katie M.; Wu, Ya-Lin; Merchant, Nirav; Lyons, Eric; Close, Laird M.

    2016-08-01

    Astronomical data sets are growing ever larger, and the area of high contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and improved speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible wavelength high contrast data set of a hydrogen-accreting brown dwarf companion.
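
    The parameter exploration described here is embarrassingly parallel, so it maps naturally onto a pool of workers; the sketch below shows only that generic structure with a hypothetical run_klip() placeholder, not the actual klipReduce or Findr interfaces.

        # Generic parallel sweep over KLIP parameter sets; run_klip() is a hypothetical
        # placeholder standing in for a real reduction, not the pipeline's actual API.
        from concurrent.futures import ProcessPoolExecutor
        from itertools import product

        def run_klip(params):
            n_modes, annulus_width, min_rotation = params
            score = 0.0          # e.g. SNR of the companion in the reduced image
            return params, score

        grid = list(product([5, 10, 20, 50],     # number of Karhunen-Loeve modes
                            [5, 10, 20],         # annulus width (pixels)
                            [1, 5, 10]))         # minimum field rotation (degrees)

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                results = list(pool.map(run_klip, grid))
            best_params, best_score = max(results, key=lambda r: r[1])
            print("best parameter set:", best_params, "score:", best_score)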

  8. X-ray computed tomography comparison of individual and parallel assembled commercial lithium iron phosphate batteries at end of life after high rate cycling

    NASA Astrophysics Data System (ADS)

    Carter, Rachel; Huhman, Brett; Love, Corey T.; Zenyuk, Iryna V.

    2018-03-01

    X-ray computed tomography (X-ray CT) across multiple length scales is utilized for the first time to investigate the physical abuse of high C-rate pulsed discharge on cells wired individually and in parallel. Manufactured lithium iron phosphate cells boasting high rate capability were pulse power tested in both wiring conditions with high discharge currents of 10C for a high number of cycles (up to 1200) until end of life (<80% of initial discharge capacity retained). The parallel assembly reached end of life more rapidly, for reasons unknown prior to the CT investigations. The investigation revealed evidence of overdischarge in the most degraded cell from the parallel assembly, compared to more traditional failure in the individual cell. The parallel-wired cell exhibited dissolution of copper from the anode current collector and subsequent deposition throughout the separator near the cathode of the cell. This overdischarge-induced copper deposition, notably impossible to confirm with other state of health (SOH) monitoring methods, is diagnosed using CT by rendering the interior current collector without harm or alteration to the active materials. Correlation of the CT observations to the electrochemical pulse data from the parallel-wired cells reveals the risk of parallel wiring during high C-rate pulse discharge.

  9. Structure of scintillations in Neptune's occultation shadow

    NASA Technical Reports Server (NTRS)

    Hubbard, W. B.; Lellouch, Emmanuel; Sicardy, Bruno; Brahic, Andre; Vilas, Faith

    1988-01-01

    An exceptionally high-quality data set from a Neptune occultation is used here to derive a number of new results about the statistical properties of the fluctuations of the intensity distribution in various parts of Neptune's occultation shadow. An approximate numerical ray-tracing model which successfully accounts for many of the qualitative aspects of the observed intensity fluctuation distribution is introduced. Strong refractive scintillation is simulated by including the effects of 'turbulence' with projected atmospheric properties allowed to vary in both the direction perpendicular and parallel to the limb, and an explicit two-dimensional picture of a typical intensity distribution throughout an occulting planet's shadow is presented. The results confirm the existence of highly anisotropic turbulence.

  10. Automation effects in a stereotypical multiloop manual control system. [for aircraft

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Mcnally, B. D.

    1984-01-01

    The increasing reliance of state-of-the-art, high performance aircraft on high authority stability and command augmentation systems, in order to obtain satisfactory performance and handling qualities, has made it critical to achieve a better understanding of human capabilities, limitations, and preferences during interactions with complex dynamic systems that involve task allocation between man and machine. An analytical and experimental study has been undertaken to investigate human interaction with a simple, multiloop dynamic system in which human activity was systematically varied by changing the levels of automation. The task definition has led to a control loop structure which parallels that of any multiloop manual control system, and may therefore be considered a stereotype.

  11. Determination of accurate 1H positions of an alanine tripeptide with anti-parallel and parallel β-sheet structures by high resolution 1H solid state NMR and GIPAW chemical shift calculation.

    PubMed

    Yazawa, Koji; Suzuki, Furitsu; Nishiyama, Yusuke; Ohata, Takuya; Aoki, Akihiro; Nishimura, Katsuyuki; Kaji, Hironori; Shimizu, Tadashi; Asakura, Tetsuo

    2012-11-25

    The accurate (1)H positions of alanine tripeptide, A(3), with anti-parallel and parallel β-sheet structures could be determined by highly resolved (1)H DQMAS solid-state NMR spectra and (1)H chemical shift calculation with gauge-including projector augmented wave calculations.

  12. On the wall-normal velocity of the compressible boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1991-01-01

    Numerical methods for the compressible boundary-layer equations are facilitated by transformation from the physical (x,y) plane to a computational (xi,eta) plane in which the evolution of the flow is 'slow' in the time-like xi direction. The commonly used Levy-Lees transformation results in a computationally well-behaved problem for a wide class of non-similar boundary-layer flows, but it complicates interpretation of the solution in physical space. Specifically, the transformation is inherently nonlinear, and the physical wall-normal velocity is transformed out of the problem and is not readily recovered. In light of recent research which shows mean-flow non-parallelism to significantly influence the stability of high-speed compressible flows, the contribution of the wall-normal velocity in the analysis of stability should not be routinely neglected. Conventional methods extract the wall-normal velocity in physical space from the continuity equation, using finite-difference techniques and interpolation procedures. The present spectrally-accurate method extracts the wall-normal velocity directly from the transformation itself, without interpolation, leaving the continuity equation free as a check on the quality of the solution. The present method for recovering wall-normal velocity, when used in conjunction with a highly-accurate spectral collocation method for solving the compressible boundary-layer equations, results in a discrete solution which is extraordinarily smooth and accurate, and which satisfies the continuity equation nearly to machine precision. These qualities make the method well suited to the computation of the non-parallel mean flows needed by spatial direct numerical simulations (DNS) and parabolized stability equation (PSE) approaches to the analysis of stability.

  13. Isoflavones, calcium, vitamin D and inulin improve quality of life, sexual function, body composition and metabolic parameters in menopausal women: result from a prospective, randomized, placebo-controlled, parallel-group study.

    PubMed

    Vitale, Salvatore Giovanni; Caruso, Salvatore; Rapisarda, Agnese Maria Chiara; Cianci, Stefano; Cianci, Antonio

    2018-03-01

    Menopause results in metabolic changes that contribute to increased risk of cardiovascular diseases: increases in low density lipoprotein (LDL) and triglycerides, a decrease in high density lipoprotein (HDL), and weight gain are associated with a corresponding increase in the incidence of hypertension and diabetes. The aim of this study was to evaluate the effect of a preparation of isoflavones, calcium, vitamin D and inulin in menopausal women. We performed a prospective, randomized, placebo-controlled, parallel-group study. A total of 50 patients were randomized to receive either an oral preparation of isoflavones (40 mg), calcium (500 mg), vitamin D (300 IU) and inulin (3 g) or placebo (control group). Pre- and post-treatment assessments of quality of life and sexual function were performed through the Menopause-Specific Quality of Life Questionnaire (MENQOL) and Female Sexual Function Index (FSFI); evaluations of anthropometric indicators, body composition through a bioelectrical impedance analyser, lumbar spine and proximal femur T-score, and lipid profile were performed. After 12 months, a significant reduction in MENQOL vasomotor, physical and sexual domain scores (p < 0.05) and a significant increase in all FSFI domain scores (p < 0.05) were observed in the treatment group. Laboratory tests showed a significant increase in serum levels of HDL (p < 0.05). No significant changes of lumbar spine and femur neck T-score (p > 0.05) were found in the same group. According to our data analysis, isoflavones, calcium, vitamin D and inulin may exert favourable effects on menopausal symptoms and signs.

  14. Detergent/Nanodisc Screening for High-Resolution NMR Studies of an Integral Membrane Protein Containing a Cytoplasmic Domain

    PubMed Central

    Maslennikov, Innokentiy; Choe, Senyon; Riek, Roland

    2013-01-01

    Because membrane proteins need to be extracted from their natural environment and reconstituted in artificial milieus for 3D structure determination by X-ray crystallography or NMR, the search for membrane mimetics that conserve the native structure and functional activities remains challenging. We demonstrate here a detergent/nanodisc screening study by NMR of the bacterial α-helical membrane protein YgaP containing a cytoplasmic rhodanese domain. The analysis of 2D [15N,1H]-TROSY spectra shows that only a careful usage of low amounts of mixed detergents did not perturb the cytoplasmic domain while solubilizing in parallel the transmembrane segments with good spectral quality. In contrast, the incorporation of YgaP into nanodiscs appeared to be straightforward and yielded a surprisingly high quality [15N,1H]-TROSY spectrum, opening an avenue for the structural studies of a helical membrane protein in a bilayer system by solution state NMR. PMID:23349867

  15. A New Dual-purpose Quality Control Dosimetry Protocol for Diagnostic Reference-level Determination in Computed Tomography.

    PubMed

    Sohrabi, Mehdi; Parsi, Masoumeh; Sina, Sedigheh

    2018-05-17

    A diagnostic reference level is an advisory dose level set by a regulatory authority in a country as an efficient criterion for protection of patients from unwanted medical exposure. In computed tomography, the direct dose measurement and data collection methods are commonly applied for determination of diagnostic reference levels. Recently, a new quality-control-based dose survey method was proposed by the authors to simplify the diagnostic reference-level determination using a retrospective quality control database usually available at a regulatory authority in a country. In line with such a development, a prospective dual-purpose quality control dosimetry protocol is proposed for determination of diagnostic reference levels in a country, which can be simply applied by quality control service providers. This new proposed method was applied to five computed tomography scanners in Shiraz, Iran, and diagnostic reference levels for head, abdomen/pelvis, sinus, chest, and lumbar spine examinations were determined. The results were compared to those obtained by the data collection and quality-control-based dose survey methods, carried out in parallel in this study, and were found to agree well within approximately 6%. This is highly acceptable for quality-control-based methods according to International Atomic Energy Agency tolerance levels (±20%).

  16. Accelerating research into bio-based FDCA-polyesters by using small scale parallel film reactors.

    PubMed

    Gruter, Gert-Jan M; Sipos, Laszlo; Adrianus Dam, Matheus

    2012-02-01

    High-throughput experimentation is today well established as a tool in early stage catalyst development and in catalyst and process scale-up. One of the more challenging areas of catalytic research is polymer catalysis. The main difference from most non-polymer catalytic conversions is that the product is not a well defined molecule, and the catalytic performance cannot be easily expressed only in terms of catalyst activity and selectivity. In polymerization reactions, polymer chains are formed that can have various lengths (resulting in a molecular weight distribution rather than a defined molecular weight), that can have different compositions (when random or block co-polymers are produced), that can have cross-linking (often significantly affecting physical properties), that can have different endgroups (often affecting subsequent processing steps), and several other variations. In addition, for polyolefins, mass and heat transfer, oxygen and moisture sensitivity, stereoregularity and many other intrinsic features make relevant high throughput screening in this field an incredible challenge. For polycondensation reactions performed in the melt, the viscosity often becomes high already at modest molecular weights, which greatly influences mass transfer of the condensation product (often water or methanol). When reactions become mass transfer limited, catalyst performance comparison is often no longer relevant. This, however, does not mean that relevant experiments for these application areas cannot be performed on a small scale. Relevant catalyst screening experiments for polycondensation reactions can be performed in very efficient small scale parallel equipment. Both transesterification and polycondensation as well as post-condensation through solid-stating in parallel equipment have been developed. Next to polymer synthesis, polymer characterization also needs to be accelerated without making concessions to quality in order to draw relevant conclusions.

  17. Automated quality control in a file-based broadcasting workflow

    NASA Astrophysics Data System (ADS)

    Zhang, Lina

    2014-04-01

    Benefiting from the development of information and internet technologies, television broadcasting is transforming from inefficient tape-based production and distribution to integrated file-based workflows. However, no matter how many changes have taken place, successful broadcasting still depends on the ability to deliver a consistently high quality signal to the audience. After the transition from tape to file, traditional methods of manual quality control (QC) become inadequate, subjective, and inefficient. Based on China Central Television's fully file-based workflow at its new site, this paper introduces an automated quality control test system for accurate detection of hidden problems in media content. It discusses the system framework and workflow control when automated QC is added. It puts forward a QC criterion and presents QC software that follows this criterion. It also reports experiments on QC speed using parallel processing and distributed computing. The performance of the test system shows that the adoption of automated QC can make production effective and efficient, and help the station achieve a competitive advantage in the media market.

  18. Institutional Computing Executive Group Review of Multi-programmatic & Institutional Computing, Fiscal Year 2005 and 2006

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langer, S; Rotman, D; Schwegler, E

    The Institutional Computing Executive Group (ICEG) review of FY05-06 Multiprogrammatic and Institutional Computing (M and IC) activities is presented in the attached report. In summary, we find that the M and IC staff does an outstanding job of acquiring and supporting a wide range of institutional computing resources to meet the programmatic and scientific goals of LLNL. The responsiveness and high quality of support given to users and the programs investing in M and IC reflects the dedication and skill of the M and IC staff. M and IC has successfully managed serial capacity, parallel capacity, and capability computing resources. Serial capacity computing supports a wide range of scientific projects which require access to a few high performance processors within a shared memory computer. Parallel capacity computing supports scientific projects that require a moderate number of processors (up to roughly 1000) on a parallel computer. Capability computing supports parallel jobs that push the limits of simulation science. M and IC has worked closely with Stockpile Stewardship, and together they have made LLNL a premier institution for computational and simulation science. Such a standing is vital to the continued success of laboratory science programs and to the recruitment and retention of top scientists. This report provides recommendations to build on M and IC's accomplishments and improve simulation capabilities at LLNL. We recommend that the institution fully fund (1) operation of the atlas cluster purchased in FY06 to support a few large projects; (2) operation of the thunder and zeus clusters to enable 'mid-range' parallel capacity simulations during normal operation and a limited number of large simulations during dedicated application time; (3) operation of the new yana cluster to support a wide range of serial capacity simulations; (4) improvements to the reliability and performance of the Lustre parallel file system; (5) support for the new GDO petabyte-class storage facility on the green network for use in data intensive external collaborations; and (6) continued support for visualization and other methods for analyzing large simulations. We also recommend that M and IC begin planning in FY07 for the next upgrade of its parallel clusters. LLNL investments in M and IC have resulted in a world-class simulation capability leading to innovative science. We thank the LLNL management for its continued support and thank the M and IC staff for its vision and dedicated efforts to make it all happen.

  19. Integration experiences and performance studies of A COTS parallel archive systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds, such as more caching and less robust semantics. Currently the number of extremely highly scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of future archival storage systems.

  20. Integration experiments and performance studies of a COTS parallel archive system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hsing-bung; Scott, Cody; Grider, Gary

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds, such as more caching and less robust semantics. Currently the number of extremely highly scalable parallel archive solutions is very small, especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address requirements of future archival storage systems.

  1. Parallel MR imaging: a user's guide.

    PubMed

    Glockner, James F; Hu, Houchun H; Stanley, David W; Angelos, Lisa; King, Kevin

    2005-01-01

    Parallel imaging is a recently developed family of techniques that take advantage of the spatial information inherent in phased-array radiofrequency coils to reduce acquisition times in magnetic resonance imaging. In parallel imaging, the number of sampled k-space lines is reduced, often by a factor of two or greater, thereby significantly shortening the acquisition time. Parallel imaging techniques have only recently become commercially available, and the wide range of clinical applications is just beginning to be explored. The potential clinical applications primarily involve reduction in acquisition time, improved spatial resolution, or a combination of the two. Improvements in image quality can be achieved by reducing the echo train lengths of fast spin-echo and single-shot fast spin-echo sequences. Parallel imaging is particularly attractive for cardiac and vascular applications and will likely prove valuable as 3-T body and cardiovascular imaging becomes part of standard clinical practice. Limitations of parallel imaging include reduced signal-to-noise ratio and reconstruction artifacts. It is important to consider these limitations when deciding when to use these techniques. (c) RSNA, 2005.
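
    The trade-off summarized above follows the standard parallel-imaging relations: scan time scales roughly as 1/R for an acceleration factor R, while SNR falls by the coil-geometry factor g times the square root of R. A tiny illustrative calculation (with hypothetical numbers) makes the trend concrete.

        # Standard parallel-imaging trade-off; the scan time and SNR values are illustrative only.
        import math

        t_full_s, snr_full = 240.0, 100.0                  # hypothetical fully sampled scan
        for R, g in [(2, 1.1), (3, 1.3), (4, 1.6)]:        # acceleration factor, typical g-factor
            t_acc = t_full_s / R
            snr_acc = snr_full / (g * math.sqrt(R))
            print(f"R={R}: scan time {t_acc:5.1f} s, relative SNR {snr_acc:5.1f}")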

  2. A parallel row-based algorithm with error control for standard-cell replacement on a hypercube multiprocessor

    NASA Technical Reports Server (NTRS)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost (see the sketch below). Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placements of equivalent quality. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.
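
    The cell-coloring idea can be illustrated in a few lines: color a cell-interaction graph so that cells sharing a color have no cost terms in common and can therefore be moved in parallel. The interaction graph and greedy coloring below are a simplified, hypothetical stand-in for the paper's heuristic, not its actual implementation.

        # Greedy coloring of a toy cell-interaction graph; cells with the same color do not
        # interact, so their moves can be evaluated in parallel without accumulating error.
        def greedy_coloring(interactions, n_cells):
            """interactions: dict mapping a cell to the set of cells it shares cost terms with."""
            color = {}
            for cell in range(n_cells):
                used = {color[nbr] for nbr in interactions.get(cell, ()) if nbr in color}
                color[cell] = next(c for c in range(n_cells) if c not in used)
            return color

        interactions = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}, 4: set()}   # toy netlist coupling
        colors = greedy_coloring(interactions, 5)

        groups = {}
        for cell, c in colors.items():
            groups.setdefault(c, []).append(cell)
        print(groups)    # e.g. {0: [0, 3, 4], 1: [1, 2]} -> two independent move sets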

  3. Coil Compression for Accelerated Imaging with Cartesian Sampling

    PubMed Central

    Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael

    2012-01-01

    MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
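
    The core of coil compression is a truncated SVD of the coil dimension; the sketch below shows that basic global step on synthetic data, whereas the method described above additionally compresses per spatial location along the fully sampled axis and aligns the resulting virtual coils.

        # Basic SVD coil compression on synthetic multi-coil k-space data (illustration only;
        # the paper's per-location compression and alignment steps are not shown).
        import numpy as np

        rng = np.random.default_rng(1)
        n_coils, n_virtual, n_samples = 32, 6, 5000

        data = (rng.standard_normal((n_coils, n_samples))
                + 1j * rng.standard_normal((n_coils, n_samples)))     # rows: coils

        U, s, _ = np.linalg.svd(data, full_matrices=False)
        compress = U[:, :n_virtual].conj().T                          # (n_virtual, n_coils)

        virtual_data = compress @ data                                # data for reconstruction
        energy_kept = (s[:n_virtual] ** 2).sum() / (s ** 2).sum()
        print(f"{n_coils} -> {n_virtual} virtual coils, energy retained: {energy_kept:.3f}")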

  4. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Frumkin, Michael; Hribar, Michelle; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1999-01-01

    Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but the task can be simplified by high level languages and would even better be automated by parallelizing tools and compilers. The definition of HPF (High Performance Fortran, based on data parallel model) and OpenMP (based on shared memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to language like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and pros and cons of different approaches will be discussed along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, potentials of applying some of the techniques to realistic aerospace applications will be presented.

  5. MPI, HPF or OpenMP: A Study with the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, H.; Frumkin, M.; Hribar, M.; Waheed, A.; Yan, J.; Saini, Subhash (Technical Monitor)

    1999-01-01

    Porting applications to new high performance parallel and distributed platforms is a challenging task. Writing parallel code by hand is time consuming and costly, but this task can be simplified by high level languages and would even better be automated by parallelizing tools and compilers. The definition of HPF (High Performance Fortran, based on data parallel model) and OpenMP (based on shared memory parallel model) standards has offered great opportunity in this respect. Both provide simple and clear interfaces to language like FORTRAN and simplify many tedious tasks encountered in writing message passing programs. In our study, we implemented the parallel versions of the NAS Benchmarks with HPF and OpenMP directives. Comparison of their performance with the MPI implementation and pros and cons of different approaches will be discussed along with experience of using computer-aided tools to help parallelize these benchmarks. Based on the study, potentials of applying some of the techniques to realistic aerospace applications will be presented.

  6. Adolescents' unhealthy eating habits are associated with meal skipping.

    PubMed

    Rodrigues, Paulo Rogério Melo; Luiz, Ronir Raggio; Monteiro, Luana Silva; Ferreira, Márcia Gonçalves; Gonçalves-Silva, Regina Maria Veras; Pereira, Rosangela Alves

    2017-10-01

    Meal consumption and diet quality are important for healthy development during adolescence. The aim of this study was to determine the association between meal habits and diet quality in Brazilian adolescents. A school-based, cross-sectional study was conducted in 2008 with a probabilistic sample of adolescents ages 14 to 19 y (N = 1139) from high schools in central-western Brazil. Consumption of breakfast, morning snack, lunch, afternoon snack, and dinner was assessed to evaluate adolescents' meal profile. The Brazilian Healthy Eating Index-Revised (BHEI-R) was calculated to evaluate diet quality. The association between meal profile and BHEI-R (global estimates and components) was assessed using multivariate linear regression models. Diet was characterized by unhealthy eating: a low consumption of fruits, vegetables, and milk/dairy, and a high consumption of fats and sodium. An unsatisfactory meal profile was observed in 14% of adolescents, whereas daily consumption of breakfast, lunch, and dinner was reported by 47%, 78%, and 52% of adolescents, respectively. Meal profile was positively associated with diet quality. Daily consumption of breakfast was associated with higher BHEI-R scores, lower sodium intake, and greater consumption of fruits and milk/dairy. Daily consumption of lunch was associated with greater consumption of vegetables and "meats, eggs, and legumes," whereas consumption of dinner was associated with an increased consumption of "whole fruits." This study showed a parallel between daily consumption of meals and healthier eating, with greater adherence to traditional Brazilian food habits. Skipping meals was associated with a low-quality diet, especially concerning the low consumption of fruits and vegetables and a high intake of sodium and calories from solid fats, added sugars, and alcoholic beverages. Therefore, the adoption of regular meal habits may help adolescents improve their diet quality. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Research in parallel computing

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Henderson, Charles

    1994-01-01

    This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

  8. 3D multiphysics modeling of superconducting cavities with a massively parallel simulation suite

    DOE PAGES

    Kononenko, Oleksiy; Adolphsen, Chris; Li, Zenghai; ...

    2017-10-10

    Radiofrequency cavities based on superconducting technology are widely used in particle accelerators for various applications. The cavities usually have high quality factors and hence narrow bandwidths, so the field stability is sensitive to detuning from the Lorentz force and external loads, including vibrations and helium pressure variations. If not properly controlled, the detuning can result in a serious performance degradation of a superconducting accelerator, so an understanding of the underlying detuning mechanisms can be very helpful. Recent advances in the simulation suite ace3p have enabled realistic multiphysics characterization of such complex accelerator systems on supercomputers. In this paper, we present the new capabilities in ace3p for large-scale 3D multiphysics modeling of superconducting cavities, in particular, a parallel eigensolver for determining mechanical resonances, a parallel harmonic response solver to calculate the response of a cavity to external vibrations, and a numerical procedure to decompose mechanical loads, such as from the Lorentz force or piezoactuators, into the corresponding mechanical modes. These capabilities have been used to do an extensive rf-mechanical analysis of dressed TESLA-type superconducting cavities. Furthermore, the simulation results and their implications for the operational stability of the Linac Coherent Light Source-II are discussed.

  9. A novel laparoscopic grasper with two parallel jaws capable of extracting the mechanical behaviour of soft tissues.

    PubMed

    Nazarynasab, Dariush; Farahmand, Farzam; Mirbagheri, Alireza; Afshari, Elnaz

    2017-07-01

    Data related to the force-deformation behaviour of soft tissue play an important role in medical/surgical applications such as realistic modelling of the mechanical behaviour of soft tissue, as well as minimally invasive surgery (MIS) and medical diagnosis. While the mechanical behaviour of soft tissue is very complex due to its different constitutive components, some issues increase this complexity, such as behavioural changes between live and dead tissues. Indeed, an adequate quantitative description of the mechanical behaviour of soft tissues requires high quality in vivo experimental data to be obtained and analysed. This paper describes a novel laparoscopic grasper with two parallel jaws capable of obtaining compressive force-deformation data related to the mechanical behaviour of soft tissues. The new laparoscopic grasper comprises four sections: mechanical hardware, a sensory part, an electrical/electronic part, and a data storage part. Owing to the unique design of the mechanical hardware, data recording conditions are close to unconfined-compression-test conditions, so the obtained data can be properly used in extracting the mechanical behaviour of soft tissues. The other distinguishing feature of this new system is its applicability during different laparoscopic surgeries, and consequently the possibility of obtaining in vivo data. However, more preclinical examinations are needed to evaluate the practicality of the novel laparoscopic grasper with two parallel jaws.

  10. Flexbar 3.0 - SIMD and multicore parallelization.

    PubMed

    Roehr, Johannes T; Dieterich, Christoph; Reinert, Knut

    2017-09-15

    High-throughput sequencing machines can process many samples in a single run. For Illumina systems, sequencing reads are barcoded with an additional DNA tag that is contained in the respective sequencing adapters. The recognition of barcode and adapter sequences is hence commonly needed for the analysis of next-generation sequencing data. Flexbar performs demultiplexing based on barcodes and adapter trimming for such data. The massive amounts of data generated on modern sequencing machines demand that this preprocessing is done as efficiently as possible. We present Flexbar 3.0, the successor of the popular program Flexbar. It now employs twofold parallelism: multi-threading and, additionally, SIMD vectorization. Both types of parallelism are used to speed up the computation of pair-wise sequence alignments, which are used for the detection of barcodes and adapters. Furthermore, new features were included to cover a wide range of applications. We evaluated the performance of Flexbar based on a simulated sequencing dataset. Our program outcompetes other tools in terms of speed and is among the best tools in the presented quality benchmark. Availability: https://github.com/seqan/flexbar. Contact: johannes.roehr@fu-berlin.de or knut.reinert@fu-berlin.de. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  11. 3D multiphysics modeling of superconducting cavities with a massively parallel simulation suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kononenko, Oleksiy; Adolphsen, Chris; Li, Zenghai

    Radiofrequency cavities based on superconducting technology are widely used in particle accelerators for various applications. The cavities usually have high quality factors and hence narrow bandwidths, so the field stability is sensitive to detuning from the Lorentz force and external loads, including vibrations and helium pressure variations. If not properly controlled, the detuning can result in a serious performance degradation of a superconducting accelerator, so an understanding of the underlying detuning mechanisms can be very helpful. Recent advances in the simulation suite ace3p have enabled realistic multiphysics characterization of such complex accelerator systems on supercomputers. In this paper, we present the new capabilities in ace3p for large-scale 3D multiphysics modeling of superconducting cavities, in particular, a parallel eigensolver for determining mechanical resonances, a parallel harmonic response solver to calculate the response of a cavity to external vibrations, and a numerical procedure to decompose mechanical loads, such as from the Lorentz force or piezoactuators, into the corresponding mechanical modes. These capabilities have been used to do an extensive rf-mechanical analysis of dressed TESLA-type superconducting cavities. Furthermore, the simulation results and their implications for the operational stability of the Linac Coherent Light Source-II are discussed.

  12. Flow cytometry for enrichment and titration in massively parallel DNA sequencing

    PubMed Central

    Sandberg, Julia; Ståhl, Patrik L.; Ahmadian, Afshin; Bjursell, Magnus K.; Lundeberg, Joakim

    2009-01-01

    Massively parallel DNA sequencing is revolutionizing genomics research throughout the life sciences. However, the reagent costs and labor requirements in current sequencing protocols are still substantial, although improvements are continuously being made. Here, we demonstrate an effective alternative to existing sample titration protocols for the Roche/454 system using Fluorescence Activated Cell Sorting (FACS) technology to determine the optimal DNA-to-bead ratio prior to large-scale sequencing. Our method, which eliminates the need for the costly pilot sequencing of samples during titration, is capable of rapidly providing accurate DNA-to-bead ratios that are not biased by the quantification and sedimentation steps included in current protocols. Moreover, we demonstrate that FACS sorting can be readily used to highly enrich fractions of beads carrying template DNA, with near total elimination of empty beads and no downstream sacrifice of DNA sequencing quality. Automated enrichment by FACS is a simple approach to obtain pure samples for bead-based sequencing systems, and offers an efficient, low-cost alternative to current enrichment protocols. PMID:19304748

  13. Durham extremely large telescope adaptive optics simulation platform.

    PubMed

    Basden, Alastair; Butterley, Timothy; Myers, Richard; Wilson, Richard

    2007-03-01

    Adaptive optics systems are essential on all large telescopes for which image quality is important. These are complex systems with many design parameters requiring optimization before good performance can be achieved. The simulation of adaptive optics systems is therefore necessary to categorize the expected performance. We describe an adaptive optics simulation platform, developed at Durham University, which can be used to simulate adaptive optics systems on the largest proposed future extremely large telescopes as well as on current systems. This platform is modular, object oriented, and has the benefit of hardware application acceleration that can be used to improve the simulation performance, essential for ensuring that the run time of a given simulation is acceptable. The simulation platform described here can be highly parallelized using parallelization techniques suited for adaptive optics simulation, while still offering the user complete control while the simulation is running. The results from the simulation of a ground layer adaptive optics system are provided as an example to demonstrate the flexibility of this simulation platform.

  14. GPU accelerated particle visualization with Splotch

    NASA Astrophysics Data System (ADS)

    Rivi, M.; Gheller, C.; Dykes, T.; Krokos, M.; Dolag, K.

    2014-07-01

    Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are production of high quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in re-designing Splotch for exploiting emerging HPC architectures nowadays increasingly populated with GPUs. A performance model is introduced to guide our re-factoring of Splotch. A number of parallelization issues are discussed, in particular relating to race conditions and workload balancing, towards achieving optimal performances. Our implementation was accomplished by using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organization and classification of particles. We deploy a reference cosmological simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future work developments including possibilities for further optimizations and exploitation of hybrid systems and emerging accelerators.

  15. Million city traveling salesman problem solution by divide and conquer clustering with adaptive resonance neural networks.

    PubMed

    Mulder, Samuel A; Wunsch, Donald C

    2003-01-01

    The Traveling Salesman Problem (TSP) is a very hard optimization problem in the field of operations research. It has been shown to be NP-complete, and is an often-used benchmark for new optimization techniques. One of the main challenges with this problem is that standard, non-AI heuristic approaches such as the Lin-Kernighan algorithm (LK) and the chained LK variant are currently very effective and in wide use for the common fully connected, Euclidean variant that is considered here. This paper presents an algorithm that uses adaptive resonance theory (ART) in combination with a variation of the Lin-Kernighan local optimization algorithm to solve very large instances of the TSP. The primary advantage of this algorithm over traditional LK and chained-LK approaches is the increased scalability and parallelism allowed by the divide-and-conquer clustering paradigm. Tours obtained by the algorithm are lower quality, but scaling is much better and there is a high potential for increasing performance using parallel hardware.
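
    The divide-and-conquer structure itself is easy to sketch: cluster the cities, tour each cluster independently (the parallelizable part), then stitch the sub-tours together in the order of a tour over the cluster centroids. In the sketch below, k-means and a nearest-neighbour heuristic stand in for the paper's adaptive resonance clustering and Lin-Kernighan refinement, so it shows the paradigm only, not the published algorithm.

        # Divide-and-conquer TSP skeleton with stand-in components (k-means + nearest neighbour).
        import numpy as np
        from scipy.cluster.vq import kmeans2

        def nearest_neighbour_tour(points):
            """Greedy tour over a small point set, returned as an ordering of local indices."""
            unvisited = list(range(len(points)))
            tour = [unvisited.pop(0)]
            while unvisited:
                last = points[tour[-1]]
                nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
                unvisited.remove(nxt)
                tour.append(nxt)
            return tour

        rng = np.random.default_rng(2)
        cities = rng.random((2000, 2))
        k = 20
        centroids, labels = kmeans2(cities, k, minit='points')

        # the per-cluster tours are independent, which is where parallel hardware helps
        cluster_tours = {c: nearest_neighbour_tour(cities[labels == c])
                         for c in range(k) if np.any(labels == c)}

        full_tour = [np.flatnonzero(labels == c)[i]
                     for c in nearest_neighbour_tour(centroids) if c in cluster_tours
                     for i in cluster_tours[c]]
        print("cities in stitched tour:", len(full_tour))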

  16. The Debate on Learning Assessments in Developing Countries

    ERIC Educational Resources Information Center

    Wagner, Daniel A.; Lockheed, Marlaine; Mullis, Ina; Martin, Michael O.; Kanjee, Anil; Gove, Amber; Dowd, Amy Jo

    2012-01-01

    Over the past decade, international and national education agencies have begun to emphasize the improvement of the quality (rather than quantity) of education in developing countries. This trend has been paralleled by a significant increase in the use of educational assessments as a way to measure gains and losses in quality of learning. As…

  17. A Quantitative Quality Control Model for Parallel and Distributed Crowdsourcing Tasks

    ERIC Educational Resources Information Center

    Zhu, Shaojian

    2014-01-01

    Crowdsourcing is an emerging research area that has experienced rapid growth in the past few years. Although crowdsourcing has demonstrated its potential in numerous domains, several key challenges continue to hinder its application. One of the major challenges is quality control. How can crowdsourcing requesters effectively control the quality…

  18. When the lowest energy does not induce native structures: parallel minimization of multi-energy values by hybridizing searching intelligences.

    PubMed

    Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

    2012-01-01

    Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy happens to be found by the searching procedure, the correct protein structures are not guaranteed to be obtained. A general parallel metaheuristic approach is presented to tackle the above two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of the heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. This parallel approach combines various sources of both searching intelligences and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions which are usually derived from the domain expertise.

  19. When the Lowest Energy Does Not Induce Native Structures: Parallel Minimization of Multi-Energy Values by Hybridizing Searching Intelligences

    PubMed Central

    Lü, Qiang; Xia, Xiao-Yan; Chen, Rong; Miao, Da-Jun; Chen, Sha-Sha; Quan, Li-Jun; Li, Hai-Ou

    2012-01-01

    Background Protein structure prediction (PSP), which is usually modeled as a computational optimization problem, remains one of the biggest challenges in computational biology. PSP encounters two difficult obstacles: the inaccurate energy function problem and the searching problem. Even if the lowest energy happens to be found by the searching procedure, the correct protein structures are not guaranteed to be obtained. Results A general parallel metaheuristic approach is presented to tackle the above two problems. Multiple energy functions are employed to simultaneously guide the parallel searching threads. Searching trajectories are in fact controlled by the parameters of heuristic algorithms. The parallel approach allows the parameters to be perturbed while the searching threads are running in parallel, with each thread searching for the lowest energy value determined by an individual energy function. By hybridizing the intelligences of parallel ant colonies and Monte Carlo Metropolis search, this paper demonstrates an implementation of our parallel approach for PSP. Sixteen classical instances were tested to show that the parallel approach is competitive for solving the PSP problem. Conclusions This parallel approach combines various sources of both searching intelligences and energy functions, and thus predicts protein conformations with good quality jointly determined by all the parallel searching threads and energy functions. It provides a framework to combine different searching intelligences embedded in heuristic algorithms. It also constructs a container to hybridize different not-so-accurate objective functions which are usually derived from the domain expertise. PMID:23028708

  20. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, "CS+GRAPPA," to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets, reconstructed each subset using CS, and averaged the results to get a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
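
    The decomposition step that lets CS operate on a regular (equidistant) sampling pattern is simple to express; in the skeleton below, reconstruct_cs() is a trivial zero-filling stand-in for whatever CS solver is used, so the code illustrates only the split-reconstruct-average structure, not the paper's reconstruction.

        # Split equidistantly sampled phase-encode lines into random subsets, "reconstruct"
        # each subset, and average the k-space estimates. reconstruct_cs() is a placeholder.
        import numpy as np

        def reconstruct_cs(kspace, mask):
            """Stand-in for a CS solver: here just zero-fills the unsampled lines."""
            est = np.zeros_like(kspace)
            est[mask] = kspace[mask]
            return est

        def cs_from_equidistant(kspace, sampled_lines, n_subsets=2, seed=0):
            rng = np.random.default_rng(seed)
            lines = np.array(sampled_lines)
            rng.shuffle(lines)
            estimates = []
            for subset in np.array_split(lines, n_subsets):     # each subset is pseudo-random
                mask = np.zeros(kspace.shape[0], dtype=bool)
                mask[subset] = True
                estimates.append(reconstruct_cs(kspace, mask))
            return np.mean(estimates, axis=0)                   # averaged k-space estimate

        ky, kx = 256, 256
        rng = np.random.default_rng(1)
        kspace = rng.standard_normal((ky, kx)) + 1j * rng.standard_normal((ky, kx))
        sampled = np.arange(0, ky, 2)                           # equidistant undersampling, R = 2
        averaged = cs_from_equidistant(kspace, sampled, n_subsets=2)
        print(averaged.shape)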

  1. Multi-threaded parallel simulation of non-local non-linear problems in ultrashort laser pulse propagation in the presence of plasma

    NASA Astrophysics Data System (ADS)

    Baregheh, Mandana; Mezentsev, Vladimir; Schmitz, Holger

    2011-06-01

    We describe a parallel multi-threaded approach for high-performance modelling of a wide class of phenomena in ultrafast nonlinear optics. A specific implementation has been realized using the highly parallel capabilities of a programmable graphics processor.

  2. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    NASA Astrophysics Data System (ADS)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including depth perception. However, producing traditional computer-generated holograms (CGHs) often takes a long computation time, even without complex and photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images, and its high degree of parallelism noticeably reduces the computation time. Here, a high-efficiency photorealistic computer-generated hologram method based on the ray-tracing technique is presented. Rays are launched and traced in parallel under different illumination conditions and scene configurations. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point-cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.

  3. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    PubMed

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations, the so-called Parallel Expectation-Maximization PCA (PEM-PCA) architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems; that is, speed-ups of over nine and three times over PCA and parallel PCA, respectively.
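
    For reference, the expectation-maximization route to the leading principal components (which avoids forming and eigendecomposing the full covariance matrix) can be written compactly. This NumPy sketch follows the well-known EM-PCA iteration in its generic form; it is not the paper's parallel architecture, and the data layout is an assumption.

        import numpy as np

        def em_pca(Y, k, n_iter=100, seed=0):
            """EM iteration for the k leading principal components of data Y
            (features x samples, assumed zero-mean). Avoids the explicit
            covariance matrix and full eigenvalue decomposition."""
            rng = np.random.default_rng(seed)
            d, n = Y.shape
            W = rng.standard_normal((d, k))               # initial loading matrix
            for _ in range(n_iter):
                X = np.linalg.solve(W.T @ W, W.T @ Y)     # E-step: latent coordinates
                W = Y @ X.T @ np.linalg.inv(X @ X.T)      # M-step: update loadings
            Q, _ = np.linalg.qr(W)                        # orthonormal subspace basis
            return Q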

  4. A Fully GPU-Based Ray-Driven Backprojector via a Ray-Culling Scheme with Voxel-Level Parallelization for Cone-Beam CT Reconstruction.

    PubMed

    Park, Hyeong-Gyu; Shin, Yeong-Gil; Lee, Ho

    2015-12-01

    A ray-driven backprojector is based on ray-tracing, which computes the length of the intersection between the ray paths and each voxel to be reconstructed. To reduce the computational burden caused by these exhaustive intersection tests, we propose a fully graphics processing unit (GPU)-based ray-driven backprojector in conjunction with a ray-culling scheme that enables straightforward parallelization without compromising the high computing performance of a GPU. The purpose of the ray-culling scheme is to reduce the number of ray-voxel intersection tests by excluding rays irrelevant to a specific voxel computation. This rejection step is based on an axis-aligned bounding box (AABB) enclosing a region of voxel projection, where eight vertices of each voxel are projected onto the detector plane. The range of the rectangular-shaped AABB is determined by min/max operations on the coordinates in the region. Using the indices of pixels inside the AABB, the rays passing through the voxel can be identified and the voxel is weighted as the length of intersection between the voxel and the ray. This procedure makes it possible to reflect voxel-level parallelization, allowing an independent calculation at each voxel, which is feasible for a GPU implementation. To eliminate redundant calculations during ray-culling, a shared-memory optimization is applied to exploit the GPU memory hierarchy. In experimental results using real measurement data with phantoms, the proposed GPU-based ray-culling scheme reconstructed a volume of resolution 280 × 280 × 176 in 77 seconds from 680 projections of resolution 1024 × 768, which is 26 times and 7.5 times faster than standard CPU-based and GPU-based ray-driven backprojectors, respectively. Qualitative and quantitative analyses showed that the ray-driven backprojector provides high-quality reconstruction images when compared with those generated by the Feldkamp-Davis-Kress algorithm using a pixel-driven backprojector, with an average of 2.5 times higher contrast-to-noise ratio, 1.04 times higher universal quality index, and 1.39 times higher normalized mutual information. © The Author(s) 2014.
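
    The ray-culling idea—project the eight voxel corners onto the detector, take the min/max of the pixel coordinates to form an axis-aligned bounding box, and only run the exact intersection test for the rays whose detector pixels fall inside it—can be sketched on the CPU as follows. The projection callback and detector layout are illustrative assumptions; this is not the authors' GPU kernel.

        import numpy as np

        def voxel_aabb_on_detector(corners, project):
            """Project the 8 voxel corners (shape 8x3) onto the detector and
            return the axis-aligned bounding box of their pixel coordinates."""
            uv = np.array([project(c) for c in corners])        # 8 x 2 pixel coords
            umin, vmin = np.floor(uv.min(axis=0)).astype(int)
            umax, vmax = np.ceil(uv.max(axis=0)).astype(int)
            return umin, umax, vmin, vmax

        def rays_for_voxel(corners, project, det_shape):
            """Ray culling: only detector pixels inside the AABB can intersect
            the voxel, so only those rays need the exact intersection test."""
            umin, umax, vmin, vmax = voxel_aabb_on_detector(corners, project)
            umin, vmin = max(umin, 0), max(vmin, 0)
            umax, vmax = min(umax, det_shape[0] - 1), min(vmax, det_shape[1] - 1)
            return [(u, v) for u in range(umin, umax + 1)
                           for v in range(vmin, vmax + 1)]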

  5. Mobile GPU-based implementation of automatic analysis method for long-term ECG.

    PubMed

    Fan, Xiaomao; Yao, Qihang; Li, Ye; Chen, Runge; Cai, Yunpeng

    2018-05-03

    Long-term electrocardiogram (ECG) is one of the important diagnostic assistant approaches in capturing intermittent cardiac arrhythmias. The combination of miniaturized wearable holters and healthcare platforms enables people to have their cardiac condition monitored at home. The high computational burden created by concurrent processing of numerous holter data poses a serious challenge to the healthcare platform. An alternative solution is to shift the analysis tasks from healthcare platforms to the mobile computing devices. However, long-term ECG data processing is quite time consuming due to the limited computation power of the mobile central processing unit (CPU). This paper aimed to propose a novel parallel automatic ECG analysis algorithm which exploited the mobile graphics processing unit (GPU) to reduce the response time for processing long-term ECG data. By studying the architecture of the sequential automatic ECG analysis algorithm, we parallelized the time-consuming parts and reorganized the entire pipeline in the parallel algorithm to fully utilize the heterogeneous computing resources of CPU and GPU. The experimental results showed that the average executing time of the proposed algorithm on a clinical long-term ECG dataset (duration 23.0 ± 1.0 h per signal) is 1.215 ± 0.140 s, which achieved an average speedup of 5.81 ± 0.39× without compromising analysis accuracy, compared with the sequential algorithm. Meanwhile, the battery energy consumption of the automatic ECG analysis algorithm was reduced by 64.16%. Excluding energy consumption from data loading, 79.44% of the energy consumption could be saved, which alleviated the problem of limited battery working hours for mobile devices. The reduction of response time and battery energy consumption in ECG analysis not only brings a better quality of experience to holter users, but also makes it possible to use mobile devices as ECG terminals for healthcare professionals such as physicians and health advisers, enabling them to inspect patient ECG recordings onsite efficiently without the need for a high-quality wide-area network environment.

  6. The ALICE data quality monitoring system

    NASA Astrophysics Data System (ADS)

    von Haller, B.; Telesca, A.; Chapeland, S.; Carena, F.; Carena, W.; Chibante Barroso, V.; Costa, F.; Denes, E.; Divià, R.; Fuchs, U.; Simonetti, G.; Soós, C.; Vande Vyvre, P.; ALICE Collaboration

    2011-12-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) is a key element of the Data Acquisition's software chain. It provides shifters with precise and complete information to quickly identify and overcome problems, and as a consequence to ensure acquisition of high quality data. DQM typically involves the online gathering of monitored data, their analysis by user-defined algorithms and their visualization. This paper describes the final design of ALICE's DQM framework, called AMORE (Automatic MOnitoRing Environment), as well as its latest and upcoming features, such as the integration with the offline analysis and reconstruction framework, better use of multi-core processors through a parallelization effort, and its interface with the eLogBook. The concurrent collection and analysis of data in an online environment requires the framework to be highly efficient, robust and scalable. We describe what has been implemented to achieve these goals and the procedures we follow to ensure appropriate robustness and performance. We then review the wide range of uses people make of this framework, from the basic monitoring of a single sub-detector to the most complex ones within the High Level Trigger farm or using the Prompt Reconstruction, and we describe the various ways of accessing the monitoring results. We conclude with our experience, before and after the LHC startup, of monitoring data quality in a challenging environment.

  7. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  8. DNA extraction for streamlined metagenomics of diverse environmental samples.

    PubMed

    Marotz, Clarisse; Amir, Amnon; Humphrey, Greg; Gaffney, James; Gogul, Grant; Knight, Rob

    2017-06-01

    A major bottleneck for metagenomic sequencing is rapid and efficient DNA extraction. Here, we compare the extraction efficiencies of three magnetic bead-based platforms (KingFisher, epMotion, and Tecan) to a standardized column-based extraction platform across a variety of sample types, including feces, oral, skin, soil, and water. Replicate sample plates were extracted and prepared for 16S rRNA gene amplicon sequencing in parallel to assess extraction bias and DNA quality. The data demonstrate that any effect of extraction method on sequencing results was small compared with the variability across samples; however, the KingFisher platform produced the largest number of high-quality reads in the shortest amount of time. Based on these results, we have identified an extraction pipeline that dramatically reduces sample processing time without sacrificing bacterial taxonomic or abundance information.

  9. Customer care in the NHS.

    PubMed

    Ruddick, Fred

    2015-01-20

    Viewing individuals in need of NHS care as customers has the potential to refocus the way their care is delivered. This article highlights some of the benefits of reframing the nurse-patient relationship in terms of customer care, and draws parallels between good customer care and the provision of high quality patient care in the NHS. It explores lessons to be learned from those who have studied the customer experience, which can be adapted to enhance the customer care experience within the health service. Developing professional expertise in the knowledge and skills that underpin good-quality interpersonal encounters is essential to improve the customer experience in health care and should be prioritised alongside the development of more technical skills. Creating a culture where emotional intelligence, caring and compassion are essential requirements for all nursing staff will improve patient satisfaction.

  10. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.

  11. Comparison of cavity preparation quality using an electric motor handpiece and an air turbine dental handpiece.

    PubMed

    Kenyon, Brian J; Van Zyl, Ian; Louie, Kenneth G

    2005-08-01

    The high-speed high-torque (electric motor) handpiece is becoming more popular in dental offices and laboratories in the United States. It is reported to cut more precisely and to assist in the creation of finer margins that enhance cavity preparations. The authors conducted an in vitro study to compare the quality of cavity preparations fabricated with a high-speed high-torque (electric motor) handpiece and a high-speed low-torque (air turbine) handpiece. Eighty-six dental students each cut two Class I preparations, one with an air turbine handpiece and the other with an electric motor high-speed handpiece. The authors asked the students to cut each preparation accurately to a circular outline and to establish a flat pulpal floor with 1.5 millimeters' depth, 90-degree exit angles, parallel vertical walls and sharp internal line angles, as well as to refine the preparation to achieve flat, smooth walls with a well-defined cavosurface margin. A single faculty member scored the preparations for criteria and refinement using a nine-point scale (range, 1-9). The authors analyzed the data statistically using paired t tests. In preparation criteria, the electric motor high-speed handpiece had a higher average grade than did the air turbine handpiece (5.07 and 4.90, respectively). For refinement, the average grade for the air turbine high-speed handpiece was greater than that for the electric motor high-speed handpiece (5.72 and 5.52, respectively). The differences were not statistically significant. The electric motor high-speed handpiece performed as well as, but not better than, the air turbine handpiece in the fabrication of high-quality cavity preparations.

  12. A Semi-flexible 64-channel Receive-only Phased Array for Pediatric Body MRI at 3T

    PubMed Central

    Zhang, Tao; Grafendorfer, Thomas; Cheng, Joseph Y.; Ning, Peigang; Rainey, Bob; Giancola, Mark; Ortman, Sarah; Robb, Fraser J.; Calderon, Paul D.; Hargreaves, Brian A.; Lustig, Michael; Scott, Greig C.; Pauly, John M.; Vasanawala, Shreyas S.

    2015-01-01

    Purpose To design, construct, and validate a semi-flexible 64-channel receive-only phased array for pediatric body MRI at 3T. Methods A 64-channel receive-only phased array was developed and constructed. The designed flexible coil can easily conform to different patient sizes with non-overlapping coil elements in the transverse plane. It can cover a field of view of up to 44 × 28 cm2 and removes the need for coil repositioning for body MRI patients with multiple clinical concerns. The 64-channel coil was compared with a 32-channel standard coil for signal-to-noise ratio (SNR) and parallel imaging performances on different phantoms. With IRB approval and informed consent/assent, the designed coil was validated on 21 consecutive pediatric patients. Results The pediatric coil provided higher SNR than the standard coil on different phantoms, with the averaged SNR gain at least 23% over a depth of 7 cm along the cross-section of phantoms. It also achieved better parallel imaging performance under moderate acceleration factors. Good image quality (average score 4.6 out of 5) was achieved using the developed pediatric coil in the clinical studies. Conclusion A 64-channel semi-flexible receive-only phased array has been developed and validated to facilitate high quality pediatric body MRI at 3T. PMID:26418283

  13. Parallel experimental design and multivariate analysis provides efficient screening of cell culture media supplements to improve biosimilar product quality.

    PubMed

    Brühlmann, David; Sokolov, Michael; Butté, Alessandro; Sauer, Markus; Hemberger, Jürgen; Souquet, Jonathan; Broly, Hervé; Jordan, Martin

    2017-07-01

    Rational and high-throughput optimization of mammalian cell culture media has a great potential to modulate recombinant protein product quality. We present a process design method based on parallel design-of-experiment (DoE) of CHO fed-batch cultures in 96-deepwell plates to modulate monoclonal antibody (mAb) glycosylation using medium supplements. To reduce the risk of losing valuable information in an intricate joint screening, 17 compounds were separated into five different groups, considering their mode of biological action. The concentration ranges of the medium supplements were defined according to information encountered in the literature and in-house experience. The screening experiments produced wide glycosylation pattern ranges. Multivariate analysis including principal component analysis and decision trees was used to select the best performing glycosylation modulators. Subsequent D-optimal quadratic design with four factors (three promising compounds and temperature shift) in shake tubes confirmed the outcome of the selection process and provided a solid basis for sequential process development at a larger scale. The glycosylation profile with respect to the specifications for biosimilarity was greatly improved in shake tube experiments: 75% of the conditions were equally close or closer to the specifications for biosimilarity than the best 25% in 96-deepwell plates. Biotechnol. Bioeng. 2017;114: 1448-1458. © 2017 Wiley Periodicals, Inc.

  14. Parallel Processing of Large Scale Microphone Arrays for Sound Capture

    NASA Astrophysics Data System (ADS)

    Jan, Ea-Ee.

    1995-01-01

    Performance of microphone sound pickup is degraded by deleterious properties of the acoustic environment, such as multipath distortion (reverberation) and ambient noise. The degradation becomes more prominent in a teleconferencing environment in which the microphone is positioned far away from the speaker. Moreover, the ideal teleconference should feel as easy and natural as face-to-face communication with another person. This suggests hands-free sound capture with no tether or encumbrance by hand-held or body-worn sound equipment. Microphone arrays for this application represent an appropriate approach. This research develops new microphone array and signal processing techniques for high quality hands-free sound capture in noisy, reverberant enclosures. The new techniques combine matched-filtering of individual sensors and parallel processing to provide acute spatial volume selectivity which is capable of mitigating the deleterious effects of noise interference and multipath distortion. The new method outperforms traditional delay-and-sum beamformers which provide only directional spatial selectivity. The research additionally explores truncated matched-filtering and random distribution of transducers to reduce complexity and improve sound capture quality. All designs are first established by computer simulation of array performance in reverberant enclosures. The simulation is achieved by a room model which can efficiently calculate the acoustic multipath in a rectangular enclosure up to a prescribed order of images. It also calculates the incident angle of the arriving signal. Experimental arrays were constructed and their performance was measured in real rooms. Real room data were collected in a hard-walled laboratory and a controllable variable acoustics enclosure of similar size, approximately 6 x 6 x 3 m. An extensive speech database was also collected in these two enclosures for future research on microphone arrays. The simulation results are shown to be consistent with the real room data. Localization of sound sources has been explored using cross-power spectrum time delay estimation and has been evaluated using real room data under slightly, moderately and highly reverberant conditions. To improve the accuracy and reliability of the source localization, an outlier detector that removes incorrect time delay estimates has been developed. To provide speaker selectivity for microphone array systems, a hands-free speaker identification system has been studied. A recently developed feature using selected spectrum information outperforms traditional recognition methods. Measured results demonstrate the capabilities of speaker selectivity from a matched-filtered array. In addition, simulation utilities, including matched-filtering processing of the array and hands-free speaker identification, have been implemented on the massively parallel nCube supercomputer. This parallel computation highlights the requirements for real-time processing of array signals.
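
    As a point of reference for the comparison with matched-filter processing, a conventional delay-and-sum beamformer simply time-aligns the microphone signals toward an assumed source position and averages them. The NumPy sketch below is the generic textbook formulation only, not the array processing developed in this work; the geometry, sampling rate and sound speed are illustrative assumptions.

        import numpy as np

        def delay_and_sum(signals, mic_pos, src_pos, fs, c=343.0):
            """Conventional delay-and-sum beamformer.
            signals: (n_mics, n_samples), mic_pos: (n_mics, 3), src_pos: (3,)."""
            dists = np.linalg.norm(mic_pos - src_pos, axis=1)
            delays = (dists - dists.min()) / c           # relative propagation delays
            shifts = np.round(delays * fs).astype(int)   # integer-sample alignment
            n = signals.shape[1] - shifts.max()
            aligned = np.stack([s[d:d + n] for s, d in zip(signals, shifts)])
            return aligned.mean(axis=0)                  # spatially selective output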

  15. Rotating single-shot acquisition (RoSA) with composite reconstruction for fast high-resolution diffusion imaging.

    PubMed

    Wen, Qiuting; Kodiweera, Chandana; Dale, Brian M; Shivraman, Giri; Wu, Yu-Chien

    2018-01-01

    To accelerate high-resolution diffusion imaging, rotating single-shot acquisition (RoSA) with composite reconstruction is proposed. Acceleration was achieved by acquiring only one rotating single-shot blade per diffusion direction, and high-resolution diffusion-weighted (DW) images were reconstructed by using similarities of neighboring DW images. A parallel imaging technique was implemented in RoSA to further improve the image quality and acquisition speed. RoSA performance was evaluated by simulation and human experiments. A brain tensor phantom was developed to determine an optimal blade size and rotation angle by considering similarity in DW images, off-resonance effects, and k-space coverage. With the optimal parameters, a RoSA MR pulse sequence and reconstruction algorithm were developed to acquire human brain data. For comparison, multishot echo planar imaging (EPI) and conventional single-shot EPI sequences were performed with matched scan time, resolution, field of view, and diffusion directions. The simulation indicated an optimal blade size of 48 × 256 and a 30° rotation angle. For 1 × 1 mm² in-plane resolution, RoSA was 12 times faster than the multishot acquisition with comparable image quality. With the same acquisition time as SS-EPI, RoSA provided superior image quality and minimum geometric distortion. RoSA offers fast, high-quality, high-resolution diffusion images. The composite image reconstruction is model-free and compatible with various diffusion computation approaches including parametric and nonparametric analyses. Magn Reson Med 79:264-275, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  16. Solution-processed parallel tandem polymer solar cells using silver nanowires as intermediate electrode.

    PubMed

    Guo, Fei; Kubis, Peter; Li, Ning; Przybilla, Thomas; Matt, Gebhard; Stubhan, Tobias; Ameri, Tayebeh; Butz, Benjamin; Spiecker, Erdmann; Forberich, Karen; Brabec, Christoph J

    2014-12-23

    Tandem architecture is the most relevant concept to overcome the efficiency limit of single-junction photovoltaic solar cells. Series-connected tandem polymer solar cells (PSCs) have advanced rapidly during the past decade. In contrast, the development of parallel-connected tandem cells is lagging far behind due to the big challenge in establishing an efficient interlayer with high transparency and high in-plane conductivity. Here, we report all-solution fabrication of parallel tandem PSCs using silver nanowires as intermediate charge collecting electrode. Through a rational interface design, a robust interlayer is established, enabling the efficient extraction and transport of electrons from subcells. The resulting parallel tandem cells exhibit high fill factors of ∼60% and enhanced current densities which are identical to the sum of the current densities of the subcells. These results suggest that solution-processed parallel tandem configuration provides an alternative avenue toward high performance photovoltaic devices.

  17. Seismic imaging using finite-differences and parallel computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ober, C.C.

    1997-12-31

    A key to reducing the risks and costs associated with oil and gas exploration is the fast, accurate imaging of complex geologies, such as salt domes in the Gulf of Mexico and overthrust regions in US onshore regions. Prestack depth migration generally yields the most accurate images, and one approach to this is to solve the scalar wave equation using finite differences. As part of an ongoing ACTI project funded by the US Department of Energy, a finite difference, 3-D prestack, depth migration code has been developed. The goal of this work is to demonstrate that massively parallel computers can be used efficiently for seismic imaging, and that sufficient computing power exists (or soon will exist) to make finite difference, prestack, depth migration practical for oil and gas exploration. Several problems had to be addressed to get an efficient code for the Intel Paragon. These include efficient I/O, efficient parallel tridiagonal solves, and high single-node performance. Furthermore, to provide portable code, the author has been restricted to the use of high-level programming languages (C and Fortran) and interprocessor communications using MPI. He has been using the SUNMOS operating system, which has affected many of his programming decisions. He will present images created from two verification datasets (the Marmousi Model and the SEG/EAEG 3D Salt Model). Also, he will show recent images from real datasets, and point out locations of improved imaging. Finally, he will discuss areas of current research which will hopefully improve the image quality and reduce computational costs.

  18. Equilibrium properties of superconducting niobium at high magnetic fields: A possible existence of a filamentary state in type-II superconductors

    DOE PAGES

    Kozhevnikov, V.; Valente-Feliciano, A. -M.; Curran, P. J.; ...

    2017-05-17

    The standard interpretation of the phase diagram of type-II superconductors was developed in the 1960s and has since been considered a well-established part of classical superconductivity. However, upon closer examination a number of fundamental issues arise that lead one to question this standard picture. To address these issues we studied equilibrium properties of niobium samples near and above the upper critical field H c2 in parallel and perpendicular magnetic fields. The samples investigated were very high quality films and single-crystal disks with the Ginzburg-Landau parameters 0.8 and 1.3, respectively. A range of complementary measurements has been performed, which include dc magnetometry, electrical transport, muon spin rotation spectroscopy, and scanning Hall-probe microscopy. Contrary to the standard scenario, we observed that a superconducting phase is present in the sample bulk above H c2 and the field H c3 is the same in both parallel and perpendicular fields. Our findings suggest that above H c2 the superconducting phase forms filaments parallel to the field regardless of the field orientation. Near H c2 the filaments preserve the hexagonal structure of the preceding vortex lattice of the mixed state, and the filament density continuously falls to zero at H c3. Finally, our paper has important implications for the correct interpretation of the properties of type-II superconductors and can be essential for practical applications of these materials.

  19. Evidence regarding lingual fixed orthodontic appliances' therapeutic and adverse effects is insufficient.

    PubMed

    Afrashtehfar, Kelvin I

    2016-06-01

    Data sources: Medline, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Cochrane Central Register of Controlled Trials, Virtual Health Library and Web of Science were systematically searched up to July 2015 without limitations. Scopus, Google Scholar, ClinicalTrials.gov, the ISRCTN registry as well as reference lists of the trials included and relevant reviews were manually searched. Study selection: Randomised (RCTs) and prospective non-randomised clinical trials (non-RCTs) on human patients that compared therapeutic and adverse effects of lingual and labial appliances were considered. One reviewer initially screened titles and subsequently two reviewers independently screened the selected abstracts and full texts. Data extraction and synthesis: The data were extracted independently by the reviewers. Missing or unclear information, ongoing trials and raw data from split-mouth trials were requested from the authors of the trials. The quality of the included trials and potential bias across studies were assessed using Cochrane's risk of bias tool and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. For parallel trials, the mean difference (MD) and the relative risk (RR) were used for continuous (objective speech performance, subjective speech performance, intercanine width, intermolar width and sagittal anchorage loss) and binary outcomes (eating difficulty), respectively. The standardised mean difference (SMD) was chosen to pool, after conversion, the outcome (oral discomfort) that was assessed on both binary and continuous scales. Random-effects meta-analyses were conducted, followed by subgroup and sensitivity analyses. Results: Thirteen papers pertaining to 11 clinical trials (three parallel RCTs, one split-mouth RCT and seven parallel prospective non-RCTs) were included with a total of 407 (34% male/66% female) patients. All trials had at least one bias domain at high risk of bias. Compared with labial appliances, lingual appliances were associated with increased overall oral discomfort, increased speech impediment (measured using auditory analysis), worse speech performance assessed by laypersons, increased eating difficulty and decreased intermolar width. On the other hand, lingual appliances were associated with increased intercanine width and significantly decreased anchorage loss of the maxillary first molar during space closure. However, the quality of all analyses included was judged as very low because of the high risk of bias of the included trials, inconsistency and imprecision. Conclusions: Based on existing trials there is insufficient evidence to make robust recommendations for lingual fixed orthodontic appliances regarding their therapeutic or adverse effects, as the quality of evidence was low.

  20. An Anthropologist's Reflections on Defining Quality in Education Research

    ERIC Educational Resources Information Center

    Tobin, Joseph

    2007-01-01

    In the USA there is a contemporary discourse of crisis about the state of education and a parallel discourse that lays a large portion of the blame onto the poor quality of educational research. The solution offered is "scientific research." This article presents critiques of the core assumptions of the scientific research as secure argument.…

  1. Teaching RLC Parallel Circuits in High-School Physics Class

    ERIC Educational Resources Information Center

    Simon, Alpár

    2015-01-01

    This paper will try to give an alternative treatment of the subject "parallel RLC circuits" and "resonance in parallel RLC circuits" from the Physics curricula for the XIth grade from Romanian high-schools, with an emphasis on practical type circuits and their possible applications, and intends to be an aid for both Physics…

  2. Noise-immune multisensor transduction of speech

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vishu R.; Henry, Claudia M.; Derr, Alan G.; Roucos, Salim; Schwartz, Richard M.

    1986-08-01

    Two types of configurations of multiple sensors were developed, tested and evaluated in speech recognition application for robust performance in high levels of acoustic background noise: One type combines the individual sensor signals to provide a single speech signal input, and the other provides several parallel inputs. For single-input systems, several configurations of multiple sensors were developed and tested. Results from formal speech intelligibility and quality tests in simulated fighter aircraft cockpit noise show that each of the two-sensor configurations tested outperforms the constituent individual sensors in high noise. Also presented are results comparing the performance of two-sensor configurations and individual sensors in speaker-dependent, isolated-word speech recognition tests performed using a commercial recognizer (Verbex 4000) in simulated fighter aircraft cockpit noise.

  3. Investigation of multichannel phased array performance for fetal MR imaging on 1.5T clinical MR system

    PubMed Central

    Li, Ye; Pang, Yong; Vigneron, Daniel; Glenn, Orit; Xu, Duan; Zhang, Xiaoliang

    2011-01-01

    Fetal MRI on 1.5T clinical scanners has increasingly become a powerful imaging tool for studying fetal brain abnormalities in vivo. Due to the limited availability of dedicated fetal phased arrays, commercial torso or cardiac phased arrays are routinely used for fetal scans; with their small number of coil elements, these are unable to provide optimized SNR and parallel imaging performance, and they offer insufficient coverage and filling factor. This creates a demand for the investigation and development of dedicated and efficient radiofrequency (RF) hardware to improve fetal imaging. In this work, an investigational approach to simulate the performance of multichannel flexible phased arrays is proposed to find a better solution for fetal MR imaging. A 32 channel fetal array is presented to increase coil sensitivity, coverage and parallel imaging performance. The electromagnetic field distribution of each element of the fetal array is numerically simulated by using the finite-difference time-domain (FDTD) method. The array performance, including B1 coverage, parallel reconstructed images and artifact power, is then theoretically calculated and compared with the torso array. Study results show that the proposed array is capable of increasing B1 field strength as well as sensitivity homogeneity in the entire area of the uterus. This would ensure high quality imaging regardless of the location of the fetus in the uterus. In addition, the parallel imaging performance of the proposed fetal array is validated by using artifact power comparison with the torso array. These results demonstrate the feasibility of the 32 channel flexible array for fetal MR imaging at 1.5T. PMID:22408747

  4. Lossless data compression for improving the performance of a GPU-based beamformer.

    PubMed

    Lok, U-Wai; Fan, Gang-Wei; Li, Pai-Chi

    2015-04-01

    The powerful parallel computation ability of a graphics processing unit (GPU) makes it feasible to perform dynamic receive beamforming. However, a real-time GPU-based beamformer requires a high data rate to transfer radio-frequency (RF) data from hardware to software memory, as well as from central processing unit (CPU) to GPU memory. There are data compression methods (e.g. Joint Photographic Experts Group (JPEG)) available for the hardware front end to reduce data size, alleviating the data transfer requirement of the hardware interface. Nevertheless, the required decoding time may even be larger than the transmission time of the original data, in turn degrading the overall performance of the GPU-based beamformer. This article proposes and implements a lossless compression-decompression algorithm which enables parallel compression and decompression of data. By this means, the data transfer requirement of the hardware interface and the transmission time of CPU to GPU data transfers are reduced without sacrificing image quality. In simulation results, the compression ratio reached around 1.7. The encoder design of our lossless compression approach requires low hardware resources and reasonable latency in a field programmable gate array. In addition, the transmission time of transferring data from CPU to GPU with the parallel decoding process improved threefold, as compared with transferring original uncompressed data. These results show that our proposed lossless compression plus parallel decoder approach not only mitigates the transmission bandwidth requirement for transferring data from the hardware front end to the software system but also reduces the transmission time for CPU to GPU data transfer. © The Author(s) 2014.
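
    The general idea behind making lossless compression amenable to parallel decoding is to compress the data stream in independent chunks, so each chunk can be decoded concurrently. The Python sketch below uses zlib per chunk purely as an illustration of that structure; the actual algorithm in the paper is a custom FPGA-oriented encoder and is not reproduced here.

        import zlib
        from concurrent.futures import ThreadPoolExecutor

        CHUNK = 1 << 16  # 64 KiB chunks; independent, so they decode in parallel

        def compress_chunks(data: bytes):
            """Compress each chunk independently (costs some ratio, gains parallelism)."""
            return [zlib.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

        def decompress_parallel(chunks, workers=4):
            """Decode all chunks concurrently and reassemble the original stream."""
            with ThreadPoolExecutor(max_workers=workers) as ex:
                return b"".join(ex.map(zlib.decompress, chunks))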

  5. Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu

    Predicting the 3-dimensional structure of a protein has been a major interest in modern computational biology. While many successful methods can generate models with 3-5 Å root-mean-square deviation (RMSD) from the solution, progress in refining these models has been quite slow. It is therefore urgently necessary to develop effective methods to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, Structure-Based Model (SBM) and Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. The effectiveness of these methods is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in the MD simulation of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, in the refinement test of two CASP10 targets using the PCST-EBM method, it is indicated that EBM may bring the initial model to even higher-quality levels. Furthermore, a multi-round refinement protocol of PCST-SBM improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results justify the crucial position of enhanced sampling in protein structure prediction and demonstrate that a considerable improvement of low-accuracy structures is still achievable with current force fields.

  6. S-HARP: A parallel dynamic spectral partitioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Simon, H.

    1998-01-01

    Computational science problems with adaptive meshes involve dynamic load balancing when implemented on parallel machines. This dynamic load balancing requires fast partitioning of computational meshes at run time. The authors present in this report a fast parallel dynamic partitioner, called S-HARP. The underlying principles of S-HARP are the fast feature of inertial partitioning and the quality feature of spectral partitioning. S-HARP partitions a graph from scratch, requiring no partition information from previous iterations. Two types of parallelism have been exploited in S-HARP: fine grain loop-level parallelism and coarse grain recursive parallelism. The parallel partitioner has been implemented in the Message Passing Interface on the Cray T3E and the IBM SP2 for portability. Experimental results indicate that S-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.2 seconds on a 64-processor Cray T3E. S-HARP is much more scalable than other dynamic partitioners, giving over 15-fold speedup on 64 processors while ParaMeTiS 1.0 gives only a few-fold speedup. Experimental results demonstrate that S-HARP is three to 10 times faster than the dynamic partitioners ParaMeTiS and Jostle on six computational meshes of size over 100,000 vertices.
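
    The "fast feature of inertial partitioning" referred to above amounts to projecting the mesh vertices onto their principal inertia axis and splitting at the median. The NumPy sketch below shows that single bisection step in its generic textbook form; it is not the S-HARP code, and the recursion, spectral refinement and parallelization are omitted.

        import numpy as np

        def inertial_bisect(coords):
            """Split vertices (n x d coordinate array) into two halves by the
            median of their projection onto the principal inertia axis."""
            centered = coords - coords.mean(axis=0)
            # Principal axis = eigenvector of the largest eigenvalue of the
            # coordinate covariance (the inertia tensor up to sign/scale).
            cov = centered.T @ centered
            eigvals, eigvecs = np.linalg.eigh(cov)
            axis = eigvecs[:, -1]
            proj = centered @ axis
            cut = np.median(proj)
            left = np.flatnonzero(proj <= cut)
            right = np.flatnonzero(proj > cut)
            return left, right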

  7. Advanced Capacitor with SiC for High Temperature Applications

    NASA Astrophysics Data System (ADS)

    Tsao, B. H.; Ramalingam, M. L.; Bhattacharya, R. S.; Carr, Sandra Fries

    1994-07-01

    An advanced capacitor using SiC as the dielectric material has been developed for high-temperature, high-power, and high-density electronic components for aircraft and aerospace applications. The conventional capacitor consists of a large number of metallized polysulfone films that are arranged in parallel and enclosed in a sealed metal case. However, Air Force suppliers experienced problems with electrical failure, thermal failure, and dielectric flow at the component and subsystem level because the dielectric material lacks suitable properties. The high electrical breakdown field, high thermal conductivity, and high-temperature operational resistance of SiC, compared to the corresponding properties of conventional ceramic and polymer capacitors, make it a better choice for a high-temperature, high-power capacitor. The quality of the SiC film was evaluated. The electrical parameters, such as the capacitance, dissipation factor, equivalent series resistance, and dielectric withstand voltage, were evaluated. Prototype capacitors are currently being fabricated using SiC film.

  8. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  9. Lasers for industrial production processing: tailored tools with increasing flexibility

    NASA Astrophysics Data System (ADS)

    Rath, Wolfram

    2012-03-01

    High-power fiber lasers are the newest generation of diode-pumped solid-state lasers. Due to their all-fiber design they are compact, efficient and robust. Rofin's fiber lasers are available with the highest beam qualities, but the use of different process fiber core sizes additionally enables the user to adapt the beam quality, focus size and Rayleigh length to the requirements of the task for best processing results. Multi-mode fibers from 50 μm to 600 μm with corresponding beam qualities of 2.5 mm·mrad to 25 mm·mrad are typically used. The integrated beam switching modules can make the laser power available to 4 different manufacturing systems or can share the power between two processing heads for parallel processing. CO2 slab lasers also combine high power with either "single-mode" beam quality or higher-order modes. The well-established technique is in use for a large number of industrial applications, processing either metals or non-metallic materials. For many of these applications CO2 lasers remain the best choice of possible laser sources, driven either by the specific requirements of the application or by the cost structure of the application. The actual technical properties of these lasers will be presented, including an overview of the wavelength-driven differences in application results and examples of current industrial practice such as cutting, welding and surface processing, including the flexible use of scanners and classical optics processing heads.

  10. Development of a Premium Quality Plasma-derived IVIg (IQYMUNE®) Utilizing the Principles of Quality by Design-A Worked-through Case Study.

    PubMed

    Paolantonacci, Philippe; Appourchaux, Philippe; Claudel, Béatrice; Ollivier, Monique; Dennett, Richard; Siret, Laurent

    2018-01-01

    Polyvalent human normal immunoglobulins for intravenous use (IVIg), indicated for rare and often severe diseases, are complex plasma-derived protein preparations. A quality by design approach has been used to develop the Laboratoire Français du Fractionnement et des Biotechnologies new-generation IVIg, targeting a high level of purity to generate an enhanced safety profile while maintaining a high level of efficacy. A modular approach of quality by design was implemented consisting of five consecutive steps to cover all the stages from the product design to the final product control strategy. A well-defined target product profile was translated into 27 product quality attributes that formed the basis of the process design. In parallel, a product risk analysis was conducted and identified 19 critical quality attributes among the product quality attributes. Process risk analysis was carried out to establish the links between process parameters and critical quality attributes. Twelve critical steps were identified, and for each of these steps a risk mitigation plan was established. Among the different process risk mitigation exercises, five process robustness studies were conducted at qualified small scale with a design of experiment approach. For each process step, critical process parameters were identified and, for each critical process parameter, proven acceptable ranges were established. The quality risk management and risk mitigation outputs, including verification of proven acceptable ranges, were used to design the process verification exercise at industrial scale. Finally, the control strategy was established using a mix, or hybrid, of the traditional approach plus elements of the quality by design enhanced approach, as illustrated, to more robustly assign material and process controls and in order to securely meet product specifications. The advantages of this quality by design approach were improved process knowledge for industrial design and process validation and a clear justification of the process and product specifications as a basis for control strategy and future comparability exercises. © PDA, Inc. 2018.

  11. Parallel discrete-event simulation of FCFS stochastic queueing networks

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Physical systems are inherently parallel. Intuition suggests that simulations of these systems may be amenable to parallel execution. The parallel execution of a discrete-event simulation requires careful synchronization of processes in order to ensure the execution's correctness; this synchronization can degrade performance. Largely negative results were recently reported in a study which used a well-known synchronization method on queueing network simulations. Discussed here is a synchronization method (appointments), which has proven itself to be effective on simulations of FCFS queueing networks. The key concept behind appointments is the provision of lookahead. Lookahead is a prediction of a processor's future behavior, based on an analysis of the processor's simulation state. It is shown how lookahead can be computed for FCFS queueing network simulations; performance data are given that demonstrate the method's effectiveness under moderate to heavy loads, and performance tradeoffs between the quality of lookahead and the cost of computing lookahead are discussed.
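
    To make the lookahead idea concrete: for a FCFS server, any job it will send downstream cannot depart before the completion of the job currently in service, and an idle server cannot emit a departure earlier than the current time plus a minimum service time. The small Python sketch below illustrates that textbook lower bound under those assumptions; it is not the appointments protocol described in the paper.

        def fcfs_lookahead(now, busy_until, min_service):
            """A safe lower bound (lookahead) on the timestamp of the next
            message a FCFS server can send downstream."""
            if busy_until is not None:
                return busy_until          # job in service departs no earlier than this
            return now + min_service       # idle: any new arrival still needs service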

  12. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting the highest performance out of workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely limited even though there are numerous computationally demanding programs that would significantly benefit from the application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  13. A cloud-based framework for large-scale traditional Chinese medical record retrieval.

    PubMed

    Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin

    2018-01-01

    Electronic medical records are increasingly common in medical practice. The secondary use of medical records has become increasingly important. It relies on the ability to retrieve the complete information about desired patient populations. How to effectively and accurately retrieve relevant medical records from large-scale medical big data is becoming a major challenge. Therefore, we propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. We propose a parallel index building method and build a distributed search cluster; the former is used to improve the performance of index building, and the latter is used to provide highly concurrent online TCMR retrieval. Then, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster can improve the performance of index building and provide highly concurrent online TCMR retrieval. The multi-indexing model can ensure the latest relevant TCMRs are indexed and retrieved in real time. The semantic expansion method and the multi-factor ranking model can enhance retrieval quality. The template-based visualization method can enhance availability and universality, with the medical reports displayed via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides some advantages that are useful in improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.

  14. Direct determination of k Q factors for cylindrical and plane-parallel ionization chambers in high-energy electron beams from 6 MeV to 20 MeV.

    PubMed

    Krauss, A; Kapsch, R-P

    2018-02-06

    For the ionometric determination of the absorbed dose to water, D w , in high-energy electron beams from a clinical accelerator, beam quality dependent correction factors, k Q , are required. By using a water calorimeter, these factors can be determined experimentally and potentially with lower standard uncertainties than those of the calculated k Q factors, which are tabulated in various dosimetry protocols. However, one of the challenges of water calorimetry in electron beams is the small measurement depths in water, together with the steep dose gradients present especially at lower energies. In this investigation, water calorimetry was implemented in electron beams to determine k Q factors for different types of cylindrical and plane-parallel ionization chambers (NE2561, NE2571, FC65-G, TM34001) in 10 cm  ×  10 cm electron beams from 6 MeV to 20 MeV (corresponding beam quality index R 50 ranging from 1.9 cm to 7.5 cm). The measurements were carried out using the linear accelerator facility of the Physikalisch-Technische Bundesanstalt. Relative standard uncertainties for the k Q factors between 0.50% for the 20 MeV beam and 0.75% for the 6 MeV beam were achieved. For electron energies above 8 MeV, general agreement was found between the relative electron energy dependencies of the k Q factors measured and those derived from the AAPM TG-51 protocol and recent Monte Carlo-based studies, as well as those from other experimental investigations. However, towards lower energies, discrepancies of up to 2.0% occurred for the k Q factors of the TM34001 and the NE2571 chamber.
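
    For orientation, the standard dosimetry formalism (e.g. IAEA TRS-398 or AAPM TG-51) relates the absorbed dose to water to the chamber reading through the calibration coefficient and the beam quality correction factor, so a calorimetric measurement of the dose allows the correction factor to be obtained directly. The relation below is the generic textbook definition, stated here only as context; it is not necessarily the exact evaluation chain used in the paper.

        D_{w,Q} = M_Q \, N_{D,w,Q_0} \, k_{Q,Q_0}
        \quad\Longrightarrow\quad
        k_{Q,Q_0} = \frac{D_{w,Q}^{\mathrm{cal}}}{M_Q \, N_{D,w,Q_0}}

    Here M_Q is the corrected chamber reading in the beam of quality Q, N_{D,w,Q_0} is the calibration coefficient at the reference quality Q_0 (typically 60Co), and D_{w,Q}^{cal} is the calorimetrically determined absorbed dose to water.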

  15. Using Intel's Knight Landing Processor to Accelerate Global Nested Air Quality Prediction Modeling System (GNAQPMS) Model

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, H.; Chen, X.; Wu, Q.; Wang, Z.

    2016-12-01

    The Global Nested Air Quality Prediction Modeling System for Hg (GNAQPMS-Hg) is a global chemical transport model coupled with an Hg transport module to investigate mercury pollution. In this study, we present our work of porting the GNAQPMS model to the Intel Xeon Phi processor, Knights Landing (KNL), to accelerate the model. KNL is the second-generation product adopting the Many Integrated Core (MIC) architecture. Compared with the first-generation Knights Corner (KNC), KNL has new hardware features; it can be used as a standalone processor as well as a coprocessor alongside another CPU. Using the VTune tool, the high-overhead modules in the GNAQPMS model were identified, including the CBMZ gas chemistry, the advection and convection module, and the wet deposition module. These high-overhead modules were accelerated by optimizing the code and using new capabilities of KNL. The following optimization measures were taken: (1) changing the pure MPI parallel mode to a hybrid parallel mode with MPI and OpenMP; (2) vectorizing the code to use the 512-bit wide vector computation units; (3) reducing unnecessary memory access and calculation; (4) reducing Thread Local Storage (TLS) for common variables within each OpenMP thread in CBMZ; (5) changing the global communication from file writing and reading to MPI functions. After optimization, the performance of GNAQPMS is greatly increased on both the CPU and the KNL platform: the single-node test showed that the optimized version achieves a 2.6x speedup on a two-socket CPU platform and a 3.3x speedup on a single-socket KNL platform compared with the baseline code, which means KNL delivers a 1.29x speedup over the two-socket CPU platform.
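
    Of the measures listed above, the last one—replacing file-based global exchange with direct MPI calls—is easy to illustrate. The fragment below is a generic sketch using Python's mpi4py purely for illustration (the production model is an MPI Fortran/C code); the function and variable names are hypothetical.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD

        def global_sum(local_field: np.ndarray) -> np.ndarray:
            """Instead of every rank writing its partial field to a file and one
            rank reading them all back, reduce the partial fields in place."""
            total = np.empty_like(local_field)
            comm.Allreduce(local_field, total, op=MPI.SUM)
            return total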

  16. Direct determination of k Q factors for cylindrical and plane-parallel ionization chambers in high-energy electron beams from 6 MeV to 20 MeV

    NASA Astrophysics Data System (ADS)

    Krauss, A.; Kapsch, R.-P.

    2018-02-01

    For the ionometric determination of the absorbed dose to water, D_w, in high-energy electron beams from a clinical accelerator, beam quality dependent correction factors, k_Q, are required. By using a water calorimeter, these factors can be determined experimentally and potentially with lower standard uncertainties than those of the calculated k_Q factors, which are tabulated in various dosimetry protocols. However, one of the challenges of water calorimetry in electron beams is the small measurement depths in water, together with the steep dose gradients present especially at lower energies. In this investigation, water calorimetry was implemented in electron beams to determine k_Q factors for different types of cylindrical and plane-parallel ionization chambers (NE2561, NE2571, FC65-G, TM34001) in 10 cm × 10 cm electron beams from 6 MeV to 20 MeV (corresponding beam quality index R_50 ranging from 1.9 cm to 7.5 cm). The measurements were carried out using the linear accelerator facility of the Physikalisch-Technische Bundesanstalt. Relative standard uncertainties for the k_Q factors between 0.50% for the 20 MeV beam and 0.75% for the 6 MeV beam were achieved. For electron energies above 8 MeV, general agreement was found between the relative electron energy dependencies of the k_Q factors measured and those derived from the AAPM TG-51 protocol and recent Monte Carlo-based studies, as well as those from other experimental investigations. However, towards lower energies, discrepancies of up to 2.0% occurred for the k_Q factors of the TM34001 and the NE2571 chamber.

  17. P-HARP: A parallel dynamic spectral partitioner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Biswas, R.; Simon, H.D.

    1997-05-01

    Partitioning unstructured graphs is central to the parallel solution of problems in computational science and engineering. The authors have introduced earlier the sequential version of an inertial spectral partitioner called HARP which maintains the quality of recursive spectral bisection (RSB) while forming the partitions an order of magnitude faster than RSB. The serial HARP is known to be the fastest spectral partitioner to date, three to four times faster than similar partitioners on a variety of meshes. This paper presents a parallel version of HARP, called P-HARP. Two types of parallelism have been exploited: loop level parallelism and recursive parallelism. P-HARP has been implemented in MPI on the SGI/Cray T3E and the IBM SP2. Experimental results demonstrate that P-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.25 seconds on a 64-processor T3E. Experimental results further show that P-HARP can give nearly a 20-fold speedup on 64 processors. These results indicate that graph partitioning is no longer a major bottleneck that hinders the advancement of computational science and engineering for dynamically-changing real-world applications.
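
    As a point of reference for the recursive spectral bisection (RSB) that HARP is compared against, the following minimal sketch bisects a small graph using the Fiedler vector of its Laplacian. It illustrates plain spectral bisection only, not the HARP/P-HARP inertial algorithm, and the dense-matrix formulation is an assumption made to keep the example short.

    ```python
    # Minimal spectral bisection sketch (dense NumPy version, illustration only).
    import numpy as np

    def spectral_bisect(adj):
        """Split the vertices of an undirected graph (dense adjacency matrix)
        into two halves using the Fiedler vector of the graph Laplacian."""
        degree = np.diag(adj.sum(axis=1))
        laplacian = degree - adj
        _, eigvecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
        fiedler = eigvecs[:, 1]                  # eigenvector of 2nd-smallest eigenvalue
        order = np.argsort(fiedler)
        half = len(order) // 2
        return order[:half], order[half:]

    # Toy example: a 6-vertex path graph splits into its two ends.
    adj = np.zeros((6, 6))
    for i in range(5):
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    part_a, part_b = spectral_bisect(adj)
    print(sorted(part_a.tolist()), sorted(part_b.tolist()))
    ```

    Applying such a bisection recursively to each half yields a 2^k-way partition; per the record above, HARP's speedup comes from replacing the expensive eigen-solve with a much cheaper inertial/spectral scheme.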

  18. Isoflavones, calcium, vitamin D and inulin improve quality of life, sexual function, body composition and metabolic parameters in menopausal women: result from a prospective, randomized, placebo-controlled, parallel-group study

    PubMed Central

    Caruso, Salvatore; Rapisarda, Agnese Maria Chiara; Cianci, Stefano; Cianci, Antonio

    2018-01-01

    Introduction Menopause results in metabolic changes that contribute to an increased risk of cardiovascular diseases: increases in low-density lipoprotein (LDL) and triglycerides, a decrease in high-density lipoprotein (HDL), and weight gain are associated with a corresponding increase in the incidence of hypertension and diabetes. The aim of this study was to evaluate the effect of a preparation of isoflavones, calcium, vitamin D and inulin in menopausal women. Material and methods We performed a prospective, randomized, placebo-controlled, parallel-group study. A total of 50 patients were randomized to receive either oral preparations of isoflavones (40 mg), calcium (500 mg), vitamin D (300 IU) and inulin (3 g) or placebo (control group). Pre- and post-treatment assessments of quality of life and sexual function were performed through the Menopause-Specific Quality of Life Questionnaire (MENQOL) and the Female Sexual Function Index (FSFI); evaluations of anthropometric indicators, body composition through a bioelectrical impedance analyser, lumbar spine and proximal femur T-scores and lipid profile were also performed. Results After 12 months, a significant reduction in MENQOL vasomotor, physical and sexual domain scores (p < 0.05) and a significant increase in all FSFI domain scores (p < 0.05) were observed in the treatment group. Laboratory tests showed a significant increase in serum levels of HDL (p < 0.05). No significant changes in lumbar spine and femur neck T-scores (p > 0.05) were found in the same group. Conclusions According to our data analysis, isoflavones, calcium, vitamin D and inulin may exert favourable effects on menopausal symptoms and signs. PMID:29725283

  19. Effect of irradiation, active and modified atmosphere packaging, container oxygen barrier and storage conditions on the physicochemical and sensory properties of raw unpeeled almond kernels (Prunus dulcis).

    PubMed

    Mexis, Stamatios F; Riganakos, Kyriakos A; Kontominas, Michael G

    2011-03-15

    The present study investigated the effect of irradiation, active and modified atmosphere packaging, and storage conditions on quality retention of raw, whole, unpeeled almonds. Almond kernels were packaged in barrier and high-barrier pouches, under N2 or with an O2 absorber, and stored either under fluorescent lighting or in the dark at 20 °C for 12 months. Quality parameters monitored were peroxide value, hexanal content, colour, fatty acid composition and volatile compounds. Of the sensory attributes, colour, texture, odour and taste were evaluated. Peroxide value and hexanal increased with irradiation dose and storage time. Irradiation resulted in a decrease of polyunsaturated and monounsaturated fatty acids during storage with a parallel increase of saturated fatty acids. Volatile compounds were not affected by irradiation but increased with storage time, indicating enhanced lipid oxidation. Colour parameters of samples remained unaffected immediately after irradiation. For samples packaged under an N2 atmosphere, L and b values decreased during storage with a parallel increase of the a value, resulting in gradual product darkening, especially in irradiated samples. Non-irradiated almonds retained acceptable quality for ca. 12 months stored at 20 °C with the O2 absorber, irrespective of lighting conditions and packaging material oxygen barrier. The respective shelf life for samples irradiated at 1.0 kGy was 12 months packaged in PET-SiOx//LDPE irrespective of lighting conditions, and 12 months for samples irradiated at 3 kGy packaged in PET-SiOx//LDPE and stored in the dark. Copyright © 2010 Society of Chemical Industry.

  20. Global Magnetohydrodynamic Simulation Using High Performance FORTRAN on Parallel Computers

    NASA Astrophysics Data System (ADS)

    Ogino, T.

    High Performance Fortran (HPF) is one of the modern and common techniques for achieving high-performance parallel computation. We have translated a 3-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer; the MHD code was fully vectorized and fully parallelized in VPP Fortran. The overall performance and capability of the HPF MHD code were shown to be almost comparable to those of the VPP Fortran version. A 3-dimensional global MHD simulation of the Earth's magnetosphere was performed at a speed of over 400 Gflops, with an efficiency of 76.5% on the VPP5000/56 in vector and parallel computation that permitted comparison with catalog values. We have concluded that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and a code in HPF/JA may be expected to perform comparably to the same code written in VPP Fortran.

  1. Understanding and managing experiential aspects of soundscapes at Muir woods national monument.

    PubMed

    Pilcher, Ericka J; Newman, Peter; Manning, Robert E

    2009-03-01

    Research has found that human-caused noise can detract from the quality of the visitor experience in national parks and related areas. Moreover, impacts to the visitor experience can be managed by formulating indicators and standards of quality as suggested in park and outdoor recreation management frameworks, such as Visitor Experience and Resource Protection (VERP), as developed by the U.S. National Park Service. The research reported in this article supports the formulation of indicators and standards of quality for human-caused noise at Muir Woods National Monument, California. Phase I identified potential indicators of quality for the soundscape of Muir Woods. A visitor "listening exercise" was conducted, where respondents identified natural and human-caused sounds heard in the park and rated the degree to which each sound was "pleasing" or "annoying." Certain visitor-caused sounds such as groups talking were heard by most respondents and were rated as annoying, suggesting that these sounds may be a good indicator of quality. Loud groups were heard by few people but were rated as highly annoying, whereas wind and water were heard by most visitors and were rated as highly pleasing. Phase II measured standards of quality for visitor-caused noise. Visitors were presented with a series of 30-second audio clips representing increasing amounts of visitor-caused sound in the park. Respondents were asked to rate the acceptability of each audio clip on a survey. Findings suggest a threshold at which visitor-caused sound is judged to be unacceptable, and is therefore considered as noise. A parallel program of sound monitoring in the park found that current levels of visitor-caused sound sometimes violate this threshold. Study findings provide an empirical basis to help formulate noise-related indicators and standards of quality in parks and related areas.

  2. Characterizing parallel file-access patterns on a large-scale multiprocessor

    NASA Technical Reports Server (NTRS)

    Purakayastha, A.; Ellis, Carla; Kotz, David; Nieuwejaar, Nils; Best, Michael L.

    1995-01-01

    High-performance parallel file systems are needed to satisfy tremendous I/O requirements of parallel scientific applications. The design of such high-performance parallel file systems depends on a comprehensive understanding of the expected workload, but so far there have been very few usage studies of multiprocessor file systems. This paper is part of the CHARISMA project, which intends to fill this void by measuring real file-system workloads on various production parallel machines. In particular, we present results from the CM-5 at the National Center for Supercomputing Applications. Our results are unique because we collect information about nearly every individual I/O request from the mix of jobs running on the machine. Analysis of the traces leads to various recommendations for parallel file-system design.

  3. Note: long range and accurate measurement of deep trench microstructures by a specialized scanning tunneling microscope.

    PubMed

    Ju, Bing-Feng; Chen, Yuan-Liu; Zhang, Wei; Zhu, Wule; Jin, Chao; Fang, F Z

    2012-05-01

    A compact but practical scanning tunneling microscope (STM) with high aspect ratio and high depth capability has been specially developed. A long-range scanning mechanism with a tilt-adjustment stage is adopted to adjust the probe-sample relative angle and compensate for non-parallel effects. A periodic trench microstructure with a pitch of 10 μm has been successfully imaged with a long scanning range of up to 2.0 mm. More notably, a deep trench with a depth and step height of 23.0 μm has also been successfully measured, with a sidewall slope angle of approximately 67°. The probe can continuously climb the high step and explore the trench bottom without tip crashes. The new STM can perform long-range measurements of deep trench and high step surfaces without image distortion. It enables accurate measurement and quality control of periodic trench microstructures.

  4. SeqMule: automated pipeline for analysis of human exome/genome sequencing data.

    PubMed

    Guo, Yunfei; Ding, Xiaolei; Shen, Yufeng; Lyon, Gholson J; Wang, Kai

    2015-09-18

    Next-generation sequencing (NGS) technology has greatly helped us identify disease-contributory variants for Mendelian diseases. However, users are often faced with issues such as software compatibility, complicated configuration, and lack of access to high-performance computing facilities. Discrepancies exist among aligners and variant callers. We developed a computational pipeline, SeqMule, to perform automated variant calling from NGS data on human genomes and exomes. SeqMule integrates computational-cluster-free parallelization capability built on top of the variant callers, and facilitates normalization/intersection of variant calls to generate a high-confidence consensus set. SeqMule integrates 5 alignment tools and 5 variant calling algorithms, and accepts various combinations through a one-line command, therefore allowing highly flexible yet fully automated variant calling. On a modern machine (2 Intel Xeon X5650 CPUs, 48 GB memory), when fast turn-around is needed, SeqMule generates annotated VCF files in a day from a 30X whole-genome sequencing data set; when more accurate calling is needed, SeqMule generates a consensus call set that improves over single callers, as measured by both Mendelian error rate and consistency. SeqMule supports Sun Grid Engine for parallel processing, offers a turn-key solution for deployment on Amazon Web Services, and allows quality checks, Mendelian error checks, consistency evaluation, and HTML-based reports. SeqMule is available at http://seqmule.openbioinformatics.org.

  5. A Multi-Functional Microelectrode Array Featuring 59760 Electrodes, 2048 Electrophysiology Channels, Stimulation, Impedance Measurement and Neurotransmitter Detection Channels.

    PubMed

    Dragas, Jelena; Viswam, Vijay; Shadmani, Amir; Chen, Yihui; Bounik, Raziyeh; Stettler, Alexander; Radivojevic, Milos; Geissler, Sydney; Obien, Marie; Müller, Jan; Hierlemann, Andreas

    2017-06-01

    Biological cells are characterized by highly complex phenomena and processes that are, to a great extent, interdependent. To gain detailed insights, devices designed to study cellular phenomena need to enable tracking and manipulation of multiple cell parameters in parallel; they have to provide high signal quality and high spatiotemporal resolution. To this end, we have developed a CMOS-based microelectrode array system that integrates six measurement and stimulation functions, the largest number to date. Moreover, the system features the largest active electrode array area to date (4.48×2.43 mm 2 ) to accommodate 59,760 electrodes, while its power consumption, noise characteristics, and spatial resolution (13.5 μm electrode pitch) are comparable to the best state-of-the-art devices. The system includes: 2,048 action-potential (AP, bandwidth: 300 Hz to 10 kHz) recording units, 32 local-field-potential (LFP, bandwidth: 1 Hz to 300 Hz) recording units, 32 current recording units, 32 impedance measurement units, and 28 neurotransmitter detection units, in addition to the 16 dual-mode voltage-only or current/voltage-controlled stimulation units. The electrode array architecture is based on a switch matrix, which allows for connecting any measurement/stimulation unit to any electrode in the array and for performing different measurement/stimulation functions in parallel.

  6. Evaluation of the peri-implant bone around parallel-walled dental implants with a condensing thread macrodesign and a self-tapping apex: a 10-year retrospective histological analysis.

    PubMed

    Degidi, Marco; Perrotti, Vittoria; Shibli, Jamil A; Mortellaro, Carmen; Piattelli, Adriano; Iezzi, Giovanna

    2014-05-01

    The long-term high percentages of survival and success of dental implants reported in the literature are related mainly to new, innovative implant and thread designs, and to new implant surfaces that allow very good primary and secondary stability to be obtained in most anatomical and clinical situations, even in bone of low quality and quantity, promoting more rapid osseointegration. The aim of this retrospective study was a histological and histomorphometrical evaluation of the bone response around implants with a parallel-wall configuration, condensing thread macrodesign, and self-tapping apex, retrieved from humans for different reasons. A total of 10 implants were reported in the present study; these implants had been retrieved after loading periods ranging from a few weeks to about 8 years. Mineralized newly formed bone was found at the interface of all the implants, in direct contact with the implant surface, with no gaps or connective fibrous tissue. This bone adapted very well to the microirregularities of the implant surface. Areas of bone remodeling were present in some regions of the interface, with many reversal lines. High bone-implant contact percentages were found. In conclusion, both the macrostructure and the microstructure of this specific type of implant could contribute to the high long-term implant survival and success percentages.

  7. The artificial retina for track reconstruction at the LHC crossing rate

    NASA Astrophysics Data System (ADS)

    Abba, A.; Bedeschi, F.; Citterio, M.; Caponio, F.; Cusimano, A.; Geraci, A.; Marino, P.; Morello, M. J.; Neri, N.; Punzi, G.; Piucci, A.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.

    2016-04-01

    We present the results of an R&D study for a specialized processor capable of precisely reconstructing events with hundreds of charged-particle tracks in pixel and silicon strip detectors at 40 MHz, thus suitable for processing LHC events at the full crossing frequency. For this purpose we design and test a massively parallel pattern-recognition algorithm, inspired by the current understanding of the mechanisms adopted by the primary visual cortex of mammals in the early stages of visual-information processing. The detailed geometry and charged-particle activity of a large tracking detector are simulated and used to assess the performance of the artificial retina algorithm. We find that high-quality tracking in large detectors is possible with sub-microsecond latencies when the algorithm is implemented in modern, high-speed, high-bandwidth FPGA devices.

  8. Autocalibrating motion-corrected wave-encoding for highly accelerated free-breathing abdominal MRI.

    PubMed

    Chen, Feiyu; Zhang, Tao; Cheng, Joseph Y; Shi, Xinwei; Pauly, John M; Vasanawala, Shreyas S

    2017-11-01

    To develop a motion-robust wave-encoding technique for highly accelerated free-breathing abdominal MRI. A comprehensive 3D wave-encoding-based method was developed to enable fast free-breathing abdominal imaging: (a) auto-calibration for wave-encoding was designed to avoid an extra scan for coil sensitivity measurement; (b) intrinsic butterfly navigators were used to track respiratory motion; (c) variable-density sampling was included to enable compressed sensing; (d) golden-angle radial-Cartesian hybrid view-ordering was incorporated to improve motion robustness; and (e) localized rigid motion correction was combined with parallel imaging compressed sensing reconstruction to reconstruct the highly accelerated wave-encoded datasets. The proposed method was tested on six subjects, and image quality was compared with standard accelerated Cartesian acquisition both with and without respiratory triggering. Inverse gradient entropy and normalized gradient squared metrics were calculated, testing whether image quality was improved using paired t-tests. For respiratory-triggered scans, wave-encoding significantly reduced residual aliasing and blurring compared with standard Cartesian acquisition (metrics suggesting P < 0.05). For non-respiratory-triggered scans, the proposed method yielded significantly better motion correction compared with standard motion-corrected Cartesian acquisition (metrics suggesting P < 0.01). The proposed methods can reduce motion artifacts and improve overall image quality of highly accelerated free-breathing abdominal MRI. Magn Reson Med 78:1757-1766, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  9. High Performance Computing at NASA

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Cooper, D. M. (Technical Monitor)

    1994-01-01

    The speaker will give an overview of high performance computing in the U.S. in general and within NASA in particular, including a description of the recently signed NASA-IBM cooperative agreement. The latest performance figures of various parallel systems on the NAS Parallel Benchmarks will be presented. The speaker was one of the authors of the NAS (Numerical Aerodynamic Simulation) Parallel Benchmarks, which are now widely cited in the industry as a measure of sustained performance on realistic high-end scientific applications. It will be shown that significant progress has been made by the highly parallel supercomputer industry during the past year or so, with several new systems, based on high-performance RISC processors, that now deliver superior performance per dollar compared to conventional supercomputers. Various pitfalls in reporting performance will be discussed. The speaker will then conclude by assessing the general state of the high performance computing field.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aceves, Salvador M.; Ledesma-Orozco, Elias Rigoberto; Espinosa-Loza, Francisco

    A pressure vessel apparatus for cryogenic-capable storage of hydrogen or other cryogenic gases at high pressure includes an insert with a parallel inlet duct and a perpendicular inlet duct connected to the parallel inlet duct. The perpendicular inlet duct and the parallel inlet duct connect the interior cavity with the external components. The insert also includes a parallel outlet duct and a perpendicular outlet duct connected to the parallel outlet duct. The perpendicular outlet duct and the parallel outlet duct connect the interior cavity with the external components.

  11. Quality Regulation in Expansion of Educational Systems: A Case of Privately Sponsored Students' Programme in Kenya's Public Universities

    ERIC Educational Resources Information Center

    Yego, Helen J. C.

    2016-01-01

    This paper examines the expansion and management of quality of parallel programmes in Kenya's public universities. The study is based on Privately Sponsored Students Programmes (PSSP) at Moi University and its satellite campuses in Kenya. The study was descriptive in nature and adopted an ex-post facto research design. The study sample consisted…

  12. Sex, Arts and Verbal Abilities: Three Further Indicators of How American Life Is Not Improving

    ERIC Educational Resources Information Center

    Robinson, John P.

    2010-01-01

    Despite clear evidence that Americans' economic standard of living has improved over the last half-century in terms of income, ownership of technology and housing among other indicators, there is scant evidence from non-economic quality-of-life (QOL) indicators of improved life quality to parallel these economic gains. The present article adds to…

  13. Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.

    PubMed

    Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E

    2016-01-19

    Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system.

  14. Parallel Computation and Visualization of Three-dimensional, Time-dependent, Thermal Convective Flows

    NASA Technical Reports Server (NTRS)

    Wang, P.; Li, P.

    1998-01-01

    A high-resolution numerical study on parallel systems is reported for three-dimensional, time-dependent, thermal convective flows. A parallel implementation of the finite volume method with a multigrid scheme is discussed, and a parallel visualization system is developed on distributed systems for visualizing the flow.

  15. Localized high-resolution DTI of the human midbrain using single-shot EPI, parallel imaging, and outer-volume suppression at 7 T

    PubMed Central

    Wargo, Christopher J.; Gore, John C.

    2013-01-01

    Localized high-resolution diffusion tensor images (DTI) from the midbrain were obtained using reduced field-of-view (rFOV) methods combined with SENSE parallel imaging and single-shot echo planar (EPI) acquisitions at 7 T. This combination aimed to diminish sensitivities of DTI to motion, susceptibility variations, and EPI artifacts at ultra-high field. Outer-volume suppression (OVS) was applied in DTI acquisitions at 2- and 1-mm2 resolutions, b=1000 s/mm2, and six diffusion directions, resulting in scans of 7- and 14-min durations. Mean apparent diffusion coefficient (ADC) and fractional anisotropy (FA) values were measured in various fiber tract locations at the two resolutions and compared. Geometric distortion and signal-to-noise ratio (SNR) were additionally measured and compared for reduced-FOV and full-FOV DTI scans. Up to an eight-fold data reduction was achieved using DTI-OVS with SENSE at 1 mm2, and geometric distortion was halved. The localization of fiber tracts was improved, enabling targeted FA and ADC measurements. Significant differences in diffusion properties were observed between resolutions for a number of regions suggesting that FA values are impacted by partial volume effects even at a 2-mm2 resolution. The combined SENSE DTI-OVS approach allows large reductions in DTI data acquisition and provides improved quality for high-resolution diffusion studies of the human brain. PMID:23541390

  16. Edge electrospinning: a facile needle-less approach to realize scaled up production of quality nanofibers

    NASA Astrophysics Data System (ADS)

    Bochinski, J. R.; Curtis, C.; Roman, M. P.; Clarke, L. I.; Wang, Q.; Thoppey, N. M.; Gorga, R. E.

    2014-03-01

    Utilizing unconfined polymer fluids (e.g., from solution or melt), edge electrospinning provides a straightforward approach for scaled-up production of high-quality nanofibers through the formation of many parallel jets. From simple geometries (using solution contained within a sharp-edged bowl or on a flat plate), jets form and spontaneously re-arrange on the fluid surface near the edge. Using appropriate control of the electric-field-induced feed rate, per-jet fabrication comparable to traditional single-needle electrospinning can be realized, resulting in nanofibers with similar diameters, diameter distributions, and collected mat porosity. The presence of multiple jets proportionally enhances the production rate of the system, with minimal experimental complexity and without the possibility of clogging. Extending this needle-less approach to commercial polyethylene polymers, micron-scale fibers can be melt electrospun using a similar apparatus. Support from National Science Foundation (CMMI-0800237).

  17. Feasibility analysis of marine ecological on-line integrated monitoring system

    NASA Astrophysics Data System (ADS)

    Chu, D. Z.; Cao, X.; Zhang, S. W.; Wu, N.; Ma, R.; Zhang, L.; Cao, L.

    2017-08-01

    In-situ water quality sensors are susceptible to biological attachment, seawater corrosion and wave impact damage, and the scattered distribution of many sensors causes maintenance inconvenience. The paper proposes a highly integrated marine ecological on-line monitoring system that can be operated inside a monitoring station. All sensors are grouped appropriately, with similar sensors connected in series and the groups connected in parallel. The system composition and workflow are described. In addition, the paper discusses design issues requiring attention and the corresponding solutions. With multi-parameter water quality measurements and five nutrient salts as verification indices, comparison experiments between in-situ and system data were carried out. The results showed that the data consistency for nutrient salts, pH and salinity was good. Temperature and dissolved oxygen data trends were consistent, but the data showed deviations. Turbidity fluctuated greatly, and the chlorophyll trend was similar. In response to these observations, three directions for system optimization are proposed.

  18. Biofilm formation and control in a simulated spacecraft water system - Two-year results

    NASA Technical Reports Server (NTRS)

    Schultz, John R.; Taylor, Robert D.; Flanagan, David T.; Carr, Sandra E.; Bruce, Rebekah J.; Svoboda, Judy V.; Huls, M. H.; Sauer, Richard L.; Pierson, Duane L.

    1991-01-01

    The ability of iodine to maintain microbial water quality in a simulated spacecraft water system is being studied. An iodine level of about 2.0 mg/L is maintained by passing ultrapure influent water through an iodinated ion exchange resin. Six liters are withdrawn daily and the chemical and microbial quality of the water is monitored regularly. Stainless steel coupons used to monitor biofilm formation are being analyzed by culture methods, epifluorescence microscopy, and scanning electron microscopy. Results from the first two years of operation show a single episode of high bacterial colony counts in the iodinated system. This growth was apparently controlled by replacing the iodinated ion exchange resin. Scanning electron microscopy indicates that the iodine has limited but not completely eliminated the formation of biofilm during the first two years of operation. Significant microbial contamination has been present continuously in a parallel noniodinated system since the third week of operation.

  19. The paradigm compiler: Mapping a functional language for the connection machine

    NASA Technical Reports Server (NTRS)

    Dennis, Jack B.

    1989-01-01

    The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.

  20. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the ultrahigh volume fraction region is found. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis on speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.

  1. A parallel architecture of interpolated timing recovery for high- speed data transfer rate and wide capture-range

    NASA Astrophysics Data System (ADS)

    Higashino, Satoru; Kobayashi, Shoei; Yamagami, Tamotsu

    2007-06-01

    Higher data transfer rates have been demanded of data storage devices along with increasing storage capacity. In order to increase the transfer rate, high-speed data processing techniques in read-channel devices are required. Generally, a parallel architecture is utilized for high-speed digital processing. We have developed a new Interpolated Timing Recovery (ITR) architecture to achieve high data transfer rates and a wide capture range in read-channel devices for information storage channels. It facilitates parallel implementation on large-scale-integration (LSI) devices.

  2. DVS-SOFTWARE: An Effective Tool for Applying Highly Parallelized Hardware To Computational Geophysics

    NASA Astrophysics Data System (ADS)

    Herrera, I.; Herrera, G. S.

    2015-12-01

    Most geophysical systems are macroscopic physical systems. The behavior prediction of such systems is carried out by means of computational models whose basic models are partial differential equations (PDEs) [1]. Due to the enormous size of the discretized version of such PDEs it is necessary to apply highly parallelized super-computers. For them, at present, the most efficient software is based on non-overlapping domain decomposition methods (DDM). However, a limiting feature of the present state-of-the-art techniques is due to the kind of discretizations used in them. Recently, I. Herrera and co-workers using 'non-overlapping discretizations' have produced the DVS-Software which overcomes this limitation [2]. The DVS-software can be applied to a great variety of geophysical problems and achieves very high parallel efficiencies (90%, or so [3]). It is therefore very suitable for effectively applying the most advanced parallel supercomputers available at present. In a parallel talk, in this AGU Fall Meeting, Graciela Herrera Z. will present how this software is being applied to advance MOD-FLOW. Key Words: Parallel Software for Geophysics, High Performance Computing, HPC, Parallel Computing, Domain Decomposition Methods (DDM)REFERENCES [1]. Herrera Ismael and George F. Pinder, Mathematical Modelling in Science and Engineering: An axiomatic approach", John Wiley, 243p., 2012. [2]. Herrera, I., de la Cruz L.M. and Rosas-Medina A. "Non Overlapping Discretization Methods for Partial, Differential Equations". NUMER METH PART D E, 30: 1427-1454, 2014, DOI 10.1002/num 21852. (Open source) [3]. Herrera, I., & Contreras Iván "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity". Geofísica Internacional, 2015 (In press)

  3. Method and apparatus for fabrication of high gradient insulators with parallel surface conductors spaced less than one millimeter apart

    DOEpatents

    Sanders, David M.; Decker, Derek E.

    1999-01-01

    Optical patterns and lithographic techniques are used as part of a process to embed parallel and evenly spaced conductors in the non-planar surfaces of an insulator to produce high gradient insulators. The approach extends the size that high gradient insulating structures can be fabricated as well as improves the performance of those insulators by reducing the scale of the alternating parallel lines of insulator and conductor along the surface. This fabrication approach also substantially decreases the cost required to produce high gradient insulators.

  4. Comparative Transcriptomic Analysis in Paddy Rice under Storage and Identification of Differentially Regulated Genes in Response to High Temperature and Humidity.

    PubMed

    Zhao, Chanjuan; Xie, Junqi; Li, Li; Cao, Chongjiang

    2017-09-20

    The transcriptomes of paddy rice in response to high temperature and humidity were studied using a high-throughput RNA sequencing approach. Effects of high temperature and humidity on the sucrose and starch contents and α/β-amylase activity were also investigated. Results showed that 6876 differentially expressed genes (DEGs) were identified in paddy rice under high temperature and humidity storage. Importantly, 12 DEGs that were downregulated fell into the "starch and sucrose pathway". The quantitative real-time polymerase chain reaction assays indicated that expression of these 12 DEGs was significantly decreased, which was in parallel with the reduced level of enzyme activities and the contents of sucrose and starch in paddy rice stored at high temperature and humidity conditions compared to the control group. Taken together, high temperature and humidity influence the quality of paddy rice at least partially by downregulating the expression of genes encoding sucrose transferases and hydrolases, which might result in the decrease of starch and sucrose contents.

  5. Development of electromagnetic welding facility of flat plates for nuclear industry

    NASA Astrophysics Data System (ADS)

    Kumar, Rajesh; Sahoo, Subhanarayan; Sarkar, Biswanath; Shyam, Anurag

    2017-04-01

    The electromagnetic pulse welding (EMPW) process, a high-speed welding process, uses the electromagnetic force from current discharged through a working coil, which develops a repulsive force between the induced currents flowing parallel and in opposite directions. To achieve a successful weldment with this process, the design of the working coil is the most important factor because of the high magnetic field on the surface of the workpiece. For high-quality flat plate welding, factors such as impact velocity, impact angle, standoff distance, flyer thickness and overlap length have to be chosen carefully. EMPW has wide applications in the nuclear, automotive, aerospace and electrical industries. However, formability and weldability still remain major issues. Because the magnetic field enveloped inside tubes is easy to control, EMPW has been widely used for tube welding. For flat components, control of the magnetic field is difficult, and hence the application of EMPW is restricted. The present work attempts to make a novel contribution by investigating the effect of process parameters on the welding quality of flat plates. The work emphasizes the approaches and engineering calculations required for effective use of the actuator in EMPW of flat components.

  6. Development of gallium arsenide high-speed, low-power serial parallel interface modules: Executive summary

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Final report to NASA LeRC on the development of gallium arsenide (GaAs) high-speed, low-power serial/parallel interface modules. The report discusses the development and test of a family of 16-, 32- and 64-bit parallel-to-serial and serial-to-parallel integrated circuits using a self-aligned gate MESFET technology developed at the Honeywell Sensors and Signal Processing Laboratory. Lab testing demonstrated 1.3 GHz clock rates at a power of 300 mW. This work was accomplished under contract number NAS3-24676.

  7. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.

    1991-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification-all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  8. High-energy physics software parallelization using database techniques

    NASA Astrophysics Data System (ADS)

    Argante, E.; van der Stok, P. D. V.; Willers, I.

    1997-02-01

    A programming model for software parallelization, called CoCa, is introduced that copes with problems caused by typical features of high-energy physics software. By basing CoCa on the database transaction paradigm, the complexity induced by the parallelization is largely transparent to the programmer, resulting in a higher level of abstraction than the native message passing software. CoCa is implemented on a Meiko CS-2 and on a SUN SPARCcenter 2000 parallel computer. On the CS-2, the performance is comparable with the performance of native PVM and MPI.

  9. Seismic anisotropy and mantle flow below subducting slabs

    NASA Astrophysics Data System (ADS)

    Walpole, Jack; Wookey, James; Kendall, J.-Michael; Masters, T.-Guy

    2017-05-01

    Subduction is integral to mantle convection and plate tectonics, yet the role of the subslab mantle in this process is poorly understood. Some propose that decoupling from the slab permits widespread trench-parallel flow in the subslab mantle, although the geodynamical feasibility of this has been questioned. Here, we use the source-side shear wave splitting technique to probe anisotropy beneath subducting slabs, enabling us to test petrofabric models and constrain the geometry of mantle flow. Our global dataset contains 6369 high-quality measurements, spanning ∼40,000 km of subduction zone trenches, the complete range of available source depths (4 to 687 km), and a large range of angles in the slab reference frame. We find that anisotropy in the subslab mantle is well characterised by tilted transverse isotropy with a slow-symmetry-axis pointing normal to the plane of the slab. This appears incompatible with purely trench-parallel flow models. On the other hand it is compatible with the idea that the asthenosphere is tilted and entrained during subduction. Trench-parallel measurements are most commonly associated with shallow events (source depth < 50 km), suggesting a separate region of anisotropy in the lithospheric slab. This may correspond to the shape-preferred orientation of cracks, fractures, and faults opened by slab bending. Meanwhile the deepest events probe the upper lower mantle, where splitting is found to be consistent with deformed bridgmanite.

  10. K-space reconstruction with anisotropic kernel support (KARAOKE) for ultrafast partially parallel imaging.

    PubMed

    Miao, Jun; Wong, Wilbur C K; Narayan, Sreenath; Wilson, David L

    2011-11-01

    Partially parallel imaging (PPI) greatly accelerates MR imaging by using surface coil arrays and under-sampling k-space. However, the reduction factor (R) in PPI is theoretically constrained by the number of coils (N(C)). A symmetrically shaped kernel is typically used, but this often prevents even the theoretically possible R from being achieved. Here, the authors propose a kernel design method to accelerate PPI faster than R = N(C). K-space data demonstrates an anisotropic pattern that is correlated with the object itself and to the asymmetry of the coil sensitivity profile, which is caused by coil placement and B(1) inhomogeneity. From spatial analysis theory, reconstruction of such pattern is best achieved by a signal-dependent anisotropic shape kernel. As a result, the authors propose the use of asymmetric kernels to improve k-space reconstruction. The authors fit a bivariate Gaussian function to the local signal magnitude of each coil, then threshold this function to extract the kernel elements. A perceptual difference model (Case-PDM) was employed to quantitatively evaluate image quality. A MR phantom experiment showed that k-space anisotropy increased as a function of magnetic field strength. The authors tested a K-spAce Reconstruction with AnisOtropic KErnel support ("KARAOKE") algorithm with both MR phantom and in vivo data sets, and compared the reconstructions to those produced by GRAPPA, a popular PPI reconstruction method. By exploiting k-space anisotropy, KARAOKE was able to better preserve edges, which is particularly useful for cardiac imaging and motion correction, while GRAPPA failed at a high R near or exceeding N(C). KARAOKE performed comparably to GRAPPA at low Rs. As a rule of thumb, KARAOKE reconstruction should always be used for higher quality k-space reconstruction, particularly when PPI data is acquired at high Rs and/or high field strength.
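
    A rough sketch of the kernel-support idea described above is given below: it moment-matches a bivariate Gaussian to the k-space magnitude of one coil and thresholds it to obtain an asymmetric kernel mask. This is an illustration under stated assumptions (moment-based fitting, an arbitrary rescaling of the covariance to the kernel neighbourhood, and an arbitrary threshold), not the published KARAOKE implementation.

    ```python
    # Illustrative sketch: derive an anisotropic kernel-support mask from the
    # magnitude of a single coil's 2D k-space (not the published KARAOKE code).
    import numpy as np

    def anisotropic_kernel_mask(kspace, half_size=3, level=0.3):
        mag = np.abs(kspace)
        ky, kx = np.indices(mag.shape)
        w = mag / mag.sum()                                   # weights from |k-space|
        mean = np.array([np.sum(w * ky), np.sum(w * kx)])
        dy, dx = ky - mean[0], kx - mean[1]
        cov = np.array([[np.sum(w * dy * dy), np.sum(w * dy * dx)],
                        [np.sum(w * dx * dy), np.sum(w * dx * dx)]])
        # Illustrative choice: rescale so the largest variance spans the kernel radius.
        cov = cov * (half_size ** 2 / cov.diagonal().max())
        inv_cov = np.linalg.inv(cov)
        gy, gx = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
        offsets = np.stack([gy.ravel(), gx.ravel()], axis=1).astype(float)
        gauss = np.exp(-0.5 * np.einsum('ni,ij,nj->n', offsets, inv_cov, offsets))
        gauss = gauss.reshape(gy.shape)
        return gauss / gauss.max() >= level                   # boolean support mask

    # Toy anisotropic k-space with energy concentrated along one diagonal.
    ky, kx = np.indices((64, 64))
    kspace = np.exp(-((ky + kx - 64.0) ** 2) / 200.0 - ((ky - kx) ** 2) / 2000.0)
    print(anisotropic_kernel_mask(kspace).astype(int))
    ```

    On this toy input the printed mask is elongated along the direction in which the simulated k-space has support, which is the qualitative behaviour the asymmetric-kernel idea relies on.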

  11. K-space reconstruction with anisotropic kernel support (KARAOKE) for ultrafast partially parallel imaging

    PubMed Central

    Miao, Jun; Wong, Wilbur C. K.; Narayan, Sreenath; Wilson, David L.

    2011-01-01

    Purpose: Partially parallel imaging (PPI) greatly accelerates MR imaging by using surface coil arrays and under-sampling k-space. However, the reduction factor (R) in PPI is theoretically constrained by the number of coils (NC). A symmetrically shaped kernel is typically used, but this often prevents even the theoretically possible R from being achieved. Here, the authors propose a kernel design method to accelerate PPI faster than R = NC. Methods: K-space data demonstrates an anisotropic pattern that is correlated with the object itself and to the asymmetry of the coil sensitivity profile, which is caused by coil placement and B1 inhomogeneity. From spatial analysis theory, reconstruction of such pattern is best achieved by a signal-dependent anisotropic shape kernel. As a result, the authors propose the use of asymmetric kernels to improve k-space reconstruction. The authors fit a bivariate Gaussian function to the local signal magnitude of each coil, then threshold this function to extract the kernel elements. A perceptual difference model (Case-PDM) was employed to quantitatively evaluate image quality. Results: A MR phantom experiment showed that k-space anisotropy increased as a function of magnetic field strength. The authors tested a K-spAce Reconstruction with AnisOtropic KErnel support (“KARAOKE”) algorithm with both MR phantom and in vivo data sets, and compared the reconstructions to those produced by GRAPPA, a popular PPI reconstruction method. By exploiting k-space anisotropy, KARAOKE was able to better preserve edges, which is particularly useful for cardiac imaging and motion correction, while GRAPPA failed at a high R near or exceeding NC. KARAOKE performed comparably to GRAPPA at low Rs. Conclusions: As a rule of thumb, KARAOKE reconstruction should always be used for higher quality k-space reconstruction, particularly when PPI data is acquired at high Rs and∕or high field strength. PMID:22047378

  12. Design and performance evaluation of a new high energy parallel hole collimator for radioiodine planar imaging by gamma cameras: Monte Carlo simulation study.

    PubMed

    Moslemi, Vahid; Ashoor, Mansour

    2017-05-01

    In addition to the trade-off between resolution and sensitivity, which is a common problem among all types of parallel hole collimators (PCs), images obtained with high energy PCs (HEPCs) suffer from hole-pattern artifact (HPA) due to their greater septal thickness. In this study, a new collimator design is proposed to improve the trade-off between resolution and sensitivity and to eliminate the HPA. A novel PC, namely the high energy extended PC (HEEPC), is proposed and compared to HEPCs. In the new PC, trapezoidal denticles were added on top of the septa on the detector side. The performance of the HEEPCs was evaluated and compared to that of HEPCs using Monte Carlo N-Particle version 5 (MCNP5) simulations. The point spread functions (PSFs) of HEPCs and HEEPCs were obtained, along with parameters such as resolution, sensitivity, scattering and penetration ratios, and the HPA of the collimators was assessed. Furthermore, a Picker phantom study was performed to examine the effects of the collimators on the quality of planar images. It was found that the HEEPC_D, with a resolution identical to that of the HEPC_C, increased sensitivity by 34.7%, improved the trade-off between resolution and sensitivity, and eliminated the HPA. In the Picker phantom study, the HEEPC_D showed the hot and cold lesions with higher contrast, lower noise, and a higher contrast-to-noise ratio (CNR). Since the HEEPCs modify the shape of the PSFs, they are able to improve the trade-off between resolution and sensitivity; consequently, planar images can be achieved with higher contrast resolution. Furthermore, because the HEEPCs reduce the HPA and produce images with a higher CNR compared to HEPCs, the images obtained with HEEPCs have higher quality, which can help physicians provide better diagnoses.

  13. An Overview of High-performance Parallel Big Data transfers over multiple network channels with Transport Layer Security (TLS) and TLS plus Perfect Forward Secrecy (PFS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Chin; Corttrell, R. A.

    This Technical Note provides an overview of high-performance parallel Big Data transfers with and without encryption for data in transit over multiple network channels. It shows that with the parallel approach, it is feasible to carry out high-performance parallel "encrypted" Big Data transfers without serious impact on throughput, although other impacts, e.g. energy consumption, should be investigated. It also explains our rationale for using a statistics-based approach to gain understanding from test results and to improve the system. The presentation is of a high-level nature. Nevertheless, at the end we pose some questions and identify potentially fruitful directions for future work.

  14. Rainfall effects on inflow and infiltration in wastewater treatment systems in a coastal plain region.

    PubMed

    Cahoon, Lawrence B; Hanke, Marc H

    2017-04-01

    Aging wastewater collection and treatment systems have not received as much attention as other forms of infrastructure, even though they are vital to public health, economic growth, and environmental quality. Inflow and infiltration (I&I) are among potentially widespread problems facing central sewage collection and treatment systems, posing risks of sanitary system overflows (SSOs), system degradation, and water quality impairment, but remain poorly quantified. Whole-system analyses of I&I were conducted by regression analyses of system flow responses to rainfall and temperature for 93 wastewater treatment plants in 23 counties in eastern North Carolina, USA, a coastal plain region with high water tables and generally higher rainfalls than the continental interior. Statistically significant flow responses to rainfall were found in 92% of these systems, with 2-year average I&I values exceeding 10% of rainless system flow in over 40% of them. The effects of rainfall, which can be intense in this coastal region, have region-wide implications for sewer system performance and environmental management. The positive association between rainfall and excessive I&I parallels the effects of storm water runoff on water quality, in that excessive I&I can also drive SSOs, thus confounding water quality protection efforts.
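
    The flow-versus-rainfall regression described above can be sketched, under assumptions, roughly as follows. The data are synthetic, the model form (flow regressed on rainfall and a temperature term) is only one plausible reading of the analysis, and the coefficients carry no relation to the study's actual systems.

    ```python
    # Synthetic sketch of a whole-system I&I regression: daily plant flow
    # regressed on rainfall and temperature, with the rainfall term read as I&I.
    import numpy as np

    rng = np.random.default_rng(1)
    days = 365
    rain_in = rng.gamma(shape=0.3, scale=0.5, size=days)          # daily rainfall, inches
    temp_f = 60 + 20 * np.sin(np.arange(days) / 365 * 2 * np.pi)  # mean daily temperature
    base_flow_mgd = 2.0                                           # rainless sanitary flow
    flow_mgd = base_flow_mgd + 0.35 * rain_in - 0.002 * (temp_f - 60) \
               + rng.normal(0, 0.05, size=days)

    # Ordinary least squares: flow ~ intercept + rainfall + temperature
    X = np.column_stack([np.ones(days), rain_in, temp_f - 60])
    coef, *_ = np.linalg.lstsq(X, flow_mgd, rcond=None)
    intercept, rain_coef, temp_coef = coef

    ii_fraction = rain_coef * rain_in.mean() / intercept
    print(f"rainfall coefficient: {rain_coef:.3f} MGD per inch")
    print(f"average I&I as share of rainless flow: {100 * ii_fraction:.1f}%")
    ```

    A statistically significant rainfall coefficient, expressed as a percentage of the rainless (intercept) flow, corresponds to the kind of whole-system I&I estimate reported in the record above.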

  15. Diffusion tensor imaging (DTI) with retrospective motion correction for large-scale pediatric imaging.

    PubMed

    Holdsworth, Samantha J; Aksoy, Murat; Newbould, Rexford D; Yeom, Kristen; Van, Anh T; Ooi, Melvyn B; Barnes, Patrick D; Bammer, Roland; Skare, Stefan

    2012-10-01

    To develop and implement a clinical DTI technique suitable for the pediatric setting that retrospectively corrects for large motion without the need for rescanning and/or reacquisition strategies, and to deliver high-quality DTI images (both in the presence and absence of large motion) using procedures that reduce image noise and artifacts. We implemented an in-house built generalized autocalibrating partially parallel acquisitions (GRAPPA)-accelerated diffusion tensor (DT) echo-planar imaging (EPI) sequence at 1.5T and 3T on 1600 patients between 1 month and 18 years old. To reconstruct the data, we developed a fully automated tailored reconstruction software that selects the best GRAPPA and ghost calibration weights; does 3D rigid-body realignment with importance weighting; and employs phase correction and complex averaging to lower Rician noise and reduce phase artifacts. For select cases we investigated the use of an additional volume rejection criterion and b-matrix correction for large motion. The DTI image reconstruction procedures developed here were extremely robust in correcting for motion, failing on only three subjects, while providing the radiologists high-quality data for routine evaluation. This work suggests that, apart from the rare instance of continuous motion throughout the scan, high-quality DTI brain data can be acquired using our proposed integrated sequence and reconstruction that uses a retrospective approach to motion correction. In addition, we demonstrate a substantial improvement in overall image quality by combining phase correction with complex averaging, which reduces the Rician noise that biases noisy data. Copyright © 2012 Wiley Periodicals, Inc.
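
    The benefit of complex averaging over magnitude averaging mentioned above can be illustrated with a small synthetic sketch; the signal level, noise level and number of averages are arbitrary, and this is not the paper's reconstruction code.

    ```python
    # Synthetic sketch: averaging complex repeats before taking the magnitude
    # keeps the noise zero-mean, whereas averaging magnitudes inflates a
    # low-SNR signal (the Rician bias discussed above).
    import numpy as np

    rng = np.random.default_rng(2)
    true_signal = 1.0          # arbitrary units, low-SNR voxel
    sigma = 1.0                # per-channel Gaussian noise standard deviation
    n_avg, n_trials = 8, 20000

    noise = rng.normal(0, sigma, (n_trials, n_avg)) + 1j * rng.normal(0, sigma, (n_trials, n_avg))
    repeats = true_signal + noise

    magnitude_avg = np.abs(repeats).mean(axis=1).mean()   # Rician-biased estimate
    complex_avg = np.abs(repeats.mean(axis=1)).mean()     # bias strongly reduced

    print(f"true signal          : {true_signal:.3f}")
    print(f"magnitude averaging  : {magnitude_avg:.3f}")
    print(f"complex averaging    : {complex_avg:.3f}")
    ```

    Running the sketch shows the magnitude-averaged estimate sitting well above the true value while the complex-averaged one lands close to it, which is why the pipeline applies phase correction before averaging in the complex domain.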

  16. Acceleration of aircraft-level Traffic Flow Management

    NASA Astrophysics Data System (ADS)

    Rios, Joseph Lucio

    This dissertation describes novel approaches to solving large-scale, high fidelity, aircraft-level Traffic Flow Management scheduling problems. Depending on the methods employed, solving these problems to optimality can take longer than the length of the planning horizon in question. Research in this domain typically focuses on the quality of the modeling used to describe the problem and the benefits achieved from the optimized solution, often treating computational aspects as secondary or tertiary. The work presented here takes the complementary view and considers the computational aspect as the primary concern. To this end, a previously published model for solving this Traffic Flow Management scheduling problem is used as a starting point for this study. The model proposed by Bertsimas and Stock-Patterson is a binary integer program taking into account all major resource capacities and the trajectories of each flight to decide which flights should be held in which resource for what amount of time in order to satisfy all capacity requirements. For large instances, the solve time using state-of-the-art solvers is prohibitive for use within a potential decision support tool. With this dissertation, however, it will be shown that solving can be achieved in reasonable time for instances of real-world size. Five other techniques developed and tested for this dissertation will be described in detail. These are heuristic methods that provide good results. Performance is measured in terms of runtime and "optimality gap." We then describe the most successful method presented in this dissertation: Dantzig-Wolfe Decomposition. Results indicate that a parallel implementation of Dantzig-Wolfe Decomposition optimally solves the original problem in much reduced time and with better integrality and a smaller optimality gap than any of the heuristic methods or state-of-the-art commercial solvers. The solution quality improves in every measurable way as the number of subproblems solved in parallel increases. A maximal decomposition provides the best results of any method tested. The convergence qualities of Dantzig-Wolfe Decomposition have been criticized in the past, so we examine what makes the Bertsimas-Stock Patterson model so amenable to use of this method. These mathematical qualities of the model are generalized to provide guidance on other problems that may benefit from massively parallel Dantzig-Wolfe Decomposition. This result, together with the development of the software and the experimental results indicating the feasibility of real-time, nationwide Traffic Flow Management scheduling, represents the major contributions of this dissertation.

  17. Design and development of polyphenylene oxide foam as a reusable internal insulation for LH2 tanks

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Material specification and fabrication process procedures for foam production are presented. The properties of mechanical strength, modulus of elasticity, density and thermal conductivity were measured and related to foam quality. Properties unique to the foam, such as gas-layer insulation, a density gradient parallel to the fiber direction, and gas flow conductance in both directions, were correlated with foam quality. Inspection and quality control test procedures are outlined and photographs of test equipment and test specimens are shown.

  18. Performance-related test for asphalt emulsions.

    DOT National Transportation Integrated Search

    2004-10-01

    Yield stress was investigated as a potential quality control parameter for asphalt emulsions. Viscometric data were determined using the concentric cylinder, parallel plate, and cone and plate geometries with rotational rheometers. We also investigat...

  19. Jan Potocki et le "Gothic Novel"

    ERIC Educational Resources Information Center

    Finne, Jacques

    1970-01-01

    Establishes a parallel between the supernatural and fantastic qualities of Le Comte Jan Potocki's literary works, and the English gothic novels by comparing the elements of terror, mysterious atmosphere, and the supernatural beings involved. (DS)

  20. Parallel-In-Time For Moving Meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Manteuffel, T. A.; Southworth, B.

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  1. Local search to improve coordinate-based task mapping

    DOE PAGES

    Balzuweit, Evan; Bunde, David P.; Leung, Vitus J.; ...

    2015-10-31

    We present a local search strategy to improve the coordinate-based mapping of a parallel job’s tasks to the MPI ranks of its parallel allocation in order to reduce network congestion and the job’s communication time. The goal is to reduce the number of network hops between communicating pairs of ranks. Our target is applications with a nearest-neighbor stencil communication pattern running on mesh systems with non-contiguous processor allocation, such as Cray XE and XK Systems. Utilizing the miniGhost mini-app, which models the shock physics application CTH, we demonstrate that our strategy reduces application running time while also reducing the runtime variability. Furthermore, we show that mapping quality can vary based on the selected allocation algorithm, even between allocation algorithms of similar apparent quality.
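
    A minimal sketch of the underlying objective and search move, under simplifying assumptions (3D mesh coordinates, Manhattan hop counts, random pairwise swaps), is given below; it is a hypothetical illustration of the general idea, not the authors' algorithm or the miniGhost communication pattern.

      import numpy as np

      def total_hops(coords, pairs):
          """Sum of mesh hops (Manhattan distance) over all communicating rank pairs."""
          return sum(int(np.abs(coords[a] - coords[b]).sum()) for a, b in pairs)

      def local_search(coords, pairs, iters=2000, seed=0):
          """Greedy local search: swap the node assignments of two ranks whenever it lowers total hops."""
          rng = np.random.default_rng(seed)
          coords = np.array(coords)
          best = total_hops(coords, pairs)
          for _ in range(iters):
              i, j = rng.integers(len(coords), size=2)
              coords[[i, j]] = coords[[j, i]]        # tentative swap of the two ranks' nodes
              cost = total_hops(coords, pairs)
              if cost < best:
                  best = cost                        # keep the improving swap
              else:
                  coords[[i, j]] = coords[[j, i]]    # revert
          return coords, best

      # Toy case: 8 ranks on a 2x2x2 mesh with a ring (nearest-neighbour) communication pattern
      nodes = np.array([(x, y, z) for x in range(2) for y in range(2) for z in range(2)])
      ring = [(r, (r + 1) % 8) for r in range(8)]
      _, hops = local_search(nodes, ring)
      print("total hops after local search:", hops)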

  2. Accelerated acquisition of tagged MRI for cardiac motion correction in simultaneous PET-MR: Phantom and patient studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Chuan, E-mail: chuan.huang@stonybrookmedicine.edu; Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115; Departments of Radiology, Psychiatry, Stony Brook Medicine, Stony Brook, New York 11794

    2015-02-15

    Purpose: Degradation of image quality caused by cardiac and respiratory motions hampers the diagnostic quality of cardiac PET. It has been shown that improved diagnostic accuracy of myocardial defect can be achieved by tagged MR (tMR) based PET motion correction using simultaneous PET-MR. However, one major hurdle for the adoption of tMR-based PET motion correction in the PET-MR routine is the long acquisition time needed for the collection of fully sampled tMR data. In this work, the authors propose an accelerated tMR acquisition strategy using parallel imaging and/or compressed sensing and assess the impact on the tMR-based motion corrected PET using phantom and patient data. Methods: Fully sampled tMR data were acquired simultaneously with PET list-mode data on two simultaneous PET-MR scanners for a cardiac phantom and a patient. Parallel imaging and compressed sensing were retrospectively performed by GRAPPA and kt-FOCUSS algorithms with various acceleration factors. Motion fields were estimated using nonrigid B-spline image registration from both the accelerated and fully sampled tMR images. The motion fields were incorporated into a motion corrected ordered subset expectation maximization reconstruction algorithm with motion-dependent attenuation correction. Results: Although tMR acceleration introduced image artifacts into the tMR images for both phantom and patient data, motion corrected PET images yielded similar image quality as those obtained using the fully sampled tMR images for low to moderate acceleration factors (<4). Quantitative analysis of myocardial defect contrast over ten independent noise realizations showed similar results. It was further observed that although the image quality of the motion corrected PET images deteriorates for high acceleration factors, the images were still superior to the images reconstructed without motion correction. Conclusions: Accelerated tMR images obtained with more than 4 times acceleration can still provide relatively accurate motion fields and yield tMR-based motion corrected PET images with similar image quality as those reconstructed using fully sampled tMR data. The reduction of tMR acquisition time makes it more compatible with routine clinical cardiac PET-MR studies.

  3. Automatic selection of dynamic data partitioning schemes for distributed memory multicomputers

    NASA Technical Reports Server (NTRS)

    Palermo, Daniel J.; Banerjee, Prithviraj

    1995-01-01

    For distributed memory multicomputers such as the Intel Paragon, the IBM SP-2, the NCUBE/2, and the Thinking Machines CM-5, the quality of the data partitioning for a given application is crucial to obtaining high performance. This task has traditionally been the user's responsibility, but in recent years much effort has been directed to automating the selection of data partitioning schemes. Several researchers have proposed systems that are able to produce data distributions that remain in effect for the entire execution of an application. For complex programs, however, such static data distributions may be insufficient to obtain acceptable performance. The selection of distributions that dynamically change over the course of a program's execution adds another dimension to the data partitioning problem. In this paper, we present a technique that can be used to automatically determine which partitionings are most beneficial over specific sections of a program while taking into account the added overhead of performing redistribution. This system is being built as part of the PARADIGM (PARAllelizing compiler for DIstributed memory General-purpose Multicomputers) project at the University of Illinois. The complete system will provide a fully automated means to parallelize programs written in a serial programming model obtaining high performance on a wide range of distributed-memory multicomputers.

  4. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    NASA Astrophysics Data System (ADS)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well-tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.

  5. Filterless frequency 12-tupling optical millimeter-wave generation using two cascaded dual-parallel Mach-Zehnder modulators.

    PubMed

    Zhu, Zihang; Zhao, Shanghong; Zheng, Wanze; Wang, Wei; Lin, Baoqin

    2015-11-10

    A novel frequency 12-tupling optical millimeter-wave (mm-wave) generation using two cascaded dual-parallel Mach-Zehnder modulators (DP-MZMs) without an optical filter is proposed and demonstrated by computer simulation. By properly adjusting the amplitude and phase of the radio frequency (RF) driving signal and the direct current (DC) bias points of the two DP-MZMs, a 120 GHz mm-wave with an optical sideband suppression ratio (OSSR) of 25.1 dB and a radio frequency spurious suppression ratio (RFSSR) of 19.1 dB is shown to be generated from a 10 GHz RF driving signal, which greatly reduces the required response frequency of the electronic devices. Furthermore, the performance remains acceptable even if the phase difference of the RF driving signals, the RF driving voltage, and the DC bias voltage deviate from the ideal values to a certain degree. Since no optical filter is employed to suppress the undesired optical sidebands, a high-spectral-purity mm-wave signal tunable from 48 to 216 GHz can be obtained theoretically when an RF driving signal from 4 to 18 GHz is applied to the DP-MZMs, and the system can be readily implemented in wavelength-division-multiplexing upconversion systems to provide a high-quality optical local oscillator signal.

  6. QuickProbs 2: Towards rapid construction of high-quality alignments of large protein families

    PubMed Central

    Gudyś, Adam; Deorowicz, Sebastian

    2017-01-01

    The ever-increasing size of sequence databases caused by the development of high throughput sequencing poses one of the greatest challenges yet to multiple alignment algorithms. As we show, well-established techniques employed for increasing alignment quality, i.e., refinement and consistency, are ineffective when large protein families are investigated. We present QuickProbs 2, an algorithm for multiple sequence alignment. Based on probabilistic models, equipped with novel column-oriented refinement and selective consistency, it offers outstanding accuracy. When analysing hundreds of sequences, QuickProbs 2 is noticeably better than ClustalΩ and MAFFT, the previous leaders for processing numerous protein families. In the case of smaller sets, for which consistency-based methods are the best performing, QuickProbs 2 is also superior to the competitors. Due to the low computational requirements of selective consistency and the utilization of massively parallel architectures, the presented algorithm has similar execution times to ClustalΩ, and is orders of magnitude faster than full consistency approaches, like MSAProbs or PicXAA. All these make QuickProbs 2 an excellent tool for aligning families ranging from few, to hundreds of proteins. PMID:28139687

  7. Integrated approach for demarcating subsurface pollution and saline water intrusion zones in SIPCOT area: a case study from Cuddalore in Southern India.

    PubMed

    Sankaran, S; Sonkamble, S; Krishnakumar, K; Mondal, N C

    2012-08-01

    This paper deals with systematic hydrogeological, geophysical, and hydrochemical investigations carried out in the SIPCOT area in Southern India to demarcate groundwater pollution and saline intrusion through the Uppanar River, which flows parallel to the sea coast with high salinity (average TDS 28,870 mg/l) due to back waters as well as discharge of industrial and domestic effluents. Hydrogeological and geophysical investigations comprising topographic survey, self-potential, multi-electrode resistivity imaging, and water quality monitoring revealed the extent of saline water intrusion in the south and pockets of subsurface pollution in the north of the study area. Since the area is beset with highly permeable unconfined quaternary alluvium forming a potential aquifer at shallow depth, long-term excessive pumping and the influence of the River have led to lowering of the water table and degradation of water quality through increased salinity, thereby generating a reversal of the hydraulic gradient in the south. The improper management of industrial wastes and leftover chemicals by closed industries has led to surface and subsurface pollution in the north of the study area.

  8. Design of a dataway processor for a parallel image signal processing system

    NASA Astrophysics Data System (ADS)

    Nomura, Mitsuru; Fujii, Tetsuro; Ono, Sadayasu

    1995-04-01

    Recently, demands for high-speed signal processing have been increasing, especially in the fields of image data compression, computer graphics, and medical imaging. To achieve sufficient power for real-time image processing, we have been developing parallel signal-processing systems. This paper describes a communication processor called the 'dataway processor' designed for a new scalable parallel signal-processing system. The processor has six high-speed communication links (Dataways), a data-packet routing controller, a RISC CORE, and a DMA controller. Each communication link operates 8 bits in parallel in full duplex mode at 50 MHz. Moreover, data routing, DMA, and CORE operations are processed in parallel. Therefore, sufficient throughput is available for high-speed digital video signals. The processor is designed in a top-down fashion using a CAD system called 'PARTHENON.' The hardware is fabricated using 0.5-micrometer CMOS technology and comprises about 200 K gates.

  9. Elemental composition and size distribution of particulates in Cleveland, Ohio

    NASA Technical Reports Server (NTRS)

    King, R. B.; Fordyce, J. S.; Neustadter, H. E.; Leibecki, H. F.

    1975-01-01

    Measurements were made of the elemental particle size distribution at five contrasting urban environments with different source-type distributions in Cleveland, Ohio. Air quality conditions ranged from normal to air pollution alert levels. A parallel network of high-volume cascade impactors (5-stage) were used for simultaneous sampling on glass fiber surfaces for mass determinations and on Whatman-41 surfaces for elemental analysis by neutron activation for 25 elements. The elemental data are assessed in terms of distribution functions and interrelationships and are compared between locations as a function of resultant wind direction in an attempt to relate the findings to sources.

  10. Elemental composition and size distribution of particulates in Cleveland, Ohio

    NASA Technical Reports Server (NTRS)

    Leibecki, H. F.; King, R. B.; Fordyce, J. S.; Neustadter, H. E.

    1975-01-01

    Measurements have been made of the elemental particle size distribution at five contrasting urban environments with different source-type distributions in Cleveland, Ohio. Air quality conditions ranged from normal to air pollution alert levels. A parallel network of high-volume cascade impactors (5-stage) were used for simultaneous sampling on glass fiber surfaces for mass determinations and on Whatman-41 surfaces for elemental analysis by neutron activation for 25 elements. The elemental data are assessed in terms of distribution functions and interrelationships and are compared between locations as a function of resultant wind direction in an attempt to relate the findings to sources.

  11. Parallel Computing:. Some Activities in High Energy Physics

    NASA Astrophysics Data System (ADS)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  12. Externally Calibrated Parallel Imaging for 3D Multispectral Imaging Near Metallic Implants Using Broadband Ultrashort Echo Time Imaging

    PubMed Central

    Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.

    2017-01-01

    Purpose To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613

  13. High throughput screening of particle conditioning operations: I. System design and method development.

    PubMed

    Noyes, Aaron; Huffman, Ben; Godavarti, Ranga; Titchener-Hooker, Nigel; Coffman, Jonathan; Sunasara, Khurram; Mukhopadhyay, Tarit

    2015-08-01

    The biotech industry is under increasing pressure to decrease both time to market and development costs. Simultaneously, regulators are expecting increased process understanding. High throughput process development (HTPD) employs small volumes, parallel processing, and high throughput analytics to reduce development costs and speed the development of novel therapeutics. As such, HTPD is increasingly viewed as integral to improving developmental productivity and deepening process understanding. Particle conditioning steps such as precipitation and flocculation may be used to aid the recovery and purification of biological products. In this first part of two articles, we describe an ultra scale-down system (USD) for high throughput particle conditioning (HTPC) composed of off-the-shelf components. The apparatus is comprised of a temperature-controlled microplate with magnetically driven stirrers and integrated with a Tecan liquid handling robot. With this system, 96 individual reaction conditions can be evaluated in parallel, including downstream centrifugal clarification. A comprehensive suite of high throughput analytics enables measurement of product titer, product quality, impurity clearance, clarification efficiency, and particle characterization. HTPC at the 1 mL scale was evaluated with fermentation broth containing a vaccine polysaccharide. The response profile was compared with the Pilot-scale performance of a non-geometrically similar, 3 L reactor. An engineering characterization of the reactors and scale-up context examines theoretical considerations for comparing this USD system with larger scale stirred reactors. In the second paper, we will explore application of this system to industrially relevant vaccines and test different scale-up heuristics. © 2015 Wiley Periodicals, Inc.

  14. Thermal Investigation of Interaction between High-power CW-laser Radiation and a Water-jet

    NASA Astrophysics Data System (ADS)

    Brecher, Christian; Janssen, Henning; Eckert, Markus; Schmidt, Florian

    The technology of a water-guided laser beam has been industrially established for micromachining. Pulsed laser radiation is guided via a water jet (diameter: 25-250 μm) using total internal reflection. Due to the cylindrical jet shape, the depth of field increases to above 50 mm, enabling parallel kerfs in contrast to conventional laser systems. However, greater material thicknesses and macro geometries cannot be machined in an economically viable way due to low average laser powers. Fraunhofer IPT has successfully combined a high-power continuous-wave (CW) fiber laser (6 kW) and water jet technology. The main challenge of guiding high-power laser radiation in water is the energy transferred to the jet by absorption, which decreases its stability. A model of laser-water interaction in the water jet has been developed and validated experimentally. Based on the results, an upscaling of the system technology to 30 kW is discussed, offering high potential for cutting challenging materials at high quality and high speed.

  15. Cognitive remediation in schizophrenia: A methodological appraisal of systematic reviews and meta-analyses.

    PubMed

    Bryce, Shayden; Sloan, Elise; Lee, Stuart; Ponsford, Jennie; Rossell, Susan

    2016-04-01

    Systematic reviews and meta-analyses are a primary source of evidence when evaluating the benefit(s) of cognitive remediation (CR) in schizophrenia. These studies are designed to rigorously synthesize scientific literature; however, they cannot be assumed to be of high methodological quality. The aims of this report were to: 1) review the use of systematic reviews and meta-analyses regarding CR in schizophrenia; 2) conduct a systematic methodological appraisal of published reports examining the benefits of this intervention on core outcome domains; and 3) compare the correspondence between methodological and reporting quality. Electronic databases were searched for relevant articles. Twenty-one reviews met inclusion criteria and were scored according to the AMSTAR checklist, a validated scale of methodological quality. Five meta-analyses were also scored according to the PRISMA statement to compare 'quality of conduct' with 'quality of reporting'. Most systematic reviews and meta-analyses shared strengths and fell within a 'medium' level of methodological quality. Nevertheless, there were consistent areas of potential weakness that were not addressed by most reviews. These included the lack of protocol registration, uncertainty regarding independent data extraction and consensus procedures, and the minimal assessment of publication bias. Moreover, quality of conduct may not necessarily parallel quality of reporting, suggesting that consideration of these methods independently may be important. Reviews concerning CR for schizophrenia are a valuable source of evidence. However, the methodological quality of these reports may require additional consideration. Enhancing quality of conduct is essential for enabling research literature to be interpreted with confidence. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance of over ~2 PFlops are demonstrated, opening the way for large scale modelling of LWFA scenarios.

  17. Optical investigation of domain resonances in magnetic garnet films

    NASA Astrophysics Data System (ADS)

    Bahlmann, N.; Gerhardt, R.; Dötsch, H.

    1996-08-01

    Magnetic garnet films of composition (Y,Bi)3(Fe,Al)5O12 are grown by liquid phase epitaxy on [111] oriented substrates of Gd3Ga5O12. Lattices of parallel stripe domains are stabilized by a static induction applied in the film plane. The two branches DR± of the domain resonance and the domain wall resonance DWR are excited by microwave magnetic fields in the frequency range up to 6 GHz. Light passing the stripe domain lattice parallel to the film normal is modulated at the excitation frequency. A modulation bandwidth of more than 2 GHz is observed. The resonances can be calculated with high accuracy by a hybridization model, if the quality factor Q of the film exceeds 0.5. For Q < 0.5 a simple approximation is used to describe the superposition of the DR+ and DR- resonances. The superposition model predicts two stability states of the resonance DR+ which are observed experimentally. From the optical measurements precession angles of the resonance DR- of nearly 6° and wall oscillation amplitudes up to 25 nm are derived.

  18. Three-dimensional motion-picture imaging of dynamic object by parallel-phase-shifting digital holographic microscopy using an inverted magnification optical system

    NASA Astrophysics Data System (ADS)

    Fukuda, Takahito; Shinomura, Masato; Xia, Peng; Awatsuji, Yasuhiro; Nishio, Kenzo; Matoba, Osamu

    2017-04-01

    We constructed a parallel-phase-shifting digital holographic microscopy (PPSDHM) system using an inverted magnification optical system, and succeeded in three-dimensional (3D) motion-picture imaging of the 3D displacement of a microscopic object. In the PPSDHM system, the inverted and afocal magnification optical system consisted of a microscope objective (16.56 mm focal length and 0.25 numerical aperture) and a convex lens (300 mm focal length and 82 mm aperture diameter). A polarization-imaging camera was used to record multiple phase-shifted holograms with a single-shot exposure. We recorded an alum crystal, sinking down in an aqueous solution of alum, with the constructed PPSDHM system at 60 frames/s for about 20 s and reconstructed a high-quality 3D motion-picture image of the crystal. Then, we calculated the displacements of the crystal from the displacements measured in the focal plane and the magnification of the magnification optical system, and obtained the 3D trajectory of the crystal from those values.

  19. Cardiac imaging at 7 Tesla: Single- and two-spoke radiofrequency pulse design with 16-channel parallel excitation.

    PubMed

    Schmitter, Sebastian; DelaBarre, Lance; Wu, Xiaoping; Greiser, Andreas; Wang, Dingxin; Auerbach, Edward J; Vaughan, J Thomas; Uğurbil, Kâmil; Van de Moortele, Pierre-François

    2013-11-01

    Higher signal to noise ratio (SNR) and improved contrast have been demonstrated at ultra-high magnetic fields (≥7 Tesla [T]) in multiple targets, often with multi-channel transmit methods to address the deleterious impact on tissue contrast due to spatial variations in B1+ profiles. When imaging the heart at 7T, however, respiratory and cardiac motion, as well as B0 inhomogeneity, greatly increase the methodological challenge. In this study we compare two-spoke parallel transmit (pTX) RF pulses with static B1+ shimming in cardiac imaging at 7T. Using a 16-channel pTX system, slice-selective two-spoke pTX pulses and static B1+ shimming were applied in cardiac CINE imaging. B1+ and B0 mapping required modified cardiac triggered sequences. Excitation homogeneity and RF energy were compared in different imaging orientations. Two-spoke pulses provide higher excitation homogeneity than B1+ shimming, especially in the more challenging posterior region of the heart. The peak value of channel-wise RF energy was reduced, allowing for a higher flip angle, hence increased tissue contrast. Image quality with two-spoke excitation proved to be stable throughout the entire cardiac cycle. Two-spoke pTX excitation has been successfully demonstrated in the human heart at 7T, with improved image quality and reduced RF pulse energy when compared with B1+ shimming. Copyright © 2013 Wiley Periodicals, Inc.

  20. Reduction of product-related species during the fermentation and purification of a recombinant IL-1 receptor antagonist at the laboratory and pilot scale.

    PubMed

    Schirmer, Emily B; Golden, Kathryn; Xu, Jin; Milling, Jesse; Murillo, Alec; Lowden, Patricia; Mulagapati, Srihariraju; Hou, Jinzhao; Kovalchin, Joseph T; Masci, Allyson; Collins, Kathryn; Zarbis-Papastoitsis, Gregory

    2013-08-01

    Through a parallel approach of tracking product quality through fermentation and purification development, a robust process was designed to reduce the levels of product-related species. Three biochemically similar product-related species were identified as byproducts of host-cell enzymatic activity. To modulate intracellular proteolytic activity, key fermentation parameters (temperature, pH, trace metals, EDTA levels, and carbon source) were evaluated through bioreactor optimization, while balancing negative effects on growth, productivity, and oxygen demand. The purification process was based on three non-affinity steps and resolved product-related species by exploiting small charge differences. Using statistical design of experiments for elution conditions, a high-resolution cation exchange capture column was optimized for resolution and recovery. Further reduction of product-related species was achieved by evaluating a matrix of conditions for a ceramic hydroxyapatite column. The optimized fermentation process was transferred from the 2-L laboratory scale to the 100-L pilot scale and the purification process was scaled accordingly to process the fermentation harvest. The laboratory- and pilot-scale processes resulted in similar process recoveries of 60 and 65%, respectively, and in a product that was of equal quality and purity to that of small-scale development preparations. The parallel approach for up- and downstream development was paramount in achieving a robust and scalable clinical process. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Progress on complementary patterning using plasmon-excited electron beamlets (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Du, Zhidong; Chen, Chen; Pan, Liang

    2017-04-01

    Maskless lithography using parallel electron beamlets is a promising solution for next generation scalable maskless nanolithography. Researchers have focused on this goal but have been unable to find a robust technology to generate and control high-quality electron beamlets with satisfactory brightness and uniformity. In this work, we will aim to address this challenge by developing a revolutionary surface-plasmon-enhanced-photoemission (SPEP) technology to generate massively-parallel electron beamlets for maskless nanolithography. The new technology is built upon our recent breakthroughs in plasmonic lenses, which will be used to excite and focus surface plasmons to generate massively-parallel electron beamlets through photoemission. Specifically, the proposed SPEP device consists of an array of plasmonic lens and electrostatic micro-lens pairs, each pair independently producing an electron beamlet. During lithography, a spatial optical modulator will dynamically project light onto individual plasmonic lenses to control the switching and brightness of electron beamlets. The photons incident onto each plasmonic lens are concentrated into a diffraction-unlimited spot as localized surface plasmons to excite the local electrons to near their vacuum levels. Meanwhile, the electrostatic micro-lens extracts the excited electrons to form a focused beamlet, which can be rastered across a wafer to perform lithography. Studies showed that surface plasmons can enhance the photoemission by orders of magnitude. This SPEP technology can scale up the maskless lithography process to write at throughputs of wafers per hour. In this talk, we will report the mechanism of the strong electron-photon couplings and the locally enhanced photoexcitation, the design of a SPEP device, an overview of our proof-of-concept study, and demonstrated parallel lithography of 20-50 nm features.

  2. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculation has become feasible owing to advances in computer technology. However, recent gains stem largely from the emergence of multi-core high-performance computers, so parallel computing is key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions along with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.
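
    The distributed-memory (MPI) mode described above follows the usual Monte Carlo pattern of splitting histories across ranks and reducing the tallies. The sketch below is a generic mpi4py illustration of that pattern with an invented toy tally, not PHITS itself (which is Fortran); the shared-memory OpenMP mode plays the analogous role with threads inside one node.

      # Run with, e.g.:  mpiexec -n 4 python mc_sketch.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_total = 1_000_000                        # total number of particle histories
      n_local = n_total // size                  # histories handled by this rank
      rng = np.random.default_rng(seed=rank)     # independent random stream per rank

      # Toy tally: fraction of particles "absorbed" within 5 mean free paths (hypothetical physics)
      depths = rng.exponential(scale=1.0, size=n_local)
      local_tally = int(np.count_nonzero(depths < 5.0))

      total = comm.reduce(local_tally, op=MPI.SUM, root=0)   # combine tallies on rank 0
      if rank == 0:
          print(f"absorbed fraction = {total / (n_local * size):.4f} over {size} ranks")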

  3. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  4. Scalable Unix commands for parallel processors : a high-performance implementation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ong, E.; Lusk, E.; Gropp, W.

    2001-06-22

    We describe a family of MPI applications we call the Parallel Unix Commands. These commands are natural parallel versions of common Unix user commands such as ls, ps, and find, together with a few similar commands particular to the parallel environment. We describe the design and implementation of these programs and present some performance results on a 256-node Linux cluster. The Parallel Unix Commands are open source and freely available.
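
    The idea can be sketched in a few lines: each rank runs the command locally and rank 0 gathers and prints the results. The example below is a hypothetical mpi4py analogue of a parallel ls, not the original C implementation described above.

      # Hypothetical "parallel ls": every rank lists a directory on its own node,
      # rank 0 prints the gathered listings.  Run with:  mpiexec -n 4 python pls.py /tmp
      import os, socket, sys
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      path = sys.argv[1] if len(sys.argv) > 1 else "."

      try:
          entries = sorted(os.listdir(path))
      except OSError as exc:
          entries = [f"<error: {exc}>"]

      results = comm.gather((socket.gethostname(), entries), root=0)  # one (host, listing) per rank

      if comm.Get_rank() == 0:
          for host, names in results:
              print(f"--- {host}:{path} ---")
              for name in names:
                  print("  " + name)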

  5. High order parallel numerical schemes for solving incompressible flows

    NASA Technical Reports Server (NTRS)

    Lin, Avi; Milner, Edward J.; Liou, May-Fun; Belch, Richard A.

    1992-01-01

    The use of parallel computers for numerically solving flow fields has gained much importance in recent years. This paper introduces a new high order numerical scheme for computational fluid dynamics (CFD) specifically designed for parallel computational environments. A distributed MIMD system gives the flexibility of treating different elements of the governing equations with totally different numerical schemes in different regions of the flow field. The parallel decomposition of the governing operator to be solved is the primary parallel split. The primary parallel split was studied using a hypercube like architecture having clusters of shared memory processors at each node. The approach is demonstrated using examples of simple steady state incompressible flows. Future studies should investigate the secondary split because, depending on the numerical scheme that each of the processors applies and the nature of the flow in the specific subdomain, it may be possible for a processor to seek better, or higher order, schemes for its particular subcase.

  6. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    PubMed

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.

  7. Parallel computing in experimental mechanics and optical measurement: A review (II)

    NASA Astrophysics Data System (ADS)

    Wang, Tianyi; Kemao, Qian

    2018-05-01

    With advantages such as non-destructiveness, high sensitivity and high accuracy, optical techniques have been successfully applied to the measurement of various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in pursuit of higher image resolutions for higher accuracy, the computational burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques due to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high-speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, which include digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.

  8. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868

  9. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6  mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
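
    The per-GPU workload assignment described in these two abstracts can be sketched as splitting the block of A-scans into contiguous batches, one per device. The code below is a hypothetical CuPy illustration with a toy FFT-based per-A-scan operation, not the authors' GD-OCM pipeline; CuPy kernel launches are asynchronous, so the devices work concurrently until the results are copied back.

      import numpy as np
      import cupy as cp

      n_gpus = cp.cuda.runtime.getDeviceCount()
      ascans = np.random.rand(1000, 1000).astype(np.float32)    # 1000 A-scans x 1000 samples
      batches = np.array_split(ascans, n_gpus, axis=0)          # contiguous batch per device

      device_results = []
      for dev, batch in enumerate(batches):
          with cp.cuda.Device(dev):
              d = cp.asarray(batch)                              # host -> device copy
              spec = cp.abs(cp.fft.fft(d, axis=1))               # toy per-A-scan processing
              device_results.append(spec)

      host_parts = []
      for r in device_results:
          with r.device:                                         # copy each batch back from its GPU
              host_parts.append(r.get())
      volume = np.vstack(host_parts)
      print(volume.shape)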

  10. Optoelectronic associative recall using motionless-head parallel readout optical disk

    NASA Astrophysics Data System (ADS)

    Marchand, P. J.; Krishnamoorthy, A. V.; Ambs, P.; Esener, S. C.

    1990-12-01

    High data rates, low retrieval times, and simple implementation are presently shown to be obtainable by means of a motionless-head 2D parallel-readout system for optical disks. Since the optical disk obviates mechanical head motions for access, focusing, and tracking, addressing is performed exclusively through the disk's rotation. Attention is given to a high-performance associative memory system configuration which employs a parallel readout disk.

  11. Orientation and Order in Shear-Aligned Thin Films of Cylinder-Forming Block Copolymers

    NASA Astrophysics Data System (ADS)

    Register, Richard

    The regularity and tunability of the nanoscale structure in block copolymers make their thin films attractive as nanolithographic templates; however, in the absence of a guiding field, self-assembly produces a polygrain structure with no particular orientation and a high density of defects. As demonstrated in the elegant studies of Ed Kramer and coworkers, graphoepitaxy can provide local control over domain orientation, with a dramatic reduction in defect density. Alternatively, cylindrical microdomains lying in the plane of the film can be aligned over macroscopic areas by applying shear stress at the film surface. In non-sheared films of polystyrene-poly(n-hexylmethacrylate) diblocks, PS-PHMA, the PS cylinder axis orientation relative to the surface switches from parallel to perpendicular as a function of film thickness; this oscillation is damped out as the fraction of the PS block increases, away from the sphere-cylinder phase boundary. In aligned films, thicknesses which possess the highest coverage of parallel cylinders prior to shear show the highest quality of alignment post-shear, as measured by the in-plane orientational order parameter. In well-aligned samples of optimal thickness, the quality of alignment is limited by isolated dislocations, whose density is highest at high PS contents, and by undulations in the cylinders' trajectories, whose impact is most severe at low PS contents; consequently, polymers whose compositions lie in the middle of the cylinder-forming region exhibit the highest quality of alignment. The dynamics of the alignment process are also investigated, and fit to a melting-recrystallization model which allows for the determination of two key alignment parameters: the critical stress needed for alignment, and an orientation rate constant. For films containing a monolayer of cylindrical domains, as PS weight fraction or overall molecular weight increases, the critical stress increases moderately, while the rate of alignment drastically decreases. As the number of layers of cylinders in the film increases, the critical stress decreases modestly, while the rate remains unchanged; substrate wetting condition has no measurable influence on alignment response. [Work of Raleigh Davis, in collaboration with Paul Chaikin.]
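
    The in-plane orientational order parameter used here to score alignment can be computed directly from measured cylinder-axis angles. The sketch below uses the standard 2D nematic definition S = |<exp(2i*theta)>| on synthetic angle data; the angle sets are invented for illustration, and the exact estimator used in the work may differ.

      import numpy as np

      def orientational_order(theta):
          """2D nematic order parameter |<exp(2i*theta)>| for cylinder-axis angles in radians.
          Returns 1.0 for perfect alignment and 0.0 for an isotropic distribution; the factor
          of 2 makes theta and theta + pi equivalent, as required for headless cylinder axes."""
          return float(np.abs(np.exp(2j * np.asarray(theta)).mean()))

      rng = np.random.default_rng(0)
      aligned = rng.normal(0.0, 0.1, 5000)        # narrow spread about the shear direction
      isotropic = rng.uniform(0.0, np.pi, 5000)   # random in-plane orientations
      print(orientational_order(aligned), orientational_order(isotropic))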

  12. Efficient high-throughput biological process characterization: Definitive screening design with the ambr250 bioreactor system.

    PubMed

    Tai, Mitchell; Ly, Amanda; Leung, Inne; Nayar, Gautam

    2015-01-01

    The burgeoning pipeline for new biologic drugs has increased the need for high-throughput process characterization to efficiently use process development resources. Breakthroughs in highly automated and parallelized upstream process development have led to technologies such as the 250-mL automated mini bioreactor (ambr250™) system. Furthermore, developments in modern design of experiments (DoE) have promoted the use of definitive screening design (DSD) as an efficient method to combine factor screening and characterization. Here we utilize the 24-bioreactor ambr250™ system with 10-factor DSD to demonstrate a systematic experimental workflow to efficiently characterize an Escherichia coli (E. coli) fermentation process for recombinant protein production. The generated process model is further validated by laboratory-scale experiments and shows how the strategy is useful for quality by design (QbD) approaches to control strategies for late-stage characterization. © 2015 American Institute of Chemical Engineers.

  13. Fabrications and Performance of Wireless LC Pressure Sensors through LTCC Technology.

    PubMed

    Lin, Lin; Ma, Mingsheng; Zhang, Faqiang; Liu, Feng; Liu, Zhifu; Li, Yongxiang

    2018-01-25

    This paper presents a passive wireless pressure sensor composed of a planar spiral inductor and a parallel-plate cavity capacitor fabricated through low-temperature co-fired ceramic (LTCC) technology. The LTCC material with a low Young's modulus of ~65 GPa prepared by our laboratory was used to obtain high sensitivity. A three-step lamination process was applied to construct a high quality cavity structure without using any sacrificial materials. The effects of the thickness of the sensing membranes on the sensitivity and detection range of the pressure sensors were investigated. The sensor with a 148 μm sensing membrane showed the highest sensitivity of 3.76 kHz/kPa, and the sensor with a 432 μm sensing membrane presented a high detection limit of 2660 kPa. The tunable sensitivity and detection limit of the wireless pressure sensors can meet the requirements of different scenarios.

  14. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using some parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message passing code, 2) the Portland Group's HPF compiler, and 3) using compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.

  15. Increasing the reach of forensic genetics with massively parallel sequencing.

    PubMed

    Budowle, Bruce; Schmedes, Sarah E; Wendt, Frank R

    2017-09-01

    The field of forensic genetics has made great strides in the analysis of biological evidence related to criminal and civil matters. More so, the discipline has set a standard of performance and quality in the forensic sciences. The advent of massively parallel sequencing will allow the field to expand its capabilities substantially. This review describes the salient features of massively parallel sequencing and how it can impact forensic genetics. The features of this technology offer increased number and types of genetic markers that can be analyzed, higher throughput of samples, and the capability of targeting different organisms, all by one unifying methodology. While there are many applications, three are described where massively parallel sequencing will have immediate impact: molecular autopsy, microbial forensics and differentiation of monozygotic twins. The intent of this review is to expose the forensic science community to the potential enhancements that have or are soon to arrive and demonstrate the continued expansion the field of forensic genetics and its service in the investigation of legal matters.

  16. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G' for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.

  17. Parallel implementation of an adaptive and parameter-free N-body integrator

    NASA Astrophysics Data System (ADS)

    Pruett, C. David; Ingham, William H.; Herman, Ralph D.

    2011-05-01

    Previously, Pruett et al. (2003) [3] described an N-body integrator of arbitrarily high order M with an asymptotic operation count of O(MN). The algorithm's structure lends itself readily to data parallelization, which we document and demonstrate here in the integration of point-mass systems subject to Newtonian gravitation. High order is shown to benefit parallel efficiency. The resulting N-body integrator is robust, parameter-free, highly accurate, and adaptive in both time-step and order. Moreover, it exhibits linear speedup on distributed parallel processors, provided that each processor is assigned at least a handful of bodies. Program summary - Program title: PNB.f90 Catalogue identifier: AEIK_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIK_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3052 No. of bytes in distributed program, including test data, etc.: 68 600 Distribution format: tar.gz Programming language: Fortran 90 and OpenMPI Computer: All shared or distributed memory parallel processors Operating system: Unix/Linux Has the code been vectorized or parallelized?: The code has been parallelized but has not been explicitly vectorized. RAM: Dependent upon N Classification: 4.3, 4.12, 6.5 Nature of problem: High accuracy numerical evaluation of trajectories of N point masses each subject to Newtonian gravitation. Solution method: Parallel and adaptive extrapolation in time via power series of arbitrary degree. Running time: 5.1 s for the demo program supplied with the package.
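
    The inner kernel any such integrator must evaluate is the set of pairwise Newtonian accelerations, which is also the natural quantity to split across processors. The sketch below is a plain NumPy version of that O(N^2) evaluation for reference, not the Fortran 90/OpenMPI code distributed with the paper; a data-parallel variant would simply assign blocks of bodies (rows of the sum) to different processors.

      import numpy as np

      def accelerations(pos, mass, G=1.0, eps=0.0):
          """Newtonian acceleration on every body from all others (O(N^2), vectorized over pairs)."""
          dx = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # dx[i, j] = pos[j] - pos[i]
          r2 = (dx ** 2).sum(axis=-1) + eps ** 2               # squared separations (+ optional softening)
          np.fill_diagonal(r2, np.inf)                         # exclude self-interaction
          inv_r3 = r2 ** -1.5
          return G * (dx * (mass[np.newaxis, :, np.newaxis] * inv_r3[:, :, np.newaxis])).sum(axis=1)

      # Quick check: two unit masses one unit apart attract each other with |a| = 1
      pos = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])
      mass = np.array([1.0, 1.0])
      print(accelerations(pos, mass))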

  18. High-resolution dynamic pressure sensor array based on piezo-phototronic effect tuned photoluminescence imaging.

    PubMed

    Peng, Mingzeng; Li, Zhou; Liu, Caihong; Zheng, Qiang; Shi, Xieqing; Song, Ming; Zhang, Yang; Du, Shiyu; Zhai, Junyi; Wang, Zhong Lin

    2015-03-24

    A high-resolution dynamic tactile/pressure display is indispensable to the comprehensive perception of force/mechanical stimulation in applications such as electronic skin, biomechanical imaging/analysis, or personalized signatures. Here, we present a dynamic pressure sensor array based on pressure/strain tuned photoluminescence imaging without the need for electricity. Each sensor is a nanopillar that consists of InGaN/GaN multiple quantum wells. Its photoluminescence intensity can be modulated dramatically and linearly by small strain (0-0.15%) owing to the piezo-phototronic effect. The sensor array has a high pixel density of 6350 dpi and an exceptionally small standard deviation of photoluminescence. High-quality tactile/pressure sensing distributions can be recorded in real time by parallel photoluminescence imaging without any cross-talk. The sensor array can be inexpensively fabricated over large areas by semiconductor product lines. The proposed dynamic all-optical pressure imaging with excellent resolution, high sensitivity, good uniformity, and ultrafast response time offers a suitable route to smart sensing and micro/nano-opto-electromechanical systems.

  19. Influence of crystal quality on the excitation and propagation of surface and bulk acoustic waves in polycrystalline AlN films.

    PubMed

    Clement, Marta; Olivares, Jimena; Capilla, Jose; Sangrador, Jesús; Iborra, Enrique

    2012-01-01

    We investigate the excitation and propagation of acoustic waves in polycrystalline aluminum nitride films along the directions parallel and normal to the c-axis. Longitudinal and transverse propagations are assessed through the frequency response of surface acoustic wave and bulk acoustic wave devices fabricated on films of different crystal qualities. The crystalline properties significantly affect the electromechanical coupling factors and acoustic properties of the piezoelectric layers. The presence of misoriented grains produces an overall decrease of the piezoelectric activity, degrading more severely the excitation and propagation of waves traveling transversally to the c-axis. It is suggested that the presence of such crystalline defects in c-axis-oriented films reduces the mechanical coherence between grains and hinders the transverse deformation of the film when the electric field is applied parallel to the surface. © 2012 IEEE

  20. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    CUI, C.; Hou, W.

    2017-12-01

    Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time and phase. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion fall easily into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects present in the real seismic wavefield, which makes inversion harder. As a result, the accuracy of the final inversion result relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. However, the absence of very low frequencies (< 3 Hz) in field data remains a bottleneck for FWI. By extracting ultra-low-frequency information from field data with a demodulation (envelope) operator, envelope inversion is able to recover the low-wavenumber components of the model even though these low frequencies are not actually present in the field data. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion and derived the misfit function and the corresponding gradient operator. Then we performed hybrid-domain FWI using the envelope inversion result as the initial model, which provides the low-wavenumber components of the model. Here, forward modeling is implemented in the time domain and the inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing with two levels of parallelism. At the first level, the inversion tasks are decomposed and assigned to compute nodes by shot number. At the second level, GPU multithreaded programming is used for the computation tasks on each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation improves performance.
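
    The shot-level (first) layer of parallelism described above can be sketched as follows. This is a hedged, toy illustration only: the per-shot routine is a stand-in, and the GPU (second) level is indicated purely by comments; none of the names correspond to the authors' code.

```python
# Sketch of shot-level parallelism for a gradient-based inversion: each shot's
# gradient contribution is computed independently and then reduced.
import numpy as np
from multiprocessing import Pool

def gradient_for_shot(shot_id, model, observed):
    """Per-shot contribution to the misfit gradient (toy stand-in computation)."""
    # In the scheme described above, each of these steps runs as GPU kernels:
    #   forward modeling (time domain), envelope extraction, DFT at the
    #   inversion frequencies, and cross-correlation to form the gradient.
    residual = model - observed[shot_id]       # stand-in for simulated - observed
    return 2.0 * residual                      # stand-in gradient of an L2 misfit

def total_gradient(model, observed, shots, processes=8):
    with Pool(processes) as pool:
        parts = pool.starmap(gradient_for_shot,
                             [(s, model, observed) for s in shots])
    return np.sum(parts, axis=0)               # reduce the per-shot gradients
```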

  1. Observing with HST V: Improvements to the Scheduling of HST Parallel Observations

    NASA Astrophysics Data System (ADS)

    Taylor, D. K.; Vanorsow, D.; Lucks, M.; Henry, R.; Ratnatunga, K.; Patterson, A.

    1994-12-01

    Recent improvements to the Hubble Space Telescope (HST) ground system have significantly increased the frequency of pure parallel observations, i.e. the simultaneous use of multiple HST instruments by different observers. Opportunities for parallel observations are limited by a variety of timing, hardware, and scientific constraints. Formerly, such opportunities were heuristically predicted prior to the construction of the primary schedule (or calendar), and lack of complete information resulted in high rates of scheduling failures and missed opportunities. In the current process the search for parallel opportunities is delayed until the primary schedule is complete, at which point new software tools are employed to identify places where parallel observations are supported. The result has been a considerable increase in parallel throughput. A new technique, known as ``parallel crafting,'' is currently under development to streamline further the parallel scheduling process. This radically new method will replace the standard exposure logsheet with a set of abstract rules from which observation parameters will be constructed ``on the fly'' to best match the constraints of the parallel opportunity. Currently, parallel observers must specify a huge (and highly redundant) set of exposure types in order to cover all possible types of parallel opportunities. Crafting rules permit the observer to express timing, filter, and splitting preferences in a far more succinct manner. The issue of coordinated parallel observations (same PI using different instruments simultaneously), long a troublesome aspect of the ground system, is also being addressed. For Cycle 5, the Phase II Proposal Instructions now have an exposure-level PAR WITH special requirement. While only the primary's alignment will be scheduled on the calendar, new commanding will provide for parallel exposures with both instruments.

  2. The FORCE - A highly portable parallel programming language

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    This paper explains why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  3. The FORCE: A highly portable parallel programming language

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Alaghband, Gita; Jakob, Ruediger

    1989-01-01

    Here, it is explained why the FORCE parallel programming language is easily portable among six different shared-memory multiprocessors, and how a two-level macro preprocessor makes it possible to hide low-level machine dependencies and to build machine-independent high-level constructs on top of them. These FORCE constructs make it possible to write portable parallel programs largely independent of the number of processes and the specific shared-memory multiprocessor executing them.

  4. CrossTalk: The Journal of Defense Software Engineering. Volume 19, Number 6

    DTIC Science & Technology

    2006-06-01

    improvement methods. The total volume of projects studied now exceeds 12,000. Software Productivity Research, LLC Phone: (877) 570-5459 (973) 273-5829...While performing quality consulting, Olson has helped organizations measurably improve quality and productivity, save millions of dollars in costs of...This article draws parallels between the outrageous events on the Jerry Springer Show and problems faced by process improvement programs. by Paul

  5. Parallels between Objective Indicators and Subjective Perceptions of Quality of Life: A Study of Metropolitan and County Areas in Taiwan

    ERIC Educational Resources Information Center

    Liao, Pei-shan

    2009-01-01

    This study explores the consistency between objective indicators and subjective perceptions of quality of life in a ranking of survey data for cities and counties in Taiwan. Data used for analysis included the Statistical Yearbook of Hsiens and Municipalities and the Survey on Living Conditions of Citizens in Taiwan, both given for the year 2000.…

  6. RRAM-based parallel computing architecture using k-nearest neighbor classification for pattern recognition

    NASA Astrophysics Data System (ADS)

    Jiang, Yuning; Kang, Jinfeng; Wang, Xinan

    2017-03-01

    Resistive switching memory (RRAM) is considered as one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today’s electronic systems. However, the existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition by implementing k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behaviors is chosen as both the storage and computing components. The proposed architecture is tested by the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and it turns out that the proposed architecture shows great tolerance to device variations. This work paves a new way to achieve RRAM-based parallel computing hardware systems with high performance.
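
    To make the computing principle concrete, the following Python sketch emulates the idea in software: the crossbar of stored conductances evaluates all pattern similarities in one parallel "read", modeled here as a single matrix-vector product, after which a k-nearest-neighbor vote decides the class. This is an assumed emulation for illustration, not the authors' circuit or device model.

```python
# Software emulation of kNN classification on a crossbar-like array.
import numpy as np

def knn_crossbar_classify(stored, labels, query, k=5):
    """stored: (n_patterns, n_pixels) conductance-like weights in [0, 1]."""
    currents = stored @ query            # one "read" gives all column currents
    # For binary patterns, a larger current roughly indicates a closer match.
    nearest = np.argsort(-currents)[:k]  # k most similar stored patterns
    votes = np.bincount(labels[nearest])
    return np.argmax(votes)

# Toy usage with random binary patterns standing in for MNIST digits.
rng = np.random.default_rng(0)
stored = rng.integers(0, 2, size=(1000, 784)).astype(float)
labels = rng.integers(0, 10, size=1000)
query = stored[42] + 0.1 * rng.standard_normal(784)   # noisy copy of pattern 42
print(knn_crossbar_classify(stored, labels, query))
```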

  7. On the costs of parallel processing in dual-task performance: The case of lexical processing in word production.

    PubMed

    Paucke, Madlen; Oppermann, Frank; Koch, Iring; Jescheniak, Jörg D

    2015-12-01

    Previous dual-task picture-naming studies suggest that lexical processes require capacity-limited processes and prevent other tasks from being carried out in parallel. However, studies involving the processing of multiple pictures suggest that parallel lexical processing is possible. The present study investigated the specific costs that may arise when such parallel processing occurs. We used a novel dual-task paradigm by presenting 2 visual objects associated with different tasks and manipulating between-task similarity. With high similarity, a picture-naming task (T1) was combined with a phoneme-decision task (T2), so that lexical processes were shared across tasks. With low similarity, picture-naming was combined with a size-decision T2 (nonshared lexical processes). In Experiment 1, we found that a manipulation of lexical processes (lexical frequency of T1 object name) showed an additive propagation with low between-task similarity and an overadditive propagation with high between-task similarity. Experiment 2 replicated this differential forward propagation of the lexical effect and showed that it disappeared with longer stimulus onset asynchronies. Moreover, both experiments showed backward crosstalk, indexed as worse T1 performance with high between-task similarity compared with low similarity. Together, these findings suggest that conditions of high between-task similarity can lead to parallel lexical processing in both tasks, which, however, does not result in benefits but rather in extra performance costs. These costs can be attributed to crosstalk based on the dual-task binding problem arising from parallel processing. Hence, the present study reveals that capacity-limited lexical processing can run in parallel across dual tasks but only at the expense of extraordinarily high costs. (c) 2015 APA, all rights reserved.

  8. Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis.

    PubMed

    Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John

    2016-01-01

    Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.

  9. A mirror for lab-based quasi-monochromatic parallel x-rays

    NASA Astrophysics Data System (ADS)

    Nguyen, Thanhhai; Lu, Xun; Lee, Chang Jun; Jung, Jin-Ho; Jin, Gye-Hwan; Kim, Sung Youb; Jeon, Insu

    2014-09-01

    A multilayered parabolic mirror with six W/Al bilayers was designed and fabricated to generate monochromatic parallel x-rays using a lab-based x-ray source. Using this mirror, curved bright bands were obtained in x-ray images as reflected x-rays. The parallelism of the reflected x-rays was investigated using the shape of the bands. The intensity and monochromatic characteristics of the reflected x-rays were evaluated through measurements of the x-ray spectra in the band. High intensity, nearly monochromatic, and parallel x-rays, which can be used for high resolution x-ray microscopes and local radiation therapy systems, were obtained.

  10. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    PubMed

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  11. High-resolution whole-brain diffusion MRI at 7T using radiofrequency parallel transmission.

    PubMed

    Wu, Xiaoping; Auerbach, Edward J; Vu, An T; Moeller, Steen; Lenglet, Christophe; Schmitter, Sebastian; Van de Moortele, Pierre-François; Yacoub, Essa; Uğurbil, Kâmil

    2018-03-30

    Investigating the utility of RF parallel transmission (pTx) for Human Connectome Project (HCP)-style whole-brain diffusion MRI (dMRI) data at 7 Tesla (7T). Healthy subjects were scanned in pTx and single-transmit (1Tx) modes. Multiband (MB), single-spoke pTx pulses were designed to image sagittal slices. HCP-style dMRI data (i.e., 1.05-mm resolution, MB2, b-values = 1000/2000 s/mm², 286 images and 40-min scan) and data with higher accelerations (MB3 and MB4) were acquired with pTx. pTx significantly improved flip-angle detected signal uniformity across the brain, yielding ∼19% increase in temporal SNR (tSNR) averaged over the brain relative to 1Tx. This allowed significantly enhanced estimation of multiple fiber orientations (with ∼21% decrease in dispersion) in HCP-style 7T dMRI datasets. Additionally, pTx pulses achieved substantially lower power deposition, permitting higher accelerations, enabling collection of the same data in 2/3 and 1/2 the scan time or of more data in the same scan time. pTx provides a solution to two major limitations for slice-accelerated high-resolution whole-brain dMRI at 7T; it improves flip-angle uniformity, and enables higher slice acceleration relative to current state-of-the-art. As such, pTx provides significant advantages for rapid acquisition of high-quality, high-resolution truly whole-brain dMRI data. © 2018 International Society for Magnetic Resonance in Medicine.

  12. Fast generation of computer-generated hologram by graphics processing unit

    NASA Astrophysics Data System (ADS)

    Matsuda, Sho; Fujii, Tomohiko; Yamaguchi, Takeshi; Yoshikawa, Hiroshi

    2009-02-01

    A cylindrical hologram is well known to be viewable over 360 degrees. Such a hologram requires very high pixel resolution, so a Computer-Generated Cylindrical Hologram (CGCH) demands an enormous amount of computation. In our previous research, we used a look-up table method for fast calculation on an Intel Pentium 4 at 2.8 GHz; it took 480 hours to calculate a high-resolution CGCH (504,000 x 63,000 pixels with an average of 27,000 object points). To improve the quality of the reconstructed CGCH image, the fringe pattern requires higher spatial frequency and resolution, so the calculation method must be changed to increase speed. In this paper, to reduce the calculation time of a larger CGCH (912,000 x 108,000 pixels), we employ a Graphics Processing Unit (GPU); on a 3.4 GHz Xeon, this high-resolution CGCH took 4,406 hours to calculate. Because a GPU has many streaming processors and a parallel processing structure, it works as a high-performance parallel processor, and it delivers its best performance on two-dimensional and streaming data. Recently, GPUs have become usable for general-purpose computation (GPGPU): NVIDIA's GeForce 7 series became programmable through the Cg language, and the subsequent GeForce 8 series supports CUDA, NVIDIA's software development kit. The theoretical peak performance of the GPU is quoted as 500 GFLOPS. Experimentally, we achieved a calculation 47 times faster than our previous CPU-based work, so the CGCH can be generated in 95 hours, and the total time to calculate and print the CGCH is 110 hours.
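
    The per-pixel independence that makes this workload map well to a GPU can be illustrated with a minimal fringe computation: every hologram pixel accumulates the interference of spherical waves from all object points. The sketch below is an assumed, greatly reduced formulation in Python/NumPy (tiny dimensions, plane reference wave); it is not the authors' look-up-table or GPU implementation, and all parameter values are illustrative.

```python
# Minimal fringe-computation sketch: sum of spherical waves over object points.
import numpy as np

def fringe_pattern(obj_pts, amps, width, height, pitch=8e-6, wavelength=633e-9):
    k = 2 * np.pi / wavelength
    x = (np.arange(width) - width / 2) * pitch
    y = (np.arange(height) - height / 2) * pitch
    X, Y = np.meshgrid(x, y)                       # hologram-plane coordinates
    field = np.zeros((height, width), dtype=complex)
    for (px, py, pz), a in zip(obj_pts, amps):     # accumulate spherical waves
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r
    return np.abs(field + 1.0) ** 2                # interference with unit reference

pattern = fringe_pattern([(0, 0, 0.1), (1e-4, 0, 0.12)], [1.0, 0.8], 256, 256)
```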

  13. Spatial data analytics on heterogeneous multi- and many-core parallel architectures using python

    USGS Publications Warehouse

    Laura, Jason R.; Rey, Sergio J.

    2017-01-01

    Parallel vector spatial analysis concerns the application of parallel computational methods to facilitate vector-based spatial analysis. The history of parallel computation in spatial analysis is reviewed, and this work is placed into the broader context of high-performance computing (HPC) and parallelization research. The rise of cyber infrastructure and its manifestation in spatial analysis as CyberGIScience is seen as a main driver of renewed interest in parallel computation in the spatial sciences. Key problems in spatial analysis that have been the focus of parallel computing are covered. Chief among these are spatial optimization problems, computational geometric problems including polygonization and spatial contiguity detection, the use of Monte Carlo Markov chain simulation in spatial statistics, and parallel implementations of spatial econometric methods. Future directions for research on parallelization in computational spatial analysis are outlined.
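
    One of the workload classes mentioned above, Monte Carlo permutation inference for spatial statistics, parallelizes naturally over a process pool. The following Python sketch distributes the permutations of Moran's I across workers; it is an illustration under assumptions (dense weight matrix, illustrative function names), not the PySAL implementation.

```python
# Parallel Monte Carlo permutations for Moran's I spatial autocorrelation.
import numpy as np
from multiprocessing import Pool

def morans_i(values, W):
    z = values - values.mean()
    return (len(values) / W.sum()) * (z @ W @ z) / (z @ z)

def permuted_i(args):
    values, W, seed = args
    rng = np.random.default_rng(seed)
    return morans_i(rng.permutation(values), W)

def moran_pvalue(values, W, n_perm=999, processes=4):
    observed = morans_i(values, W)
    with Pool(processes) as pool:
        null = pool.map(permuted_i, [(values, W, s) for s in range(n_perm)])
    null = np.asarray(null)
    return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)
```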

  14. The benefits of privatization.

    PubMed

    Dirnfeld, V

    1996-08-15

    The promise of a universal, comprehensive, publicly funded system of medical care that was the foundation of the Medical Care Act passed in 1966 is no longer possible. Massive government debt, increasing health care costs, a growing and aging population and advances in technology have challenged the system, which can no longer meet the expectations of the public or of the health care professions. A parallel, private system, funded by a not-for-profit, regulated system of insurance coverage affordable for all wage-earners, would relieve the overstressed public system without decreasing the quality of care in that system. Critics of a parallel, private system, who base their arguments on the politics of fear and envy, charge that such a private system would "Americanize" Canadian health care and that the wealthy would be able to buy better, faster care than the rest of the population. But this has not happened in the parallel public and private health care systems in other Western countries or in the public and private education system in Canada. Wealthy Canadians can already buy medical care in the United States, where they spend $1 billion each year, an amount that represents a loss to Canada of 10,000 health care jobs. Parallel-system schemes in other countries have proven that people are driven to a private system by dissatisfaction with the quality of service, which is already suffering in Canada. Denial of choice is unacceptable to many people, particularly since the terms and conditions under which Canadians originally decided to forgo choice in medical care no longer apply.

  15. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  16. Effect of high-pitch dual-source CT to compensate motion artifacts: a phantom study.

    PubMed

    Farshad-Amacker, Nadja A; Alkadhi, Hatem; Leschka, Sebastian; Frauenfelder, Thomas

    2013-10-01

    To evaluate the potential of high-pitch, dual-source computed tomography (DSCT) for compensation of motion artifacts. Motion artifacts were created using a moving chest/cardiac phantom with integrated stents at different velocities (from 0 to 4-6 cm/s) parallel (z direction), transverse (x direction), and diagonal (x and z direction combined) to the scanning direction using standard-pitch (SP) (pitch = 1) and high-pitch (HP) (pitch = 3.2) 128-detector DSCT (Siemens Healthcare, Forchheim, Germany). The scanning parameters were (SP/HP): tube voltage, 120 kV/120 kV; effective tube current time product, 300 mAs/500 mAs; and a pitch of 1/3.2. Motion artifacts were analyzed in terms of subjective image quality and object distortion. Image quality was rated by two blinded, independent observers using a 4-point scoring system (1, excellent; 2, good with minor object distortion or blurring; 3, diagnostically partially not acceptable; and 4, diagnostically not acceptable image quality). Object distortion was assessed by the measured changes of the object's outer diameter (x) and length (z) and a corresponding calculated distortion vector (d = √(x² + z²)). The interobserver agreement was excellent (k = 0.91). Image quality using SP was diagnostically not acceptable with any motion in x direction (scores 3 and 4), in contrast to HP DSCT where it remained diagnostic up to 2 cm/s (scores 1 and 2). For motion in the z direction only, image quality remained diagnostic for SP and HP DSCT (scores 1 and 2). Changes of the object's diameter (x), length (z), and distortion vectors (d) were significantly greater with SP (overall: x = 1.9 cm ± 1.7 cm, z = 0.6 cm ± 0.8 cm, and d = 1.4 cm ± 1.5 cm) compared to HP DSCT (overall: x = 0.1 cm ± 0.1 cm, z = 0.0 cm ± 0.1 cm, and d = 0.1 cm ± 0.1 cm; each P < .05). High-pitch DSCT significantly decreases motion artifacts in various directions and improves image quality. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  17. A Bayesian framework for extracting human gait using strong prior knowledge.

    PubMed

    Zhou, Ziheng; Prügel-Bennett, Adam; Damper, Robert I

    2006-11-01

    Extracting full-body motion of walking people from monocular video sequences in complex, real-world environments is an important and difficult problem, going beyond simple tracking, whose satisfactory solution demands an appropriate balance between use of prior knowledge and learning from data. We propose a consistent Bayesian framework for introducing strong prior knowledge into a system for extracting human gait. In this work, the strong prior is built from a simple articulated model having both time-invariant (static) and time-variant (dynamic) parameters. The model is easily modified to cater to situations such as walkers wearing clothing that obscures the limbs. The statistics of the parameters are learned from high-quality (indoor laboratory) data and the Bayesian framework then allows us to "bootstrap" to accurate gait extraction on the noisy images typical of cluttered, outdoor scenes. To achieve automatic fitting, we use a hidden Markov model to detect the phases of images in a walking cycle. We demonstrate our approach on silhouettes extracted from fronto-parallel ("sideways on") sequences of walkers under both high-quality indoor and noisy outdoor conditions. As well as high-quality data with synthetic noise and occlusions added, we also test walkers with rucksacks, skirts, and trench coats. Results are quantified in terms of chamfer distance and average pixel error between automatically extracted body points and corresponding hand-labeled points. No one part of the system is novel in itself, but the overall framework makes it feasible to extract gait from very much poorer quality image sequences than hitherto. This is confirmed by comparing person identification by gait using our method and a well-established baseline recognition algorithm.
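
    Since the evaluation above is quantified by the chamfer distance between automatically extracted and hand-labeled body points, a brief sketch of that metric may be useful. The version below uses a Euclidean distance transform and is a generic illustration, not the authors' code; the argument names are assumptions.

```python
# Chamfer-style distance: mean distance from each automatically extracted
# point to its nearest hand-labeled point, via a distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(auto_pts, label_pts, shape):
    """auto_pts, label_pts: (N, 2) integer pixel coordinates; shape: image size."""
    label_img = np.ones(shape, dtype=bool)
    label_img[label_pts[:, 0], label_pts[:, 1]] = False   # zeros at labeled points
    dist_to_label = distance_transform_edt(label_img)     # distance to nearest zero
    return dist_to_label[auto_pts[:, 0], auto_pts[:, 1]].mean()
```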

  18. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not designed for one particular passive THz device: it can be applied to any such device, and to active THz imaging systems as well. We applied the code to images captured by four passive THz imaging devices from different manufacturers; processing images from different devices usually requires different spatial filters. The current version of the code processes more than one image per second for THz images with more than 5000 pixels and 24-bit number representation. Processing a single THz image simultaneously produces about 20 output images corresponding to various spatial filters. The code allows the number of pixels in processed images to be increased without a noticeable reduction in image quality, and its throughput can be increased many times over by using parallel algorithms for the image processing. We developed original spatial filters that make it possible to see objects smaller than 2 cm in imagery produced by passive THz devices viewing objects hidden under opaque clothing. For very noisy images we developed an approach that suppresses the noise during processing and yields a good-quality image. To illustrate the efficiency of the approach, we demonstrate the detection of liquid explosive, ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothing. The results demonstrate the high efficiency of our approach for the detection of hidden objects and represent a very promising solution to this security problem.
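
    The authors' spatial filters are not published, but the workflow of applying several candidate filters to one frame in parallel can be sketched generically. The Python example below (assumed filter choices: median denoising and unsharp masking) mirrors the "one input image, many filtered outputs" pattern; it is an illustration, not the paper's code.

```python
# Generic spatial-filtering sketch: several filters applied to one frame in parallel.
from multiprocessing import Pool
from scipy.ndimage import median_filter, gaussian_filter

def unsharp(img, sigma, amount):
    return img + amount * (img - gaussian_filter(img, sigma))

def f_median3(im):
    return median_filter(im, size=3)

def f_unsharp(im):
    return unsharp(im, sigma=1.0, amount=1.5)

def f_smooth_sharpen(im):
    return unsharp(median_filter(im, size=5), sigma=2.0, amount=2.0)

FILTERS = [f_median3, f_unsharp, f_smooth_sharpen]

def apply_filter(args):
    img, idx = args
    return idx, FILTERS[idx](img)

def process_frame(img, processes=3):
    """Return {filter_index: filtered_image} for all candidate filters."""
    with Pool(processes) as pool:
        return dict(pool.map(apply_filter, [(img, i) for i in range(len(FILTERS))]))
```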

  19. Aberration compensation of an ultrasound imaging instrument with a reduced number of channels.

    PubMed

    Jiang, Wei; Astheimer, Jeffrey P; Waag, Robert C

    2012-10-01

    Focusing and imaging qualities of an ultrasound imaging system that uses aberration correction were experimentally investigated as functions of the number of parallel channels. Front-end electronics that consolidate signals from multiple physical elements can be used to lower hardware and computational costs by reducing the number of parallel channels. However, the signals from sparse arrays of synthetic elements yield poorer aberration estimates. In this study, aberration estimates derived from synthetic arrays of varying element sizes are evaluated by comparing compensated receive focuses, compensated transmit focuses, and compensated b-scan images of a point target and a cyst phantom. An array of 80 x 80 physical elements with a pitch of 0.6 x 0.6 mm was used for all of the experiments and the aberration was produced by a phantom selected to mimic propagation through abdominal wall. The results show that aberration correction derived from synthetic arrays with pitches that have a diagonal length smaller than 70% of the correlation length of the aberration yield focuses and images of approximately the same quality. This connection between correlation length of the aberration and synthetic element size provides a guideline for determining the number of parallel channels that are required when designing imaging systems that employ aberration correction.

  20. Selective visual attention to emotional words: Early parallel frontal and visual activations followed by interactive effects in visual cortex.

    PubMed

    Schindler, Sebastian; Kissler, Johanna

    2016-10-01

    Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.

  1. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    NASA Astrophysics Data System (ADS)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-05-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.
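
    As a much simplified, hedged illustration of the kind of capture rule such cellular automaton models parallelize (here 2D, synchronous, with a naive "adopt the maximum neighboring grain id" rule, and without the paper's nucleation model, load balancing, or 3D growth algorithm), consider the sketch below. In a parallel version, each process would own a slab of the domain and exchange one-cell-thick halos with its neighbors before every step.

```python
# Toy synchronous capture step for a grain-growth cellular automaton.
import numpy as np

def ca_step(grid):
    """Liquid cells (id 0) adjacent to a grain adopt the id of one solid neighbor
    (illustratively, the maximum neighboring id)."""
    padded = np.pad(grid, 1, mode='edge')
    neighbors = np.stack([padded[1:-1, :-2], padded[1:-1, 2:],
                          padded[:-2, 1:-1], padded[2:, 1:-1]])
    capture = neighbors.max(axis=0)
    new = grid.copy()
    mask = (grid == 0) & (capture > 0)
    new[mask] = capture[mask]
    return new
```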

  2. A parallelized three-dimensional cellular automaton model for grain growth during additive manufacturing

    NASA Astrophysics Data System (ADS)

    Lian, Yanping; Lin, Stephen; Yan, Wentao; Liu, Wing Kam; Wagner, Gregory J.

    2018-01-01

    In this paper, a parallelized 3D cellular automaton computational model is developed to predict grain morphology for solidification of metal during the additive manufacturing process. Solidification phenomena are characterized by highly localized events, such as the nucleation and growth of multiple grains. As a result, parallelization requires careful treatment of load balancing between processors as well as interprocess communication in order to maintain a high parallel efficiency. We give a detailed summary of the formulation of the model, as well as a description of the communication strategies implemented to ensure parallel efficiency. Scaling tests on a representative problem with about half a billion cells demonstrate parallel efficiency of more than 80% on 8 processors and around 50% on 64; loss of efficiency is attributable to load imbalance due to near-surface grain nucleation in this test problem. The model is further demonstrated through an additive manufacturing simulation with resulting grain structures showing reasonable agreement with those observed in experiments.

  3. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term, high-spatial-resolution simulation is a common challenge in environmental modeling. The Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG), which integrates a grid modeling scheme with different spatial representations, faces the same problem: its computational cost limits applications to very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (the result is called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was up to nine times faster than SWATG when modeling a roughly 2000 km² watershed on one CPU with a 15-thread configuration. The results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs, and that parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale, high-resolution water resources research and management, in addition to offering data fusion and model coupling capability.
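
    SWATGP itself parallelizes the HRU loop with OpenMP inside the Fortran model; as a language-neutral illustration of the same pattern (independent per-unit work inside each time step, fanned out to workers, with routing between units afterwards), a hedged Python sketch with hypothetical names follows.

```python
# Conceptual illustration (not SWATGP's Fortran/OpenMP code) of HRU-level parallelism.
from concurrent.futures import ProcessPoolExecutor

def simulate_hru(hru_state):
    """Placeholder for the per-HRU water-balance computation for one day."""
    hru_state['soil_water'] += hru_state['rain'] - hru_state['et']
    return hru_state

def simulate_day(hru_states, workers=8):
    # Each HRU is independent within a time step, so the loop over HRUs can be
    # distributed across workers; channel routing happens after the loop.
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(simulate_hru, hru_states))
```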

  4. SOPanG: online text searching over a pan-genome.

    PubMed

    Cislak, Aleksander; Grabowski, Szymon; Holub, Jan

    2018-06-22

    The many thousands of high-quality genomes available nowadays imply a shift from single-genome to pan-genomic analyses. A basic algorithmic building block for such a scenario is online search over a collection of similar texts, a problem with surprisingly few solutions presented so far. We present SOPanG, a simple tool for exact pattern matching over an elastic-degenerate string, a recently proposed simplified model for the pan-genome. Thanks to bit-parallelism, it achieves pattern matching speeds above 400 MB/s, more than an order of magnitude higher than that of other software. SOPanG is available for free from: https://github.com/MrAlexSee/sopang. Supplementary data are available at Bioinformatics online.
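
    The bit-parallel principle underlying such matchers can be seen in the classic Shift-Or algorithm over an ordinary string, sketched below in Python. SOPanG extends this idea to elastic-degenerate pan-genome text; that extension is not reproduced here, and the sketch is only an illustration of the bit-parallel technique.

```python
# Classic Shift-Or bit-parallel exact pattern matching over a plain string.
def shift_or_search(pattern, text):
    m = len(pattern)
    masks = {}
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, ~0) & ~(1 << i)   # 0-bit where pattern has c
    state, hit_bit, hits = ~0, 1 << (m - 1), []
    for j, c in enumerate(text):
        state = (state << 1) | masks.get(c, ~0)   # advance all partial matches at once
        if state & hit_bit == 0:
            hits.append(j - m + 1)                # a match ends at position j
    return hits

print(shift_or_search("ACGT", "TTACGTGGACGTA"))   # -> [2, 8]
```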

  5. "First generation" automated DNA sequencing technology.

    PubMed

    Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M

    2011-10-01

    Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.

  6. Comprehensive care plus creative architecture.

    PubMed

    Easter, James G

    2005-01-01

    The delivery of high-quality, comprehensive cancer care and the treatment environment go hand in hand with the patient's recovery. When the planning and design of a comprehensive cancer care program runs parallel to the operational expectations and functional standards, the building users (patients, staff, and physicians) benefit significantly. This behavioral response requires a sensitive interface during the campus master planning, architectural programming, and design phases. Each building component and user functioning along the "continuum of care" will have different expectations, programmatic needs, and design responses. This article addresses the community- and hospital-based elements of this continuum. The environment does affect the patient care and the care-giving team members. It may be a positive or, unfortunately, a negative response.

  7. Precipitation-chemistry measurements from the California Acid Deposition Monitoring Program, 1985-1990

    USGS Publications Warehouse

    Blanchard, Charles L.; Tonnessen, Kathy A.

    1993-01-01

    The configuration of the California Acid Deposition Monitoring Program (CADMP) precipitation network is described and quality assurance results are summarized. Comparison of CADMP and National Atmospheric Deposition Program/National Trends Network (NADP/NTN) data at four parallel sites indicated that mean depth-weighted differences were less than 3 μeq ℓ⁻¹ for all ions, and were statistically significant for ammonium, sulfate and hydrogen ion. These apparently small differences were 15–30% of the mean concentrations of ammonium, sulfate and hydrogen ion. Mean depth-weighted concentrations and mass deposition rates for the period 1985–1990 are summarized; the latter were highest either where concentrations or precipitation depths were relatively high.

  8. Fast and efficient molecule detection in localization-based super-resolution microscopy by parallel adaptive histogram equalization.

    PubMed

    Li, Yiming; Ishitsuka, Yuji; Hedde, Per Niklas; Nienhaus, G Ulrich

    2013-06-25

    In localization-based super-resolution microscopy, individual fluorescent markers are stochastically photoactivated and subsequently localized within a series of camera frames, yielding a final image with a resolution far beyond the diffraction limit. Yet, before localization can be performed, the subregions within the frames where the individual molecules are present have to be identified, oftentimes in the presence of high background. In this work, we address the importance of reliable molecule identification for the quality of the final reconstructed super-resolution image. We present a fast and robust algorithm (a-livePALM) that vastly improves the molecule detection efficiency while minimizing false assignments that can lead to image artifacts.
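
    A generic detection pipeline of this kind can be sketched as adaptive histogram equalization to flatten uneven background followed by local-maximum picking to propose candidate molecule subregions. The Python example below uses scikit-image functions under that assumption; it is not the a-livePALM algorithm, and the threshold values are illustrative.

```python
# Generic molecule-candidate detection sketch (not the a-livePALM algorithm).
import numpy as np
from skimage import exposure, feature

def candidate_molecules(frame, min_distance=3, threshold_rel=0.4):
    norm = (frame - frame.min()) / (np.ptp(frame) + 1e-12)   # scale to [0, 1]
    flattened = exposure.equalize_adapthist(norm, clip_limit=0.02)
    peaks = feature.peak_local_max(flattened,
                                   min_distance=min_distance,
                                   threshold_rel=threshold_rel)
    return peaks   # (N, 2) array of row/column candidate positions
```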

  9. Contemporary considerations in concurrent endoscopic sinus surgery and rhinoplasty.

    PubMed

    Steele, Toby O; Gill, Amarbir; Tollefson, Travis T

    2018-06-11

    Characterize indications, perioperative considerations, clinical outcomes and complications for concurrent endoscopic sinus surgery (ESS) and rhinoplasty. Chronic rhinosinusitis and septal deviation with or without inferior turbinate hypertrophy independently impair patient-reported quality of life. Guidelines implore surgeons to include endoscopy to accurately evaluate patient symptoms. Complication rates parallel those of either surgery (ESS and rhinoplasty) alone and are not increased when performed concurrently. Operative time is generally longer for joint surgeries. Patient satisfaction rates are high. Concurrent functional and/or cosmetic rhinoplasty and ESS is a safe endeavor to perform in a single operative setting and most outcomes data suggest excellent patient outcomes. Additional studies that include patient-reported outcome measures are needed.

  10. Characterization of a parallel beam CCD optical-CT apparatus for 3D radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Krstajić, Nikola; Doran, Simon J.

    2006-12-01

    This paper describes the initial steps we have taken in establishing CCD-based optical-CT as a viable alternative for 3-D radiation dosimetry. First, we compare the optical density (OD) measurements from a high-quality test target and a variable neutral density filter (VNDF). A modulation transfer function (MTF) of individual projections is derived for three positions of the sinusoidal test target within the scanning tank. Our CCD is then characterized in terms of its signal-to-noise ratio (SNR). Finally, a sample reconstruction of a scan of a PRESAGE™ (registered trademark of Heuris Pharma, Skillman, NJ, USA) dosimeter is given, demonstrating the capabilities of the apparatus.
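
    For a parallel-beam optical-CT scan of this kind, the reconstruction step can be illustrated by assembling the per-angle projections into a sinogram and applying filtered back-projection. The sketch below uses scikit-image's iradon (parameter names as in recent versions); it is a hedged illustration, not the apparatus-specific reconstruction code.

```python
# Filtered back-projection of a parallel-beam projection set.
import numpy as np
from skimage.transform import iradon

def reconstruct_slice(projections):
    """projections: (n_angles, n_detector_pixels) optical-density profiles."""
    angles = np.linspace(0., 180., len(projections), endpoint=False)
    sinogram = np.asarray(projections).T        # iradon expects (pixels, angles)
    return iradon(sinogram, theta=angles, filter_name='ramp', circle=True)
```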

  11. Rapid anatomical brain imaging using spiral acquisition and an expanded signal model.

    PubMed

    Kasper, Lars; Engel, Maria; Barmet, Christoph; Haeberlin, Maximilian; Wilm, Bertram J; Dietrich, Benjamin E; Schmid, Thomas; Gross, Simon; Brunner, David O; Stephan, Klaas E; Pruessmann, Klaas P

    2018-03-01

    We report the deployment of spiral acquisition for high-resolution structural imaging at 7T. Long spiral readouts are rendered manageable by an expanded signal model including static off-resonance and B0 dynamics along with k-space trajectories and coil sensitivity maps. Image reconstruction is accomplished by inversion of the signal model using an extension of the iterative non-Cartesian SENSE algorithm. Spiral readouts up to 25 ms are shown to permit whole-brain 2D imaging at 0.5 mm in-plane resolution in less than a minute. A range of options is explored, including proton-density and T2* contrast, acceleration by parallel imaging, different readout orientations, and the extraction of phase images. Results are shown to exhibit competitive image quality along with high geometric consistency. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. An efficient, reliable and inexpensive device for the rapid homogenization of multiple tissue samples by centrifugation.

    PubMed

    Ilyin, S E; Plata-Salamán, C R

    2000-02-15

    Homogenization of tissue samples is a common first step in the majority of current protocols for RNA, DNA, and protein isolation. This report describes a simple device for centrifugation-mediated homogenization of tissue samples. The method presented is applicable to RNA, DNA, and protein isolation, and we show examples where high quality total cell RNA, DNA, and protein were obtained from brain and other tissue samples. The advantages of the approach presented include: (1) a significant reduction in time investment relative to hand-driven or individual motorized-driven pestle homogenization; (2) easy construction of the device from inexpensive parts available in any laboratory; (3) high replicability in the processing; and (4) the capacity for the parallel processing of multiple tissue samples, thus allowing higher efficiency, reliability, and standardization.

  13. Note: A rigid piezo motor with large output force and an effective method to reduce sliding friction force.

    PubMed

    Guo, Ying; Hou, Yubin; Lu, Qingyou

    2014-05-01

    We present a completely practical TunaDrive piezo motor. It consists of a central piezo stack sandwiched between two arm piezo stacks and two leg piezo stacks, the whole assembly being sandwiched and spring-clamped by a pair of parallel polished sapphire rods. It works by alternately expanding and contracting the arm/leg stacks rapidly while slowly expanding or contracting the central stack at the same time. The key point is that sufficiently fast expansion and contraction of a limb stack makes its two sliding friction forces largely cancel, so that the total sliding friction force is <10% of the total static friction force, which greatly increases the output force. The piezo motor's high compactness, precision, and output force make it well suited to building a high-quality, harsh-condition (vibration-resistant) atomic-resolution scanning probe microscope.

  14. HTS Fabry-Perot resonators for the far infrared

    NASA Astrophysics Data System (ADS)

    Keller, Philipp; Prenninger, Martin; Pechen, Evgeny V.; Renk, Karl F.

    1996-06-01

    We report on far-infrared (FIR) Fabry-Perot resonators (FPRs) with high-temperature superconductor (HTS) thin films as mirrors. For the fabrication of the FPRs we use two parallel MgO plates covered with YBa2Cu3O7-delta thin films on adjacent sides. We measured the far-infrared transmissivity at 10 K with a Fourier transform infrared spectrometer. Very sharp resonances are observed for frequencies below 6 THz, where MgO is transparent. The finesse (width of the first-order resonance) is comparable to that of FPRs with metallic meshes as reflectors, which are used in FIR spectroscopy and astronomy. We have also shown that thin films of gold are not an adequate substitute for HTS thin films and are not suitable for the fabrication of high-quality FPRs because of their ohmic losses.

  15. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique that are guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time-consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency issues in the GA, it is possible to have idle processors. However, as long as the load at each processing node is similar, the processors are kept busy nearly all of the time. In applying GAs to circuit design, a suitable genetic representation is that of a circuit-construction program. We discuss one such circuit-construction programming language and show how evolution can generate useful analog circuit designs. This language has the desirable property that virtually all sets of combinations of primitives result in valid circuit graphs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. Using a parallel genetic algorithm and circuit simulation software, we present experimental results as applied to three analog filter and two amplifier design tasks. For example, a figure shows an 85 dB amplifier design evolved by our system, and another figure shows the performance of that circuit (gain and frequency response). In all tasks, our system is able to generate circuits that achieve the target specifications.
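
    The master-slave pattern described above can be sketched compactly: worker processes evaluate fitness (the expensive part, a circuit simulation in the paper) while the master applies selection, crossover, and mutation. The Python sketch below replaces the circuit simulator with a toy fitness function and uses illustrative parameter values; it is an example of the pattern, not the authors' system.

```python
# Master-slave GA sketch: fitness evaluations are farmed out to worker processes.
import random
from multiprocessing import Pool

def fitness(genome):                      # stand-in for a circuit simulation
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size=40, genome_len=16, generations=50, workers=4):
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    with Pool(workers) as pool:
        for _ in range(generations):
            scores = pool.map(fitness, pop)           # evaluated on the "slave" nodes
            ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[:pop_size // 2]          # master: selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)
                child = a[:cut] + b[cut:]             # master: one-point crossover
                i = random.randrange(genome_len)
                child[i] += random.gauss(0, 0.05)     # master: mutation
                children.append(child)
            pop = children
    return max(pop, key=fitness)
```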

  16. The HST Frontier Fields: Complete High-Level Science Data Products for All 6 Clusters

    NASA Astrophysics Data System (ADS)

    Koekemoer, Anton M.; Mack, Jennifer; Lotz, Jennifer M.; Borncamp, David; Khandrika, Harish G.; Lucas, Ray A.; Martlin, Catherine; Porterfield, Blair; Sunnquist, Ben; Anderson, Jay; Avila, Roberto J.; Barker, Elizabeth A.; Grogin, Norman A.; Gunning, Heather C.; Hilbert, Bryan; Ogaz, Sara; Robberto, Massimo; Sembach, Kenneth; Flanagan, Kathryn; Mountain, Matt; HST Frontier Fields Team

    2017-01-01

    The Hubble Space Telescope Frontier Fields program (PI: J. Lotz) is a large Director's Discretionary program of 840 orbits, to obtain ultra-deep observations of six strong lensing clusters of galaxies, together with parallel deep blank fields, making use of the strong lensing amplification by these clusters of distant background galaxies to detect the faintest galaxies currently observable in the high-redshift universe. The entire program has now been completed successfully for all 6 clusters, namely Abell 2744, Abell S1063, Abell 370, MACS J0416.1-2403, MACS J0717.5+3745 and MACS J1149.5+2223. Each of these was observed over two epochs, to a total depth of 140 orbits on the main cluster and an associated parallel field, obtaining images in ACS (F435W, F606W, F814W) and WFC3/IR (F105W, F125W, F140W, F160W) on both the main cluster and the parallel field in all cases. Full sets of high-level science products have been generated for all these clusters by the team at STScI, including cumulative-depth data releases during each epoch, as well as full-depth releases after the completion of each epoch. These products include all the full-depth distortion-corrected drizzled mosaics and associated products for each cluster, which are science-ready to facilitate the construction of lensing models as well as enabling a wide range of other science projects. Many improvements beyond default calibration for ACS and WFC3/IR are implemented in these data products, including corrections for persistence, time-variable sky, and low-level dark current residuals, as well as improvements in astrometric alignment to achieve milliarcsecond-level accuracy. The full set of resulting high-level science products and mosaics are publicly delivered to the community via the Mikulski Archive for Space Telescopes (MAST) to enable the widest scientific use of these data, as well as ensuring a public legacy dataset of the highest possible quality that is of lasting value to the entire community.

  17. The HST Frontier Fields: Complete Observations and High-Level Science Data Products for All 6 Clusters

    NASA Astrophysics Data System (ADS)

    Koekemoer, Anton M.; Mack, Jennifer; Lotz, Jennifer M.; Borncamp, David; Khandrika, Harish G.; Lucas, Ray A.; Martlin, Catherine; Porterfield, Blair; Sunnquist, Ben; Anderson, Jay; Avila, Roberto J.; Barker, Elizabeth A.; Grogin, Norman A.; Gunning, Heather C.; Hilbert, Bryan; Ogaz, Sara; Robberto, Massimo; Sembach, Kenneth; Flanagan, Kathryn; Mountain, Matt; HST Frontier Fields Team

    2017-06-01

    The Hubble Space Telescope Frontier Fields program is a large Director's Discretionary program of 840 orbits, to obtain ultra-deep observations of six strong lensing clusters of galaxies, together with parallel deep blank fields, making use of the strong lensing amplification by these clusters of distant background galaxies to detect the faintest galaxies currently observable in the high-redshift universe. The entire program has now been completed successfully for all 6 clusters, namely Abell 2744, Abell S1063, Abell 370, MACS J0416.1-2403, MACS J0717.5+3745 and MACS J1149.5+2223. Each of these was observed over two epochs, to a total depth of 140 orbits on the main cluster and an associated parallel field, obtaining images in ACS (F435W, F606W, F814W) and WFC3/IR (F105W, F125W, F140W, F160W) on both the main cluster and the parallel field in all cases. Full sets of high-level science products have been generated for all these clusters by the team at STScI, including cumulative-depth data releases during each epoch, as well as full-depth releases after the completion of each epoch. These products include all the full-depth distortion-corrected drizzled mosaics and associated products for each cluster, which are science-ready to facilitate the construction of lensing models as well as enabling a wide range of other science projects. Many improvements beyond default calibration for ACS and WFC3/IR are implemented in these data products, including corrections for persistence, time-variable sky, and low-level dark current residuals, as well as improvements in astrometric alignment to achieve milliarcsecond-level accuracy. The full set of resulting high-level science products and mosaics are publicly delivered to the community via the Mikulski Archive for Space Telescopes (MAST) to enable the widest scientific use of these data, as well as ensuring a public legacy dataset of the highest possible quality that is of lasting value to the entire community.

  18. Development and application of a 6.5 million feature Affymetrix Genechip® for massively parallel discovery of single position polymorphisms in lettuce (Lactuca spp.)

    PubMed Central

    2012-01-01

    Background High-resolution genetic maps are needed in many crops to help characterize the genetic diversity that determines agriculturally important traits. Hybridization to microarrays to detect single feature polymorphisms is a powerful technique for marker discovery and genotyping because of its highly parallel nature. However, microarrays designed for gene expression analysis rarely provide sufficient gene coverage for optimal detection of nucleotide polymorphisms, which limits utility in species with low rates of polymorphism such as lettuce (Lactuca sativa). Results We developed a 6.5 million feature Affymetrix GeneChip® for efficient polymorphism discovery and genotyping, as well as for analysis of gene expression in lettuce. Probes on the microarray were designed from 26,809 unigenes from cultivated lettuce and an additional 8,819 unigenes from four related species (L. serriola, L. saligna, L. virosa and L. perennis). Where possible, probes were tiled with a 2 bp stagger, alternating on each DNA strand; providing an average of 187 probes covering approximately 600 bp for each of over 35,000 unigenes; resulting in up to 13 fold redundancy in coverage per nucleotide. We developed protocols for hybridization of genomic DNA to the GeneChip® and refined custom algorithms that utilized coverage from multiple, high quality probes to detect single position polymorphisms in 2 bp sliding windows across each unigene. This allowed us to detect greater than 18,000 polymorphisms between the parental lines of our core mapping population, as well as numerous polymorphisms between cultivated lettuce and wild species in the lettuce genepool. Using marker data from our diversity panel comprised of 52 accessions from the five species listed above, we were able to separate accessions by species using both phylogenetic and principal component analyses. Additionally, we estimated the diversity between different types of cultivated lettuce and distinguished morphological types. Conclusion By hybridizing genomic DNA to a custom oligonucleotide array designed for maximum gene coverage, we were able to identify polymorphisms using two approaches for pair-wise comparisons, as well as a highly parallel method that compared all 52 genotypes simultaneously. PMID:22583801

  19. Development and application of a 6.5 million feature Affymetrix Genechip® for massively parallel discovery of single position polymorphisms in lettuce (Lactuca spp.).

    PubMed

    Stoffel, Kevin; van Leeuwen, Hans; Kozik, Alexander; Caldwell, David; Ashrafi, Hamid; Cui, Xinping; Tan, Xiaoping; Hill, Theresa; Reyes-Chin-Wo, Sebastian; Truco, Maria-Jose; Michelmore, Richard W; Van Deynze, Allen

    2012-05-14

    High-resolution genetic maps are needed in many crops to help characterize the genetic diversity that determines agriculturally important traits. Hybridization to microarrays to detect single feature polymorphisms is a powerful technique for marker discovery and genotyping because of its highly parallel nature. However, microarrays designed for gene expression analysis rarely provide sufficient gene coverage for optimal detection of nucleotide polymorphisms, which limits utility in species with low rates of polymorphism such as lettuce (Lactuca sativa). We developed a 6.5 million feature Affymetrix GeneChip® for efficient polymorphism discovery and genotyping, as well as for analysis of gene expression in lettuce. Probes on the microarray were designed from 26,809 unigenes from cultivated lettuce and an additional 8,819 unigenes from four related species (L. serriola, L. saligna, L. virosa and L. perennis). Where possible, probes were tiled with a 2 bp stagger, alternating on each DNA strand; providing an average of 187 probes covering approximately 600 bp for each of over 35,000 unigenes; resulting in up to 13 fold redundancy in coverage per nucleotide. We developed protocols for hybridization of genomic DNA to the GeneChip® and refined custom algorithms that utilized coverage from multiple, high quality probes to detect single position polymorphisms in 2 bp sliding windows across each unigene. This allowed us to detect greater than 18,000 polymorphisms between the parental lines of our core mapping population, as well as numerous polymorphisms between cultivated lettuce and wild species in the lettuce genepool. Using marker data from our diversity panel comprised of 52 accessions from the five species listed above, we were able to separate accessions by species using both phylogenetic and principal component analyses. Additionally, we estimated the diversity between different types of cultivated lettuce and distinguished morphological types. By hybridizing genomic DNA to a custom oligonucleotide array designed for maximum gene coverage, we were able to identify polymorphisms using two approaches for pair-wise comparisons, as well as a highly parallel method that compared all 52 genotypes simultaneously.

  20. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    NASA Technical Reports Server (NTRS)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications." The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhancement of an electromagnetics code (CHARGE) to be able to effectively model antenna problems; application of lessons learned from high-order/spectral solutions of swirling 3D jets to the electromagnetics project; transition of a high-order fluids code, FDL3DI, to solve Maxwell's equations using compact differencing; development and demonstration of improved radiation-absorbing boundary conditions for high-order CEM; and extension of the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  1. Flexure mechanism-based parallelism measurements for chip-on-glass bonding

    NASA Astrophysics Data System (ADS)

    Jung, Seung Won; Yun, Won Soo; Jin, Songwan; Kim, Bo Sun; Jeong, Young Hun

    2011-08-01

    Recently, liquid crystal displays (LCDs) have played vital roles in a variety of electronic devices such as televisions, cellular phones, and desktop/laptop monitors because of their enhanced volume, performance, and functionality. However, there is still a need for thinner LCD panels due to the trend of miniaturization in electronic applications. Thus, chip-on-glass (COG) bonding has become one of the most important aspects in the LCD panel manufacturing process. In this study, a novel sensor was developed to measure the parallelism between the tooltip planes of the bonding head and the backup of the COG main bonder, which has previously been estimated by prescale pressure films in industry. The sensor developed in this study is based on a flexure mechanism, and it can measure the total pressing force and the inclination angles in two directions that satisfy the quantitative definition of parallelism. To improve the measurement accuracy, the sensor was calibrated based on the estimation of the total pressing force and the inclination angles using the least-squares method. To verify the accuracy of the sensor, the estimation results for parallelism were compared with those from prescale pressure film measurements. In addition, the influence of parallelism on the bonding quality was experimentally demonstrated. The sensor was successfully applied to the measurement of parallelism in the COG-bonding process with an accuracy of more than three times that of the conventional method using prescale pressure films.

  2. A Two-dimensional Sixteen Channel Transmit/Receive Coil Array for Cardiac MRI at 7.0 Tesla: Design, Evaluation and Application

    PubMed Central

    Thalhammer, Christof; Renz, Wolfgang; Winter, Lukas; Hezel, Fabian; Rieger, Jan; Pfeiffer, Harald; Graessl, Andreas; Seifert, Frank; Hoffmann, Werner; von Knobelsdorff-Brenkenhoff, Florian; Tkachenko, Valeriy; Schulz-Menger, Jeanette; Kellman, Peter; Niendorf, Thoralf

    2012-01-01

    Purpose To design, evaluate and apply a two-dimensional 16 channel transmit/receive coil array tailored for cardiac MRI at 7.0 Tesla. Material and Methods The cardiac coil array consists of 2 sections each using 8 elements arranged in a 2 × 4 array. RF safety was validated by SAR simulations. Cardiac imaging was performed using 2D CINE FLASH imaging, T2* mapping and fat-water separation imaging. The characteristics of the coil array were analyzed including parallel imaging performance, left ventricular chamber quantification and overall image quality. Results RF characteristics were found to be appropriate for all subjects included in the study. The SAR values derived from the simulations fall well within the limits of the legal guidelines. The baseline SNR advantage at 7.0 T was put to use to acquire 2D CINE images of the heart with a very high spatial resolution of (1 × 1 × 4) mm³. The proposed coil array supports 1D acceleration factors of up to R=4 without significantly impairing image quality. Conclusions The 16 channel TX/RX coil has the capability to acquire high contrast and high spatial resolution images of the heart at 7.0 Tesla. PMID:22706727

  3. Database-Centric Method for Automated High-Throughput Deconvolution and Analysis of Kinetic Antibody Screening Data.

    PubMed

    Nobrega, R Paul; Brown, Michael; Williams, Cody; Sumner, Chris; Estep, Patricia; Caffry, Isabelle; Yu, Yao; Lynaugh, Heather; Burnina, Irina; Lilov, Asparouh; Desroches, Jordan; Bukowski, John; Sun, Tingwan; Belk, Jonathan P; Johnson, Kirt; Xu, Yingda

    2017-10-01

    The state-of-the-art industrial drug discovery approach is the empirical interrogation of a library of drug candidates against a target molecule. The advantage of high-throughput kinetic measurements over equilibrium assessments is the ability to measure each of the kinetic components of binding affinity. Although high-throughput capabilities have improved with advances in instrument hardware, three bottlenecks in data processing remain: (1) intrinsic molecular properties that lead to poor biophysical quality in vitro are not accounted for in commercially available analysis models, (2) processing data through a user interface is time-consuming and not amenable to parallelized data collection, and (3) a commercial solution that includes historical kinetic data in the analysis of kinetic competition data does not exist. Herein, we describe a generally applicable method for the automated analysis, storage, and retrieval of kinetic binding data. This analysis can deconvolve poor quality data on-the-fly and store and organize historical data in a queryable format for use in future analyses. Such database-centric strategies afford greater insight into the molecular mechanisms of kinetic competition, allowing for the rapid identification of allosteric effectors and the presentation of kinetic competition data in absolute terms of percent bound to antigen on the biosensor.

  4. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE)

    PubMed Central

    Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram

    2010-01-01

    MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794

  5. Synthesis of high-quality libraries of long (150mer) oligonucleotides by a novel depurination controlled process

    PubMed Central

    LeProust, Emily M.; Peck, Bill J.; Spirin, Konstantin; McCuen, Heather Brummel; Moore, Bridget; Namsaraev, Eugeni; Caruthers, Marvin H.

    2010-01-01

    We have achieved the ability to synthesize thousands of unique, long oligonucleotides (150mers) in fmol amounts using parallel synthesis of DNA on microarrays. The sequence accuracy of the oligonucleotides in such large-scale syntheses has been limited by the yields and side reactions of the DNA synthesis process used. While there has been significant demand for libraries of long oligos (150mer and more), the yields in conventional DNA synthesis and the associated side reactions have previously limited the availability of oligonucleotide pools to lengths <100 nt. Using novel array based depurination assays, we show that the depurination side reaction is the limiting factor for the synthesis of libraries of long oligonucleotides on Agilent Technologies’ SurePrint® DNA microarray platform. We also demonstrate how depurination can be controlled and reduced by a novel detritylation process to enable the synthesis of high quality, long (150mer) oligonucleotide libraries and we report the characterization of synthesis efficiency for such libraries. Oligonucleotide libraries prepared with this method have changed the economics and availability of several existing applications (e.g. targeted resequencing, preparation of shRNA libraries, site-directed mutagenesis), and have the potential to enable even more novel applications (e.g. high-complexity synthetic biology). PMID:20308161

  6. Design of a thin-plate based tunable high-quality narrow passband filter for elastic transverse waves propagate in metals

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L. H.; Hu, C. L.; Yan, W. S.; Pennec, Yan; Hu, N.

    2018-03-01

    For the elastic SV (transverse) waves in metals, a high-quality narrow passband filter that consists of aligned parallel thin plates with small gaps is designed. In order to obtain a good performance, the thin plates should be constituted by materials with a smaller mass density and Young's modulus, such as polymethylmethacrylate (PMMA), compared to the embedded materials in which the elastic SV waves propagate. Both the theoretical model and the full numerical simulation show that the transmission spectrum of the designed filter demonstrates several peaks with flawless transmission within the 0-20 kHz frequency range. The peaks can be readily tuned by manipulating the geometrical parameters of the plates. Therefore, the current design works well for both low and high frequencies with a controllable size. Even for low frequencies on the order of kilohertz, the size of this filter can still be limited to the order of centimeters, which significantly benefits real applications. The investigation also finds that the same filter is valid when using different metals, and the reason behind this is explained theoretically. Additionally, the effect of bonding conditions of interfaces between the thin plates and the base material is investigated using a spring model.

  7. A rapid parallelization of cone-beam projection and back-projection operator based on texture fetching interpolation

    NASA Astrophysics Data System (ADS)

    Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao

    2015-03-01

    Projection and back-projection are the most computationally consuming parts of Computed Tomography (CT) reconstruction. Parallelization strategies using GPU computing techniques have been introduced. In this paper we present a new parallelization scheme for both projection and back-projection. The proposed method is based on the CUDA technology provided by the NVIDIA Corporation. Instead of building a complex model, we aimed at optimizing the existing algorithm and making it suitable for CUDA implementation so as to gain fast computation speed. Besides making use of the texture fetching operation, which helps gain faster interpolation speed, we fixed the number of samples in the computation of projection to ensure the synchronization of blocks and threads, thus preventing the latency caused by inconsistent computation complexity. Experimental results have proven the computational efficiency and imaging quality of the proposed method.
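
    As a rough, CPU-only illustration of the fixed-sample-count idea described above, the following Python/NumPy sketch computes line integrals using the same number of interpolated samples along every ray; the 2D geometry, the function name forward_project and all parameter values are illustrative assumptions, and SciPy's linear interpolation merely stands in for GPU texture fetching.

```python
# Hypothetical CPU/NumPy stand-in for the GPU projector described above: every ray
# is sampled at the SAME fixed number of points (so all "threads" would do identical
# work), and hardware texture interpolation is emulated with SciPy's interpolation.
import numpy as np
from scipy.ndimage import map_coordinates

def forward_project(volume, src, det_pixels, n_samples=256):
    """Line integrals from one source position to each detector pixel.

    volume     : 2D image (toy stand-in for the 3D cone-beam volume)
    src        : (y, x) source position in image coordinates
    det_pixels : array of (y, x) detector pixel positions, shape (n_det, 2)
    n_samples  : fixed sample count per ray (keeps per-ray work identical)
    """
    src = np.asarray(src, dtype=float)
    sino = np.empty(len(det_pixels))
    t = np.linspace(0.0, 1.0, n_samples)               # same parametrisation for every ray
    for i, det in enumerate(np.asarray(det_pixels, dtype=float)):
        pts = src[None, :] + t[:, None] * (det - src)  # equally spaced points on the ray
        vals = map_coordinates(volume, pts.T, order=1, mode="constant")
        step = np.linalg.norm(det - src) / (n_samples - 1)
        sino[i] = vals.sum() * step                    # Riemann-sum approximation of the line integral
    return sino

# Usage: project a toy phantom onto a small linear detector.
vol = np.zeros((128, 128)); vol[48:80, 48:80] = 1.0
dets = [(127.0, x) for x in np.linspace(0, 127, 64)]
proj = forward_project(vol, src=(0.0, 64.0), det_pixels=dets)
```

    Because every ray is sampled the same number of times, the per-ray workload is uniform, which is the property the abstract exploits to keep CUDA blocks and threads synchronized.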

  8. Targeted Alteration of Dietary Omega-3 and Omega-6 Fatty Acids for the Treatment of Post-Traumatic Headaches

    DTIC Science & Technology

    2016-10-01

    reduced psychological distress and improved quality-of-life in a chronic headache population. We propose to carry out a 2-arm, parallel group...emphasize the role of inflammation, cytokine modulation, microglial activation, and abnormalities in neurotransmitter activity in mediating PTH. These...anti- and pro-nociceptive lipid mediators and their precursor fatty acids, reduced psychological distress and improved quality-of-life in a chronic

  9. Rapid evaluation and quality control of next generation sequencing data with FaQCs.

    PubMed

    Lo, Chien-Chi; Chain, Patrick S G

    2014-11-19

    Next generation sequencing (NGS) technologies that parallelize the sequencing process and produce thousands to millions, or even hundreds of millions of sequences in a single sequencing run, have revolutionized genomic and genetic research. Because of the vagaries of any platform's sequencing chemistry, the experimental processing, machine failure, and so on, the quality of sequencing reads is never perfect, and often declines as the read is extended. These errors invariably affect downstream analysis/application and should therefore be identified early on to mitigate any unforeseen effects. Here we present a novel FastQ Quality Control Software (FaQCs) that can rapidly process large volumes of data, and which improves upon previous solutions to monitor the quality and remove poor quality data from sequencing runs. Both the speed of processing and the memory footprint of storing all required information have been optimized via algorithmic and parallel processing solutions. The trimmed output compared side-by-side with the original data is part of the automated PDF output. We show how this tool can help data analysis by providing a few examples, including an increased percentage of reads recruited to references, improved single nucleotide polymorphism identification as well as de novo sequence assembly metrics. FaQCs combines several features of currently available applications into a single, user-friendly process, and includes additional unique capabilities such as filtering the PhiX control sequences, conversion of FASTQ formats, and multi-threading. The original data and trimmed summaries are reported within a variety of graphics and reports, providing a simple way to do data quality control and assurance.
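
    As a point of reference for the trimming step, here is a minimal, generic sliding-window Phred-quality trimmer in Python; it is not FaQCs' actual algorithm or code, and the window size, quality cutoff and Phred offset shown are illustrative defaults.

```python
# A generic sketch of sliding-window quality trimming (illustrative only; FaQCs'
# actual trimming and filtering logic is more involved).
def trim_read(seq, qual, window=5, min_q=20, phred_offset=33):
    """Trim the 3' end of a read once the mean Phred quality in a sliding
    window drops below min_q. `qual` is the FASTQ quality string."""
    scores = [ord(c) - phred_offset for c in qual]
    cut = len(seq)
    for start in range(0, max(len(seq) - window + 1, 1)):
        win = scores[start:start + window]
        if sum(win) / len(win) < min_q:
            cut = start
            break
    return seq[:cut], qual[:cut]

# Usage on a single FASTQ record (sequence line + quality line):
seq  = "ACGTACGTACGTACGTAC"
qual = "IIIIIIIIIIIII###55"          # quality collapses near the 3' end
print(trim_read(seq, qual))
```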

  10. Study on the short-term effects of increased alcohol and cigarette consumption in healthy young men’s seminal quality

    PubMed Central

    Silva, Joana Vieira; Cruz, Daniel; Gomes, Mariana; Correia, Bárbara Regadas; Freitas, Maria João; Sousa, Luís; Silva, Vladimiro; Fardilha, Margarida

    2017-01-01

    Many studies have reported a negative impact of lifestyle factors on testicular function, spermatozoa parameters and pituitary-gonadal axis. However, conclusions are difficult to draw, since studies in the general population are rare. In this study we intended to address the early and late short-term impact of acute lifestyle alterations on young men’s reproductive function. Thirty-six healthy male students, who attended the Portuguese academic festivities, provided semen samples and answered questionnaires at three time-points. The consumption of alcohol and cigarette increased more than 8 and 2 times, respectively, during the academic festivities and resulted in deleterious effects on semen quality: one week after the festivities, a decrease on semen volume, spermatozoa motility and normal morphology was observed, in parallel with an increase on immotile spermatozoa, head and midpiece defects and spermatozoa oxidative stress. Additionally, three months after the academic festivities, besides the detrimental effect on volume, motility and morphology, a negative impact on spermatozoa concentration was observed, along with a decrease on epididymal, seminal vesicles and prostate function. This study contributed to understanding the pathophysiology underlying semen quality degradation induced by acute lifestyle alterations, suggesting that high alcohol and cigarette consumption are associated with decreased semen quality in healthy young men. PMID:28367956

  11. Bioreactor process parameter screening utilizing a Plackett-Burman design for a model monoclonal antibody.

    PubMed

    Agarabi, Cyrus D; Schiel, John E; Lute, Scott C; Chavez, Brittany K; Boyne, Michael T; Brorson, Kurt A; Khan, Mansoora; Read, Erik K

    2015-06-01

    Consistent high-quality antibody yield is a key goal for cell culture bioprocessing. This endpoint is typically achieved in commercial settings through product and process engineering of bioreactor parameters during development. When the process is complex and not optimized, small changes in composition and control may yield a finished product of less desirable quality. Therefore, changes proposed to currently validated processes usually require justification and are reported to the US FDA for approval. Recently, design-of-experiments-based approaches have been explored to rapidly and efficiently achieve this goal of optimized yield with a better understanding of product and process variables that affect a product's critical quality attributes. Here, we present a laboratory-scale model culture where we apply a Plackett-Burman screening design to parallel cultures to study the main effects of 11 process variables. This exercise allowed us to determine the relative importance of these variables and identify the most important factors to be further optimized in order to control both desirable and undesirable glycan profiles. We found engineering changes relating to culture temperature and nonessential amino acid supplementation significantly impacted glycan profiles associated with fucosylation, β-galactosylation, and sialylation. All of these are important for monoclonal antibody product quality. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  12. Parallel Guessing: A Strategy for High-Speed Computation

    DTIC Science & Technology

    1984-09-19

    for using additional hardware to obtain higher processing speed). In this paper we argue that parallel guessing for image analysis is a useful...from a true solution, or the correctness of a guess, can be readily checked. We review image-analysis algorithms having a parallel guessing or

  13. A high-throughput approach to profile RNA structure.

    PubMed

    Delli Ponti, Riccardo; Marti, Stefanie; Armaos, Alexandros; Tartaglia, Gian Gaetano

    2017-03-17

    Here we introduce the Computational Recognition of Secondary Structure (CROSS) method to calculate the structural profile of an RNA sequence (single- or double-stranded state) at single-nucleotide resolution and without sequence length restrictions. We trained CROSS using data from high-throughput experiments such as Selective 2′-Hydroxyl Acylation analyzed by Primer Extension (SHAPE; Mouse and HIV transcriptomes) and Parallel Analysis of RNA Structure (PARS; Human and Yeast transcriptomes) as well as high-quality NMR/X-ray structures (PDB database). The algorithm uses primary structure information alone to predict experimental structural profiles with >80% accuracy, showing high performance on large RNAs such as Xist (17 900 nucleotides; Area Under the ROC Curve (AUC) of 0.75 on dimethyl sulfate (DMS) experiments). We integrated CROSS into thermodynamics-based methods to predict secondary structure and observed an increase in their predictive power by up to 30%. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. A review of the theory, methods and recent applications of high-throughput single-cell droplet microfluidics

    NASA Astrophysics Data System (ADS)

    Lagus, Todd P.; Edd, Jon F.

    2013-03-01

    Most cell biology experiments are performed in bulk cell suspensions where cell secretions become diluted and mixed in a contiguous sample. Confinement of single cells to small, picoliter-sized droplets within a continuous phase of oil provides chemical isolation of each cell, creating individual microreactors where rare cell qualities are highlighted and otherwise undetectable signals can be concentrated to measurable levels. Recent work in microfluidics has yielded methods for the encapsulation of cells in aqueous droplets and hydrogels at kilohertz rates, creating the potential for millions of parallel single-cell experiments. However, commercial applications of high-throughput microdroplet generation and downstream sensing and actuation methods are still emerging for cells. Using fluorescence-activated cell sorting (FACS) as a benchmark for commercially available high-throughput screening, this focused review discusses the fluid physics of droplet formation, methods for cell encapsulation in liquids and hydrogels, sensors and actuators and notable biological applications of high-throughput single-cell droplet microfluidics.

  15. A Compact Microwave Microfluidic Sensor Using a Re-Entrant Cavity.

    PubMed

    Hamzah, Hayder; Abduljabar, Ali; Lees, Jonathan; Porch, Adrian

    2018-03-19

    A miniaturized 2.4 GHz re-entrant cavity has been designed, manufactured and tested as a sensor for microfluidic compositional analysis. It has been fully evaluated experimentally with water and common solvents, namely methanol, ethanol, and chloroform, with excellent agreement with the expected behaviour predicted by the Debye model. The sensor's performance has also been assessed for analysis of segmented flow using water and oil. The samples' interaction with the electric field in the gap region has been maximized by aligning the sample tube parallel to the electric field in this region, and the small width of the gap (typically 1 mm) results in a highly localised complex permittivity measurement. The re-entrant cavity has simple mechanical geometry, small size, a high quality factor, and, due to the high concentration of electric field in the gap region, a very small mode volume. These factors combine to result in a highly sensitive, compact sensor for both pure liquids and liquid mixtures in capillary or microfluidic environments.

  16. Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christopher R. Johnson, Charles D. Hansen

    2001-10-29

    The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Applications was to combine the forces of six leading edge laboratories working in the areas of visualization and distributed computing and high performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world class expertise in parallel rendering, deep image based rendering, immersive environment technology, large-format multi-projector wall based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality of service technology and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, ''Computational Grids'' and CAVE technology and to add these to the teams that have developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.

  17. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    NASA Astrophysics Data System (ADS)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.

  18. Operation of high power converters in parallel

    NASA Technical Reports Server (NTRS)

    Decker, D. K.; Inouye, L. Y.

    1993-01-01

    High power converters that are used in space power subsystems are limited in power handling capability due to component and thermal limitations. For applications, such as Space Station Freedom, where multi-kilowatts of power must be delivered to user loads, parallel operation of converters becomes an attractive option when considering overall power subsystem topologies. TRW developed three different unequal power sharing approaches for parallel operation of converters. These approaches, known as droop, master-slave, and proportional adjustment, are discussed and test results are presented.
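
    To make the droop approach concrete, the sketch below solves the load sharing of two paralleled converters whose outputs follow a simple droop law V = V_set - R_droop · I; the numbers, variable names and function name are purely illustrative and do not reflect TRW's actual designs.

```python
# Illustrative two-converter droop-sharing calculation: each converter follows
# V = V_set - R_droop * I, both see the same bus voltage, and the branch currents
# must sum to the load current.
def droop_share(v_set, r_droop, i_load):
    """Return (bus_voltage, [branch currents]) for droop-controlled converters in parallel."""
    # Solve: V = v_set[k] - r_droop[k] * I_k for all k, with sum(I_k) = i_load
    g = [1.0 / r for r in r_droop]                      # droop "conductances"
    v_bus = (sum(v * gk for v, gk in zip(v_set, g)) - i_load) / sum(g)
    currents = [(v - v_bus) / r for v, r in zip(v_set, r_droop)]
    return v_bus, currents

# Two nominally 120 V converters with slightly different set-points and droop slopes, 50 A load:
v_bus, i = droop_share(v_set=[120.0, 119.5], r_droop=[0.05, 0.10], i_load=50.0)
print(v_bus, i)   # the converter with the shallower droop picks up more of the current
```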

  19. Experience in highly parallel processing using DAP

    NASA Technical Reports Server (NTRS)

    Parkinson, D.

    1987-01-01

    Distributed Array Processors (DAP) have been in day to day use for ten years and a large amount of user experience has been gained. The profile of user applications is similar to that of the Massively Parallel Processor (MPP) working group. Experience has shown that contrary to expectations, highly parallel systems provide excellent performance on so-called dirty problems such as the physics part of meteorological codes. The reasons for this observation are discussed. The arguments against replacing bit processors with floating point processors are also discussed.

  20. Parallel Computing for Probabilistic Response Analysis of High Temperature Composites

    NASA Technical Reports Server (NTRS)

    Sues, R. H.; Lua, Y. J.; Smith, M. D.

    1994-01-01

    The objective of this Phase I research was to establish the required software and hardware strategies to achieve large scale parallelism in solving PCM problems. To meet this objective, several investigations were conducted. First, we identified the multiple levels of parallelism in PCM and the computational strategies to exploit these parallelisms. Next, several software and hardware efficiency investigations were conducted. These involved the use of three different parallel programming paradigms and solution of two example problems on both a shared-memory multiprocessor and a distributed-memory network of workstations.

  1. Hierarchical Parallelism in Finite Difference Analysis of Heat Conduction

    NASA Technical Reports Server (NTRS)

    Padovan, Joseph; Krishna, Lala; Gute, Douglas

    1997-01-01

    Based on the concept of hierarchical parallelism, this research effort resulted in highly efficient parallel solution strategies for very large scale heat conduction problems. Overall, the method of hierarchical parallelism involves the partitioning of thermal models into several substructured levels wherein an optimal balance into various associated bandwidths is achieved. The details are described in this report. Overall, the report is organized into two parts. Part 1 describes the parallel modelling methodology and associated multilevel direct, iterative and mixed solution schemes. Part 2 establishes both the formal and computational properties of the scheme.
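
    As a toy illustration of the partitioning idea only (not the report's multilevel direct, iterative and mixed solvers), the sketch below advances a 1D explicit finite-difference heat-conduction model with the rod split into substructures that exchange only their interface values, the part that would be distributed across processors in a parallel implementation; all sizes and material values are assumed.

```python
# Substructured explicit FTCS heat conduction: each block is updated independently
# from its own data plus two halo values, which is what a parallel code would exchange.
import numpy as np

def step_block(u, alpha, dx, dt, left_ghost, right_ghost):
    """One explicit FTCS step for a block, given its halo (ghost) values."""
    padded = np.concatenate([[left_ghost], u, [right_ghost]])
    return u + alpha * dt / dx**2 * (padded[2:] - 2 * padded[1:-1] + padded[:-2])

alpha, dx = 1e-4, 0.01
dt = 0.4 * dx**2 / alpha                       # inside the explicit stability limit
u = np.zeros(400); u[180:220] = 100.0          # hot spot in the middle of the rod
blocks = np.array_split(np.arange(u.size), 4)  # 4 substructures (could run in parallel)

for _ in range(500):
    new = u.copy()
    for idx in blocks:
        lg = u[idx[0] - 1] if idx[0] > 0 else u[idx[0]]           # halo or insulated boundary
        rg = u[idx[-1] + 1] if idx[-1] < u.size - 1 else u[idx[-1]]
        new[idx] = step_block(u[idx], alpha, dx, dt, lg, rg)
    u = new
```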

  2. Critical appraisal of clinical trials in multiple system atrophy: Toward better quality.

    PubMed

    Castro Caldas, Ana; Levin, Johannes; Djaldetti, Ruth; Rascol, Olivier; Wenning, Gregor; Ferreira, Joaquim J

    2017-10-01

    Multiple system atrophy (MSA) is a rare neurodegenerative disease of undetermined cause. Although many clinical trials have been conducted, there is still no treatment that cures the disease or slows its progression. We sought to assess the clinical trials, methodology, and quality of reporting of clinical trials conducted in MSA patients. We conducted a systematic review of all trials with at least 1 MSA patient subject to any pharmacological/nonpharmacological interventions. Two independent reviewers evaluated the methodological characteristics and quality of reporting of trials. A total of 60 clinical trials were identified, including 1375 MSA patients. Of the trials, 51% (n = 31) were single-arm studies. A total of 28% (n = 17) had a parallel design, half of which (n = 13) were placebo controlled. Of the studies, 8 (13.3%) were conducted in a multicenter setting, 3 of which were responsible for 49.3% (n = 678) of the total included MSA patients. The description of primary outcomes was unclear in 60% (n = 40) of trials. Only 10 (16.7%) clinical trials clearly described the randomization process. Blinding of the participants, personnel, and outcome assessments was at high risk of bias in the majority of studies. The number of dropouts/withdrawals was high (n = 326, 23.4% of the included patients). Overall, the design and quality of reporting of the reviewed studies is unsatisfactory. The most frequent clinical trials were small and single centered. Inadequate reporting was related to the information on the randomization process, sequence generation, allocation concealment, blinding of participants, and sample size calculations. Although improved during recent years, methodological quality and trial design need to be optimized to generate more informative results. © 2017 International Parkinson and Movement Disorder Society.

  3. Cardiac cine imaging at 3 Tesla: initial experience with a 32-element body-array coil.

    PubMed

    Fenchel, Michael; Deshpande, Vibhas S; Nael, Kambiz; Finn, J Paul; Miller, Stephan; Ruehm, Stefan; Laub, Gerhard

    2006-08-01

    We sought to assess the feasibility of cardiac cine imaging and evaluate image quality at 3 T using a body-array coil with 32 coil elements. Eight healthy volunteers (3 men; median age 29 years) were examined on a 3-T magnetic resonance scanner (Magnetom Trio, Siemens Medical Solutions) using a 32-element phased-array coil (prototype from In vivo Corp.). Gradient-recalled-echo (GRE) cine (GRAPPAx3), GRE cine with tagging lines, steady-state-free-precession (SSFP) cine (GRAPPAx3 and x4), and SSFP cine (TSENSEx4 and x6) images were acquired in short-axis and 4-chamber views. Reference images with identical scan parameters were acquired using the total-imaging-matrix (Tim) coil system with a total of 12 coil elements. Images were assessed by 2 observers in a consensus reading with regard to image quality, noise and presence of artifacts. Furthermore, signal-to-noise values were determined in phantom measurements. In phantom measurements signal-to-noise values were increased by 115-155% for the various cine sequences using the 32-element coil. Scoring of image quality yielded statistically significantly increased image quality with the SSFP-GRAPPAx4, SSFP-TSENSEx4, and SSFP-TSENSEx6 sequences using the 32-element coil (P < 0.05). Similarly, scoring of image noise yielded a statistically significant lower noise rating with the SSFP-GRAPPAx4, GRE-GRAPPAx3, SSFP-TSENSEx4, and SSFP-TSENSEx6 sequences using the 32-element coil (P < 0.05). This study shows that cardiac cine imaging at 3 T using a 32-element body-array coil is feasible in healthy volunteers. Using a large number of coil elements with a favorable sensitivity profile supports faster image acquisition, with high diagnostic image quality even for high parallel imaging factors.

  4. Forecasting changes in water quality in rivers associated with growing biofuels in the Arkansas-White-Red river drainage, USA

    DOE PAGES

    Jager, Henriette I.; Baskaran, Latha M.; Schweizer, Peter E.; ...

    2014-05-15

    We find that the mid-section of the Arkansas-White-Red (AWR) river basin near the 100th parallel is particularly promising for sustainable biomass production using cellulosic perennial crops and residues. Along this longitudinal band, precipitation becomes limiting to competing crops that require irrigation from an increasingly depleted groundwater aquifer. In addition, the deep-rooted perennial, switchgrass, produces modest-to-high yields in this region with minimal inputs and could compete against alternative crops and land uses at relatively low cost. Previous studies have also suggested that switchgrass and other perennial feedstocks offer environmentally benign alternatives to corn and corn stover. However, water quality implications remain a significant concern for conversion of marginal lands to bioenergy production because excess nutrients produced by agriculture for food or for energy contribute to eutrophication in the dead zone in the Gulf of Mexico. This study addresses water quality implications for the AWR river basin. We used the SWAT model to compare water quality in rivers draining the baseline, pre-cellulosic-bioenergy landscape and post-cellulosic-bioenergy landscapes for 2022 and 2030. Simulated water quality responses varied across the region, but with a net tendency toward decreased amounts of nutrients and sediment, particularly in subbasins with large areas of bioenergy crops in the 2030 future scenarios. We conclude that water quality is one aspect of sustainability for which cellulosic bioenergy production in this region holds promise.

  5. Data consistency-driven scatter kernel optimization for x-ray cone-beam CT

    NASA Astrophysics Data System (ADS)

    Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong

    2015-08-01

    Accurate and efficient scatter correction is essential for acquisition of high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Since data consistency in the mid-plane of CBCT is primarily challenged by scatter, we utilized data consistency to confirm the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and also by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessments of image quality parameters such as contrast and structural similarity (SSIM) revealed that the optimally selected scatter kernel improves the contrast of scatter-free images by up to 99.5%, 94.4%, and 84.4%, and the SSIM by up to 96.7%, 90.5%, and 87.8%, in the XCAT study, the ACS head phantom study, and the pelvis phantom study, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation.
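
    The particle swarm optimizer itself is generic; a bare-bones version is sketched below in Python with a placeholder two-parameter objective, whereas the paper's objective is the parallel-beam data-consistency error of the scatter-corrected projections. All names, bounds and coefficients here are illustrative.

```python
# A bare-bones particle swarm optimizer of the kind that could drive a kernel-parameter
# search; the cost function below is only a stand-in with a known optimum.
import numpy as np

def pso(cost, bounds, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))      # particle positions (kernel parameters)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Placeholder objective: two scatter-kernel parameters (amplitude, width) with a known optimum,
# standing in for the data-consistency cost of the corrected projections.
cost = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 12.0) ** 2
best, best_f = pso(cost, bounds=[(0.0, 1.0), (1.0, 50.0)])
```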

  6. High-quality eddy-covariance CO2 budgets under cold climate conditions

    NASA Astrophysics Data System (ADS)

    Kittler, Fanny; Eugster, Werner; Foken, Thomas; Heimann, Martin; Kolle, Olaf; Göckede, Mathias

    2017-08-01

    This study aimed at quantifying potential negative effects of instrument heating in order to improve eddy-covariance flux data quality in cold environments. Our overarching objective was to minimize heating-related bias in annual CO2 budgets from an Arctic permafrost system. We used continuous eddy-covariance measurements covering three full years within an Arctic permafrost ecosystem, with parallel operation of sonic anemometers with and without activated heating, as well as parallel operation of open- and closed-path gas analyzers, the latter serving as a reference. Our results demonstrate that the sonic anemometer heating has a direct effect on temperature measurements while the turbulent wind field is not affected. As a consequence, fluxes of sensible heat are increased by an average of 5 W m-2 with activated heating, while no direct effect on other scalar fluxes was observed. However, the biased measurements of sensible heat fluxes can have an indirect effect on the CO2 fluxes in case they are used as input for a density-flux (WPL) correction of an open-path gas analyzer. Evaluating the self-heating effect of the open-path gas analyzer by comparing CO2 flux measurements between open- and closed-path gas analyzers, we found systematically higher CO2 uptake recorded with the open-path sensor, leading to a cumulative annual offset of 96 gC m-2, which was not only the result of the cold winter season but also due to substantial self-heating effects during summer. With an inclined sensor mounting, only a fraction of the self-heating correction for vertically mounted instruments is required.

  7. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear speedups in some cases, are possible.

  8. Grade pending: lessons for hospital quality reporting from the New York City restaurant sanitation inspection program.

    PubMed

    Ryan, Andrew M; Detsky, Allan S

    2015-02-01

    Public quality reporting programs have been widely implemented in hospitals in an effort to improve quality and safety. One such program is Hospital Compare, Medicare's national quality reporting program for US hospitals. The New York City sanitary grade inspection program is a parallel effort for restaurants. The aims of Hospital Compare and the New York City sanitary inspection program are fundamentally similar: to address a common market failure resulting from consumers' lack of information on quality and safety. However, by displaying easily understandable information at the point of service, the New York City sanitary inspection program is better designed to encourage informed consumer decision making. We argue that this program holds important lessons for public quality reporting of US hospitals. © 2014 Society of Hospital Medicine.

  9. 36 CFR Appendix D to Part 1191 - Technical

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... inch (13 mm) high shall be ramped, and shall comply with 405 or 406. 304 Turning Space 304.1 General... ground space allows a parallel approach to an element and the side reach is unobstructed, the high side....2 Obstructed High Reach. Where a clear floor or ground space allows a parallel approach to an element and the...

  10. Inexact hardware for modelling weather & climate

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, Tim

    2014-05-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that both approaches to inexact calculations do not substantially affect the quality of the model simulations, provided they are restricted to act only on smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
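
    Reduced-precision arithmetic of this kind is often emulated in software by rounding the significand after every operation; the sketch below does exactly that for a toy oscillator integration and is only an illustration of the approach, not the emulator used in the study.

```python
# Tiny emulation of reduced floating-point precision: values are rounded to a chosen
# number of significand bits after every operation, a common software stand-in for
# inexact/low-precision hardware.
import numpy as np

def round_significand(x, bits):
    """Keep only `bits` significand bits of each float (round-to-nearest)."""
    m, e = np.frexp(np.asarray(x, dtype=np.float64))   # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** bits
    return np.ldexp(np.round(m * scale) / scale, e)

def integrate_oscillator(n_steps, dt=0.01, bits=52):
    """Semi-implicit Euler integration of a harmonic oscillator, rounding after each step."""
    r = lambda x: round_significand(x, bits)
    x, v = 1.0, 0.0
    for _ in range(n_steps):
        v = r(v - r(dt * x))          # dv/dt = -x
        x = r(x + r(dt * v))          # dx/dt =  v
    return float(x), float(v)

print(integrate_oscillator(10000, bits=52))   # close to double precision
print(integrate_oscillator(10000, bits=10))   # heavily reduced significand
```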

  11. Meta-RaPS Algorithm for the Aerial Refueling Scheduling Problem

    NASA Technical Reports Server (NTRS)

    Kaplan, Sezgin; Arin, Arif; Rabadi, Ghaith

    2011-01-01

    The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines). ARSP assumes that jobs have different release times and due dates. The total weighted tardiness is used to evaluate a schedule's quality. Therefore, ARSP can be modeled as a parallel machine scheduling problem with release times and due dates to minimize the total weighted tardiness. Since ARSP is NP-hard, it is more appropriate to develop an approximate or heuristic algorithm to obtain solutions in reasonable computation times. In this paper, the Meta-RaPS-ATC algorithm is implemented to create high quality solutions. Meta-RaPS (Meta-heuristic for Randomized Priority Search) is a recent and promising metaheuristic that is applied by introducing randomness to a construction heuristic. The Apparent Tardiness Cost (ATC) rule, which is a good rule for scheduling problems with a tardiness objective, is used to construct initial solutions, which are improved by an exchange operation. Results are presented for generated instances.
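
    For reference, the greedy construction step driven by the ATC index can be sketched as follows; this is a plain list-scheduling illustration with made-up job data and an assumed look-ahead parameter k, not the paper's full Meta-RaPS randomization and improvement procedure.

```python
# Minimal ATC list-scheduling sketch for identical parallel machines with release times,
# due dates and weighted tardiness (greedy construction step only).
import math

def atc_schedule(jobs, n_machines, k=2.0):
    """jobs: list of dicts with p (processing time), r (release), d (due date), w (weight)."""
    p_bar = sum(j["p"] for j in jobs) / len(jobs)
    machine_free = [0.0] * n_machines
    remaining = list(range(len(jobs)))
    schedule, total_wt = [], 0.0
    while remaining:
        m = min(range(n_machines), key=lambda i: machine_free[i])
        # decision time: the machine-free time, advanced to the earliest release if the machine is idle
        t = max(machine_free[m], min(jobs[j]["r"] for j in remaining))
        ready = [j for j in remaining if jobs[j]["r"] <= t]
        def atc(j):
            job = jobs[j]
            slack = max(job["d"] - job["p"] - t, 0.0)
            return (job["w"] / job["p"]) * math.exp(-slack / (k * p_bar))   # ATC priority index
        best = max(ready, key=atc)
        start, finish = t, t + jobs[best]["p"]
        machine_free[m] = finish
        total_wt += jobs[best]["w"] * max(finish - jobs[best]["d"], 0.0)
        schedule.append((best, m, start, finish))
        remaining.remove(best)
    return schedule, total_wt

# Usage: four aircraft (jobs) on two tankers (machines).
jobs = [dict(p=20, r=0,  d=40, w=3), dict(p=15, r=5,  d=30, w=2),
        dict(p=25, r=0,  d=70, w=1), dict(p=10, r=10, d=35, w=2)]
print(atc_schedule(jobs, n_machines=2))
```

    The ATC index (w/p)·exp(-max(d - p - t, 0)/(k·p̄)) trades the weighted-shortest-processing-time priority off against remaining slack, which is why it works well as a construction rule for weighted-tardiness objectives.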

  12. New Imaging Strategies Using a Motion-Resistant Liver Sequence in Uncooperative Patients

    PubMed Central

    Kim, Bong Soo; Lee, Kyung Ryeol; Goh, Myeng Ju

    2014-01-01

    MR imaging has unique benefits for evaluating the liver because of its high-resolution capability and ability to permit detailed assessment of anatomic lesions. In uncooperative patients, motion artifacts can impair the image quality and lead to the loss of diagnostic information. In this setting, recent advances in motion-resistant liver MR techniques, including faster imaging protocols (e.g., dual-echo magnetization-prepared rapid-acquisition gradient echo (MP-RAGE), view-sharing technique), data under-sampling (e.g., gradient recalled echo (GRE) with controlled aliasing in parallel imaging results in higher acceleration (CAIPIRINHA), single-shot echo-train spin-echo (SS-ETSE)), and motion-artifact minimization methods (e.g., radial GRE with/without k-space-weighted image contrast (KWIC)), can provide consistent, artifact-free images with adequate image quality and can lead to promising diagnostic performance. Understanding the different motion-resistant options allows radiologists to adopt the most appropriate technique for their clinical practice and thereby significantly improve patient care. PMID:25243115

  13. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    PubMed

    Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf

    2014-01-01

    Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
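
    The final thresholding and labelling stage can be sketched in a few lines of Python with SciPy and scikit-image; the seed-point, gradient/normal-direction transform that makes the image amenable to global thresholding is the paper's contribution and is not reproduced here, and all parameter values below are illustrative.

```python
# Sketch of global-Otsu segmentation of bright nuclei: smooth, threshold, label,
# and discard tiny spurious components.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu, gaussian

def segment_nuclei(image, sigma=2.0, min_size=50):
    smoothed = gaussian(image, sigma=sigma)           # suppress noise before thresholding
    mask = smoothed > threshold_otsu(smoothed)        # single global Otsu threshold
    labels, n = ndi.label(mask)                       # connected components = nucleus candidates
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_size) + 1      # drop tiny components
    return np.where(np.isin(labels, keep), labels, 0)

# Usage on a synthetic image with two bright blobs:
img = np.zeros((200, 200))
img[40:70, 40:70] = 1.0
img[120:160, 100:140] = 1.0
img += 0.1 * np.random.default_rng(0).standard_normal(img.shape)
labels = segment_nuclei(img)
```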

  14. Human collagen produced in plants: more than just another molecule.

    PubMed

    Shoseyov, Oded; Posen, Yehudit; Grynspan, Frida

    2014-01-01

    Consequential to its essential role as a mechanical support and affinity regulator in extracellular matrices, collagen constitutes a highly sought after scaffolding material for regeneration and healing applications. However, substantiated concerns have been raised with regard to quality and safety of animal tissue-extracted collagen, particularly in relation to its immunogenicity, risk of disease transmission and overall quality and consistency. In parallel, contamination with undesirable cellular factors can significantly impair its bioactivity, vis-a-vis its impact on cell recruitment, proliferation and differentiation. High-scale production of recombinant human collagen Type I (rhCOL1) in the tobacco plant provides a source of an homogenic, heterotrimeric, thermally stable "virgin" collagen which self assembles to fine homogenous fibrils displaying intact binding sites and has been applied to form numerous functional scaffolds for tissue engineering and regenerative medicine. In addition, rhCOL1 can form liquid crystal structures, yielding a well-organized and mechanically strong membrane, two properties indispensable to extracellular matrix (ECM) mimicry. Overall, the shortcomings of animal- and cadaver-derived collagens arising from their source diversity and recycled nature are fully overcome in the plant setting, constituting a collagen source ideal for tissue engineering and regenerative medicine applications.

  15. Causes of drug shortages in the legal pharmaceutical framework.

    PubMed

    De Weerdt, Elfi; Simoens, Steven; Hombroeckx, Luc; Casteels, Minne; Huys, Isabelle

    2015-03-01

    Different causes of drug shortages can be linked to the pharmaceutical legal framework, such as: parallel trade, quality requirements, economic decisions to suspend or cease production, etc. However until now no in-depth study of the different regulations affecting drug shortages is available. The aim of this paper is to provide an analysis of relevant legal and regulatory measures in the European pharmaceutical framework which influence drug shortages. Different European and national legislations governing human medicinal products were analyzed (e.g. Directive 2001/83/EC and Directive 2011/62/EU), supplemented with literature studies. For patented drugs, external price referencing may encompass the largest impact on drug shortages. For generic medicines, internal or external reference pricing, tendering as well as price capping may affect drug shortages. Manufacturing/quality requirements also contribute to drug shortages, since non-compliance leads to recalls. The influence of parallel trade on drug shortages is still rather disputable. Price and quality regulations are both important causes of drug shortages or drug unavailability. It can be concluded that there is room for improvement in the pharmaceutical legal framework within the lines drawn by the EU to mitigate drug shortages. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Method for fabricating high aspect ratio structures in perovskite material

    DOEpatents

    Karapetrov, Goran T.; Kwok, Wai-Kwong; Crabtree, George W.; Iavarone, Maria

    2003-10-28

    A method of fabricating high aspect ratio ceramic structures in which a selected portion of perovskite or perovskite-like crystalline material is exposed to a high energy ion beam for a time sufficient to cause the crystalline material contacted by the ion beam to have substantially parallel columnar defects. Then selected portions of the material having substantially parallel columnar defects are etched leaving material with and without substantially parallel columnar defects in a predetermined shape having high aspect ratios of not less than 2 to 1. Etching is accomplished by optical or PMMA lithography. There is also disclosed a structure of a ceramic which is superconducting at a temperature in the range of from about 10 K to about 90 K with substantially parallel columnar defects in which the smallest lateral dimension of the structure is less than about 5 microns, and the thickness of the structure is greater than 2 times the smallest lateral dimension of the structure.

  17. Parallel implementation of all-digital timing recovery for high-speed and real-time optical coherent receivers.

    PubMed

    Zhou, Xian; Chen, Xue

    2011-05-09

    Digital coherent receivers combine coherent detection with digital signal processing (DSP) to compensate for transmission impairments, and are therefore a promising candidate for future high-speed optical transmission systems. However, the maximum symbol rate supported by such real-time receivers is limited by the processing rate of the hardware. In order to cope with this difficulty, parallel processing algorithms are imperative. In this paper, we propose a novel parallel digital timing recovery loop (PDTRL) based on our previous work. Furthermore, to increase the dynamic dispersion tolerance range of receivers, we embed a parallel adaptive equalizer in the PDTRL. This parallel joint scheme (PJS) can be used to perform synchronization, equalization and polarization de-multiplexing simultaneously. Finally, we demonstrate that the PDTRL and PJS allow the hardware to process a 112 Gbit/s POLMUX-DQPSK signal while running at clock rates in the hundreds of MHz range. © 2011 Optical Society of America

  18. The method of parallel-hierarchical transformation for rapid recognition of dynamic images using GPGPU technology

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid; Yarovyi, Andrii; Kokriatskaya, Nataliya; Nakonechna, Svitlana; Abramenko, Ludmila; Ławicki, Tomasz; Popiel, Piotr; Yesmakhanova, Laura

    2016-09-01

    The paper presents a method of parallel-hierarchical transformations for rapid recognition of dynamic images using GPU technology. The direct parallel-hierarchical transformation is based on a cluster CPU- and GPU-oriented hardware platform. Mathematical models for training the parallel-hierarchical (PH) network are developed, as well as a training method of the PH network for recognition of dynamic images. This research is most relevant to problems involving high-performance computation over very large arrays of information, designed to implement multi-stage sensing and processing as well as compaction and recognition of data in informational structures and computer devices. The method has such advantages as high performance through the use of recent advances in parallelization, the ability to work with images of very high dimension, ease of scaling when the number of nodes in the cluster changes, and automatic scanning of the local network to detect compute nodes.

  19. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    PubMed

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain method (FDTD) is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better manage the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
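
    For readers unfamiliar with graded meshes, the minimal 1D sketch below (illustrative only, not the authors' parallel 3D scheme) shows how a nonuniform cell size enters the standard FDTD update and how the time step is set by the smallest cell.

```python
# Minimal 1D FDTD sketch on a graded (nonuniform) mesh; illustrative only,
# not the parallel graded-mesh scheme of the paper.
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7 * np.pi
c0 = 1.0 / np.sqrt(eps0 * mu0)

# Graded cell sizes: fine in the centre, coarse at the edges
dz = np.concatenate([np.full(100, 2e-3), np.full(50, 0.5e-3), np.full(100, 2e-3)])
nz = dz.size
dt = 0.5 * dz.min() / c0           # CFL limit set by the smallest cell

Ex = np.zeros(nz + 1)
Hy = np.zeros(nz)

for n in range(500):
    Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
    Ex[1:-1] += dt / (eps0 * 0.5 * (dz[1:] + dz[:-1])) * (Hy[1:] - Hy[:-1])
    Ex[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

print("peak |Ex| after 500 steps:", np.abs(Ex).max())
```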

  20. Infant Massage and Quality of Early Mother–Infant Interactions: Are There Associations with Maternal Psychological Wellbeing, Marital Quality, and Social Support?

    PubMed Central

    Porreca, Alessio; Parolin, Micol; Bozza, Giusy; Freato, Susanna; Simonelli, Alessandra

    2017-01-01

    Infant massage programs have proved to be effective in enhancing the post-natal development of high-risk infants, such as preterm newborns and drug- or HIV-exposed children. Fewer studies have focused on the role of infant massage in supporting the co-construction of early adult–child relationships. In line with this gap in the literature, the present paper reports on a pilot study aimed at investigating longitudinally the quality of mother–child interactions, with specific reference to emotional availability (EA), in a group of mother–child pairs involved in infant massage classes. Moreover, associations between mother–child EA, maternal wellbeing, marital adjustment, and social support were also investigated, with the hypothesis of finding a link between low maternal distress, high couple satisfaction, high perceived support, and interactions of better quality in the dyads. The study involved 20 mothers and their children, aged between 2 and 7 months, who participated in infant massage classes. The assessment took place at three stages: at the beginning of the massage course, at the end of it, and at 1-month follow-up. At the first stage of assessment, self-report questionnaires were administered to examine the presence of maternal psychiatric symptoms (SCL-90-R), perceived social support (MSPSS), and marital adjustment (Dyadic Adjustment Scale); dyadic interactions were observed and rated with the Emotional Availability Scales (Biringen, 2008) at each stage of data collection. The results showed a significant improvement in the quality of mother–child interactions between the first and the last evaluation, parallel to the unfolding of the massage program, highlighting a general increase in maternal and child EA. The presence of maternal psychological distress was associated with less optimal mother–child emotional exchanges, while the hypotheses regarding the influence of couple satisfaction and social support were not confirmed. These preliminary results, if replicated, seem to support the usefulness of infant massage and the importance of focusing on early mother–infant interactions. PMID:28144222

  1. Infant Massage and Quality of Early Mother-Infant Interactions: Are There Associations with Maternal Psychological Wellbeing, Marital Quality, and Social Support?

    PubMed

    Porreca, Alessio; Parolin, Micol; Bozza, Giusy; Freato, Susanna; Simonelli, Alessandra

    2016-01-01

    Infant massage programs have proved to be effective in enhancing the post-natal development of high-risk infants, such as preterm newborns and drug- or HIV-exposed children. Fewer studies have focused on the role of infant massage in supporting the co-construction of early adult-child relationships. In line with this gap in the literature, the present paper reports on a pilot study aimed at investigating longitudinally the quality of mother-child interactions, with specific reference to emotional availability (EA), in a group of mother-child pairs involved in infant massage classes. Moreover, associations between mother-child EA, maternal wellbeing, marital adjustment, and social support were also investigated, with the hypothesis of finding a link between low maternal distress, high couple satisfaction, high perceived support, and interactions of better quality in the dyads. The study involved 20 mothers and their children, aged between 2 and 7 months, who participated in infant massage classes. The assessment took place at three stages: at the beginning of the massage course, at the end of it, and at 1-month follow-up. At the first stage of assessment, self-report questionnaires were administered to examine the presence of maternal psychiatric symptoms (SCL-90-R), perceived social support (MSPSS), and marital adjustment (Dyadic Adjustment Scale); dyadic interactions were observed and rated with the Emotional Availability Scales (Biringen, 2008) at each stage of data collection. The results showed a significant improvement in the quality of mother-child interactions between the first and the last evaluation, parallel to the unfolding of the massage program, highlighting a general increase in maternal and child EA. The presence of maternal psychological distress was associated with less optimal mother-child emotional exchanges, while the hypotheses regarding the influence of couple satisfaction and social support were not confirmed. These preliminary results, if replicated, seem to support the usefulness of infant massage and the importance of focusing on early mother-infant interactions.

  2. Fully Parallel MHD Stability Analysis Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2014-10-01

    Progress on full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and it is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulation of MHD instabilities with low, intermediate and high toroidal mode numbers within both fluid and kinetic plasma models, already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse iteration algorithm, implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is achieved by repeating the steps of the present MARS algorithm using parallel libraries and procedures. Initial results of the code parallelization will be reported. Work is supported by the U.S. DOE SBIR program.
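
    A hedged sketch of the surface-distribution strategy described above: each MPI rank evaluates the matrix rows for its share of magnetic surfaces and rank 0 assembles the result. Here `surface_block` is a hypothetical placeholder and mpi4py is an assumed dependency; this is not MARS code.

```python
# Sketch of distributing matrix construction over magnetic surfaces with MPI.
# surface_block() is a hypothetical stand-in for the per-surface physics.
import numpy as np
from mpi4py import MPI

def surface_block(s, ncols=8):
    """Hypothetical placeholder for the matrix rows contributed by surface s."""
    return s * np.ones((2, ncols))

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_surfaces = 64
my_surfaces = list(range(rank, n_surfaces, size))    # round-robin assignment
blocks = [surface_block(s) for s in my_surfaces]
my_rows = np.vstack(blocks) if blocks else np.empty((0, 8))

all_rows = comm.gather(my_rows, root=0)              # assemble on rank 0
if rank == 0:
    matrix = np.vstack(all_rows)
    print("assembled matrix shape:", matrix.shape)   # (2 * n_surfaces, 8)
```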

  3. Three-Dimensional High-Lift Analysis Using a Parallel Unstructured Multigrid Solver

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1998-01-01

    A directional implicit unstructured agglomeration multigrid solver is ported to shared and distributed memory massively parallel machines using the explicit domain-decomposition and message-passing approach. Because the algorithm operates on local implicit lines in the unstructured mesh, special care is required in partitioning the problem for parallel computing. A weighted partitioning strategy is described which avoids breaking the implicit lines across processor boundaries, while incurring minimal additional communication overhead. Good scalability is demonstrated on a 128 processor SGI Origin 2000 machine and on a 512 processor CRAY T3E machine for reasonably fine grids. The feasibility of performing large-scale unstructured grid calculations with the parallel multigrid algorithm is demonstrated by computing the flow over a partial-span flap wing high-lift geometry on a highly resolved grid of 13.5 million points in approximately 4 hours of wall clock time on the CRAY T3E.

  4. A high performance linear equation solver on the VPP500 parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi

    1994-12-31

    This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of VPP500--(1) scalability for an arbitrary number of processors up to 222 processors, (2) flexible data transfer among processors provided by a crossbar interconnection network, (3) vector processing capability on each processor, and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors in the LINPACK Highly Parallel Computing benchmark.
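
    The blocked LU organization named above can be sketched in a few lines. The version below is an unpivoted, single-node illustration of how the factorization is arranged around block panels and a flop-dominant trailing-matrix update; it is not the vendor solver, and a production code would pivot and distribute the blocks across processors.

```python
# Minimal right-looking blocked LU factorization (no pivoting); illustrative only.
import numpy as np

def blocked_lu(A, nb=64):
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        # Factor the diagonal block with unblocked elimination
        for j in range(k, e):
            A[j + 1:e, j] /= A[j, j]
            A[j + 1:e, j + 1:e] -= np.outer(A[j + 1:e, j], A[j, j + 1:e])
        if e < n:
            # Panel solves: U12 = L11^{-1} A12 and L21 = A21 U11^{-1}
            L11 = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
            U11 = np.triu(A[k:e, k:e])
            A[k:e, e:] = np.linalg.solve(L11, A[k:e, e:])
            A[e:, k:e] = np.linalg.solve(U11.T, A[e:, k:e].T).T
            # Trailing-matrix update: the flop-dominant, highly parallel part
            A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]
    return A  # L (unit lower) and U packed in one array

rng = np.random.default_rng(1)
A = rng.standard_normal((256, 256)) + 256 * np.eye(256)  # diagonally dominant
LU = blocked_lu(A)
L = np.tril(LU, -1) + np.eye(256)
U = np.triu(LU)
print("max |A - L@U| =", np.abs(A - L @ U).max())
```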

  5. NAS Parallel Benchmark. Results 11-96: Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    Saini, Subash; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    High Performance Fortran (HPF), the high-level language for parallel Fortran programming, is based on Fortran 90. HPF was defined by an informal standards committee known as the High Performance Fortran Forum (HPFF) in 1993, and modeled on TMC's CM Fortran language. Several HPF features have since been incorporated into the draft ANSI/ISO Fortran 95, the next formal revision of the Fortran standard. HPF allows users to write a single parallel program that can execute on a serial machine, a shared-memory parallel machine, or a distributed-memory parallel machine. HPF eliminates the complex, error-prone task of explicitly specifying how, where, and when to pass messages between processors on distributed-memory machines, or when to synchronize processors on shared-memory machines. HPF is designed in a way that allows the programmer to code an application at a high level, and then selectively optimize portions of the code by dropping into message-passing or calling tuned library routines as 'extrinsics'. Compilers supporting High Performance Fortran features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP/2 in April of 1996. Over the past two years, these implementations have shown steady improvement in terms of both features and performance. The performance of various hardware/programming model (HPF and MPI (message passing interface)) combinations will be compared, based on the latest NAS (NASA Advanced Supercomputing) Parallel Benchmark (NPB) results, thus providing a cross-machine and cross-model comparison. Specifically, HPF based NPB results will be compared with MPI based NPB results to provide perspective on performance currently obtainable using HPF versus MPI or versus hand-tuned implementations such as those supplied by the hardware vendors. In addition, we also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, and SGI Origin2000.

  6. Parallel seed-based approach to multiple protein structure similarities detection

    DOE PAGES

    Chapuis, Guillaume; Le Boudic-Jamin, Mathilde; Andonov, Rumen; ...

    2015-01-01

    Finding similarities between protein structures is a crucial task in molecular biology. Most of the existing tools require proteins to be aligned in an order-preserving way and only find single alignments even when multiple similar regions exist. We propose a new seed-based approach that discovers multiple pairs of similar regions. Its computational complexity is polynomial and it comes with a quality guarantee: the returned alignments have both root mean squared deviations (coordinate-based as well as internal-distances based) lower than a given threshold, if such exist. We do not require the alignments to be order preserving (i.e., we consider nonsequential alignments), which makes our algorithm suitable for detecting similar domains when comparing multidomain proteins as well as for detecting structural repetitions within a single protein. Because the search space for nonsequential alignments is much larger than for sequential ones, the computational burden is addressed by extensive use of parallel computing techniques: coarse-grain parallelism making use of available CPU cores for computation and fine-grain parallelism exploiting bit-level concurrency as well as vector instructions.

  7. Performance of a parallel thermal-hydraulics code TEMPEST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fann, G.I.; Trent, D.S.

    The authors describe the parallelization of the Tempest thermal-hydraulics code. The serial version of this code is used for production quality 3-D thermal-hydraulics simulations. Good speedup was obtained with a parallel diagonally preconditioned BiCGStab non-symmetric linear solver, using a spatial domain decomposition approach for the semi-iterative pressure-based and mass-conserved algorithm. The test case used here to illustrate the performance of the BiCGStab solver is a 3-D natural convection problem modeled using finite volume discretization in cylindrical coordinates. The BiCGStab solver replaced the LSOR-ADI method for solving the pressure equation in TEMPEST. BiCGStab also solves the coupled thermal energy equation. Scaling performance for 3 problem sizes (221220 nodes, 358120 nodes, and 701220 nodes) is presented. These problems were run on 2 different parallel machines: IBM-SP and SGI PowerChallenge. The largest problem attains a speedup of 68 on a 128-processor IBM-SP. In real terms, this is over 34 times faster than the fastest serial production time using the LSOR-ADI solver.
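
    A minimal usage sketch of a preconditioned BiCGStab solve on a Poisson-like pressure system, using SciPy rather than TEMPEST's own solver; the grid size, matrix, and ILU settings are illustrative assumptions.

```python
# Hedged sketch: ILU-preconditioned BiCGStab on a 2D Poisson system (SciPy).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

n = 100                                    # 100x100 interior grid
I = sp.identity(n)
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))
S = sp.diags([-1, -1], [-1, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(S, I)).tocsc()   # standard 5-point Laplacian
b = np.ones(A.shape[0])

ilu = spilu(A, drop_tol=1e-4)              # incomplete-LU preconditioner
M = LinearOperator(A.shape, matvec=ilu.solve)

x, info = bicgstab(A, b, M=M)
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(b - A @ x))
```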

  8. A new collimator for I-123-IMP SPECT imaging of the brain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyamada, H.; Fukukita, H.; Tanaka, E.

    1985-05-01

    At present, commercially available I-123-IMP is contaminated with I-124 and its concentration on the assay date is said to be approximately 5%. Therefore, the application of medium energy parallel hole collimator (MEPC) used in many places for SPECT results in deterioration of the image quality. Recently, the authors have developed a new collimator for I-123-IMP SPECT imaging comprised of 4 slat type units; ultrahigh resolution (UHR), high resolution (HR), high sensitivity (HS), and ultrahigh sensitivity (UHS). The slit width/septum thickness in mm for UHR, HR, HS, and UHS are 0.9/0.5, 1.5/0.85, 3.2/1.5, and 5.2/2.0, respectively. In practice, either UHR or HR is set to the detector (Shimadzu LFOV-E, modified type) together with either HS or UHS. The former is always set to the detector with the slit direction parallel to the rotation axis, and the latter is set with its slit direction at a right angle to the former. This is based on an idea that, upon sacrifice of resolution to some extent, sensitivity can be gained on the axial direction while the resolution on the transaxial slice will still be sufficiently preserved. Resolutions (transaxial direction/axial direction) in FWHM (mm) for each combination (UHR-HS, UHR-UHS, HR-HS, and HR-UHS) were 15.9/31.4, 15.9/36.5, 23.2/33.3, and 23.9/40.7, respectively, whereas the resolution of MEPC was 28.7/29.5. On the other hand, relative sensitivities to MEPC were 0.57, 0.86, 0.80, and 1.16. The authors conclude that the combination of UHR and HS is best suited for clinical practice and, at present they are obtaining I-123-IMP SPECT images of good quality.

  9. High-performance workplace practices in nursing homes: an economic perspective.

    PubMed

    Bishop, Christine E

    2014-02-01

    To develop implications for research, practice and policy, selected economics and human resources management research literature was reviewed to compare and contrast nursing home culture change work practices with high-performance human resource management systems in other industries. The organization of nursing home work under culture change has much in common with high-performance work systems, which are characterized by increased autonomy for front-line workers, self-managed teams, flattened supervisory hierarchy, and the aspiration that workers use specific knowledge gained on the job to enhance quality and customization. However, successful high-performance work systems also entail intensive recruitment, screening, and on-going training of workers, and compensation that supports selective hiring and worker commitment; these features are not usual in the nursing home sector. Thus despite many parallels with high-performance work systems, culture change work systems are missing essential elements: those that require higher compensation. If purchasers, including public payers, were willing to pay for customized, resident-centered care, productivity gains could be shared with workers, and the nursing home sector could move from a low-road to a high-road employment system.

  10. Context, culture and (non-verbal) communication affect handover quality.

    PubMed

    Frankel, Richard M; Flanagan, Mindy; Ebright, Patricia; Bergman, Alicia; O'Brien, Colleen M; Franks, Zamal; Allen, Andrew; Harris, Angela; Saleem, Jason J

    2012-12-01

    Transfers of care, also known as handovers, remain a substantial patient safety risk. Although research on handovers has been done since the 1980s, the science is incomplete. Surprisingly few interventions have been rigorously evaluated and, of those that have, few have resulted in long-term positive change. Researchers, both in medicine and other high reliability industries, agree that face-to-face handovers are the most reliable. It is not clear, however, what the term face-to-face means in actual practice. We studied the use of non-verbal behaviours, including gesture, posture, bodily orientation, facial expression, eye contact and physical distance, in the delivery of information during face-to-face handovers. To address this question and study the role of non-verbal behaviour on the quality and accuracy of handovers, we videotaped 52 nursing, medicine and surgery handovers covering 238 patients. Videotapes were analysed using immersion/crystallisation methods of qualitative data analysis. A team of six researchers met weekly for 18 months to view videos together using a consensus-building approach. Consensus was achieved on verbal, non-verbal, and physical themes and patterns observed in the data. We observed four patterns of non-verbal behaviour (NVB) during handovers: (1) joint focus of attention; (2) 'the poker hand'; (3) parallel play and (4) kerbside consultation. In terms of safety, joint focus of attention was deemed to have the best potential for high quality and reliability; however, it occurred infrequently, creating opportunities for education and improvement. Attention to patterns of NVB in face-to-face handovers coupled with education and practice can improve quality and reliability.

  11. Characterization of the seismically imaged Tuscarora fold system and implications for layer parallel shortening in the Pennsylvania salient

    NASA Astrophysics Data System (ADS)

    Mount, Van S.; Wilkins, Scott; Comiskey, Cody S.

    2017-12-01

    The Tuscarora fold system (TFS) is located in the Pennsylvania salient in the foreland of the Valley and Ridge province. The TFS is imaged in high quality 3D seismic data and comprises a system of small-scale folds within relatively flat-lying Lower Silurian Tuscarora Formation strata. We characterize the TFS structures and infer layer parallel shortening (LPS) directions and magnitudes associated with deformation during the Alleghany Orogeny. Previously reported LPS data in our study area are from shallow Devonian and Carboniferous strata (based on outcrop and core analyses) above the shallowest of three major detachments recognized in the region. Seismic data allows us to characterize LPS at depth in strata beneath the shallow detachment. Our LPS data (orientations and inferred magnitudes) are consistent with the shallow data leading us to surmise that LPS during Alleghanian deformation fanned around the salient and was distributed throughout the stratigraphic section - and not isolated to strata above the shallow detachment. We propose that a NW-SE oriented Alleghanian maximum principal stress was perturbed by deep structure associated with the non-linear margin of Laurentia resulting in fanning of shortening directions within the salient.

  12. Evaluation of Proteus as a Tool for the Rapid Development of Models of Hydrologic Systems

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Farthing, M. W.; Kees, C. E.; Miller, C. T.

    2013-12-01

    Models of modern hydrologic systems can be complex and involve a variety of operators with varying character. The goal is to implement approximations of such models that are both efficient for the developer and computationally efficient, which is a set of naturally competing objectives. Proteus is a Python-based toolbox that supports prototyping of model formulations as well as a wide variety of modern numerical methods and parallel computing. We used Proteus to develop numerical approximations for three models: Richards' equation, a brine flow model derived using the Thermodynamically Constrained Averaging Theory (TCAT), and a multiphase TCAT-based tumor growth model. For Richards' equation, we investigated discontinuous Galerkin solutions with higher order time integration based on the backward difference formulas. The TCAT brine flow model was implemented using Proteus and a variety of numerical methods were compared to hand coded solutions. Finally, an existing tumor growth model was implemented in Proteus to introduce more advanced numerics and allow the code to be run in parallel. From these three example models, Proteus was found to be an attractive open-source option for rapidly developing high quality code for solving existing and evolving computational science models.

  13. BPF-type region-of-interest reconstruction for parallel translational computed tomography.

    PubMed

    Wu, Weiwen; Yu, Hengyong; Wang, Shaoyu; Liu, Fenglin

    2017-01-01

    The objective of this study is to present and test a new ultra-low-cost linear scan based tomography architecture. Similar to linear tomosynthesis, the source and detector are translated in opposite directions and the data acquisition system targets a region-of-interest (ROI) to acquire data for image reconstruction. This kind of tomographic architecture was named parallel translational computed tomography (PTCT). In previous studies, filtered backprojection (FBP)-type algorithms were developed to reconstruct images from PTCT. However, the reconstructed ROI images from truncated projections have severe truncation artefacts. In order to overcome this limitation, in this study we propose two backprojection filtering (BPF)-type algorithms, named MP-BPF and MZ-BPF, to reconstruct ROI images from truncated PTCT data. A weight function is constructed to deal with data redundancy for multi-linear translation modes. Extensive numerical simulations are performed to evaluate the proposed MP-BPF and MZ-BPF algorithms for PTCT in fan-beam geometry. Qualitative and quantitative results demonstrate that the proposed BPF-type algorithms can not only more accurately reconstruct ROI images from truncated projections but also generate high-quality images for the entire image support in some circumstances.

  14. Multiplex single-molecule interaction profiling of DNA-barcoded proteins.

    PubMed

    Gu, Liangcai; Li, Chao; Aach, John; Hill, David E; Vidal, Marc; Church, George M

    2014-11-27

    In contrast with advances in massively parallel DNA sequencing, high-throughput protein analyses are often limited by ensemble measurements, individual analyte purification and hence compromised quality and cost-effectiveness. Single-molecule protein detection using optical methods is limited by the number of spectrally non-overlapping chromophores. Here we introduce a single-molecular-interaction sequencing (SMI-seq) technology for parallel protein interaction profiling leveraging single-molecule advantages. DNA barcodes are attached to proteins collectively via ribosome display or individually via enzymatic conjugation. Barcoded proteins are assayed en masse in aqueous solution and subsequently immobilized in a polyacrylamide thin film to construct a random single-molecule array, where barcoding DNAs are amplified into in situ polymerase colonies (polonies) and analysed by DNA sequencing. This method allows precise quantification of various proteins with a theoretical maximum array density of over one million polonies per square millimetre. Furthermore, protein interactions can be measured on the basis of the statistics of colocalized polonies arising from barcoding DNAs of interacting proteins. Two demanding applications, G-protein coupled receptor and antibody-binding profiling, are demonstrated. SMI-seq enables 'library versus library' screening in a one-pot assay, simultaneously interrogating molecular binding affinity and specificity.

  15. Multiplex single-molecule interaction profiling of DNA barcoded proteins

    PubMed Central

    Gu, Liangcai; Li, Chao; Aach, John; Hill, David E.; Vidal, Marc; Church, George M.

    2014-01-01

    In contrast with advances in massively parallel DNA sequencing, high-throughput protein analyses are often limited by ensemble measurements, individual analyte purification and hence compromised quality and cost-effectiveness. Single-molecule (SM) protein detection achieved using optical methods is limited by the number of spectrally nonoverlapping chromophores. Here, we introduce a single molecular interaction-sequencing (SMI-Seq) technology for parallel protein interaction profiling leveraging SM advantages. DNA barcodes are attached to proteins collectively via ribosome display or individually via enzymatic conjugation. Barcoded proteins are assayed en masse in aqueous solution and subsequently immobilized in a polyacrylamide (PAA) thin film to construct a random SM array, where barcoding DNAs are amplified into in situ polymerase colonies (polonies) and analyzed by DNA sequencing. This method allows precise quantification of various proteins with a theoretical maximum array density of over one million polonies per square millimeter. Furthermore, protein interactions can be measured based on the statistics of colocalized polonies arising from barcoding DNAs of interacting proteins. Two demanding applications, G-protein coupled receptor (GPCR) and antibody binding profiling, were demonstrated. SMI-Seq enables “library vs. library” screening in a one-pot assay, simultaneously interrogating molecular binding affinity and specificity. PMID:25252978

  16. Combined algorithmic and GPU acceleration for ultra-fast circular conebeam backprojection

    NASA Astrophysics Data System (ADS)

    Brokish, Jeffrey; Sack, Paul; Bresler, Yoram

    2010-04-01

    In this paper, we describe the first implementation and performance of a fast O(N^3 log N) hierarchical backprojection algorithm for cone beam CT with a circular trajectory, developed on a modern Graphics Processing Unit (GPU). The resulting tomographic backprojection system for 3D cone beam geometry combines speedup through algorithmic improvements provided by the hierarchical backprojection algorithm with speedup from a massively parallel hardware accelerator. For data parameters typical in diagnostic CT and using a mid-range GPU card, we report reconstruction speeds of up to 360 frames per second, and relative speedup of almost 6x compared to conventional backprojection on the same hardware. The significance of these results is twofold. First, they demonstrate that the reduction in operation counts demonstrated previously for the FHBP algorithm can be translated to a comparable run-time improvement in a massively parallel hardware implementation, while preserving stringent diagnostic image quality. Second, the dramatic speedup and throughput numbers achieved indicate the feasibility of systems based on this technology, which achieve real-time 3D reconstruction for state-of-the art diagnostic CT scanners with small footprint, high-reliability, and affordable cost.
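
    For orientation, the conventional backprojection that the hierarchical algorithm accelerates looks roughly like the 2D parallel-beam sketch below, which is cubic in the image size. The projection data here are placeholders; this is not the FHBP algorithm or its GPU implementation.

```python
# Conventional (non-hierarchical) 2D backprojection baseline; illustrative only.
import numpy as np

def backproject(sinogram, thetas, n):
    """sinogram: (len(thetas), n) filtered projections; returns n x n image."""
    xs = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(xs, xs)
    t_axis = np.linspace(-1.0, 1.0, n)
    image = np.zeros((n, n))
    for proj, theta in zip(sinogram, thetas):
        t = X * np.cos(theta) + Y * np.sin(theta)   # detector coordinate per pixel
        image += np.interp(t, t_axis, proj)         # linear interpolation
    return image * np.pi / len(thetas)

n = 128
thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
rng = np.random.default_rng(0)
sino = rng.random((len(thetas), n))                 # placeholder projections
img = backproject(sino, thetas, n)
print(img.shape)
```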

  17. The history of MR imaging as seen through the pages of radiology.

    PubMed

    Edelman, Robert R

    2014-11-01

    The first reports in Radiology pertaining to magnetic resonance (MR) imaging were published in 1980, 7 years after Paul Lauterbur pioneered the first MR images and 9 years after the first human computed tomographic images were obtained. Historical advances in the research and clinical applications of MR imaging very much parallel the remarkable advances in MR imaging technology. These advances can be roughly classified into hardware (eg, magnets, gradients, radiofrequency [RF] coils, RF transmitter and receiver, MR imaging-compatible biopsy devices) and imaging techniques (eg, pulse sequences, parallel imaging, and so forth). Image quality has been dramatically improved with the introduction of high-field-strength superconducting magnets, digital RF systems, and phased-array coils. Hybrid systems, such as MR/positron emission tomography (PET), combine the superb anatomic and functional imaging capabilities of MR imaging with the unsurpassed capability of PET to demonstrate tissue metabolism. Supported by the improvements in hardware, advances in pulse sequence design and image reconstruction techniques have spurred dramatic improvements in imaging speed and the capability for studying tissue function. In this historical review, the history of MR imaging technology and developing research and clinical applications, as seen through the pages of Radiology, will be considered.

  18. BLIPPED (BLIpped Pure Phase EncoDing) high resolution MRI with low amplitude gradients

    NASA Astrophysics Data System (ADS)

    Xiao, Dan; Balcom, Bruce J.

    2017-12-01

    MRI image resolution is proportional to the maximum k-space value, i.e. the temporal integral of the magnetic field gradient. High resolution imaging usually requires high gradient amplitudes and/or long spatial encoding times. Special gradient hardware is often required for high amplitudes and fast switching. We propose a high resolution imaging sequence that employs low amplitude gradients. This method was inspired by the previously proposed PEPI (π Echo Planar Imaging) sequence, which replaced EPI gradient reversals with multiple RF refocusing pulses. It has been shown that when the refocusing RF pulse is of high quality, i.e. sufficiently close to 180°, the magnetization phase introduced by the spatial encoding magnetic field gradient can be preserved and transferred to the following echo signal without phase rewinding. This phase encoding scheme requires blipped gradients that are identical for each echo, with low and constant amplitude, providing opportunities for high resolution imaging. We now extend the sequence to 3D pure phase encoding with low amplitude gradients. The method is compared with the Hybrid-SESPI (Spin Echo Single Point Imaging) technique to demonstrate the advantages in terms of low gradient duty cycle, compensation of concomitant magnetic field effects and minimal echo spacing, which lead to superior image quality and high resolution. The 3D imaging method was then applied with a parallel plate resonator RF probe, achieving a nominal spatial resolution of 17 μm in one dimension in the 3D image, requiring a maximum gradient amplitude of only 5.8 Gauss/cm.
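
    The trade-off exploited here can be stated with the standard k-space relations (textbook results, not specific to BLIPPED): resolution is set by the gradient-time integral, so many repeated low-amplitude blips can accumulate the same maximum k-space value as one large gradient lobe.

```latex
% Standard k-space relations; \gamma is the gyromagnetic ratio, G(t) the encoding gradient.
\Delta x = \frac{1}{2\,k_{\max}}, \qquad
k_{\max} = \frac{\gamma}{2\pi}\int_{0}^{T_{\mathrm{enc}}} G(t)\,\mathrm{d}t
         \;\approx\; \frac{\gamma}{2\pi}\, N_{\mathrm{blip}}\, G_{\mathrm{blip}}\,\delta
```

    With N_blip identical blips of amplitude G_blip and duration δ, k_max grows linearly with the number of blips, which is why high resolution can be reached without large gradient amplitudes.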

  19. Effect of flashlight guidance on manual ventilation performance in cardiopulmonary resuscitation: A randomized controlled simulation study.

    PubMed

    Kim, Ji Hoon; Beom, Jin Ho; You, Je Sung; Cho, Junho; Min, In Kyung; Chung, Hyun Soo

    2018-01-01

    Several auditory-based feedback devices have been developed to improve the quality of ventilation performance during cardiopulmonary resuscitation (CPR), but their effectiveness has not been proven in actual CPR situations. In the present study, we investigated the effectiveness of visual flashlight guidance in maintaining high-quality ventilation performance. We conducted a simulation-based, randomized, parallel trial including 121 senior medical students. All participants were randomized to perform ventilation during 2 minutes of CPR with or without flashlight guidance. For each participant, we measured mean ventilation rate as a primary outcome and ventilation volume, inspiration velocity, and ventilation interval as secondary outcomes using a computerized device system. Mean ventilation rate did not significantly differ between flashlight guidance and control groups (P = 0.159), but participants in the flashlight guidance group exhibited significantly less variation in ventilation rate than participants in the control group (P<0.001). Ventilation interval was also more regular among participants in the flashlight guidance group. Our results demonstrate that flashlight guidance is effective in maintaining a constant ventilation rate and interval. If confirmed by further studies in clinical practice, flashlight guidance could be expected to improve the quality of ventilation performed during CPR.

  20. Why is intelligence correlated with semen quality?

    PubMed Central

    Miller, Geoffrey; Arden, Rosalind; Gottfredson, Linda S

    2009-01-01

    We recently found positive correlations between human general intelligence and three key indices of semen quality, and hypothesized that these correlations arise through a phenotype-wide ‘general fitness factor’ reflecting overall mutation load. In this addendum we consider some of the biochemical pathways that may act as targets for pleiotropic mutations that disrupt both neuron function and sperm function in parallel. We focus especially on the inter-related roles of polyunsaturated fatty acids, exocytosis and receptor signaling. PMID:19907694

  1. Common sense about taste: from mammals to insects.

    PubMed

    Yarmolinsky, David A; Zuker, Charles S; Ryba, Nicholas J P

    2009-10-16

    The sense of taste is a specialized chemosensory system dedicated to the evaluation of food and drink. Despite the fact that vertebrates and insects have independently evolved distinct anatomic and molecular pathways for taste sensation, there are clear parallels in the organization and coding logic between the two systems. There is now persuasive evidence that tastant quality is mediated by labeled lines, whereby distinct and strictly segregated populations of taste receptor cells encode each of the taste qualities.

  2. Improving quality of arterial spin labeling MR imaging at 3 Tesla with a 32-channel coil and parallel imaging.

    PubMed

    Ferré, Jean-Christophe; Petr, Jan; Bannier, Elise; Barillot, Christian; Gauvrit, Jean-Yves

    2012-05-01

    To compare 12-channel and 32-channel phased-array coils and to determine the optimal parallel imaging (PI) technique and factor for brain perfusion imaging using Pulsed Arterial Spin Labeling (PASL) at 3 Tesla (T). Twenty-seven healthy volunteers underwent 10 different PASL perfusion PICORE Q2TIPS scans at 3T using 12-channel and 32-channel coils without PI and with GRAPPA or mSENSE using factor 2. PI with factors 3 and 4 was used only with the 32-channel coil. Visual quality was assessed using four parameters. Quantitative analyses were performed using temporal noise, contrast-to-noise and signal-to-noise ratios (CNR, SNR). Compared with 12-channel acquisition, the scores for 32-channel acquisition were significantly higher for overall visual quality, lower for noise, and higher for SNR and CNR. With the 32-channel coil, the best compromise with respect to artifacts was achieved with PI factor 2. Noise increased, and SNR and CNR decreased, with increasing PI factor. However, mSENSE 2 scores were not always significantly different from acquisition without PI. For PASL at 3T, the 32-channel coil provided better image quality than the 12-channel coil. With the 32-channel coil, mSENSE 2 seemed to offer the best compromise for decreasing artifacts without significantly reducing SNR and CNR. Copyright © 2012 Wiley Periodicals, Inc.

  3. An iterative reduced field-of-view reconstruction for periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI.

    PubMed

    Lin, Jyh-Miin; Patterson, Andrew J; Chang, Hing-Chiu; Gillard, Jonathan H; Graves, Martin J

    2015-10-01

    To propose a new reduced field-of-view (rFOV) strategy for iterative reconstructions in a clinical environment. Iterative reconstructions can incorporate regularization terms to improve the image quality of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI. However, the large amount of calculations required for full FOV iterative reconstructions has posed a huge computational challenge for clinical usage. By subdividing the entire problem into smaller rFOVs, the iterative reconstruction can be accelerated on a desktop with a single graphic processing unit (GPU). This rFOV strategy divides the iterative reconstruction into blocks, based on the block-diagonal dominant structure. A near real-time reconstruction system was developed for the clinical MR unit, and parallel computing was implemented using the object-oriented model. In addition, the Toeplitz method was implemented on the GPU to reduce the time required for full interpolation. Using the data acquired from the PROPELLER MRI, the reconstructed images were then saved in the digital imaging and communications in medicine format. The proposed rFOV reconstruction reduced the gridding time by 97%, as the total iteration time was 3 s even with multiple processes running. A phantom study showed that the structure similarity index for rFOV reconstruction was statistically superior to conventional density compensation (p < 0.001). In vivo study validated the increased signal-to-noise ratio, which is over four times higher than with density compensation. Image sharpness index was improved using the regularized reconstruction implemented. The rFOV strategy permits near real-time iterative reconstruction to improve the image quality of PROPELLER images. Substantial improvements in image quality metrics were validated in the experiments. The concept of rFOV reconstruction may potentially be applied to other kinds of iterative reconstructions for shortened reconstruction duration.
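
    The reduced-FOV strategy rests on the (approximately) block-diagonal structure of the reconstruction operator: one large regularized problem splits into independent small solves that can run in parallel. The toy sketch below shows only that splitting, with synthetic blocks; it is not a PROPELLER reconstruction.

```python
# Toy illustration of solving independent reduced-FOV blocks in parallel.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

n_blocks, m, n, lam = 8, 120, 64, 0.1
rng = np.random.default_rng(0)
blocks = [(rng.standard_normal((m, n)), rng.standard_normal(m)) for _ in range(n_blocks)]

def solve_block(args):
    A, y = args
    # Regularized normal equations for one reduced-FOV block
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        xs = list(pool.map(solve_block, blocks))
    print("recovered", len(xs), "blocks of length", xs[0].size)
```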

  4. Parallel integer sorting with medium and fine-scale parallelism

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1993-01-01

    Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
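
    The bucket-partitioning idea behind barrel-sort can be sketched as follows: keys are first distributed into range buckets, the buckets are sorted independently, and the results are concatenated. The process pool stands in for separate processors; this is not the CM-2 or iPSC/860 implementation.

```python
# Sketch of range-bucket partitioning with independent per-bucket sorts.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def parallel_bucket_sort(keys, n_buckets=8):
    lo, hi = keys.min(), keys.max() + 1
    edges = np.linspace(lo, hi, n_buckets + 1)
    which = np.digitize(keys, edges[1:-1])            # bucket index per key
    buckets = [keys[which == b] for b in range(n_buckets)]
    with ProcessPoolExecutor() as pool:
        sorted_buckets = list(pool.map(np.sort, buckets))
    return np.concatenate(sorted_buckets)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    keys = rng.integers(0, 1_000_000, size=200_000)
    out = parallel_bucket_sort(keys)
    assert np.all(out[:-1] <= out[1:])
    print("sorted", out.size, "integers")
```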

  5. Vectorization for Molecular Dynamics on Intel Xeon Phi Coprocessors

    NASA Astrophysics Data System (ADS)

    Yi, Hongsuk

    2014-03-01

    Many modern processors are capable of exploiting data-level parallelism through the use of single instruction multiple data (SIMD) execution. The new Intel Xeon Phi coprocessor supports 512-bit vector registers for high performance computing. In this paper, we have developed a hierarchical parallelization scheme for accelerated molecular dynamics simulations with the Tersoff potential for covalently bonded solid crystals on Intel Xeon Phi coprocessor systems. The scheme exploits multiple levels of parallelism, combining tightly coupled thread-level and task-level parallelism with the 512-bit vector registers. The simulation results show that the parallel performance of SIMD implementations on Xeon Phi is clearly superior to that of the x86 CPU architecture.
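
    As an analogy for the SIMD idea (not the Tersoff/Xeon Phi code), the sketch below contrasts a scalar loop with a whole-array formulation of a simple pair-energy kernel; whole-array arithmetic is what a vectorizing compiler or library maps onto vector registers.

```python
# Illustration of data-level parallelism by vectorization; illustrative kernel only.
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(1.0, 2.0, size=100_000)     # pair distances (arbitrary units)

def energy_scalar(r):
    total = 0.0
    for ri in r:                             # one element at a time
        total += 4.0 * (ri**-12 - ri**-6)    # Lennard-Jones-like pair term
    return total

def energy_vectorized(r):
    # Whole-array arithmetic maps onto SIMD lanes / vector registers
    return np.sum(4.0 * (r**-12 - r**-6))

assert np.isclose(energy_scalar(r), energy_vectorized(r))
print(energy_vectorized(r))
```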

  6. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software 'GENOA' is dedicated to parallel and high speed analysis to perform probabilistic evaluation of high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives which were achieved in performing the development were: (1) Utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material and processing of high temperature composites affordable; (2) Computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) Implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, and increasing convergence rates through high- and low-level processor assignment; (4) Creation of a framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed workstation types of computers; and (5) Market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.

  7. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.

  8. Breakdown of Spatial Parallel Coding in Children's Drawing

    ERIC Educational Resources Information Center

    De Bruyn, Bart; Davis, Alyson

    2005-01-01

    When drawing real scenes or copying simple geometric figures young children are highly sensitive to parallel cues and use them effectively. However, this sensitivity can break down in surprisingly simple tasks such as copying a single line where robust directional errors occur despite the presence of parallel cues. Before we can conclude that this…

  9. Parallel Scaling Characteristics of Selected NERSC User ProjectCodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skinner, David; Verdier, Francesca; Anand, Harsh

    This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6080 CPU IBM SP and the largest parallel computer at NERSC. The scale in terms of concurrency and problem size of the workload is analyzed. Drawing on batch queue logs, performance data and feedback from researchers we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.

  10. High voltage pulse generator

    DOEpatents

    Fasching, George E.

    1977-03-08

    An improved high-voltage pulse generator has been provided which is especially useful in ultrasonic testing of rock core samples. A number N of capacitors are charged in parallel to V volts and at the proper instant are coupled in series to produce a high-voltage pulse of N times V volts. Rapid switching of the capacitors from the paralleled charging configuration to the series discharging configuration is accomplished by using silicon-controlled rectifiers which are chain self-triggered following the initial triggering of a first one of the rectifiers connected between the first and second of the plurality of charging capacitors. A timing and triggering circuit is provided to properly synchronize triggering pulses to the first SCR at a time when the charging voltage is not being applied to the parallel-connected charging capacitors. Alternate circuits are provided for controlling the application of the charging voltage from a charging circuit to be applied to the parallel capacitors which provides a selection of at least two different intervals in which the charging voltage is turned "off" to allow the SCR's connecting the capacitors in series to turn "off" before recharging begins. The high-voltage pulse-generating circuit including the N capacitors and corresponding SCR's which connect the capacitors in series when triggered "on" further includes diodes and series-connected inductors between the parallel-connected charging capacitors which allow sufficiently fast charging of the capacitors for a high pulse repetition rate and yet allow considerable control of the decay time of the high-voltage pulses from the pulse-generating circuit.
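
    The charge-in-parallel, discharge-in-series arithmetic can be made concrete with illustrative numbers (not taken from the patent):

```python
# Worked numbers for the charge-parallel / discharge-series scheme (illustrative values).
N, V, C = 10, 5_000.0, 0.5e-6          # stages, charge voltage [V], per-stage capacitance [F]

v_out = N * V                          # series discharge stacks the stage voltages
c_series = C / N                       # series capacitance of the stack
energy = N * 0.5 * C * V**2            # stored energy is the sum over stages

print(f"output pulse ~ {v_out/1e3:.0f} kV, stack capacitance {c_series*1e9:.0f} nF, "
      f"stored energy {energy:.1f} J")
```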

  11. Hydrological Controls on Dissolved Organic Matter Quality and Export in a Coastal River System in Southeastern USA

    NASA Astrophysics Data System (ADS)

    Bhattacharya, R.; Osburn, C. L.

    2017-12-01

    Dissolved organic matter (DOM) exported from river catchments can influence the biogeochemical processes in coastal environments with implications for water quality and carbon budgets. High flow conditions are responsible for most DOM export ("pulses") from watersheds, and these events reduce DOM transformation and production by "shunting" DOM from river networks into coastal waters: the Pulse-Shunt Concept (PSC). Subsequently, the source and quality of DOM are also expected to change as a function of river flow. Here, we used stream dissolved organic carbon concentrations ([DOC]) along with DOM optical properties, such as absorbance at 350 nm (a350) and fluorescence excitation and emission matrices modeled by parallel factor analysis (PARAFAC), to characterize DOM source, quality and fluxes under variable flow conditions for the Neuse River, a coastal river system in the southeastern US. Observations were made at a flow-gauged station above head of tide periodically between Aug 2011 and Feb 2013, which captured low flow periods in summer and several high flow events including Hurricane Irene. [DOC] and a350 were correlated and varied positively with river flow, implying that a large portion of the DOM was colored, humic and flow-mobilized. During high flow conditions, PARAFAC results demonstrated a higher influx of terrestrial humic DOM, and lower in-stream phytoplankton production or microbial degradation. However, during low flow, DOM transformation and production increased in response to higher residence times and elevated productivity. Further, 70% of the DOC was exported by above-average flows, and 3-4 fold increases in DOC fluxes were observed during episodic events, consistent with the PSC. These results imply that storms dramatically affect DOM export to coastal waters, whereby high river flow caused by episodic events primarily shunts terrestrial DOM to coastal waters, whereas low flow promotes in-stream DOM transformation and amendment with microbial DOM.

  12. Reliability of an e-PRO Tool of EORTC QLQ-C30 for Measurement of Health-Related Quality of Life in Patients With Breast Cancer: Prospective Randomized Trial.

    PubMed

    Wallwiener, Markus; Matthies, Lina; Simoes, Elisabeth; Keilmann, Lucia; Hartkopf, Andreas D; Sokolov, Alexander N; Walter, Christina B; Sickenberger, Nina; Wallwiener, Stephanie; Feisst, Manuel; Gass, Paul; Fasching, Peter A; Lux, Michael P; Wallwiener, Diethelm; Taran, Florin-Andrei; Rom, Joachim; Schneeweiss, Andreas; Graf, Joachim; Brucker, Sara Y

    2017-09-14

    Breast cancer represents the most common malignant disease in women worldwide. As currently systematic palliative treatment only has a limited effect on survival rates, the concept of health-related quality of life (HRQoL) is gaining more and more importance in the therapy setting of metastatic breast cancer. One of the major patient-reported outcomes (PROs) for measuring HRQoL in patients with breast cancer is provided by the European Organization for Research and Treatment of Cancer (EORTC). Currently, paper-based surveys still predominate, as only a few reliable and validated electronic-based questionnaires are available. Facing the possibilities associated with evolving digitalization in medicine, validation of electronic versions of well-established PRO is essential in order to contribute to comprehensive and holistic oncological care and to ensure high quality in cancer research. The aim of this study was to analyze the reliability of a tablet-based measuring application for EORTC QLQ-C30 in German language in patients with adjuvant and (curative) metastatic breast cancer. Paper- and tablet-based questionnaires were completed by a total of 106 female patients with adjuvant and metastatic breast cancer recruited as part of the e-PROCOM study. All patients were required to complete the electronic- (e-PRO) and paper-based versions of the HRQoL EORTC QLQ-C30 questionnaire. A frequency analysis was performed to determine descriptive sociodemographic characteristics. Both dimensions of reliability (parallel forms reliability [Wilcoxon test] and test of internal consistency [Spearman rho and agreement rates for single items, Pearson correlation and Kendall tau for each scale]) were analyzed. High correlations were shown for both dimensions of reliability (parallel forms reliability and internal consistency) in the patient's response behavior between paper- and electronic-based questionnaires. Regarding the test of parallel forms reliability, no significant differences were found in 27 of 30 single items and in 14 of 15 scales, whereas a statistically significant correlation in the test of consistency was found in all 30 single items and all 15 scales. The evaluated e-PRO version of the EORTC QLQ-C30 is reliable for patients with both adjuvant and metastatic breast cancer, showing a high correlation in almost all questions (and in many scales). Thus, we conclude that the validated paper-based PRO assessment and the e-PRO tool are equally valid. However, the reliability should also be analyzed in other prospective trials to ensure that usability is reliable in all patient groups. ClinicalTrials.gov NCT03132506; https://clinicaltrials.gov/ct2/show/NCT03132506 (Archived by WebCite at http://www.webcitation.org/6tRcgQuou). ©Markus Wallwiener, Lina Matthies, Elisabeth Simoes, Lucia Keilmann, Andreas D Hartkopf, Alexander N Sokolov, Christina B Walter, Nina Sickenberger, Stephanie Wallwiener, Manuel Feisst, Paul Gass, Peter A Fasching, Michael P Lux, Diethelm Wallwiener, Florin-Andrei Taran, Joachim Rom, Andreas Schneeweiss, Joachim Graf, Sara Y Brucker. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 14.09.2017.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, David H.

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.

  14. Diviner lunar radiometer gridded brightness temperatures from geodesic binning of modeled fields of view

    NASA Astrophysics Data System (ADS)

    Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.

    2017-12-01

    An approach is presented to efficiently produce high-quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive and therefore it is desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum-storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points is then used for production of mapped data products, which is significantly faster than working from unprocessed points. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures are illustrated. We identify four binning regimes based on trades between the spatial resolution of the grid, the size of the FOV and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.
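
    The density-gated binning idea can be illustrated with a much-simplified sketch: accumulate point observations into grid cells and report a cell mean only where enough samples fall in the cell. For brevity it uses a plain latitude/longitude grid and synthetic data rather than the geodesic icosahedral grid, field-of-view modeling, and parallel database described above; the resolution and minimum-count threshold are arbitrary.

      # Sketch: bin point observations onto a grid and map only where data
      # density is sufficient (a lat/lon grid stands in for the geodesic grid).
      import numpy as np

      def gridded_mean(lon, lat, values, nlon=360, nlat=180, min_count=5):
          """Per-cell mean brightness temperature, NaN where coverage is too sparse."""
          i = np.clip(((lon % 360.0) / 360.0 * nlon).astype(int), 0, nlon - 1)
          j = np.clip(((lat + 90.0) / 180.0 * nlat).astype(int), 0, nlat - 1)
          flat = j * nlon + i

          sums = np.zeros(nlon * nlat)
          counts = np.zeros(nlon * nlat)
          np.add.at(sums, flat, values)       # accumulate observations per cell
          np.add.at(counts, flat, 1)

          grid = np.full(nlon * nlat, np.nan)
          ok = counts >= min_count            # conditional mapping: enough samples only
          grid[ok] = sums[ok] / counts[ok]
          return grid.reshape(nlat, nlon)

      # Hypothetical usage with synthetic points:
      rng = np.random.default_rng(1)
      lon, lat = rng.uniform(0, 360, 100_000), rng.uniform(-90, 90, 100_000)
      tb = 100 + 250 * np.cos(np.radians(lat)) ** 0.25 + rng.normal(0, 5, lon.size)
      tb_map = gridded_mean(lon, lat, tb)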

  15. Performance of multi-hop parallel free-space optical communication over gamma-gamma fading channel with pointing errors.

    PubMed

    Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei

    2016-11-10

    Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system deploying binary phase-shift keying (BPSK) modulation under the combined effects of a gamma-gamma (GG) distribution and misalignment fading. Based on the best path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived. Then the performance of this optical mesh network is analyzed in detail. A Monte Carlo simulation is also conducted to verify the results for the average bit error rate (ABER) and outage probability. The numerical results show that the multi-hop parallel network needs a smaller average transmitted optical power to achieve the same ABER and outage probability in FSO links. Furthermore, using a larger number of hops and cooperative paths can improve the quality of the communication.
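
    A compact Monte Carlo sketch of the best-path-selection idea is given below, assuming for illustration that each path's end-to-end quality is limited by its weakest hop (decode-and-forward style) and omitting the pointing-error term that the paper includes; the turbulence parameters, path/hop counts, and outage threshold are hypothetical.

      # Monte Carlo sketch: outage probability of multi-hop parallel FSO links with
      # best-path selection over gamma-gamma turbulence (pointing errors omitted).
      import numpy as np

      rng = np.random.default_rng(42)
      alpha, beta = 4.0, 2.0        # gamma-gamma turbulence parameters (illustrative)
      n_paths, n_hops = 3, 2        # parallel cooperative paths, hops per path
      trials = 200_000
      threshold = 0.3               # normalized irradiance below which the link is in outage

      # Gamma-gamma irradiance: product of two unit-mean gamma variates.
      x = rng.gamma(alpha, 1.0 / alpha, size=(trials, n_paths, n_hops))
      y = rng.gamma(beta, 1.0 / beta, size=(trials, n_paths, n_hops))
      irradiance = x * y

      path_quality = irradiance.min(axis=2)   # a path is only as good as its weakest hop
      best_path = path_quality.max(axis=1)    # best path selection across parallel paths

      p_outage = np.mean(best_path < threshold)
      print(f"outage probability ~ {p_outage:.4f}")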

  16. Web Based Parallel Programming Workshop for Undergraduate Education.

    ERIC Educational Resources Information Center

    Marcus, Robert L.; Robertson, Douglass

    Central State University (Ohio), under a contract with Nichols Research Corporation, has developed a World Wide Web-based workshop on high performance computing entitled "IBM SP2 Parallel Programming Workshop." The research is part of the DoD (Department of Defense) High Performance Computing Modernization Program. The research…

  17. High-resolution magnetic resonance angiography of the lower extremities with a dedicated 36-element matrix coil at 3 Tesla.

    PubMed

    Kramer, Harald; Michaely, Henrik J; Matschl, Volker; Schmitt, Peter; Reiser, Maximilian F; Schoenberg, Stefan O

    2007-06-01

    Recent developments in hardware and software help to significantly increase the image quality of magnetic resonance angiography (MRA). Parallel acquisition techniques (PAT) help to increase spatial resolution and to decrease acquisition time, but also suffer from a decrease in signal-to-noise ratio (SNR). The move to higher field strength and the use of dedicated angiography coils can further increase spatial resolution while decreasing acquisition times at the same SNR as contemporary exams. The goal of our study was to compare the image quality of MRA datasets acquired with a standard matrix coil with that of MRA datasets acquired with a dedicated peripheral angio matrix coil and higher factors of parallel imaging. Before the first volunteer examination, unaccelerated phantom measurements were performed with the different coils. After institutional review board approval, 15 healthy volunteers underwent MRA of the lower extremity on a 32-channel 3.0 Tesla MR system. In 5 of them, MRA of the calves was performed with a PAT acceleration factor of 2 and a standard body-matrix surface coil placed at the legs. Ten volunteers underwent MRA of the calves with a dedicated 36-element angiography matrix coil: 5 with a PAT acceleration factor of 3 and 5 with a PAT acceleration factor of 4. The acquired volume and acquisition time were approximately the same in all examinations; only the spatial resolution was increased with the acceleration factor. The acquisition time per voxel was calculated. Image quality was rated independently by 2 readers in terms of vessel conspicuity, venous overlay, and occurrence of artifacts. Inter-reader agreement was assessed with kappa statistics. SNR and contrast-to-noise ratios from the different examinations were evaluated. All 15 volunteers completed the examination; no adverse events occurred. None of the examinations showed venous overlay; 70% of the examinations showed excellent vessel conspicuity, whereas in 50% of the examinations artifacts occurred. All of these artifacts were judged as not disturbing. Inter-reader agreement was good, with kappa values ranging between 0.65 and 0.74. SNR and contrast-to-noise ratios did not show significant differences. Implementation of a dedicated coil for peripheral MRA at 3.0 Tesla helps to increase spatial resolution and to decrease acquisition time while maintaining image quality. Venous overlay can be effectively avoided despite the use of high-resolution scans.
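
    For reference, the kappa statistic used above for inter-reader agreement can be computed directly from its definition, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance from each reader's rating frequencies. The sketch below uses made-up ratings from two hypothetical readers on a 3-point scale, not the study's data.

      # Sketch: Cohen's kappa for two readers rating image quality (hypothetical scores).
      import numpy as np

      reader1 = np.array([2, 2, 1, 0, 2, 1, 2, 2, 1, 2, 0, 2, 1, 2, 2])
      reader2 = np.array([2, 2, 1, 1, 2, 1, 2, 1, 1, 2, 0, 2, 1, 2, 2])
      labels = np.unique(np.concatenate([reader1, reader2]))

      p_observed = np.mean(reader1 == reader2)
      # Chance agreement from the marginal rating frequencies of each reader.
      p_chance = sum(np.mean(reader1 == c) * np.mean(reader2 == c) for c in labels)
      kappa = (p_observed - p_chance) / (1.0 - p_chance)
      print(f"observed={p_observed:.2f}, chance={p_chance:.2f}, kappa={kappa:.2f}")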

  18. Incremental Parallelization of Non-Data-Parallel Programs Using the Charon Message-Passing Library

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.

    2000-01-01

    Message passing is among the most popular techniques for parallelizing scientific programs on distributed-memory architectures. The reasons for its success are wide availability (MPI), efficiency, and full tuning control provided to the programmer. A major drawback, however, is that incremental parallelization, as offered by compiler directives, is not generally possible, because all data structures have to be changed throughout the program simultaneously. Charon remedies this situation through mappings between distributed and non-distributed data. It allows breaking up the parallelization into small steps, guaranteeing correctness at every stage. Several tools are available to help convert legacy codes into high-performance message-passing programs. They usually target data-parallel applications, whose loops carrying most of the work can be distributed among all processors without much dependency analysis. Others do a full dependency analysis and then convert the code virtually automatically. Even more toolkits are available that aid construction from scratch of message passing programs. None, however, allows piecemeal translation of codes with complex data dependencies (i.e., non-data-parallel programs) into message passing codes. The Charon library (available in both C and Fortran) provides incremental parallelization capabilities by linking legacy code arrays with distributed arrays. During the conversion process, non-distributed and distributed arrays exist side by side, and simple mapping functions allow the programmer to switch between the two in any location in the program. Charon also provides wrapper functions that leave the structure of the legacy code intact, but that allow execution on truly distributed data. Finally, the library provides a rich set of communication functions that support virtually all patterns of remote data demands in realistic structured grid scientific programs, including transposition, nearest-neighbor communication, pipelining, gather/scatter, and redistribution. At the end of the conversion process most intermediate Charon function calls will have been removed, the non-distributed arrays will have been deleted, and virtually the only remaining Charon function calls are the high-level, highly optimized communications. Distribution of the data is under complete control of the programmer, although a wide range of useful distributions is easily available through predefined functions. A crucial aspect of the library is that it does not allocate space for distributed arrays, but accepts programmer-specified memory. This has two major consequences. First, codes parallelized using Charon do not suffer from encapsulation; user data is always directly accessible. This provides high efficiency, and also retains the possibility of using message passing directly for highly irregular communications. Second, non-distributed arrays can be interpreted as (trivial) distributions in the Charon sense, which allows them to be mapped to truly distributed arrays, and vice versa. This is the mechanism that enables incremental parallelization. In this paper we provide a brief introduction to the library and then focus on the actual steps in the parallelization process, using some representative examples from, among others, the NAS Parallel Benchmarks. We show how a complicated two-dimensional pipeline (the prototypical non-data-parallel algorithm) can be constructed with ease. To demonstrate the flexibility of the library, we give examples of the stepwise, efficient parallel implementation of nonlocal boundary conditions common in aircraft simulations, as well as the construction of the sequence of grids required for multigrid.
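
    The mapping idea can be sketched in a few lines. The helper names below (to_distributed, to_legacy) are hypothetical stand-ins, not the Charon API, and mpi4py is used instead of the library's C/Fortran interface; the sketch assumes the array length divides evenly by the number of ranks.

      # Sketch of the incremental-parallelization idea: a legacy (replicated) array and a
      # distributed array coexist, with mapping functions to switch between the two views.
      # Run with: mpiexec -n 4 python charon_sketch.py   (hypothetical file name)
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      def to_distributed(legacy):
          """Map a replicated legacy array to this rank's contiguous block (hypothetical helper)."""
          n_local = legacy.size // size
          return legacy[rank * n_local:(rank + 1) * n_local].copy()

      def to_legacy(local):
          """Map distributed blocks back to a full replicated array on every rank."""
          full = np.empty(local.size * size, dtype=local.dtype)
          comm.Allgather(local, full)
          return full

      legacy = np.arange(16.0)          # legacy, non-distributed data (same on every rank)
      local = to_distributed(legacy)    # step 1: work on the distributed view...
      local *= 2.0                      # ...e.g., a loop that has already been parallelized
      legacy = to_legacy(local)         # step 2: map back so untouched legacy code still works

      if rank == 0:
          print(legacy)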

  19. Execution of parallel algorithms on a heterogeneous multicomputer

    NASA Astrophysics Data System (ADS)

    Isenstein, Barry S.; Greene, Jonathon

    1995-04-01

    Many aerospace/defense sensing and dual-use applications require high-performance computing, extensive high-bandwidth interconnect and realtime deterministic operation. This paper will describe the architecture of a scalable multicomputer that includes DSP and RISC processors. A single chassis implementation is capable of delivering in excess of 10 GFLOPS of DSP processing power with 2 Gbytes/s of realtime sensor I/O. A software approach to implementing parallel algorithms called the Parallel Application System (PAS) is also presented. An example of applying PAS to a DSP application is shown.

  20. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffers. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and a low error rate.
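
    A small software analogy of this data path (per-channel FIFOs, a trigger that tags each event with an ID, and a bus controller that checks the IDs) is sketched below; the channel count, event structure, and values are illustrative, and the real system is of course hardware, not Python.

      # Software analogy of the parallel pulse-processing data path: per-channel FIFOs,
      # a trigger that tags each event with an ID, and a bus controller with ID checking.
      from collections import deque

      N_CHANNELS = 4
      fifos = [deque() for _ in range(N_CHANNELS)]

      def trigger(event_id, pulses):
          """On a trigger, every channel digitizes its pulse and stores (id, value) in its FIFO."""
          for channel, value in enumerate(pulses):
              fifos[channel].append((event_id, value))

      def bus_controller():
          """Move the oldest entry of each FIFO onto the 'bus'; flag an error if IDs disagree."""
          entries = [fifo.popleft() for fifo in fifos]
          ids = {event_id for event_id, _ in entries}
          if len(ids) != 1:
              raise RuntimeError(f"channel desynchronization detected: IDs {ids}")
          return [value for _, value in entries]

      # Hypothetical usage: two events digitized, then read out in order.
      trigger(0, [0.8, 0.1, 0.4, 0.9])
      trigger(1, [0.7, 0.2, 0.5, 0.3])
      print(bus_controller())   # event 0 values
      print(bus_controller())   # event 1 values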

  1. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition case. The parallel performance of the proposed method is strongly dependent on separately load balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G0-based parallelization will be investigated.

  2. Finite fringe hologram

    NASA Technical Reports Server (NTRS)

    Heflinger, L. O.

    1970-01-01

    In holographic interferometry a small movement of the apparatus between exposures causes the background of the reconstructed scene to be covered with interference fringes approximately parallel to each other. The three-dimensional quality of the holographic image can still be exploited, since a mathematical model gives the location of the fringes.

  3. Dynamic performance of high speed solenoid valve with parallel coils

    NASA Astrophysics Data System (ADS)

    Kong, Xiaowu; Li, Shizhen

    2014-07-01

    The methods for improving the dynamic performance of high-speed on/off solenoid valves include increasing the magnetic force on the armature and the slew rate of the coil current, and decreasing the mass and stroke of the moving parts. An increase in magnetic force usually leads to a decrease in current slew rate, which can increase the delay time of the valve's dynamic response. Using a high voltage to drive the coil can resolve this contradiction, but a high driving voltage also leads to higher cost and reduced safety and reliability. In this paper, a new scheme of parallel coils is investigated, in which the single solenoid coil is replaced by parallel coils with the same ampere-turns. Based on the mathematical model of the high-speed solenoid valve, a theoretical formula for the delay time of the valve is deduced. Both the theoretical analysis and the dynamic simulation show that, as far as the delay time is concerned, the effect of dividing a single coil into N parallel sub-coils is close to that of driving the single coil with N times the original driving voltage. A specific test bench is designed to measure the dynamic performance of high-speed on/off solenoid valves. The experimental results also show that both the delay time and the switching time of the solenoid valves can be decreased greatly by adopting the parallel-coil scheme. This research presents a simple and practical method to improve the dynamic performance of high-speed on/off solenoid valves.
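
    For orientation only, the delay time in the simplest lumped first-order coil model (a textbook single-coil expression that ignores eddy currents, saturation, and armature motion, all of which the paper's full model addresses) follows from the current rise i(t) = (V/R)(1 - e^{-tR/L}); solving i(t_d) = I_on for the valve's opening current I_on gives

        t_d \;=\; \frac{L}{R}\,\ln\!\left(\frac{V}{V - I_{\mathrm{on}}R}\right), \qquad V > I_{\mathrm{on}}R,

    so the delay shrinks when the driving voltage V is raised or when the effective L/R of the winding is reduced, which is the trade-off the parallel-coil scheme addresses without requiring a higher supply voltage. The symbols here are generic and not the paper's notation.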

  4. The impact of traffic-flow patterns on air quality in urban street canyons.

    PubMed

    Thaker, Prashant; Gokhale, Sharad

    2016-01-01

    We investigated the effect of different urban traffic-flow patterns on pollutant dispersion under different winds in a real asymmetric street canyon. Free-flow traffic causes more turbulence in the canyon, facilitating more dispersion and a reduction in pedestrian-level concentration. The comparison of cases with and without vehicle-induced turbulence revealed that when winds were perpendicular, the free-flow traffic reduced the concentration by 73% on the windward side with a minor increase of 17% on the leeward side, whereas for parallel winds, it reduced the concentration by 51% and 29%. The congested-flow traffic increased the concentrations on the leeward side by 47% when winds were perpendicular, posing a higher risk to health, whereas it reduced them by 17-42% for parallel winds. The urban air quality and public health can, therefore, be improved by improving the traffic-flow patterns in street canyons, as vehicle-induced turbulence has been shown to contribute significantly to dispersion. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth service to a single application); and (3) coarse grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism) also with near linear speed-ups.

  6. High-throughput cultivation and screening platform for unicellular phototrophs.

    PubMed

    Tillich, Ulrich M; Wolter, Nick; Schulze, Katja; Kramer, Dan; Brödel, Oliver; Frohme, Marcus

    2014-09-16

    High-throughput cultivation and screening methods allow parallel, miniaturized, and cost-efficient processing of many samples. These methods, however, have not been generally established for phototrophic organisms such as microalgae or cyanobacteria. In this work we describe and test high-throughput methods with the model organism Synechocystis sp. PCC6803. The required technical automation for these processes was achieved with a Tecan Freedom Evo 200 pipetting robot. The cultivation was performed in 2.2 ml deepwell microtiter plates within a cultivation chamber outfitted with programmable shaking conditions, variable illumination, variable temperature, and an adjustable CO2 atmosphere. Each microtiter well within the chamber functions as a separate cultivation vessel with reproducible conditions. The automated measurement of various parameters such as growth, full absorption spectrum, chlorophyll concentration, and MALDI-TOF-MS, as well as a novel vitality measurement protocol, has already been established, and these parameters can be monitored during cultivation. Measurements of growth parameters can be used as inputs for the system to trigger periodic automatic dilutions and therefore enable semi-continuous cultivation of hundreds of cultures in parallel. The system also allows the automatic generation of mid- and long-term backups of cultures to repeat experiments or to retrieve strains of interest. The presented platform allows for high-throughput cultivation and screening of Synechocystis sp. PCC6803. The platform should be usable for many phototrophic microorganisms as is, and be adaptable for even more. A variety of analyses are already established and the platform is easily expandable both in quality, i.e., with further parameters to screen for additional targets, and in quantity, i.e., the size or number of processed samples.
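
    The periodic-dilution step implied above reduces, per well, to replacing enough culture with fresh medium to bring a measured optical density back to a target value. A tiny sketch, with made-up volumes and thresholds rather than the platform's actual parameters:

      # Sketch: semi-continuous cultivation by automatic dilution back to a target OD.
      def dilution_volume(od_measured, od_target, well_volume_ml):
          """Volume (ml) to replace with fresh medium so the well returns to od_target."""
          if od_measured <= od_target:
              return 0.0                      # no dilution needed this cycle
          keep_fraction = od_target / od_measured
          return well_volume_ml * (1.0 - keep_fraction)

      # Hypothetical cycle for one deepwell culture:
      v = dilution_volume(od_measured=0.9, od_target=0.3, well_volume_ml=2.0)
      print(f"remove {v:.2f} ml culture and add {v:.2f} ml fresh medium")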

  7. Flux-lattice melting, anisotropy, and the role of interlayer coupling in Bi-Sr-Ca-Cu-O single crystals

    NASA Astrophysics Data System (ADS)

    Duran, C.; Yazyi, J.; de La Cruz, F.; Bishop, D. J.; Mitzi, D. B.; Kapitulnik, A.

    1991-10-01

    We have used the high-Q mechanical-oscillator technique to probe the vortex-lattice structure in high-quality Bi-Sr-Ca-Cu-O single crystals over a wide range of magnetic fields (200 Oe to 40 kOe) and relative orientations θ between the magnetic field and the crystalline c axis. In addition to the large softening and dissipation peak previously observed and interpreted as due to flux-lattice melting, another distinctly different peak at higher temperatures is seen. The temperatures at which the dissipation peaks occur are defined solely by the field component parallel to the c axis, H cos θ, while the restoring force on the oscillator is due to both field components. We suggest that the two peaks are due to the softening of interplanar coupling at the low-temperature peak, and melting or depinning of the two-dimensional pancake vortices at the higher-temperature peak.

  8. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    DOE PAGES

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; ...

    2015-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.

  9. A Gateway for Phylogenetic Analysis Powered by Grid Computing Featuring GARLI 2.0

    PubMed Central

    Bazinet, Adam L.; Zwickl, Derrick J.; Cummings, Michael P.

    2014-01-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. [garli, gateway, grid computing, maximum likelihood, molecular evolution portal, phylogenetics, web service.] PMID:24789072

  10. A gateway for phylogenetic analysis powered by grid computing featuring GARLI 2.0.

    PubMed

    Bazinet, Adam L; Zwickl, Derrick J; Cummings, Michael P

    2014-09-01

    We introduce molecularevolution.org, a publicly available gateway for high-throughput, maximum-likelihood phylogenetic analysis powered by grid computing. The gateway features a garli 2.0 web service that enables a user to quickly and easily submit thousands of maximum likelihood tree searches or bootstrap searches that are executed in parallel on distributed computing resources. The garli web service allows one to easily specify partitioned substitution models using a graphical interface, and it performs sophisticated post-processing of phylogenetic results. Although the garli web service has been used by the research community for over three years, here we formally announce the availability of the service, describe its capabilities, highlight new features and recent improvements, and provide details about how the grid system efficiently delivers high-quality phylogenetic results. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  11. Note: A rigid piezo motor with large output force and an effective method to reduce sliding friction force

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Ying; Lu, Qingyou, E-mail: qxl@ustc.edu.cn; Hefei National Laboratory for Physical Sciences at Microscale, University of Science and Technology of China, Hefei, Anhui 230026

    2014-05-15

    We present a completely practical TunaDrive piezo motor. It consists of a central piezo stack sandwiched by two arm piezo stacks and two leg piezo stacks, respectively, which is then sandwiched and spring-clamped by a pair of parallel polished sapphire rods. It works by alternately fast expanding and contracting the arm/leg stacks while simultaneously slowly expanding/contracting the central stack. The key point is that expanding and contracting a limb stack sufficiently fast makes its two sliding friction forces largely cancel, so that the total sliding friction force is <10% of the total static friction force, which helps greatly increase the output force. The piezo motor's high compactness, precision, and output force make it well suited for building a high-quality, harsh-condition (vibration-resistant), atomic-resolution scanning probe microscope.

  12. A Distributed Amplifier System for Bilayer Lipid Membrane (BLM) Arrays With Noise and Individual Offset Cancellation.

    PubMed

    Crescentini, Marco; Thei, Frederico; Bennati, Marco; Saha, Shimul; de Planque, Maurits R R; Morgan, Hywel; Tartagni, Marco

    2015-06-01

    Lipid bilayer membrane (BLM) arrays are required for high-throughput analysis, for example, drug screening or advanced DNA sequencing. Complex microfluidic devices are being developed, but these are restricted in terms of array size and structure or have integrated electronic sensing with limited noise performance. We present a compact and scalable multichannel electrophysiology platform based on a hybrid approach that combines integrated state-of-the-art microelectronics with low-cost disposable fluidics, enabling high-quality parallel single ion channel recording. Specifically, we have developed a new integrated circuit amplifier based on a novel noise cancellation scheme that eliminates flicker noise derived from devices under test and amplifiers. The system is demonstrated through the simultaneous recording of ion channel activity from eight bilayer membranes. The platform is scalable and could be extended to much larger array sizes, limited only by electronic data decimation and communication capabilities.

  13. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    PubMed Central

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha

    2014-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research. PMID:26925205

  14. A robotics platform for automated batch fabrication of high density, microfluidics-based DNA microarrays, with applications to single cell, multiplex assays of secreted proteins

    NASA Astrophysics Data System (ADS)

    Ahmad, Habib; Sutherland, Alex; Shin, Young Shik; Hwang, Kiwook; Qin, Lidong; Krom, Russell-John; Heath, James R.

    2011-09-01

    Microfluidics flow-patterning has been utilized for the construction of chip-scale miniaturized DNA and protein barcode arrays. Such arrays have been used for specific clinical and fundamental investigations in which many proteins are assayed from single cells or other small sample sizes. However, flow-patterned arrays are hand-prepared, and so are impractical for broad applications. We describe an integrated robotics/microfluidics platform for the automated preparation of such arrays, and we apply it to the batch fabrication of up to eighteen chips of flow-patterned DNA barcodes. The resulting substrates are comparable in quality with hand-made arrays and exhibit excellent substrate-to-substrate consistency. We demonstrate the utility and reproducibility of robotics-patterned barcodes by utilizing two flow-patterned chips for highly parallel assays of a panel of secreted proteins from single macrophage cells.

  15. A robotics platform for automated batch fabrication of high density, microfluidics-based DNA microarrays, with applications to single cell, multiplex assays of secreted proteins

    PubMed Central

    Ahmad, Habib; Sutherland, Alex; Shin, Young Shik; Hwang, Kiwook; Qin, Lidong; Krom, Russell-John; Heath, James R.

    2011-01-01

    Microfluidics flow-patterning has been utilized for the construction of chip-scale miniaturized DNA and protein barcode arrays. Such arrays have been used for specific clinical and fundamental investigations in which many proteins are assayed from single cells or other small sample sizes. However, flow-patterned arrays are hand-prepared, and so are impractical for broad applications. We describe an integrated robotics/microfluidics platform for the automated preparation of such arrays, and we apply it to the batch fabrication of up to eighteen chips of flow-patterned DNA barcodes. The resulting substrates are comparable in quality with hand-made arrays and exhibit excellent substrate-to-substrate consistency. We demonstrate the utility and reproducibility of robotics-patterned barcodes by utilizing two flow-patterned chips for highly parallel assays of a panel of secreted proteins from single macrophage cells. PMID:21974603

  16. A robotics platform for automated batch fabrication of high density, microfluidics-based DNA microarrays, with applications to single cell, multiplex assays of secreted proteins.

    PubMed

    Ahmad, Habib; Sutherland, Alex; Shin, Young Shik; Hwang, Kiwook; Qin, Lidong; Krom, Russell-John; Heath, James R

    2011-09-01

    Microfluidics flow-patterning has been utilized for the construction of chip-scale miniaturized DNA and protein barcode arrays. Such arrays have been used for specific clinical and fundamental investigations in which many proteins are assayed from single cells or other small sample sizes. However, flow-patterned arrays are hand-prepared, and so are impractical for broad applications. We describe an integrated robotics/microfluidics platform for the automated preparation of such arrays, and we apply it to the batch fabrication of up to eighteen chips of flow-patterned DNA barcodes. The resulting substrates are comparable in quality with hand-made arrays and exhibit excellent substrate-to-substrate consistency. We demonstrate the utility and reproducibility of robotics-patterned barcodes by utilizing two flow-patterned chips for highly parallel assays of a panel of secreted proteins from single macrophage cells. © 2011 American Institute of Physics

  17. FY96-98 Summary Report Mercury: Next Generation Laser for High Energy Density Physics SI-014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayramian, A.; Beach, R.; Bibeau, C.

    The scope of the Mercury Laser project encompasses the research, development, and engineering required to build a new generation of diode-pumped solid-state lasers for Inertial Confinement Fusion (ICF). The Mercury Laser will be the first integrated demonstration of laser diodes, crystals, and gas cooling within a scalable laser architecture. This report is intended to summarize the progress accomplished during the first three years of the project. Due to the technological challenges associated with production of 900 nm diode-bars, heatsinks, and high optical-quality Yb:S-FAP crystals, the initial focus of the project was primarily centered on the R&D in these three areas. During the third year of the project, the R&D continued in parallel with the development of computer codes, partial activation of the laser, component testing, and code validation where appropriate.

  18. Quality of antenatal care and client satisfaction in Kenya and Namibia.

    PubMed

    Do, Mai; Wang, Wenjuan; Hembling, John; Ametepi, Paul

    2017-04-01

    Despite much progress in maternal health service coverage, the quality of care has not seen parallel improvement. This study assessed the quality of antenatal care (ANC), an entry point to the health system for many women. The study used data from recent Service Provision Assessment (SPA) surveys of nationally representative health facilities in Kenya and Namibia. Kenya and Namibia represent the situation in much of sub-Saharan Africa, where ANC is relatively common but maternal mortality remains high. The SPA comprised an inventory of health facilities that provided ANC, interviews with ANC providers and clients, and observations of service delivery. Quality was measured in terms of structure and process of service provision, and client satisfaction as the outcome of service provision. Wide variations in structural and process attributes of quality of care existed in both Kenya and Namibia; however, better structural quality did not translate to better service delivery process or greater client satisfaction. Long waiting time was a common problem and was generally more serious in hospitals and health centers than in clinics and smaller facilities; it was consistently associated with lower client satisfaction. The study also indicates that the provider's technical preparedness may not be sufficient to provide good-quality services and to ensure client satisfaction. Findings highlight important program implications, including improving ANC services and promoting their use at health clinics and lower-level facilities, and ensuring that available supplies and equipment are used for service provision. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  19. Rapid evaluation and quality control of next generation sequencing data with FaQCs

    DOE PAGES

    Lo, Chien -Chi; Chain, Patrick S. G.

    2014-12-01

    Background: Next generation sequencing (NGS) technologies that parallelize the sequencing process and produce thousands to millions, or even hundreds of millions of sequences in a single sequencing run, have revolutionized genomic and genetic research. Because of the vagaries of any platform's sequencing chemistry, the experimental processing, machine failure, and so on, the quality of sequencing reads is never perfect, and often declines as the read is extended. These errors invariably affect downstream analysis/application and should therefore be identified early on to mitigate any unforeseen effects. Results: Here we present a novel FastQ Quality Control Software (FaQCs) that can rapidly process large volumes of data, and which improves upon previous solutions to monitor the quality and remove poor quality data from sequencing runs. Both the speed of processing and the memory footprint of storing all required information have been optimized via algorithmic and parallel processing solutions. The trimmed output compared side-by-side with the original data is part of the automated PDF output. We show how this tool can help data analysis by providing a few examples, including an increased percentage of reads recruited to references, improved single nucleotide polymorphism identification as well as de novo sequence assembly metrics. Conclusion: FaQCs combines several features of currently available applications into a single, user-friendly process, and includes additional unique capabilities such as filtering the PhiX control sequences, conversion of FASTQ formats, and multi-threading. The original data and trimmed summaries are reported within a variety of graphics and reports, providing a simple way to do data quality control and assurance.
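
    As a generic illustration of the kind of per-read operation such QC tools apply (and parallelize across millions of reads), the sketch below decodes Phred+33 qualities, trims low-quality bases from the 3' end, and discards reads that become too short. This is not FaQCs's actual trimming algorithm; the quality threshold and minimum length are arbitrary.

      # Sketch: simple 3'-end quality trimming of a FASTQ read (Phred+33), not FaQCs itself.
      def trim_read(seq, qual, min_q=20, min_len=50):
          """Trim low-quality bases from the 3' end; return None if the read gets too short."""
          scores = [ord(c) - 33 for c in qual]          # decode Phred+33 quality characters
          end = len(scores)
          while end > 0 and scores[end - 1] < min_q:    # drop trailing low-quality bases
              end -= 1
          if end < min_len:
              return None                               # discard the read entirely
          return seq[:end], qual[:end]

      # Hypothetical read: good 5' end, degrading 3' end ('I' = Q40, '#' = Q2).
      seq = "ACGT" * 20
      qual = "I" * 60 + "#" * 20
      result = trim_read(seq, qual)
      if result is None:
          print("read discarded")
      else:
          trimmed_seq, trimmed_qual = result
          print(f"kept {len(trimmed_seq)} of {len(seq)} bases")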

  20. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has quickly introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. Several applications exist, however, where having the desired information calculated quickly enough for practical use is highly desirable. High computational performance is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques. The techniques cover four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.
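
    A minimal sketch of the spatial-domain partitioning strategy such parallel implementations typically rely on: split the image rows across worker processes and apply a per-pixel spectral operation (here, the spectral angle to a single target signature) independently in each block. The cube and target are synthetic, and multiprocessing on one node stands in for the cluster and FPGA implementations discussed above.

      # Sketch: spatial-domain parallelism for a per-pixel hyperspectral operation.
      # Rows of the cube are split across worker processes; each computes the spectral
      # angle between every pixel and a target signature (synthetic data, one node only).
      import numpy as np
      from multiprocessing import Pool

      BANDS = 64

      def spectral_angle_block(args):
          block, target = args                                  # block: (rows, cols, bands)
          dot = block @ target
          norms = np.linalg.norm(block, axis=2) * np.linalg.norm(target)
          return np.arccos(np.clip(dot / norms, -1.0, 1.0))

      def parallel_sam(cube, target, n_workers=4):
          blocks = np.array_split(cube, n_workers, axis=0)      # partition by image rows
          with Pool(n_workers) as pool:
              results = pool.map(spectral_angle_block, [(b, target) for b in blocks])
          return np.concatenate(results, axis=0)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          cube = rng.random((512, 512, BANDS)).astype(np.float32)
          target = rng.random(BANDS).astype(np.float32)
          angles = parallel_sam(cube, target)
          print(angles.shape, angles.mean())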
