Parallel-vector unsymmetric Eigen-Solver on high performance computers
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Jiangning, Qin
1993-01-01
The popular QR algorithm for computing all eigenvalues of an unsymmetric matrix is reviewed. Among the basic components of the QR algorithm, this study concluded that the reduction of an unsymmetric matrix to Hessenberg form (before applying the QR iteration itself) can be done effectively by exploiting the vector speed and multiple processors offered by modern high-performance computers. Numerical examples of several test cases indicate that the proposed parallel-vector algorithm for converting a given unsymmetric matrix to Hessenberg form offers computational advantages over the existing algorithm. The time saving obtained by the proposed methods increases as the problem size increases.
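The reduction step highlighted above is the classical Householder similarity reduction to Hessenberg form. The sketch below is a minimal serial NumPy reference version offered only for concreteness; it is not the parallel-vector scheme of the paper, and the function name is illustrative.

```python
import numpy as np

def hessenberg_reduce(A):
    """Reduce a square matrix to upper Hessenberg form with Householder
    reflectors; the similarity transform preserves the eigenvalues."""
    H = np.array(A, dtype=float, copy=True)
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k + 1:, k]
        v = x.copy()
        v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue
        v /= norm_v
        # Apply the reflector I - 2vv^T from the left and the right.
        H[k + 1:, k:] -= 2.0 * np.outer(v, v @ H[k + 1:, k:])
        H[:, k + 1:] -= 2.0 * np.outer(H[:, k + 1:] @ v, v)
    return H

A = np.random.rand(6, 6)
H = hessenberg_reduce(A)
print(np.allclose(np.tril(H, -2), 0.0))      # zero below the first subdiagonal
print(np.isclose(np.trace(H), np.trace(A)))  # similarity preserves the trace
```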
Reconstructing Householder vectors from Tall-Skinny QR
Ballard, Grey Malone; Demmel, James; Grigori, Laura; ...
2015-08-05
The Tall-Skinny QR (TSQR) algorithm is more communication efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. Experiments on supercomputers demonstrate the benefits of the communication cost improvements: in particular, our experiments show substantial improvements over tuned library implementations for tall-and-skinny matrices. Furthermore, we also provide algorithmic improvements to the Householder QR and CAQR algorithms, and we investigate several alternatives to the Householder reconstruction algorithm that sacrifice guarantees on numerical stability in some cases in order to obtain higher performance.
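For readers unfamiliar with the factorization being discussed, the sketch below shows the basic single-reduction TSQR pattern in NumPy: independent QRs of row blocks followed by a QR of the stacked small R factors. It is a serial stand-in for the distributed algorithm, the block count and sizes are illustrative, and the Householder-reconstruction step that is the paper's contribution is not shown.

```python
import numpy as np

def tsqr(A, num_blocks=4):
    """One-level TSQR for a tall-skinny matrix: local QRs of row blocks
    (which could run in parallel), then a small reduction QR."""
    blocks = np.array_split(A, num_blocks, axis=0)
    local = [np.linalg.qr(b) for b in blocks]          # independent local QRs
    R_stack = np.vstack([r for _, r in local])
    Q2, R = np.linalg.qr(R_stack)                      # reduction step
    # Assemble the explicit orthogonal factor (kept implicit in practice).
    sizes = [r.shape[0] for _, r in local]
    Q2_blocks = np.split(Q2, np.cumsum(sizes)[:-1], axis=0)
    Q = np.vstack([q1 @ q2 for (q1, _), q2 in zip(local, Q2_blocks)])
    return Q, R

A = np.random.rand(10_000, 20)
Q, R = tsqr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(20)))
```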
Performance of low-rank QR approximation of the finite element Biot-Savart law
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D; Fasenfest, B
2006-10-16
In this paper we present a low-rank QR method for evaluating the discrete Biot-Savart law. Our goal is to develop an algorithm that is easily implemented on parallel computers. It is assumed that the known current density and the unknown magnetic field are both expressed in a finite element expansion, and we wish to compute the degrees-of-freedom (DOF) in the basis function expansion of the magnetic field. The matrix that maps the current DOF to the field DOF is full, but if the spatial domain is properly partitioned the matrix can be written as a block matrix, with blocks representing distant interactions being low rank and having a compressed QR representation. While an octree partitioning of the matrix may be ideal, for ease of parallel implementation we employ a partitioning based on the number of processors. The rank of each block (i.e., the compression) is determined by the specific geometry and is computed dynamically. In this paper we provide the algorithmic details and present computational results for large-scale computations.
A communication-avoiding, hybrid-parallel, rank-revealing orthogonalization method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoemmen, Mark
2010-11-01
Orthogonalization consumes much of the run time of many iterative methods for solving sparse linear systems and eigenvalue problems. Commonly used algorithms, such as variants of Gram-Schmidt or Householder QR, have performance dominated by communication. Here, 'communication' includes both data movement between the CPU and memory, and messages between processors in parallel. Our Tall Skinny QR (TSQR) family of algorithms requires asymptotically fewer messages between processors and data movement between CPU and memory than typical orthogonalization methods, yet achieves the same accuracy as Householder QR factorization. Furthermore, in block orthogonalizations, TSQR is faster and more accurate than existing approaches for orthogonalizing the vectors within each block ('normalization'). TSQR's rank-revealing capability also makes it useful for detecting deflation in block iterative methods, for which existing approaches sacrifice performance, accuracy, or both. We have implemented a version of TSQR that exploits both distributed-memory and shared-memory parallelism, and supports real and complex arithmetic. Our implementation is optimized for the case of orthogonalizing a small number (5-20) of very long vectors. The shared-memory parallel component uses Intel's Threading Building Blocks, though its modular design supports other shared-memory programming models as well, including computation on the GPU. Our implementation achieves speedups of 2 times or more over competing orthogonalizations. It is available now in the development branch of the Trilinos software package, and will be included in the 10.8 release.
A fast new algorithm for a robot neurocontroller using inverse QR decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, A.S.; Khemaissia, S.
2000-01-01
A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array. Furthermore, its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by the QR decomposition approaches.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementations. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
Computing row and column counts for sparse QR and LU factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, John R.; Li, Xiaoye S.; Ng, Esmond G.
2001-01-01
We present algorithms to determine the number of nonzeros in each row and column of the factors of a sparse matrix, for both the QR factorization and the LU factorization with partial pivoting. The algorithms use only the nonzero structure of the input matrix, and run in time nearly linear in the number of nonzeros in that matrix. They may be used to set up data structures or schedule parallel operations in advance of the numerical factorization. The row and column counts we compute are upper bounds on the actual counts. If the input matrix is strong Hall and there is no coincidental numerical cancellation, the counts are exact for QR factorization and are the tightest bounds possible for LU factorization. These algorithms are based on our earlier work on computing row and column counts for sparse Cholesky factorization, plus an efficient method to compute the column elimination tree of a sparse matrix without explicitly forming the product of the matrix and its transpose.
2017-04-13
... OmpSs: a basic algorithm on image processing applications, a mini application representative of an ocean modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were ... movement; and a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished, including: an ...
QR-decomposition based SENSE reconstruction using parallel architecture.
Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad
2018-04-01
Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advanced MRI algorithms on a parallel architecture (to exploit inherent parallelism) has great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies inversion of a rectangular encoding matrix. This work presents a novel implementation of a GPU-based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU-based SENSE reconstruction is evaluated against single- and multi-core CPU implementations using OpenMP. Several experiments against various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that the GPU significantly reduces the computation time of SENSE reconstruction as compared to the multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images.
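The core linear-algebra step referred to above, inverting a rectangular encoding matrix in the least-squares sense via QR, can be illustrated in a few lines of NumPy. The matrix and data below are synthetic stand-ins; the actual SENSE encoding matrix, coil sensitivities, and GPU kernels are outside the scope of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.standard_normal((256, 64)) + 1j * rng.standard_normal((256, 64))  # stand-in encoding matrix
y = rng.standard_normal(256) + 1j * rng.standard_normal(256)              # stand-in folded samples

# Least-squares "inversion" of the rectangular encoding matrix via QR:
# E = Q R  =>  x = R^{-1} Q^H y  minimizes ||E x - y||_2.
Q, R = np.linalg.qr(E)                    # reduced QR, Q has orthonormal columns
x = np.linalg.solve(R, Q.conj().T @ y)    # triangular solve (back-substitution)

print(np.allclose(x, np.linalg.lstsq(E, y, rcond=None)[0]))
```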
NASA Technical Reports Server (NTRS)
Dongarra, Jack (Editor); Messina, Paul (Editor); Sorensen, Danny C. (Editor); Voigt, Robert G. (Editor)
1990-01-01
Attention is given to such topics as an evaluation of block algorithm variants in LAPACK, a large-grain parallel sparse system solver, a multiprocessor method for the solution of the generalized eigenvalue problem on an interval, and a parallel QR algorithm for iterative subspace methods on the CM-2. A discussion of numerical methods includes the topics of asynchronous numerical solutions of PDEs on parallel computers, parallel homotopy curve tracking on a hypercube, and solving Navier-Stokes equations on the Cedar Multi-Cluster system. A section on differential equations includes a discussion of a six-color procedure for the parallel solution of elliptic systems using the finite quadtree structure, data parallel algorithms for the finite element method, and domain decomposition methods in aerodynamics. Topics dealing with massively parallel computing include hypercube vs. 2-dimensional meshes and massively parallel computation of conservation laws. Performance and tools are also discussed.
A parallel computer implementation of fast low-rank QR approximation of the Biot-Savart law
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D A; Fasenfest, B J; Stowell, M L
2005-11-07
In this paper we present a low-rank QR method for evaluating the discrete Biot-Savart law on parallel computers. It is assumed that the known current density and the unknown magnetic field are both expressed in a finite element expansion, and we wish to compute the degrees-of-freedom (DOF) in the basis function expansion of the magnetic field. The matrix that maps the current DOF to the field DOF is full, but if the spatial domain is properly partitioned the matrix can be written as a block matrix, with blocks representing distant interactions being low rank and having a compressed QR representation. The matrix partitioning is determined by the number of processors; the rank of each block (i.e., the compression) is determined by the specific geometry and is computed dynamically. In this paper we provide the algorithmic details and present computational results for large-scale computations.
A divide and conquer approach to the nonsymmetric eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessup, E.R.
1991-01-01
Serial computation combined with high communication costs on distributed-memory multiprocessors makes parallel implementations of the QR method for the nonsymmetric eigenvalue problem inefficient. This paper introduces an alternative algorithm for the nonsymmetric tridiagonal eigenvalue problem based on rank-two tearing and updating of the matrix. The parallelism of this divide and conquer approach stems from independent solution of the updating problems. 11 refs.
Acoustooptic linear algebra processors - Architectures, algorithms, and applications
NASA Technical Reports Server (NTRS)
Casasent, D.
1984-01-01
Architectures, algorithms, and applications for systolic processors are described with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure are considered, and the realization of matrix-vector, matrix-matrix, and triple-matrix products on such architectures is described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.
Security authentication using phase-encoded nanoparticle structures and polarized light.
Carnicer, Artur; Hassanfiroozi, Amir; Latorre-Carmona, Pedro; Huang, Yi-Pai; Javidi, Bahram
2015-01-15
Phase-encoded nanostructures such as quick response (QR) codes made of metallic nanoparticles are suggested for use in security and authentication applications. We present a polarimetric optical method able to authenticate random phase-encoded QR codes. The system is illuminated using polarized light, and the QR code is encoded using a phase-only random mask. Using classification algorithms, it is possible to validate the QR code from the examination of the polarimetric signature of the speckle pattern. We used the Kolmogorov-Smirnov statistical test and Support Vector Machine algorithms to authenticate the phase-encoded QR codes using polarimetric signatures.
Structure-preserving and rank-revealing QR-factorizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C.H.; Hansen, P.C.
1991-11-01
The rank-revealing QR-factorization (RRQR-factorization) is a special QR-factorization that is guaranteed to reveal the numerical rank of the matrix under consideration. This makes the RRQR-factorization a useful tool in the numerical treatment of many rank-deficient problems in numerical linear algebra. In this paper, a framework is presented for the efficient implementation of RRQR algorithms, in particular, for sparse matrices. A sparse RRQR-algorithm should seek to preserve the structure and sparsity of the matrix as much as possible while retaining the ability to capture safely the numerical rank. To this end, the paper proposes to compute an initial QR-factorization using a restricted pivoting strategy guarded by incremental condition estimation (ICE), and then applies the algorithm suggested by Chan and Foster to this QR-factorization. The column exchange strategy used in the initial QR-factorization will exploit the fact that certain column exchanges do not change the sparsity structure, and compute a sparse QR-factorization that is a good approximation of the sought-after RRQR-factorization. Due to quantities produced by ICE, the Chan/Foster RRQR algorithm can be implemented very cheaply, thus verifying that the sought-after RRQR-factorization has indeed been computed. Experimental results on a model problem show that the initial QR-factorization is indeed very likely to produce an RRQR-factorization.
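The restricted-pivoting, ICE-guarded strategy of the paper is not reproduced here. Purely as background, the snippet below shows the basic rank-revealing behaviour of a column-pivoted QR on a deliberately rank-deficient dense matrix using SciPy; the tolerance is a common heuristic, not the paper's criterion.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 25)) @ rng.standard_normal((25, 60))   # numerical rank 25

Q, R, piv = qr(A, mode='economic', pivoting=True)     # column-pivoted QR
tol = np.abs(R[0, 0]) * max(A.shape) * np.finfo(float).eps
rank = int(np.sum(np.abs(np.diag(R)) > tol))          # diagonal of R reveals the rank

print(rank)                           # expected numerical rank: 25
print(np.allclose(Q @ R, A[:, piv]))  # A P = Q R
```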
NASA Astrophysics Data System (ADS)
Wang, Zhi-peng; Zhang, Shuai; Liu, Hong-zhao; Qin, Yi
2014-12-01
Based on phase retrieval algorithm and QR code, a new optical encryption technology that only needs to record one intensity distribution is proposed. In this encryption process, firstly, the QR code is generated from the information to be encrypted; and then the generated QR code is placed in the input plane of 4-f system to have a double random phase encryption. For only one intensity distribution in the output plane is recorded as the ciphertext, the encryption process is greatly simplified. In the decryption process, the corresponding QR code is retrieved using phase retrieval algorithm. A priori information about QR code is used as support constraint in the input plane, which helps solve the stagnation problem. The original information can be recovered without distortion by scanning the QR code. The encryption process can be implemented either optically or digitally, and the decryption process uses digital method. In addition, the security of the proposed optical encryption technology is analyzed. Theoretical analysis and computer simulations show that this optical encryption system is invulnerable to various attacks, and suitable for harsh transmission conditions.
A QR code identification technology in package auto-sorting system
NASA Astrophysics Data System (ADS)
di, Yi-Juan; Shi, Jian-Ping; Mao, Guo-Yong
2017-07-01
Traditional manual sorting operation is not suitable for the development of Chinese logistics. For better sorting packages, a QR code recognition technology is proposed to identify the QR code label on the packages in package auto-sorting system. The experimental results compared with other algorithms in literatures demonstrate that the proposed method is valid and its performance is superior to other algorithms.
Efficient algorithms for computing a strong rank-revealing QR factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, M.; Eisenstat, S.C.
1996-07-01
Given an m × n matrix M with m ≥ n, it is shown that there exists a permutation Π and an integer k such that the QR factorization given by equation (1) reveals the numerical rank of M: the k × k upper-triangular matrix A_k is well conditioned, ‖C_k‖_2 is small, and B_k is linearly dependent on A_k with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations, and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take O(mn^2) floating-point operations in the worst case.
Functionalization of quantum rods with oligonucleotides for programmable assembly with DNA origami
NASA Astrophysics Data System (ADS)
Doane, Tennyson L.; Alam, Rabeka; Maye, Mathew M.
2015-02-01
The DNA-mediated self-assembly of CdSe/CdS quantum rods (QRs) onto DNA origami is described. Two QR types with unique optical emission and high polarization were synthesized, and then functionalized with oligonucleotides (ssDNA) using a novel protection-deprotection approach, which harnessed ssDNA's tailorable rigidity and denaturation temperature to increase DNA coverage by reducing non-specific coordination and wrapping. The QR assembly was programmable, and occurred at two different assembly zones that had capture strands in parallel alignment. QRs with different optical properties were assembled, opening up future studies on orientation dependent QR FRET. The QR-origami conjugates could be purified via gel electrophoresis and sucrose gradient ultracentrifugation. Assembly yields, QR stoichiometry and orientation, as well as energy transfer implications were studied in light of QR distances, origami flexibility, and conditions.
Computing rank-revealing QR factorizations of dense matrices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science
1998-06-01
We develop algorithms and implementations for computing rank-revealing QR (RRQR) factorizations of dense matrices. First, we develop an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, aided by incremental condition estimation. Second, we develop efficiently implementable variants of guaranteed reliable RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang. We suggest algorithmic improvements with respect to condition estimation, termination criteria, and Givens updating. By combining the block algorithm with one of the triangular postprocessing steps, we arrive at an efficient and reliable algorithm for computing an RRQR factorization of a dense matrix. Experimental results on IBM RS/6000 and SGI R8000 platforms show that this approach performs up to three times faster than the less reliable QR factorization with column pivoting as it is currently implemented in LAPACK, and comes within 15% of the performance of the LAPACK block algorithm for computing a QR factorization without any column exchanges. Thus, we expect this routine to be useful in many circumstances where numerical rank deficiency cannot be ruled out, but has previously been ignored because of the computational cost of dealing with it.
Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.
2015-03-01
In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which had been introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for the complex matrix is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap-transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
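For contrast with the heap-transform construction, the sketch below shows the classical Givens-rotation QR of a real matrix that the paper compares against. It is a textbook-style reference version with illustrative names, not the authors' MATLAB code.

```python
import numpy as np

def givens_qr(A):
    """QR decomposition of a real matrix by annihilating subdiagonal
    entries column by column with 2x2 Givens rotations."""
    R = np.array(A, dtype=float, copy=True)
    m, n = R.shape
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):       # zero out R[i, j]
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
            Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T
    return Q, R

A = np.random.rand(5, 3)
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))
```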
NASA Astrophysics Data System (ADS)
Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li
2014-09-01
This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
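The 2D PSF model and GPU port cannot be reproduced in a short fragment; the snippet below only illustrates the kind of damped (regularized) LSQR solve of a sparse linear system that the extraction method is built on, with a synthetic sparse matrix and an arbitrary damping value.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
# Synthetic sparse matrix standing in for the convolution operator.
C = sparse_random(2000, 1500, density=0.01, random_state=rng, format='csr')
x_true = rng.random(1500)
b = C @ x_true + 0.01 * rng.standard_normal(2000)

# Damped least squares: minimize ||C x - b||^2 + damp^2 ||x||^2.
x_hat, istop, itn = lsqr(C, b, damp=1e-3, atol=1e-10, btol=1e-10)[:3]
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(itn, rel_err)    # iteration count and relative reconstruction error
```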
Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao
2016-05-19
Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, the orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, the orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by the matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement result, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
Research on pre-processing of QR Code
NASA Astrophysics Data System (ADS)
Sun, Haixing; Xia, Haojie; Dong, Ning
2013-10-01
QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, full-range ultra-high-speed reading, small printed size, and efficient representation of Chinese characters. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR code, this paper researches pre-processing methods for QR code (Quick Response Code) and shows algorithms and results of image pre-processing for QR code recognition. The conventional approach is improved by modifying Sauvola's adaptive thresholding method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luszczek, Piotr R; Tomov, Stanimire Z; Dongarra, Jack J
We present an efficient and scalable programming model for the development of linear algebra in heterogeneous multi-coprocessor environments. The model incorporates some of the current best design and implementation practices for the heterogeneous acceleration of dense linear algebra (DLA). Examples are given for basic linear system solution algorithms: the LU, QR, and Cholesky factorizations. To generate the extreme level of parallelism needed for the efficient use of coprocessors, algorithms of interest are redesigned and then split into well-chosen computational tasks. Task execution is scheduled over the computational components of a hybrid system of multi-core CPUs and coprocessors using a light-weight runtime system. The use of lightweight runtime systems keeps scheduling overhead low, while enabling the expression of parallelism through otherwise sequential code. This simplifies the development efforts and allows the exploration of the unique strengths of the various hardware components.
On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Hsieh, Shih-Fu
1990-01-01
In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect the changes in the system and take appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD will be considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of each new method depends crucially on the specific application.
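As a rough illustration of the QRD-RLS formulation the thesis builds on, the sketch below updates a triangular factor with Givens rotations as each new data row arrives and recovers the filter weights by back-substitution. Exponential forgetting, downdating, and the systolic mapping are all omitted, and the names are illustrative.

```python
import numpy as np

def qrd_rls_update(R, z, x, d):
    """One QRD-RLS step: append the new regressor row x (desired output d)
    to the triangle [R | z] and re-triangularize with Givens rotations."""
    n = R.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n], M[:n, n] = R, z
    M[n, :n], M[n, n] = x, d
    for j in range(n):                        # zero the appended row
        a, b = M[j, j], M[n, j]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        top, bot = M[j, :].copy(), M[n, :].copy()
        M[j, :] = c * top + s * bot
        M[n, :] = -s * top + c * bot
    return M[:n, :n], M[:n, n]

# Recursive least-squares identification of w in d = x @ w + noise.
rng = np.random.default_rng(3)
w_true = np.array([1.0, -2.0, 0.5])
R, z = np.zeros((3, 3)), np.zeros(3)
for _ in range(200):
    x = rng.standard_normal(3)
    d = x @ w_true + 0.01 * rng.standard_normal()
    R, z = qrd_rls_update(R, z, x, d)
print(np.linalg.solve(R, z))   # filter weights, close to w_true
```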
Automatic Blocking Of QR and LU Factorizations for Locality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Q; Kennedy, K; You, H
2004-03-26
QR and LU factorizations for dense matrices are important linear algebra computations that are widely used in scientific applications. To efficiently perform these computations on modern computers, the factorization algorithms need to be blocked when operating on large matrices to effectively exploit the deep cache hierarchy prevalent in today's computer memory systems. Because both QR (based on Householder transformations) and LU factorization algorithms contain complex loop structures, few compilers can fully automate the blocking of these algorithms. Though linear algebra libraries such as LAPACK provide manually blocked implementations of these algorithms, automatically generating blocked versions of the computations can yield additional benefits, such as automatic adaptation of different blocking strategies. This paper demonstrates how to apply an aggressive loop transformation technique, dependence hoisting, to produce efficient blockings for both QR and LU with partial pivoting. We present different blocking strategies that can be generated by our optimizer and compare the performance of auto-blocked versions with manually tuned versions in LAPACK, both using reference BLAS, ATLAS BLAS, and native BLAS specially tuned for the underlying machine architectures.
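To make the blocking pattern concrete, here is a crude NumPy illustration of a right-looking blocked QR: factor a panel of columns, then apply the panel's orthogonal factor to the trailing matrix as one matrix-matrix product. It is only a sketch of the idea; LAPACK's compact WY representation, pivoting for LU, and the compiler transformations discussed in the paper are not shown.

```python
import numpy as np

def blocked_qr(A, nb=32):
    """Right-looking blocked QR: panel factorization followed by a
    BLAS-3 style update of the trailing columns."""
    R = np.array(A, dtype=float, copy=True)
    m, n = R.shape
    Q = np.eye(m)
    for k in range(0, n, nb):
        e = min(k + nb, n)
        Qp, Rp = np.linalg.qr(R[k:, k:e], mode='complete')  # panel factorization
        R[k:, k:e] = Rp
        R[k:, e:] = Qp.T @ R[k:, e:]     # matrix-matrix update of trailing block
        Q[:, k:] = Q[:, k:] @ Qp         # accumulate the orthogonal factor
    return Q, R

A = np.random.rand(300, 120)
Q, R = blocked_qr(A, nb=32)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0.0))
```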
Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection
NASA Astrophysics Data System (ADS)
Amiri, Ali; Fathy, Mahmood
2010-12-01
This article explores the problem of video shot boundary detection and examines a novel shot boundary detection algorithm by using QR-decomposition and modeling of gradual transitions by Gaussian functions. Specifically, the authors attend to the challenges of detecting gradual shots and extracting appropriate spatiotemporal features that affect the ability of algorithms to efficiently detect shot boundaries. The algorithm utilizes the properties of QR-decomposition and extracts a block-wise probability function that illustrates the probability of video frames to be in shot transitions. The probability function has abrupt changes in hard cut transitions, and semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, we will report the results of the experiments using large-scale test sets provided by the TRECVID 2006, which has assessments for hard cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.
The minimal residual QR-factorization algorithm for reliably solving subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy. Basically, this strategy is based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
Zeb, Salman; Yousaf, Muhammad
2017-01-01
In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
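The constrained-least-squares reduction of this paper is not reproduced here; the fragment below only demonstrates the general flavour of QR updating that SciPy exposes, appending new observation rows to an existing factorization instead of refactorizing from scratch (matrix sizes are arbitrary).

```python
import numpy as np
from scipy.linalg import qr, qr_insert

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 8))
Q, R = qr(A)                               # full QR of the initial data

new_rows = rng.standard_normal((5, 8))     # additional observations arrive
Q2, R2 = qr_insert(Q, R, new_rows, 50, which='row')   # update, no refactorization

print(np.allclose(Q2 @ R2, np.vstack([A, new_rows])))
```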
Algorithm 782 : codes for rank-revealing QR factorizations of dense matrices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bischof, C. H.; Quintana-Orti, G.; Mathematics and Computer Science
1998-06-01
This article describes a suite of codes as well as associated testing and timing drivers for computing rank-revealing QR (RRQR) factorizations of dense matrices. The main contribution is an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy and improved versions of the RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang, respectively. We highlight usage and features of these codes.
Information retrieval based on single-pixel optical imaging with quick-response code
NASA Astrophysics Data System (ADS)
Xiao, Yin; Chen, Wen
2018-04-01
Quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code. Then the QR code is treated as an input image in the input plane of a ghost imaging setup. After measurements, the traditional correlation algorithm of ghost imaging is utilized to reconstruct an image (in QR code form) with low quality. With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which is effectively a post-processing step. Taking advantage of the high error correction capability of QR code, the original information can be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming post-processing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size is, the more measurements are needed. However, for our method, images with different sizes can be converted into QR codes with the same small size by using a QR generator. Hence, for larger images, the time required to recover original information with high quality will be dramatically reduced. Our method also makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and recover them separately.
The Fat-like Cadherin CDH-4 Acts Cell-Non-Autonomously in Anterior-Posterior Neuroblast Migration
Sundararajan, Lakshmi; Norris, Megan L.; Schöneich, Sebastian; Ackley, Brian D.; Lundquist, Erik A.
2014-01-01
Directed migration of neurons is critical in the normal and pathological development of the brain and central nervous system. In C. elegans, the bilateral Q neuroblasts, QR on the right and QL on the left, migrate anteriorly and posteriorly, respectively. Initial protrusion and migration of the Q neuroblasts are autonomously controlled by the transmembrane proteins UNC-40/DCC, PTP-3/LAR, and MIG-21. As QL migrates posteriorly, it encounters an EGL-20/Wnt signal that induces MAB-5/Hox expression that drives QL descendant posterior migration. QR migrates anteriorly away from EGL-20/Wnt and does not activate MAB-5/Hox, resulting in anterior QR descendant migration. A forward genetic screen for new mutations affecting initial Q migrations identified alleles of cdh-4, which caused defects in both QL and QR directional migration similar to unc-40, ptp-3, and mig-21. Previous studies showed that in QL, PTP-3/LAR and MIG-21 act in a pathway in parallel to UNC-40/DCC to drive posterior QL migration. Here we show genetic evidence that CDH-4 acts in the PTP-3/MIG-21 pathway in parallel to UNC-40/DCC to direct posterior QL migration. In QR, the PTP-3/MIG-21 and UNC-40/DCC pathways mutually inhibit each other, allowing anterior QR migration. We report here that CDH-4 acts in both the PTP-3/MIG-21 and UNC-40/DCC pathways in mutual inhibition in QR, and that CDH-4 acts cell-non-autonomously. Interaction of CDH-4 with UNC-40/DCC in QR but not QL represents an inherent left-right asymmetry in the Q cells, the nature of which is not understood. We conclude that CDH-4 might act as a permissive signal for each Q neuroblast to respond differently to anterior-posterior guidance information based upon inherent left-right asymmetries in the Q neuroblasts. PMID:24954154
NASA Technical Reports Server (NTRS)
Liu, Kuojuey Ray
1990-01-01
Least-squares (LS) estimation and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problem. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problem of classical LDA while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
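A compact sketch of the two-stage idea as described in the abstract is given below: a QR decomposition of the (transposed) class-centroid matrix supplies the first-stage projection, and a small discriminant problem is then solved in that subspace. Variable names, the regularization term, and the toy data are illustrative, not the authors' exact formulation.

```python
import numpy as np

def lda_qr(X, y):
    """Two-stage LDA/QR sketch: stage 1 projects onto the span of the class
    centroids via QR; stage 2 solves a small LDA problem in that subspace."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])  # k x d
    Q, _ = np.linalg.qr(centroids.T)          # d x k orthonormal basis (stage 1)
    Z = X @ Q                                 # data in the k-dimensional subspace
    mu = Z.mean(axis=0)
    Sw = sum((Z[y == c] - Z[y == c].mean(0)).T @ (Z[y == c] - Z[y == c].mean(0))
             for c in classes)                # within-class scatter (stage 2)
    Sb = sum((y == c).sum() * np.outer(Z[y == c].mean(0) - mu, Z[y == c].mean(0) - mu)
             for c in classes)                # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-8 * np.eye(len(mu)), Sb))
    order = np.argsort(evals.real)[::-1][:len(classes) - 1]
    return Q @ evecs[:, order].real           # d x (k-1) discriminant directions

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(m, 1.0, size=(30, 100)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)
print(lda_qr(X, y).shape)                     # (100, 2)
```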
Using QR codes to enable quick access to information in acute cancer care.
Upton, Joanne; Olsson-Brown, Anna; Marshall, Ernie; Sacco, Joseph
2017-05-25
Quick access to toxicity management information ensures timely access to steroids/immunosuppressive treatment for cancer patients experiencing immune-related adverse events, thus reducing the length of hospital stays or avoiding hospital admission entirely. This article discusses a project to add a QR (quick response) code to a patient-held immunotherapy alert card. As QR code generation is free and the immunotherapy clinical management algorithms were already publicly available through the trust's clinical network website, the costs of integrating a QR code into the alert card, after printing, were low, while the potential benefits are numerous. Patient-held alert cards are widely used for patients receiving anti-cancer treatment, and this established standard of care has been modified to enable rapid access to information through the incorporation of a QR code.
Performance and Accuracy of LAPACK's Symmetric TridiagonalEigensolvers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demmel, Jim W.; Marques, Osni A.; Parlett, Beresford N.
2007-04-19
We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the Divide-and-Conquer method (DC), and the method of Multiple Relatively Robust Representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a variety of carefully selected test problems, our study includes a variety of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms with observed accuracy O(√n e). The accuracy of BI and MR is generally O(n e). (5) MR is preferable to BI for subset computations.
Sundararajan, Lakshmi; Norris, Megan L; Lundquist, Erik A
2015-05-28
The Q neuroblasts in Caenorhabditis elegans display left-right asymmetry in their migration, with QR and descendants on the right migrating anteriorly, and QL and descendants on the left migrating posteriorly. Initial QR and QL migration is controlled by the transmembrane receptors UNC-40/DCC, PTP-3/LAR, and the Fat-like cadherin CDH-4. After initial migration, QL responds to an EGL-20/Wnt signal that drives continued posterior migration by activating MAB-5/Hox activity in QL but not QR. QR expresses the transmembrane protein MIG-13, which is repressed by MAB-5 in QL and which drives anterior migration of QR descendants. A screen for new Q descendant AQR and PQR migration mutations identified mig-13 as well as hse-5, the gene encoding the glucuronyl C5-epimerase enzyme, which catalyzes epimerization of glucuronic acid to iduronic acid in the heparan sulfate side chains of heparan sulfate proteoglycans (HSPGs). Of five C. elegans HSPGs, we found that only SDN-1/Syndecan affected Q migrations. sdn-1 mutants showed QR descendant AQR anterior migration defects, and weaker QL descendant PQR migration defects. hse-5 affected initial Q migration, whereas sdn-1 did not. sdn-1 and hse-5 acted redundantly in AQR and PQR migration, but not initial Q migration, suggesting the involvement of other HSPGs in Q migration. Cell-specific expression studies indicated that SDN-1 can act in QR to promote anterior migration. Genetic interactions between sdn-1, mig-13, and mab-5 suggest that MIG-13 and SDN-1 act in parallel to promote anterior AQR migration and that SDN-1 also controls posterior migration. Together, our results indicate previously unappreciated complexity in the role of multiple signaling pathways and inherent left-right asymmetry in the control of Q neuroblast descendant migration. Copyright © 2015 Sundararajan et al.
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen
2016-09-01
In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture and the decryption has a noisy appearance. Nevertheless, the robustness of QR code against noise enables the accurate acquisition of its content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal totally eliminates the silhouette problem existing in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.
STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations]
NASA Technical Reports Server (NTRS)
Shah, S. N.
1981-01-01
The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 Computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100 and why these adaptations yielded an efficient STAR program is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions, applied to the same system of linear equations, are compared.
Jiao, Shuming; Jin, Zhi; Zhou, Changyuan; Zou, Wenbin; Li, Xia
2018-01-01
Quick response (QR) code has been employed as a data carrier for optical cryptosystems in many recent research works, and the error-correction coding mechanism allows the decrypted result to be noise free. However, in this paper, we point out for the first time that the Reed-Solomon coding algorithm in QR code is not a very suitable option for the nonlocally distributed speckle noise in optical cryptosystems from an information coding perspective. The average channel capacity is proposed to measure the data storage capacity and noise-resistant capability of different encoding schemes. We design an alternative 2D barcode scheme based on Bose-Chaudhuri-Hocquenghem (BCH) coding, which demonstrates substantially better average channel capacity than QR code in numerical simulated optical cryptosystems.
Wang, Xiaogang; Chen, Wen; Chen, Xudong
2015-03-09
In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to the other two methods proposed in the literature, i.e., Fresnel domain information authentication based on the classical DRPE with holographic technique and information authentication based on DRPE and phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.
NASA Astrophysics Data System (ADS)
Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong
2015-10-01
We demonstrate the feasibility of three dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple QR codes are encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.
Quasi-radial wall jets as a new concept in boundary layer flow control
NASA Astrophysics Data System (ADS)
Javadi, Khodayar; Hajipour, Majid
2018-01-01
This work aims to introduce a novel concept of wall jets wherein the flow is radially injected into a medium through a sector of a cylinder, called quasi-radial (QR) wall jets. The results revealed that the fluid dynamics of the QR wall jet flow differs from that of conventional wall jets. Indeed, lateral and normal propagation of a conventional three-dimensional wall jet occurs via shear stresses, while lateral propagation of a QR wall jet is due to the mean lateral component of the velocity field. Moreover, arrays of conventional three-dimensional wall jets discharged into quiescent air form a combined wall jet only at a large distance from the nozzles, whereas QR wall jets immediately spread in the lateral direction, meet each other, and merge within a short distance downstream of the jet nozzles. Furthermore, when conventional jets are discharged into an external flow, there is no strong interaction between them as they move in parallel, while in QR wall jets the lateral components of the velocity field interact strongly with the boundary layer of the external flow and create strong helical vortices acting as vortex generators.
An efficient solution of real-time data processing for multi-GNSS network
NASA Astrophysics Data System (ADS)
Gong, Xiaopeng; Gu, Shengfeng; Lou, Yidong; Zheng, Fu; Ge, Maorong; Liu, Jingnan
2017-12-01
Global navigation satellite systems (GNSS) are an indispensable tool for geodetic research and global monitoring of the Earth, and they have developed rapidly over the past few years with abundant GNSS networks, modern constellations, and significant improvement in the mathematical models of data processing. However, due to the increasing number of satellites and stations, computational efficiency becomes a key issue that could hamper the further development of GNSS applications. In this contribution, this problem is addressed from the aspects of both dense linear algebra algorithms and GNSS processing strategy. First, in order to fully exploit the power of modern microprocessors, a square root information filter solution based on blocked QR factorization, employing as many matrix-matrix operations as possible, is introduced. In addition, the algorithmic complexity of GNSS data processing is further decreased by centralizing the carrier-phase observations and ambiguity parameters, as well as performing real-time ambiguity resolution and elimination. Based on the QR factorization of a simulated matrix, we conclude that, compared to unblocked QR factorization, blocked QR factorization improves processing efficiency by nearly two orders of magnitude on a personal computer with four 3.30 GHz cores. Then, with 82 globally distributed stations, the processing efficiency is further validated in multi-GNSS (GPS/BDS/Galileo) satellite clock estimation. The results suggest that the unblocked method takes about 31.38 s per epoch, while, without any loss of accuracy, our new algorithm takes only 0.50 and 0.31 s per epoch for the float and fixed clock solutions, respectively.
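To illustrate why a blocked (matrix-matrix, BLAS-3) QR factorization outpaces an unblocked (vector-oriented) one, the NumPy sketch below times a naive column-by-column Householder QR against LAPACK's blocked routine as exposed by numpy.linalg.qr. It is an illustrative benchmark with assumed matrix sizes, not the square-root information filter code described in the abstract.

```python
import time
import numpy as np

def unblocked_householder_qr(A):
    """Column-by-column Householder QR: only vector-level (BLAS-2 style) updates."""
    R = A.astype(float).copy()
    m, n = R.shape
    for k in range(min(m, n)):
        x = R[k:, k]
        alpha = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else -np.linalg.norm(x)
        v = x.copy()
        v[0] -= alpha
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue
        v /= nv
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])   # rank-1 update of trailing block
    return np.triu(R[: min(m, n), :])

A = np.random.rand(2000, 200)
t0 = time.time(); R_unblocked = unblocked_householder_qr(A); t1 = time.time()
R_blocked = np.linalg.qr(A, mode="r"); t2 = time.time()   # LAPACK blocked (BLAS-3) QR
print(f"unblocked: {t1 - t0:.2f} s   blocked: {t2 - t1:.2f} s")
print("max |R| discrepancy:", np.max(np.abs(np.abs(R_unblocked) - np.abs(R_blocked))))
```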
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
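As a rough illustration of replacing a direct QR/SVD solve with a Krylov-subspace solve inside Levenberg-Marquardt, the sketch below uses SciPy's LSQR with its damping parameter; the Jacobian, residual vector, and damping value are placeholders, and the subspace-recycling part of the paper's method is not shown.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# One Levenberg-Marquardt step solved with a Krylov method (LSQR) instead of a
# direct QR/SVD solve of the damped normal equations (J^T J + lambda*I) dx = -J^T r.
# LSQR's `damp` solves min ||J dx + r||^2 + damp^2 ||dx||^2, so damp = sqrt(lambda).
def lm_step_krylov(J, r, damping):
    return lsqr(J, -r, damp=np.sqrt(damping))[0]

# placeholder Jacobian and residuals, just to show the call
J = np.random.rand(200, 50)
r = np.random.rand(200)
dx = lm_step_krylov(J, r, damping=1e-2)
print(dx[:3])
```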
Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.
Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao
2017-07-01
In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. In order to improve the compression efficiency, GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data of 10 groups of P. ginseng from Northeast China and the internal transcribed spacer 2 (ITS2) sequence code as the DNA sequence code were prepared for conversion. In order to convert such data into a two-dimensional code, the following six steps were performed: First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, these data sets were precompressed. Third, the P. ginseng DNA (ITS2) sequence codes were precompressed. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Fifth, the combined data were compressed by Zlib, an open-source data compression algorithm. Finally, the compressed data generated a two-dimensional code called a quick response code (QR code). Through the above conversion process, the number of bytes needed for storing P. ginseng chemical fingerprints and its DNA (ITS2) sequence code is greatly reduced. After GATC2Bytes algorithm processing, the ITS2 compression rate reaches 75%, and the chemical fingerprint compression rate exceeds 99.65% after filtration and digital merger compression; the overall compression ratio exceeds 99.36%. The capacity of the resulting QR code is around 0.5 KB, which can easily and successfully be read and identified by any smartphone. P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can therefore form a QR code after data processing, and the QR code can serve as a carrier of P. ginseng authenticity and quality information. This study provides a theoretical basis for the development of a quality traceability system for traditional Chinese medicine based on a two-dimensional code.
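The last two steps (Zlib compression followed by QR code generation) can be sketched in a few lines of Python. The snippet below assumes the third-party qrcode package (with Pillow) and uses a placeholder payload, so it only mirrors the general workflow, not the study's GATC2Bytes and digital merger preprocessing.

```python
import base64
import zlib

import qrcode  # assumed third-party package (with Pillow); not part of the study's toolchain

# Placeholder payload standing in for the combined ITS2 + HPLC fingerprint record.
payload = b"ITS2:ACGT...|HPLC:12.3,15.8,..."
compressed = zlib.compress(payload, level=9)            # Zlib compression step
text = base64.b64encode(compressed).decode("ascii")     # keep the QR payload text-safe

img = qrcode.make(text)                                 # QR code generation step
img.save("ginseng_fingerprint_qr.png")
print(f"{len(payload)} bytes raw -> {len(compressed)} bytes compressed")
```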
Optically secured information retrieval using two authenticated phase-only masks.
Wang, Xiaogang; Chen, Wen; Mei, Shengtao; Chen, Xudong
2015-10-23
We propose an algorithm for jointly designing two phase-only masks (POMs) that allow for the encryption and noise-free retrieval of triple images. The images required for optical retrieval are first stored in quick-response (QR) codes for noise-free retrieval and flexible readout. Two sparse POMs are respectively calculated from two different images used as references for authentication, based on a modified Gerchberg-Saxton algorithm (GSA) and pixel extraction, and are then used as support constraints in a modified double-phase retrieval algorithm (MPRA), together with the above-mentioned QR codes. No visible information about the target images or the reference images can be obtained from either of these authenticated POMs. This approach allows users to authenticate the two POMs used for image reconstruction without visual observation of the reference images. It also allows users convenient access and readout with mobile devices.
Optically secured information retrieval using two authenticated phase-only masks
Wang, Xiaogang; Chen, Wen; Mei, Shengtao; Chen, Xudong
2015-01-01
We propose an algorithm for jointly designing two phase-only masks (POMs) that allow for the encryption and noise-free retrieval of triple images. The images required for optical retrieval are first stored in quick-response (QR) codes for noise-free retrieval and flexible readout. Two sparse POMs are respectively calculated from two different images used as references for authentication, based on a modified Gerchberg-Saxton algorithm (GSA) and pixel extraction, and are then used as support constraints in a modified double-phase retrieval algorithm (MPRA), together with the above-mentioned QR codes. No visible information about the target images or the reference images can be obtained from either of these authenticated POMs. This approach allows users to authenticate the two POMs used for image reconstruction without visual observation of the reference images. It also allows users convenient access and readout with mobile devices. PMID:26494213
Optically secured information retrieval using two authenticated phase-only masks
NASA Astrophysics Data System (ADS)
Wang, Xiaogang; Chen, Wen; Mei, Shengtao; Chen, Xudong
2015-10-01
We propose an algorithm for jointly designing two phase-only masks (POMs) that allow for the encryption and noise-free retrieval of triple images. The images required for optical retrieval are first stored in quick-response (QR) codes for noise-free retrieval and flexible readout. Two sparse POMs are respectively calculated from two different images used as references for authentication, based on a modified Gerchberg-Saxton algorithm (GSA) and pixel extraction, and are then used as support constraints in a modified double-phase retrieval algorithm (MPRA), together with the above-mentioned QR codes. No visible information about the target images or the reference images can be obtained from either of these authenticated POMs. This approach allows users to authenticate the two POMs used for image reconstruction without visual observation of the reference images. It also allows users convenient access and readout with mobile devices.
Optical identity authentication technique based on compressive ghost imaging with QR code
NASA Astrophysics Data System (ADS)
Wenjie, Zhan; Leihong, Zhang; Xi, Zeng; Yi, Kang
2018-04-01
With the rapid development of computer technology, information security has attracted more and more attention. It is related not only to the information and property security of individuals and enterprises, but also to the security and social stability of a country. Identity authentication is the first line of defense in information security. In authentication systems, response time and security are the most important factors. An optical authentication technology based on compressive ghost imaging with QR codes is proposed in this paper. The scheme requires only a small number of samples for authentication, so the response time of the algorithm is short. At the same time, the algorithm can resist certain noise attacks, so it offers good security.
Accurate Singular Values and Differential QD Algorithms
1992-07-01
[Report front matter: the table of contents lists sections on the Cholesky algorithm, the quotient-difference (qd) algorithm, the incorporation of shifts (shifted qd algorithms), and the effects of finite precision (error analysis overview; high relative accuracy in the presence of ...).] The surviving abstract fragment notes that it was preferable to replace the DK zero-shift QR transform by two steps of zero-shift LR implemented in a qd (quotient-difference) format.
Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.
ERIC Educational Resources Information Center
Alexopoulos, John; Abraham, Paul
2001-01-01
Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Tingzing Tim; Tomov, Stanimire Z; Luszczek, Piotr R
As modern hardware keeps evolving, an increasingly effective approach to developing energy efficient and high-performance solvers is to design them to work on many small size and independent problems. Many applications already need this functionality, especially for GPUs, which are currently known to be about four to five times more energy efficient than multicore CPUs. We describe the development of one-sided factorizations that work for a set of small dense matrices in parallel, and we illustrate our techniques on the QR factorization based on Householder transformations. We refer to this mode of operation as a batched factorization. Our approach is based on representing the algorithms as a sequence of batched BLAS routines for GPU-only execution. This is in contrast to the hybrid CPU-GPU algorithms that rely heavily on using the multicore CPU for specific parts of the workload. But for a system to benefit fully from the GPU's significantly higher energy efficiency, avoiding the use of the multicore CPU must be a primary design goal, so the system can rely more heavily on the more efficient GPU. Additionally, this will result in the removal of the costly CPU-to-GPU communication. Furthermore, we do not use a single symmetric multiprocessor (on the GPU) to factorize a single problem at a time. We illustrate how our performance analysis, and the use of profiling and tracing tools, guided the development and optimization of our batched factorization to achieve up to a 2-fold speedup and a 3-fold energy efficiency improvement compared to our highly optimized batched CPU implementations based on the MKL library (when using two sockets of Intel Sandy Bridge CPUs). Compared to a batched QR factorization featured in the CUBLAS library for GPUs, we achieved up to 5x speedup on the K40 GPU.
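A CPU-side sketch of this "batched" mode of operation, in which one call factors a whole stack of small independent matrices, is shown below. It relies on NumPy's broadcasting support in numpy.linalg.qr (available in recent releases) and is only a conceptual stand-in for the GPU batched-BLAS implementation described above.

```python
import numpy as np

# A whole batch of small, independent QR factorizations in one call. Recent NumPy
# releases broadcast numpy.linalg.qr over leading dimensions, which is a CPU-side
# analogue of the batched GPU factorizations discussed above.
batch = np.random.rand(10000, 32, 32)    # 10,000 small dense problems
Q, R = np.linalg.qr(batch)               # factors every 32x32 matrix in the stack
print(Q.shape, R.shape)                  # (10000, 32, 32) for both factors
```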
NASA Technical Reports Server (NTRS)
Grecu, Mircea; Olson, William S.; Shie, Chung-Lin; L'Ecuyer, Tristan S.; Tao, Wei-Kuo
2009-01-01
In this study, satellite passive microwave sensor observations from the TRMM Microwave Imager (TMI) are utilized to make estimates of latent + eddy sensible heating rates (Q1-QR) in regions of precipitation. The TMI heating algorithm (TRAIN) is calibrated, or "trained", using relatively accurate estimates of heating based upon spaceborne Precipitation Radar (PR) observations collocated with the TMI observations over a one-month period. The heating estimation technique is based upon a previously described Bayesian methodology, but with improvements in supporting cloud-resolving model simulations, an adjustment of precipitation echo tops to compensate for model biases, and a separate scaling of convective and stratiform heating components that leads to an approximate balance between estimated vertically-integrated condensation and surface precipitation. Estimates of Q1-QR from TMI compare favorably with the PR training estimates and show only modest sensitivity to the cloud-resolving model simulations of heating used to construct the training data. Moreover, the net condensation in the corresponding annual mean satellite latent heating profile is within a few percent of the annual mean surface precipitation rate over the tropical and subtropical oceans where the algorithm is applied. Comparisons of Q1 produced by combining TMI Q1-QR with independently derived estimates of QR show reasonable agreement with rawinsonde-based analyses of Q1 from two field campaigns, although the satellite estimates exhibit heating profile structure with sharper and more intense heating peaks than the rawinsonde estimates.
An extension of the QZ algorithm for solving the generalized matrix eigenvalue problem
NASA Technical Reports Server (NTRS)
Ward, R. C.
1973-01-01
This algorithm is an extension of Moler and Stewart's QZ algorithm with some added features for saving time and operations. Also, some additional properties of the QR algorithm which were not practical to implement in the QZ algorithm can be generalized with the combination shift QZ algorithm. Numerous test cases are presented to give practical application tests for the algorithm. Based on the results, this algorithm should be preferred over existing algorithms for the class of generalized eigenproblems in which both matrices are singular or nearly singular.
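For readers who want to experiment with the QZ decomposition on a generalized eigenproblem A x = lambda B x with a singular B, the following SciPy sketch uses the library's standard qz routine; it is a generic illustration, not the extended combination-shift algorithm of the abstract.

```python
import numpy as np
from scipy.linalg import qz

# Generalized eigenvalue problem A x = lambda B x with a singular B, solved via the
# QZ decomposition; the pencil has one finite eigenvalue (5/3) and one infinite one.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])

AA, BB, Q, Z = qz(A, B, output="real")
alphas, betas = np.diag(AA), np.diag(BB)
with np.errstate(divide="ignore"):
    print("generalized eigenvalues:", alphas / betas)
```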
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avron, Haim; Ng, Esmond G.; Toledo, Sivan
2008-03-21
We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax - b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ||Ax - b||_2, when solved using LSQR. We propose applications for the new technique. When A is rank deficient we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
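A small NumPy/SciPy sketch of the preconditioning idea follows: the R factor of a perturbed matrix (here, the matrix with its assumed dense rows dropped) is used as a right preconditioner for LSQR on the original least-squares problem. The matrix sizes and the choice of "dense" rows are placeholders, so this only illustrates the mechanism, not the paper's sparse implementation.

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 40))
b = rng.standard_normal(500)

# Pretend the last two rows of A are the troublesome dense rows: drop them to get the
# perturbed matrix A_hat, and take its R factor as a right preconditioner for LSQR.
A_hat = A[:-2, :]
R = np.linalg.qr(A_hat, mode="r")

def mv(y):   return A @ solve_triangular(R, y)                   # (A R^{-1}) y
def rmv(z):  return solve_triangular(R, A.T @ z, trans="T")      # R^{-T} A^T z
M = LinearOperator(A.shape, matvec=mv, rmatvec=rmv, dtype=float)

y = lsqr(M, b)[0]
x = solve_triangular(R, y)            # undo the preconditioning: x = R^{-1} y
print("residual norm:", np.linalg.norm(A @ x - b))
```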
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.
2016-11-01
Applications of optical methods for encryption purposes have been attracting the interest of researchers for decades. The most popular are coherent techniques such as double random phase encoding. Its main advantage is high security: the first random phase mask transforms the spectrum of the image to be encrypted into a white spectrum, so the encrypted images have white spectra. The downsides are the necessity of a holographic registration scheme and the speckle noise arising from coherent illumination. These disadvantages can be eliminated by using incoherent illumination: phase registration no longer matters, so there is no need for a holographic setup, and the speckle noise is gone. Recently, encryption of digital information in the form of binary images has become quite popular. The advantages of using a quick response (QR) code as a data container for optical encryption include: 1) any data represented as a QR code has a close to white (excluding the zero spatial frequency) Fourier spectrum, which overlaps well with the encryption key spectrum; 2) a built-in algorithm for image scale and orientation correction simplifies decoding of decrypted QR codes; 3) an embedded error correction code allows successful decryption of information even in the case of partial corruption of the decrypted image. Optical encryption of digital data in the form of QR codes using spatially incoherent illumination was experimentally implemented. Two liquid crystal spatial light modulators were used in the experimental setup for QR code and encrypting kinoform imaging, respectively. Decryption was conducted digitally. Successful decryption of encrypted QR codes is demonstrated.
Robust fitting for neuroreceptor mapping.
Chang, Chung; Ogden, R Todd
2009-03-15
Among many other uses, positron emission tomography (PET) can be used in studies to estimate the density of a neuroreceptor at each location throughout the brain by measuring the concentration of a radiotracer over time and modeling its kinetics. There are a variety of kinetic models in common usage and these typically rely on nonlinear least-squares (LS) algorithms for parameter estimation. However, PET data often contain artifacts (such as uncorrected head motion) and so the assumptions on which the LS methods are based may be violated. Quantile regression (QR) provides a robust alternative to LS methods and has been used successfully in many applications. We consider fitting various kinetic models to PET data using QR and study the relative performance of the methods via simulation. A data adaptive method for choosing between LS and QR is proposed and the performance of this method is also studied.
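The LS-versus-QR comparison can be illustrated on a deliberately simple linear fit (not the PET kinetic models of the study) using statsmodels, assuming that package is available; the outlier frames stand in for uncorrected motion artifacts.

```python
import numpy as np
import statsmodels.api as sm  # assumed available

rng = np.random.default_rng(1)
t = np.linspace(0, 90, 60)                       # acquisition time grid (minutes)
y = 2.0 + 0.05 * t + rng.normal(0, 0.1, t.size)  # simple linear stand-in for a time-activity curve
y[::15] += 3.0                                   # a few outlier frames (e.g., head motion)

X = sm.add_constant(t)
ols_slope = sm.OLS(y, X).fit().params[1]
qr_slope = sm.QuantReg(y, X).fit(q=0.5).params[1]   # median (tau = 0.5) quantile regression
print(f"OLS slope: {ols_slope:.3f}   median-regression slope: {qr_slope:.3f}")
```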
Algorithm For Solution Of Subset-Regression Problems
NASA Technical Reports Server (NTRS)
Verhaegen, Michel
1991-01-01
A reliable and flexible algorithm for the solution of the subset-regression problem performs a QR decomposition with a new column-pivoting strategy, enabling selection of the subset directly from the originally defined regression parameters. This feature, in combination with a number of extensions, makes the algorithm very flexible for use in the analysis of subset-regression problems in which the parameters have physical meanings. It is also extended to enable joint processing of columns contaminated by noise with those free of noise, without using scaling techniques.
Cai, Yong; Li, Xiwen; Wang, Runmiao; Yang, Qing; Li, Peng; Hu, Hao
2016-01-01
Currently, chemical fingerprint comparison and analysis is mainly based on professional equipment and software, which is expensive and inconvenient. This study aims to integrate QR (Quick Response) codes with quality data and mobile intelligent technology to develop a convenient query terminal for tracing quality across the whole industrial chain of TCM (traditional Chinese medicine). Three herbal medicines were randomly selected and their two-dimensional (2D) chemical barcode fingerprints were constructed. A smartphone application (APP) based on the Android system was developed to read the initial data of the 2D chemical barcodes and compare multiple fingerprints from different batches of the same species or from different species. It was demonstrated that there were no significant differences between the original and scanned TCM chemical fingerprints. Meanwhile, different TCM chemical fingerprint QR codes could be rendered in the same coordinate system, showing their differences very intuitively. To distinguish variations among chemical fingerprints more directly, a linear interpolation angle cosine similarity algorithm (LIACSA) was proposed to obtain a similarity ratio. This study showed that QR codes can be used as an effective information carrier to transfer quality data. The smartphone application can rapidly read quality information in QR codes and convert the data into TCM chemical fingerprints.
Adaptive Identification by Systolic Arrays.
1987-12-01
[Thesis front matter: bibliography entries (Anton, Howard, Elementary Linear Algebra, John Wiley & Sons, 1984; Cristi, Roberto, A Parallel Structure for Adaptive Pole Placement) and a contents listing covering System Identification Methods: Linear System Modeling, Solution of Systems of Linear Equations, QR Decomposition, Recursive Least Squares, Block ...]
A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems
Song, Fengguang; Dongarra, Jack
2014-10-01
Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
A scalable approach to solving dense linear algebra problems on hybrid CPU-GPU systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Fengguang; Dongarra, Jack
Aiming to fully exploit the computing power of all CPUs and all graphics processing units (GPUs) on hybrid CPU-GPU systems to solve dense linear algebra problems, in this paper we design a class of heterogeneous tile algorithms to maximize the degree of parallelism, to minimize the communication volume, and to accommodate the heterogeneity between CPUs and GPUs. The new heterogeneous tile algorithms are executed upon our decentralized dynamic scheduling runtime system, which schedules a task graph dynamically and transfers data between compute nodes automatically. The runtime system uses a new distributed task assignment protocol to solve data dependencies between tasks without any coordination between processing units. By overlapping computation and communication through dynamic scheduling, we are able to attain scalable performance for the double-precision Cholesky factorization and QR factorization. Finally, our approach demonstrates a performance comparable to Intel MKL on shared-memory multicore systems and better performance than both vendor (e.g., Intel MKL) and open source libraries (e.g., StarPU) in the following three environments: heterogeneous clusters with GPUs, conventional clusters without GPUs, and shared-memory systems with multiple GPUs.
Cai, Yong; Li, Xiwen; Wang, Runmiao; Yang, Qing; Li, Peng; Hu, Hao
2016-01-01
Currently, chemical fingerprint comparison and analysis is mainly based on professional equipment and software, which is expensive and inconvenient. This study aims to integrate QR (Quick Response) codes with quality data and mobile intelligent technology to develop a convenient query terminal for tracing quality across the whole industrial chain of TCM (traditional Chinese medicine). Three herbal medicines were randomly selected and their two-dimensional (2D) chemical barcode fingerprints were constructed. A smartphone application (APP) based on the Android system was developed to read the initial data of the 2D chemical barcodes and compare multiple fingerprints from different batches of the same species or from different species. It was demonstrated that there were no significant differences between the original and scanned TCM chemical fingerprints. Meanwhile, different TCM chemical fingerprint QR codes could be rendered in the same coordinate system, showing their differences very intuitively. To distinguish variations among chemical fingerprints more directly, a linear interpolation angle cosine similarity algorithm (LIACSA) was proposed to obtain a similarity ratio. This study showed that QR codes can be used as an effective information carrier to transfer quality data. The smartphone application can rapidly read quality information in QR codes and convert the data into TCM chemical fingerprints. PMID:27780256
Android platform based smartphones for a logistical remote association repair framework.
Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing
2014-06-25
The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use.
On the reliable and flexible solution of practical subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm for solving subset regression problems is described. The algorithm performs a QR decomposition with a new column-pivoting strategy, which permits subset selection directly from the originally defined regression parameters. This, in combination with a number of extensions of the new technique, makes the method a very flexible tool for analyzing subset regression problems in which the parameters have a physical meaning.
Sundararajan, Lakshmi; Lundquist, Erik A
2012-12-01
Migration of neurons and neural crest cells is of central importance to the development of nervous systems. In Caenorhabditis elegans, the QL neuroblast on the left migrates posteriorly, and QR on the right migrates anteriorly, despite similar lineages and birth positions with regard to the left-right axis. Initial migration is independent of a Wnt signal that controls later anterior-posterior Q descendant migration. Previous studies showed that the transmembrane proteins UNC-40/DCC and MIG-21, a novel thrombospondin type I repeat containing protein, act redundantly in left-side QL posterior migration. Here we show that the LAR receptor protein tyrosine phosphatase PTP-3 acts with MIG-21 in parallel to UNC-40 in QL posterior migration. We also show that in right-side QR, the UNC-40 and PTP-3/MIG-21 pathways mutually inhibit each other's role in posterior migration, allowing anterior QR migration. Finally, we present evidence that these proteins act autonomously in the Q neuroblasts. These studies indicate an inherent left-right asymmetry in the Q neuroblasts with regard to UNC-40, PTP-3, and MIG-21 function that results in posterior vs. anterior migration.
20-GFLOPS QR processor on a Xilinx Virtex-E FPGA
NASA Astrophysics Data System (ADS)
Walke, Richard L.; Smith, Robert W. M.; Lightbody, Gaye
2000-11-01
Adaptive beamforming can play an important role in sensor array systems in countering directional interference. In high-sample-rate systems, such as radar and communications, the calculation of adaptive weights is a very computationally demanding task that requires highly parallel solutions. For systems where low power consumption and volume are important, the only viable implementation has been as an Application Specific Integrated Circuit (ASIC). However, the rapid advancement of Field Programmable Gate Array (FPGA) technology is enabling highly credible re-programmable solutions. In this paper we present the implementation of a scalable linear array processor for weight calculation using QR decomposition. We employ floating-point arithmetic with the mantissa size optimized to the target application to minimize component size, and implement the operators as relationally placed macros (RPMs) on Xilinx Virtex FPGAs to achieve a predictable dense layout and high-speed operation. We present results showing that 20 GFLOPS of sustained computation on a single XCV3200E-8 Virtex-E FPGA is possible. We also describe the parameterized implementation of the floating-point operators and QR processor, and the design methodology that enables us to rapidly generate complex FPGA implementations using the industry standard hardware description language VHDL.
Android Platform Based Smartphones for a Logistical Remote Association Repair Framework
Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing
2014-01-01
The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use. PMID:24967603
O'Donoghue, Patrick; Luthey-Schulten, Zaida
2005-02-25
We present a new algorithm, based on the multidimensional QR factorization, to remove redundancy from a multiple structural alignment by choosing representative protein structures that best preserve the phylogenetic tree topology of the homologous group. The classical QR factorization with pivoting, developed as a fast numerical solution to eigenvalue and linear least-squares problems of the form Ax=b, was designed to re-order the columns of A by increasing linear dependence. Removing the most linear dependent columns from A leads to the formation of a minimal basis set which well spans the phase space of the problem at hand. By recasting the problem of redundancy in multiple structural alignments into this framework, in which the matrix A now describes the multiple alignment, we adapted the QR factorization to produce a minimal basis set of protein structures which best spans the evolutionary (phase) space. The non-redundant and representative profiles obtained from this procedure, termed evolutionary profiles, are shown in initial results to outperform well-tested profiles in homology detection searches over a large sequence database. A measure of structural similarity between homologous proteins, Q(H), is presented. By properly accounting for the effect and presence of gaps, a phylogenetic tree computed using this metric is shown to be congruent with the maximum-likelihood sequence-based phylogeny. The results indicate that evolutionary information is indeed recoverable from the comparative analysis of protein structure alone. Applications of the QR ordering and this structural similarity metric to analyze the evolution of structure among key, universally distributed proteins involved in translation, and to the selection of representatives from an ensemble of NMR structures are also discussed.
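The column-selection mechanism described above rests on QR factorization with column pivoting. The sketch below applies SciPy's pivoted QR to a generic matrix with redundant columns and keeps the leading pivots as representatives; it is a toy stand-in for the multiple-alignment matrix used in the paper, with placeholder data and a placeholder rank tolerance.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
basis = rng.standard_normal((50, 4))                 # 4 "truly different" profiles
A = basis @ rng.standard_normal((4, 20))             # 20 highly redundant columns

# Column-pivoted QR reorders columns by decreasing linear independence; the leading
# pivots form a minimal, well-conditioned set of representative columns.
Q, R, piv = qr(A, pivoting=True)
tol = abs(R[0, 0]) * 1e-10
k = int(np.sum(np.abs(np.diag(R)) > tol))            # numerical rank
print("numerical rank:", k, "  representative columns:", piv[:k])
```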
Solving the scalability issue in quantum-based refinement: Q|R#1.
Zheng, Min; Moriarty, Nigel W; Xu, Yanting; Reimers, Jeffrey R; Afonine, Pavel V; Waller, Mark P
2017-12-01
Accurately refining biomacromolecules using a quantum-chemical method is challenging because the cost of a quantum-chemical calculation scales approximately as n^m, where n is the number of atoms and m (≥3) is based on the quantum method of choice. This fundamental problem means that quantum-chemical calculations become intractable when the size of the system requires more computational resources than are available. In the development of the software package called Q|R, this issue is referred to as Q|R#1. A divide-and-conquer approach has been developed that fragments the atomic model into small manageable pieces in order to solve Q|R#1. Firstly, the atomic model of a crystal structure is analyzed to detect noncovalent interactions between residues, and the results of the analysis are represented as an interaction graph. Secondly, a graph-clustering algorithm is used to partition the interaction graph into a set of clusters in such a way as to minimize disruption to the noncovalent interaction network. Thirdly, the environment surrounding each individual cluster is analyzed and any residue that is interacting with a particular cluster is assigned to the buffer region of that particular cluster. A fragment is defined as a cluster plus its buffer region. The gradients for all atoms from each of the fragments are computed, and only the gradients from each cluster are combined to create the total gradients. A quantum-based refinement is carried out using the total gradients as chemical restraints. In order to validate this interaction graph-based fragmentation approach in Q|R, the entire atomic model of an amyloid cross-β spine crystal structure (PDB entry 2oNA) was refined.
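A rough sketch of the cluster-plus-buffer fragmentation idea, using generic NetworkX routines rather than the Q|R code, is given below; the toy interaction graph and the choice of a modularity-based clustering are assumptions made only for illustration.

```python
import networkx as nx  # assumed available
from networkx.algorithms.community import greedy_modularity_communities

# Toy residue interaction graph: nodes are residues, edges are detected noncovalent contacts.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1), (3, 7), (7, 8)])

clusters = [set(c) for c in greedy_modularity_communities(G)]   # graph-clustering step
fragments = []
for cluster in clusters:
    # buffer region: every residue outside the cluster that interacts with it
    buffer = set().union(*(set(G.neighbors(r)) for r in cluster)) - cluster
    fragments.append((cluster, buffer))                         # fragment = cluster + buffer
    print("cluster:", sorted(cluster), "  buffer:", sorted(buffer))
```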
Learning Qualitative Differential Equation models: a survey of algorithms and applications.
Pang, Wei; Coghill, George M
2010-03-01
Over the last two decades, qualitative reasoning (QR) has become an important domain in Artificial Intelligence. QDE (Qualitative Differential Equation) model learning (QML), as a branch of QR, has also received an increasing amount of attention; many systems have been proposed to solve various significant problems in this field. QML has been applied to a wide range of fields, including physics, biology and medical science. In this paper, we first identify the scope of this review by distinguishing QML from other QML systems, and then review all the noteworthy QML systems within this scope. The applications of QML in several application domains are also introduced briefly. Finally, the future directions of QML are explored from different perspectives.
Learning Qualitative Differential Equation models: a survey of algorithms and applications
PANG, WEI; COGHILL, GEORGE M.
2013-01-01
Over the last two decades, qualitative reasoning (QR) has become an important domain in Artificial Intelligence. QDE (Qualitative Differential Equation) model learning (QML), as a branch of QR, has also received an increasing amount of attention; many systems have been proposed to solve various significant problems in this field. QML has been applied to a wide range of fields, including physics, biology and medical science. In this paper, we first identify the scope of this review by distinguishing QML from other QML systems, and then review all the noteworthy QML systems within this scope. The applications of QML in several application domains are also introduced briefly. Finally, the future directions of QML are explored from different perspectives. PMID:23704803
The algebraic decoding of the (41, 21, 9) quadratic residue code
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Truong, T. K.; Chen, Xuemin; Yin, Xiaowei
1992-01-01
A new algebraic approach for decoding the quadratic residue (QR) codes, in particular the (41, 21, 9) QR code, is presented. The key ideas behind this decoding technique are a systematic application of the Sylvester resultant method to the Newton identities associated with the code syndromes to find the error-locator polynomial, and next a method for determining error locations by solving certain quadratic, cubic and quartic equations over GF(2^m) in a new way which uses Zech's logarithms for the arithmetic. The algorithms developed here are suitable for implementation in a programmable microprocessor or special-purpose VLSI chip. It is expected that the algebraic methods developed here can apply generally to other codes such as the BCH and Reed-Solomon codes.
An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process
NASA Astrophysics Data System (ADS)
Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre
2015-02-01
This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted the CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity when compared to the traditional black-and-white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations introduced by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.
Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario
NASA Astrophysics Data System (ADS)
Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.
1997-06-01
In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets. The signals are contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs. However, the 'noise subspace' estimate has to be updated as and when new data become available. In order to save computational costs, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) noise subspace estimation is done by QR decomposition of the difference matrix formed from the data covariance matrix; thus, compared to standard eigen-decomposition based methods which require O(N^3) computations, the proposed method requires only O(N^2) computations; (2) the noise subspace is updated by updating the QR decomposition; (3) the proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. In order to achieve this, the proposed nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are also presented to support the theory.
NASA Astrophysics Data System (ADS)
Barbu, Alina L.; Laurent-Varin, Julien; Perosanz, Felix; Mercier, Flavien; Marty, Jean-Charles
2018-01-01
The implementation into the GINS CNES geodetic software of a more efficient filter was needed to satisfy users who want to compute high-rate GNSS PPP solutions. We selected the SRI approach and a QR factorization technique, including an innovative algorithm which optimizes the matrix reduction step. A full description of this algorithm is given for future users. The new capabilities of the software have been tested using a set of 1 Hz data from the Japanese GEONET network including the Mw 9.0 2011 Tohoku earthquake. The station coordinate solution agreed at a sub-decimeter level with previous publications as well as with solutions we computed with the Natural Resources Canada software. An additional benefit of the implementation of the SRI filter is the capability to estimate high-rate tropospheric parameters as well. As the CPU time to estimate a 1 Hz kinematic solution from 1 h of data is now less than 1 min, we could produce series of coordinates for the full 1300 stations of the Japanese network. The corresponding movie shows the impressive co-seismic deformation as well as the wave propagation along the island. The processing was straightforward using a cluster of PCs, which illustrates the new potential of the GINS software for massive-network high-rate PPP processing.
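The core of a square-root information (SRI) filter update can be written as a single QR factorization of the stacked prior and observation equations. The NumPy sketch below shows that generic update, not GINS's optimized matrix-reduction algorithm; the prior and the observation equations are placeholders.

```python
import numpy as np

# Square-root information filter measurement update via one QR factorization:
# stack the prior information equations [R_prior | z_prior] on top of the new
# (whitened) observation equations [H | z] and re-triangularize.
def srif_update(R_prior, z_prior, H, z):
    n = R_prior.shape[1]
    stacked = np.vstack([np.hstack([R_prior, z_prior[:, None]]),
                         np.hstack([H, z[:, None]])])
    T = np.linalg.qr(stacked, mode="r")          # orthogonal triangularization
    return T[:n, :n], T[:n, n]                   # updated square-root info matrix and RHS

# placeholder prior and observations
n = 3
R0, z0 = np.eye(n), np.zeros(n)
H = np.random.rand(5, n)
z = np.random.rand(5)
R1, z1 = srif_update(R0, z0, H, z)
x = np.linalg.solve(R1, z1)                      # state estimate from R1 x = z1
print(x)
```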
Clement, Fatima; Pramod, Siddanakoppalu N; Venkatesh, Yeldur P
2010-03-01
Garlic (Allium sativum), an important medicinal spice, displays a plethora of biological effects including immunomodulation. Although some immunomodulatory proteins from garlic have been described, their identities are still unknown. The present study was envisaged to isolate immunomodulatory proteins from raw garlic, and examine their effects on certain cells of the immune system (lymphocytes, mast cells, and basophils) in relation to mitogenicity and hypersensitivity. Three protein components of approximately 13 kD (QR-1, QR-2, and QR-3 in the ratio 7:28:1) were separated by Q-Sepharose chromatography of 30 kD ultrafiltrate of raw garlic extract. All the 3 proteins exhibited mitogenic activity towards human peripheral blood lymphocytes, murine splenocytes and thymocytes. The mitogenicity of QR-2 was the highest among the three immunomodulatory proteins. QR-1 and QR-2 displayed hemagglutination and mannose-binding activities; QR-3 showed only mannose-binding activity. Immunoreactivity of rabbit anti-QR-1 and anti-QR-2 polyclonal antisera showed specificity for their respective antigens as well as mutual cross-reactivity; QR-3 was better recognized by anti-QR-2 (82%) than by anti-QR-1 (55%). QR-2 induced a 2-fold higher histamine release in vitro from leukocytes of atopic subjects compared to that of non-atopic subjects. In all functional studies, QR-2 was more potent compared to QR-1. Taken together, all these results indicate that the two major proteins QR-2 and QR-1 present in a ratio of 4:1 in raw garlic contribute to garlic's immunomodulatory activity, and their characteristics are markedly similar to the abundant Allium sativum agglutinins (ASA) I and II, respectively. Copyright 2010 Elsevier B.V. All rights reserved.
Towards Batched Linear Solvers on Accelerated Hardware Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haidar, Azzam; Dong, Tingzing Tim; Tomov, Stanimire
2015-01-01
As hardware evolves, an increasingly effective approach to developing energy-efficient, high-performance solvers is to design them to work on many small and independent problems. Indeed, many applications already need this functionality, especially for GPUs, which are known to be currently about four to five times more energy efficient than multicore CPUs for every floating-point operation. In this paper, we describe the development of the main one-sided factorizations (LU, QR, and Cholesky) that are needed for a set of small dense matrices to work in parallel. We refer to such algorithms as batched factorizations. Our approach is based on representing the algorithms as a sequence of batched BLAS routines for GPU-contained execution. Note that this is similar in functionality to the LAPACK and the hybrid MAGMA algorithms for large-matrix factorizations. But it is different from a straightforward approach, whereby each of the GPU's symmetric multiprocessors factorizes a single problem at a time. We illustrate how our performance analysis together with the profiling and tracing tools guided the development of batched factorizations to achieve up to a 2-fold speedup and 3-fold better energy efficiency compared to our highly optimized batched CPU implementations based on the MKL library on a two-socket Intel Sandy Bridge server. Compared to a batched LU factorization featured in NVIDIA's CUBLAS library for GPUs, we achieve up to a 2.5-fold speedup on the K40 GPU.
Sundararajan, Lakshmi; Lundquist, Erik A.
2012-01-01
Migration of neurons and neural crest cells is of central importance to the development of nervous systems. In Caenorhabditis elegans, the QL neuroblast on the left migrates posteriorly, and QR on the right migrates anteriorly, despite similar lineages and birth positions with regard to the left–right axis. Initial migration is independent of a Wnt signal that controls later anterior–posterior Q descendant migration. Previous studies showed that the transmembrane proteins UNC-40/DCC and MIG-21, a novel thrombospondin type I repeat containing protein, act redundantly in left-side QL posterior migration. Here we show that the LAR receptor protein tyrosine phosphatase PTP-3 acts with MIG-21 in parallel to UNC-40 in QL posterior migration. We also show that in right-side QR, the UNC-40 and PTP-3/MIG-21 pathways mutually inhibit each other’s role in posterior migration, allowing anterior QR migration. Finally, we present evidence that these proteins act autonomously in the Q neuroblasts. These studies indicate an inherent left–right asymmetry in the Q neuroblasts with regard to UNC-40, PTP-3, and MIG-21 function that results in posterior vs. anterior migration. PMID:23051647
QR code for medical information uses.
Fontelo, Paul; Liu, Fang; Ducut, Erick G
2008-11-06
We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.
Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.
Park, Jongin; Wi, Seok-Min; Lee, Jin S
2016-02-01
Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while maintaining a constant response from the direction of interest. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, and we subsequently obtain the apodization weights and the beamformed output without computing the matrix inverse. To do this, a QR decomposition algorithm is used, which can also be executed at low cost; therefore, the computational complexity is reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
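A generic way to obtain MV (Capon) weights from a QR factor rather than an explicit covariance inverse is sketched below with NumPy/SciPy; it factors the snapshot matrix so that the sample covariance is never formed or inverted. This is an assumed, simplified illustration with placeholder data, not the paper's exact σI transformation.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(3)
L, K = 16, 64
X = rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))  # array snapshots
a = np.exp(1j * np.pi * np.arange(L) * np.sin(0.3))                  # steering vector

# QR of the snapshot matrix gives X X^H = R^H R, so the sample covariance is never
# formed; its "inverse" is applied with two triangular solves instead.
R = qr(X.conj().T, mode="economic")[1]
t = solve_triangular(R, solve_triangular(R.conj().T, a, lower=True))  # (R^H R)^{-1} a
w = t / (a.conj() @ t)                                                # MV (Capon) weights
y = w.conj() @ X                                                      # beamformed output
print(np.abs(y[:4]))
```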
Evaluation of a Nonlinear Finite Element Program - ABAQUS.
1983-03-15
[Report excerpt: listing of material options (MATEXP: linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties; MATELG: linearly elastic materials for general sections, with options for beam and shell elements; MATEXG: linearly elastic thermal expansions for general sections) and utility subroutines (matrix decomposition, the QR algorithm, vector normalization, etc.). Obviously, by consolidating all the utility subroutines in a library, ABAQUS has ...]
Interactive QR code beautification with full background image embedding
NASA Astrophysics Data System (ADS)
Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo
2017-06-01
QR (Quick Response) code is a kind of two-dimensional barcode that was first developed in the automotive industry. Nowadays, QR codes are widely used in commercial applications such as product promotion, mobile payment, and product information management. Traditional QR codes that conform to the international standard are reliable and fast to decode, but they lack the aesthetic appeal needed to present visual information to customers. In this work, we present a novel interactive method to generate aesthetic QR codes. Given the information to be encoded and an image to be used as the full QR code background, our method accepts interactive user strokes as hints to remove undesired parts of the QR code modules, relying on the QR code error correction mechanism and background color thresholds. Compared to previous approaches, our method follows the intention of the QR code designer and thus achieves a more pleasing result while keeping high machine readability.
Classical stability of M^pqr, Q^pqr, and N^pqr in d = 11 supergravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yasuda, O.
1984-09-24
We investigate the classical stability of Freund-Rubin-type solutions M^pqr (SU(3) x SU(2) x U(1)/SU(2) x U(1) x U(1)), Q^pqr (SU(2) x SU(2) x SU(2)/U(1) x U(1)), and N^pqr (SU(3) x U(1)/U(1) x U(1)) against relative dilatations between the coset directions. It is shown that M^pqr is stable only for 98/243 ≤ p^2/q^2 ≤ 6358/4563, Q^pqr is stable only for a certain region of p^2/r^2 and q^2/r^2, while N^pqr is stable for any p^2/q^2 against these small fluctuations.
A parameter estimation algorithm for spatial sine testing - Theory and evaluation
NASA Technical Reports Server (NTRS)
Rost, R. W.; Deblauwe, F.
1992-01-01
This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced mode of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.
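A hedged sketch of the data-reduction step: a sliding-window least-squares problem solved with a fresh QR factorization per window. This is only a generic illustration of windowed QR least squares; the recursive up/down-dating QR and the ARMA model structure used in the paper are not reproduced, and Phi, y, and window are hypothetical inputs.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def sliding_window_lstsq(Phi, y, window):
    """For each window of rows, solve min ||Phi_w @ theta - y_w|| via QR."""
    estimates = []
    for start in range(len(y) - window + 1):
        Pw = Phi[start:start + window]
        yw = y[start:start + window]
        Q, R = qr(Pw, mode='economic')
        estimates.append(solve_triangular(R, Q.T @ yw))
    return np.array(estimates)

# Toy usage: a 3-parameter model observed through noisy regressors.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((400, 3))
y = Phi @ np.array([0.5, -1.2, 2.0]) + 0.01 * rng.standard_normal(400)
print(sliding_window_lstsq(Phi, y, window=50)[0])   # ~[0.5, -1.2, 2.0]
```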
Oxford, John S; Lambkin, Robert; Guralnik, Mario; Rosenbloom, Richard A; Petteruti, Michael P; Digian, Kelly; Lefante, Carolyn
2007-01-01
Prophylaxis against influenza is difficult, and current approaches against pandemics may be ineffective because of shortages of the two proven classes of antivirals in the face of a large-scale infection. Herbal/natural products may represent an effective alternative to conventional attempts to protect against infection by avian influenza virus. QR-435, an all-natural compound of green tea extract and other agents, has been developed to provide protection against a wide range of viral infections. The antiviral activities of several QR-435 preparations as well as QR-435 (1) green tea extract were tested against A/Sydney/5/97 and A/Panama-Resvir 17 strains of avian influenza virus H3N2 by means of an assay based on Madin-Darby canine kidney cells. Toxic effects of QR-435 formulations on these cells were also evaluated as were the virucidal properties of a commercially available mask impregnated with QR-435. The efficacy of a QR-435/mask combination was compared with that of the QR control/mask combination, an untreated mask, and no mask. QR-435 had significant in vitro activity against H3N2 at concentrations that were not associated with significant cellular toxic effects. The antiviral activity of QR-435 (1) was similar to that of QR-435. Masks impregnated with QR-435 were highly effective in blocking the passage of live H3N2 virus. These preclinical results warrant further evaluation of the prophylactic use of QR-435 against viral infection in humans.
Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey.
Uzun, Vassilya; Bilgin, Sami
2016-01-01
For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospital grounds. These QR code bracelets link to the QR Code Identity website, where detailed information is stored; a smartphone or standalone QR code scanner can be used to scan the code. The design of this system allows authorized personnel (e.g., paramedics, firefighters, or police) to access more detailed patient information than the average smartphone user: emergency service professionals are authorized to access patient medical histories to improve the accuracy of medical treatment. In Istanbul, we tested the self-designed system with 174 participants. To analyze the QR Code Identity Tag system's usability, the participants completed the System Usability Scale questionnaire after using the system.
X-ray structural studies of quinone reductase 2 nanomolar range inhibitors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pegan, Scott D.; Sturdy, Megan; Ferry, Gilles
Quinone reductase 2 (QR2) is one of two members comprising the mammalian quinone reductase family of enzymes responsible for performing FAD-mediated reductions of quinone substrates. In contrast to quinone reductase 1 (QR1), which uses NAD(P)H as its co-substrate, QR2 utilizes a rare group of hydride donors, N-methyl or N-ribosyl nicotinamide. Several studies have linked QR2 to the generation of quinone free radicals, several neuronal degenerative diseases, and cancer. QR2 has also been identified as the third melatonin receptor (MT3) through in cellulo and in vitro inhibition of QR2 by traditional MT3 ligands, and through recent X-ray structures of human QR2 (hQR2) in complex with melatonin and 2-iodomelatonin. Several MT3-specific ligands have been developed that exhibit both potent in cellulo inhibition of hQR2 and nanomolar affinity for MT3. The potency of these ligands suggests their use as molecular probes for hQR2. However, no definitive correlation between traditionally obtained MT3 ligand affinity and hQR2 inhibition exists, limiting our understanding of how these ligands are accommodated in the hQR2 active site. To obtain a clearer relationship between the structures of developed MT3 ligands and their inhibitory properties, in cellulo and in vitro IC50 values were determined for a representative set of MT3 ligands (MCA-NAT, 2-I-MCANAT, prazosin, S26695, S32797, and S29434). Furthermore, X-ray structures for each of these ligands in complex with hQR2 were determined, allowing for a structural evaluation of the binding modes of these ligands in relation to the potency of MT3 ligands.
Randomized Trial of Smartphone-Based Evaluation for an Obstetrics and Gynecology Clerkship.
Sobhani, Nasim C; Fay, Emily E; Schiff, Melissa A; Stephenson-Famy, Alyssa; Debiec, Katherine E
2017-12-19
We hypothesized that compared to paper evaluations, a smartphone-based quick response (QR) evaluation tool would improve timeliness of feedback, enhance efficacy of giving and receiving feedback, and be as easy to use. We performed a randomized controlled trial of student and instructor experience with two evaluation tools in the OB/GYN clerkship at University of Washington School of Medicine (UWSOM). Sites were randomized to the QR or paper tool; students at QR sites received individualized QR codes at the beginning of the clerkship. Instructors and students completed postintervention surveys regarding the evaluation tool and associated feedback. We compared responses between groups using chi-squared tests. Participating clerkship sites included primary, tertiary, private practice and institutional settings affiliated with the University of Washington in the Washington, Wyoming, Alaska, Montana and Idaho region. Of the 29 OB/GYN UWSOM clerkship sites, 18 agreed to participate and were randomized. Of 29 eligible instructors, 25 (86%) completed the survey, with n = 18 using QR and n = 7 using paper. Of 161 eligible students, 102 (63%) completed the survey, with n = 54 using QR and n = 48 using paper. Compared to those using paper evaluations, instructors using QR evaluations were significantly more likely to agree that the evaluation tool was easy to understand (100% QR vs 43% paper, p = 0.002), the tool was effective in providing feedback (78% QR vs 29% paper, p = 0.002), and they felt comfortable approaching students with the tool (89% QR vs 43% paper, p = 0.002). Compared to those using paper evaluations, students using QR evaluations were less likely to agree the tool was effective in eliciting feedback (QR 43% vs paper 55%, p = 0.042). Instructors found QR evaluations superior to paper evaluations for providing feedback to medical students, whereas students found QR evaluations less effective for feedback. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Research on distributed heterogeneous data PCA algorithm based on cloud platform
NASA Astrophysics Data System (ADS)
Zhang, Jin; Huang, Gang
2018-05-01
Principal component analysis (PCA) of heterogeneous data sets can address the limited scalability of centralized data. In order to reduce the generation of intermediate data and the error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm performs the eigenvalue computation using Householder tridiagonalization and QR factorization, and calculates the error component of the heterogeneous database associated with the public key to obtain the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
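For the eigenvalue step mentioned above, a minimal sketch is given below: a symmetric covariance matrix is reduced to tridiagonal (Hessenberg) form by Householder reflections and then iterated with plain, unshifted QR steps until its diagonal holds the eigenvalues. This illustrates only the Householder-plus-QR idea, not the distributed, public-key-assisted algorithm of the paper.

```python
import numpy as np
from scipy.linalg import hessenberg, qr

def eigvals_tridiag_qr(S, iters=500):
    """Eigenvalues of a symmetric matrix via Householder reduction to
    tridiagonal (Hessenberg) form followed by unshifted QR iterations."""
    T = hessenberg(S)           # Householder reduction; tridiagonal for symmetric S
    for _ in range(iters):
        Q, R = qr(T)
        T = R @ Q               # similarity transform, preserves eigenvalues
    return np.sort(np.diag(T))[::-1]

# PCA-flavored usage: spectrum of a sample covariance matrix.
X = np.random.default_rng(0).standard_normal((1000, 6))
S = np.cov(X, rowvar=False)
print(eigvals_tridiag_qr(S))
print(np.sort(np.linalg.eigvalsh(S))[::-1])   # reference values
```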
Optical Data Processing for Missile Guidance.
1984-11-21
and architectures for back-substitution and the solution of triangular systems of LAEs (linear algebraic equations). Most recently, a parallel QR... Calculation of I1 is quite difficult since the exact Z matrix is quite ill-conditioned. The two VC choices considered in our system are E - I and E I - 0... shown in fig. 1. It shows the ship in water with a sky and shoreline back... These operations are most commonly referred to as segmentation and also
Authenticated communication from quantum readout of PUFs
NASA Astrophysics Data System (ADS)
Škorić, Boris; Pinkse, Pepijn W. H.; Mosk, Allard P.
2017-08-01
Quantum readout of physical unclonable functions (PUFs) is a recently introduced method for remote authentication of objects. We present an extension of the protocol to enable the authentication of data: A verifier can check if received classical data were sent by the PUF holder. We call this modification QR-d or, in the case of the optical-PUF implementation, QSA-d. We discuss how QSA-d can be operated in a parallel way. We also present a protocol for authenticating quantum states.
Studies on Radar Sensor Networks
2007-08-08
scheme in which a 2-D image was created by adding voltages with the appropriate time offset. Simulation results show that our DCT-based scheme works... using RSNs in terms of the probability of miss detection (P_MD) and the root mean square error (RMSE). Simulation results showed that multi-target detection... Simulation results are presented to evaluate the feasibility and effectiveness of the proposed JMIC algorithm in a query surveillance region. 5 SVD-QR and
NASA Astrophysics Data System (ADS)
Cui, Boya; Kielb, Edward; Luo, Jiajun; Tang, Yang; Grayson, Matthew
Superlattices and narrow-gap semiconductors often host multiple conducting species, such as electrons and holes, requiring a mobility spectral analysis (MSA) method to separate their contributions to the conductivity. Here, a least-squares MSA method is introduced: the QR-algorithm Fourier-domain MSA (FMSA). Like other MSA methods, the FMSA sorts the conductivity contributions of different carrier species from magnetotransport measurements, arriving at a best fit to the experimentally measured longitudinal and Hall conductivities σxx and σxy, respectively. This method distinguishes itself from other methods by using the so-called QR algorithm of linear algebra to achieve rapid convergence of the mobility spectrum as the solution to an eigenvalue problem, and by alternately solving this problem in both the mobility domain and its Fourier reciprocal space. The result accurately fits a mobility range spanning nearly four orders of magnitude (μ = 300 to 1,000,000 cm²/(V·s)). This method resolves the mobility spectra as well as, or better than, competing MSA methods while also achieving high computational efficiency, requiring less than 30 seconds on average to converge to a solution on a standard desktop computer. Acknowledgement: Funded by AFOSR FA9550-15-1-0377 and AFOSR FA9550-15-1-0247.
Utility of QR codes in biological collections
Diazgranados, Mauricio; Funk, Vicki A.
2013-01-01
The popularity of QR codes for encoding information such as URIs has increased exponentially in step with the technological advances and availability of smartphones, digital tablets, and other electronic devices. We propose using QR codes on specimens in biological collections to facilitate linking vouchers’ electronic information with their associated collections. QR codes can efficiently provide such links for connecting collections, photographs, maps, ecosystem notes, citations, and even GenBank sequences. QR codes have numerous advantages over barcodes, including their small size, superior security mechanisms, increased complexity and quantity of information, and low implementation cost. The scope of this paper is to initiate an academic discussion about using QR codes on specimens in biological collections. PMID:24198709
Experimental QR code optical encryption: noise-free data recovering.
Barrera, John Fredy; Mira-Agudelo, Alejandro; Torroba, Roberto
2014-05-15
We report, to our knowledge for the first time, the experimental implementation of a quick response (QR) code as a "container" in an optical encryption system. A joint transform correlator architecture in an interferometric configuration is chosen as the experimental scheme. As the implementation is not possible in a single step, a multiplexing procedure is applied to encrypt the QR code of the original information. Once the QR code is correctly decrypted, the speckle noise present in the recovered QR code is eliminated by a simple digital procedure. Finally, the original information is retrieved completely free of any kind of degradation after reading the QR code. Additionally, we propose and implement a new protocol in which the reception of the encrypted QR code and its decryption, the digital block processing, and the reading of the decrypted QR code are performed using only one device (smartphone, tablet, or computer). The overall method proves to produce an outcome far more attractive, making the adoption of the technique a plausible option. Experimental results are presented to demonstrate the practicality of the proposed security system.
Amperometric monitoring of quercetin permeation through skin membranes.
Rembiesa, Jadwiga; Gari, Hala; Engblom, Johan; Ruzgas, Tautgirdas
2015-12-30
Transdermal delivery of quercetin (QR, 3,3',4',5,7-pentahydroxyflavone), a natural flavonoid with considerable antioxidant capacity, is important for the medical treatment of, e.g., skin disorders. QR permeability through skin is low, which at the same time makes the monitoring of percutaneous QR penetration difficult. The objective of this study was to assess an electrochemical method for monitoring QR penetration through skin membranes. An electrode was covered with the membrane, exposed to QR solution, and the electrode current was measured. The registered current was due to electro-oxidation of QR penetrating the membrane. Exploiting strict current-QR flux relationships, the diffusion coefficient, D, of QR in skin and dialysis membranes was calculated. The D values were strongly dependent on the theoretical model and parameters assumed in the processing of the amperometric data. The highest values of D were in the range of 1.6-6.1×10⁻⁷ cm²/s. This was reached only for skin membranes pretreated with a buffer-ethanol mixture for more than 24 h. QR solutions containing the penetration enhancers ethanol and l-menthol markedly increased D values. The results demonstrate that the electrochemical setup makes it possible to assess penetration characteristics as well as to monitor penetration dynamics, which is more difficult with traditional methods using Franz cells. Copyright © 2015 Elsevier B.V. All rights reserved.
Efficacy and Safety of Bromocriptine-QR in Type 2 Diabetes: A Systematic Review and Meta-Analysis.
Liang, W; Gao, L; Li, N; Wang, B; Wang, L; Wang, Y; Yang, H; You, L; Hou, J; Chen, S; Zhu, H; Jiang, Y; Pan, H
2015-10-01
Bromocriptine-QR (quick release) is a novel treatment for type 2 diabetes. The objective of this study is to assess the efficacy and safety of bromocriptine-QR in adults with type 2 diabetes mellitus based on randomized controlled trials published in peer-reviewed journals or as abstracts. We performed a comprehensive literature search of MEDLINE, Pubmed, Web of Science, EMBASE, and the Cochrane Library up to May 2015. Randomized controlled trials of bromocriptine-QR therapy in type 2 diabetes mellitus were eligible. Two reviewers independently assessed the eligibility of trials based on predefined inclusion criteria. Information was collected concerning basic study data, patient characteristics, efficacy and safety outcomes, and methodological quality. Bromocriptine-QR add-on therapy lowered hemoglobin A1c compared with placebo (weighted mean difference, -6.52 mmol/mol; 95% CI, -8.07 to -4.97 mmol/mol). Bromocriptine-QR increased the likelihood of achieving an HbA1c level ≤ 53 mmol/mol (≤ 7.0%) (32.0 vs. 9.5%; odds ratio, 4.57; 95% CI, 2.42-8.62). Fasting plasma glucose was reduced with bromocriptine-QR compared with placebo (weighted mean difference, -1.04 mmol/l; 95% CI, -1.49 to -0.59 mmol/l). Moreover, bromocriptine-QR had neutral effects on postprandial glycemia, body mass index (BMI), and lipid profile. Bromocriptine-QR had more gastrointestinal side effects of nausea and vomiting. Bromocriptine-QR had no increased risk of hypoglycemia, hypotension, or cardiovascular effects. Bromocriptine-QR therapy offers an alternative option to currently available antidiabetic agents for adults with type 2 diabetes mellitus. Neither hypoglycemia nor other metabolic changes occur with this drug. More data on long-term efficacy and safety are needed. © Georg Thieme Verlag KG Stuttgart · New York.
Singular value decomposition utilizing parallel algorithms on graphical processors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotas, Charlotte W; Barhen, Jacob
2011-01-01
One of the current challenges in underwater acoustic array signal processing is the detection of quiet targets in the presence of noise. In order to enable robust detection, one of the key processing steps requires data and replica whitening. This, in turn, involves the eigen-decomposition of the sample spectral matrix, C_x = (1/K) Σ_k X(k)X^H(k), where X(k) denotes a single frequency snapshot with an element for each element of the array. By employing the singular value decomposition (SVD) method, the eigenvectors and eigenvalues can be determined directly from the data without computing the sample covariance matrix, reducing the computational requirements for a given level of accuracy (van Trees, Optimum Array Processing). (Recall that the SVD of a complex matrix A involves determining U, Σ, and V such that A = UΣV^H, where U and V are orthonormal and Σ is a positive, real, diagonal matrix containing the singular values of A. U and V are the eigenvectors of AA^H and A^HA, respectively, while the singular values are the square roots of the eigenvalues of AA^H.) Because it is desirable to be able to compute these quantities in real time, an efficient technique for computing the SVD is vital. In addition, emerging multicore processors like graphical processing units (GPUs) are bringing parallel processing capabilities to an ever increasing number of users. Since the computational tasks involved in array signal processing are well suited for parallelization, it is expected that these computations will be implemented using GPUs as soon as users have the necessary computational tools available to them. Thus, it is important to have an SVD algorithm that is suitable for these processors. This work explores the effectiveness of two different parallel SVD implementations on an NVIDIA Tesla C2050 GPU (14 multiprocessors, 32 cores per multiprocessor, 1.15 GHz clock speed). The first algorithm is a two-step method which bidiagonalizes the matrix using Householder transformations and then diagonalizes the intermediate bidiagonal matrix through implicit QR shifts. This is similar to the approach implemented for real matrices by Lahabar and Narayanan ("Singular Value Decomposition on GPU using CUDA", IEEE International Parallel Distributed Processing Symposium 2009). The implementation is done in a hybrid manner, with the bidiagonalization stage done using the GPU while the diagonalization stage is done using the CPU, with the GPU used to update the U and V matrices. The second algorithm is based on a one-sided Jacobi scheme utilizing a sequence of pair-wise column orthogonalizations such that A is replaced by AV until the resulting matrix is sufficiently orthogonal (that is, equal to UΣ). V is obtained from the sequence of orthogonalizations, while Σ can be found from the square roots of the diagonal elements of A^HA and, once Σ is known, U can be found by column scaling the resulting matrix. These implementations utilize CUDA Fortran and NVIDIA's CUBLAS library. The primary goal of this study is to quantify the comparative performance of these two techniques against themselves and other standard implementations (for example, MATLAB). Considering that there is significant overhead associated with transferring data to the GPU and with synchronization between the GPU and the host CPU, it is also important to understand when it is worthwhile to use the GPU in terms of the matrix size and number of concurrent SVDs to be calculated.
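To make the second (one-sided Jacobi) approach concrete, here is a minimal serial NumPy sketch for real matrices: pairs of columns are rotated until mutually orthogonal, the rotations accumulate into V, the column norms give Σ, and column scaling gives U. It illustrates only the Jacobi scheme itself; the complex arithmetic, GPU kernels, and CUBLAS details of the study are omitted.

```python
import numpy as np

def svd_one_sided_jacobi(A, tol=1e-12, max_sweeps=30):
    """One-sided Jacobi SVD of a real m x n matrix (m >= n)."""
    A = np.array(A, dtype=float)
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                app = A[:, p] @ A[:, p]
                aqq = A[:, q] @ A[:, q]
                apq = A[:, p] @ A[:, q]
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue                      # columns already orthogonal
                converged = False
                tau = (aqq - app) / (2.0 * apq)
                t = (1.0 if tau >= 0 else -1.0) / (abs(tau) + np.hypot(1.0, tau))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                J = np.array([[c, s], [-s, c]])
                A[:, [p, q]] = A[:, [p, q]] @ J   # orthogonalize the column pair
                V[:, [p, q]] = V[:, [p, q]] @ J   # accumulate right singular vectors
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)             # singular values
    U = A / sigma                                 # left singular vectors
    return U, sigma, V

A = np.random.default_rng(0).standard_normal((50, 8))
U, s, V = svd_one_sided_jacobi(A)
print(np.allclose(U * s @ V.T, A))                # True: A = U diag(s) V^T
```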
A simple suboptimal least-squares algorithm for attitude determination with multiple sensors
NASA Technical Reports Server (NTRS)
Brozenec, Thomas F.; Bender, Douglas J.
1994-01-01
Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as matrix determinant, matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is faster than all but a similarly specialized version of the QUEST algorithm. We also introduce a novel measurement averaging technique which reduces the n-measurement case to the two-measurement case for our particular application, a star tracker and earth sensor mounted on an earth-pointed geosynchronous communications satellite. Using this technique, many n-measurement problems reduce to less than or equal to 3 measurements; this reduces the amount of required calculation without significant degradation in accuracy. Finally, we present the results of some tests which compare the least-squares algorithm with the QUEST and FOAM algorithms in the two-measurement case. For our example case, all three algorithms performed with similar accuracy.
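A brief sketch of the unconstrained least-squares step described above: stack the reference and measurement unit vectors as rows and solve for the attitude matrix with an economy QR factorization. The variable names and toy data are illustrative only; the measurement-averaging trick and the small-angle simplification from the paper are not included.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def attitude_lstsq_qr(ref_vecs, meas_vecs):
    """Least-squares A minimizing sum_i ||A r_i - m_i||^2, ignoring the
    orthogonality constraint.  ref_vecs, meas_vecs are (n, 3) row-stacks."""
    Q, R = qr(ref_vecs, mode='economic')          # ref_vecs = Q R
    A_T = solve_triangular(R, Q.T @ meas_vecs)    # solves for A^T
    return A_T.T

# Toy check: recover a small rotation from noisy measurements.
rng = np.random.default_rng(2)
angle = np.deg2rad(3.0)
A_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
refs = rng.standard_normal((20, 3))
refs /= np.linalg.norm(refs, axis=1, keepdims=True)
meas = refs @ A_true.T + 1e-4 * rng.standard_normal((20, 3))
A_est = attitude_lstsq_qr(refs, meas)
print(np.allclose(A_est, A_true, atol=1e-3))      # nearly orthogonal estimate
```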
Ink-constrained halftoning with application to QR codes
NASA Astrophysics Data System (ADS)
Bayeh, Marzieh; Compaan, Erin; Lindsey, Theodore; Orlow, Nathan; Melczer, Stephen; Voller, Zachary
2014-01-01
This paper examines adding visually significant, human-recognizable data into QR codes without affecting their machine readability by utilizing known methods in image processing. Each module of a given QR code is broken down into pixels, which are halftoned in such a way as to keep the QR code structure while revealing aspects of the secondary image to the human eye. The loss of information associated with this procedure is discussed, and entropy values are calculated for examples given in the paper. Numerous examples of QR codes with embedded images are included.
Fast sparse recovery and coherence factor weighting in optoacoustic tomography
NASA Astrophysics Data System (ADS)
He, Hailong; Prakash, Jaya; Buehler, Andreas; Ntziachristos, Vasilis
2017-03-01
Sparse recovery algorithms have shown great potential for reconstructing images from limited-view datasets in optoacoustic tomography, with the disadvantage of being computationally expensive. In this paper, we improve the fast-converging Split Augmented Lagrangian Shrinkage Algorithm (SALSA) using a least-squares QR (LSQR) formulation to perform accelerated reconstructions. Further, a coherence factor is calculated to weight the final reconstruction result, which can further reduce artifacts arising in limited-view scenarios and acoustically heterogeneous media. Several phantom and biological experiments indicate that the accelerated SALSA method with coherence factor (ASALSA-CF) can provide improved reconstructions and much faster convergence compared to existing sparse recovery methods.
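As a rough sketch of the LSQR building block (not the authors' ASALSA-CF method), the snippet below reconstructs a sparse vector from a hypothetical forward model using SciPy's iterative lsqr solver, which only needs matrix-vector products with the system matrix.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Hypothetical forward model A and simulated measurements b = A x_true.
A = sparse_random(2000, 4096, density=0.01, format='csr', random_state=0)
x_true = np.zeros(4096)
x_true[::97] = 1.0                                 # sparse "image"
b = A @ x_true

# Damped LSQR: min ||A x - b||^2 + damp^2 ||x||^2, solved iteratively.
x_rec = lsqr(A, b, damp=1e-3, iter_lim=300)[0]
print(np.linalg.norm(A @ x_rec - b) / np.linalg.norm(b))   # small residual
```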
NASA Astrophysics Data System (ADS)
Trejos, Sorayda; Fredy Barrera, John; Torroba, Roberto
2015-08-01
We present for the first time an optical encrypting-decrypting protocol for recovering messages without speckle noise. This is a digital holographic technique using a 2f scheme to process QR code entries. In the procedure, the letters used to compose eventual messages are individually converted into a QR code, and then each QR code is divided into portions. Through a holographic technique, we store each processed portion. After filtering and repositioning, we add all processed data to create a single pack, thus simplifying the handling and recovery of multiple QR code images and representing the first multiplexing procedure applied to processed QR codes. All QR codes are recovered in a single step and in the same plane, showing neither cross-talk nor noise problems as in other methods. Experiments have been conducted using an interferometric configuration, and comparisons between unprocessed and recovered QR codes have been performed, showing differences between them due to the involved processing. Recovered QR codes can be successfully scanned, thanks to their noise tolerance. Finally, the appropriate sequence in the scanning of the recovered QR codes brings a noiseless retrieved message. Additionally, to procure maximum security, the multiplexed pack can be multiplied by a digital diffuser so as to encrypt it. The encrypted pack is easily decoded by multiplying it by the complex conjugate of the diffuser. As this is a digital operation, no noise is added. Therefore, this technique is threefold robust, involving multiplexing, encryption, and the need of a sequence to retrieve the outcome.
ERIC Educational Resources Information Center
Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed
2013-01-01
Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…
QR Codes as Finding Aides: Linking Electronic and Print Library Resources
ERIC Educational Resources Information Center
Kane, Danielle; Schneidewind, Jeff
2011-01-01
As part of a focused, methodical, and evaluative approach to emerging technologies, QR codes are one of many new technologies being used by the UC Irvine Libraries. QR codes provide simple connections between print and virtual resources. In summer 2010, a small task force began to investigate how QR codes could be used to provide information and…
Kim, Beom-Chan; Hwang, Hyun-Jung; An, Hyoung-Tae; Lee, Hyun; Park, Jun-Sub; Hong, Jin; Ko, Jesang; Kim, Chungho; Lee, Jae-Seon; Ko, Young-Gyu
2016-01-01
We previously demonstrated that cell-surface gC1qR is a key regulator of lamellipodia formation and cancer metastasis. Here, we screened a monoclonal mouse antibody against gC1qR to prevent cell migration by neutralizing cell-surface gC1qR. The anti-gC1qR antibody prevented growth factor-stimulated lamellipodia formation, cell migration and focal adhesion kinase activation by inactivating receptor tyrosine kinases (RTKs) in various cancer cells such as A549, MDA-MB-231, MCF7 and HeLa cells. The antibody neutralization of cell-surface gC1qR also inhibited angiogenesis because the anti-gC1qR antibody prevented growth factor-stimulated RTK activation, lamellipodia formation, cell migration and tube formation in HUVEC. In addition, we found that A549 tumorigenesis was reduced in a xenograft mouse model by following the administration of the anti-gC1qR antibody. With these data, we can conclude that the antibody neutralization of cell-surface gC1qR could be a good therapeutic strategy for cancer treatment. PMID:27363031
Avidan, Alexander; Weissman, Charles; Levin, Phillip D
2015-04-01
Quick response (QR) codes containing anesthesia syllabus data were introduced into an anesthesia information management system. The code was generated automatically at the conclusion of each case and was available for resident case logging using a smartphone or tablet. The goal of this study was to evaluate the use and usability/user-friendliness of such a system. Resident case logging practices were assessed prior to introducing the QR codes. QR code use and satisfaction among residents were reassessed at three and six months. Before QR code introduction, only 12/23 (52.2%) residents maintained a case log. Most of the remaining residents (9/23, 39.1%) expected to receive a case list from the anesthesia information management system database at the end of their residency. At three months and six months, 17/26 (65.4%) and 15/25 (60.0%) residents, respectively, were using the QR codes. Satisfaction was rated as very good or good. QR codes for residents' case logging with smartphones or tablets were successfully introduced in an anesthesia information management system and used by most residents. QR codes can be successfully implemented into medical practice to support data transfer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Bolling, Bradley W; Parkin, Kirk L
2008-11-26
The fractionation of soy flour, directed by a cellular bioassay for induction of phase 2 detoxification enzymes, was used to identify quinone reductase (QR) inducing agents. A phospholipid-depleted, 80% methanol-partitioned isolate from a crude ethanol extract of soy flour was resolved using normal-phase medium-pressure liquid chromatography (MPLC). Early-eluting fractions were found to be the most potent QR inducing agents among the separated fractions. Fraction 2 was the most potent, doubling QR at <2 µg/mL. Further fractionation of this isolate led to the identification of several constituents. Fatty acids and sn-1 and sn-2 monoacylglycerols were identified, but were not highly potent QR inducers. Benzofuran-3-carbaldehyde, 4-hydroxybenzaldehyde, 4-ethoxybenzoic acid, 4-ethoxycinnamic acid, benzofuran-2-carboxylic ethyl ester, and ferulic acid ethyl ester (FAEE) were also identified as QR inducing constituents of this fraction. FAEE was the most potent of the identified constituents, doubling QR specific activity at 3.2 µM in the cellular bioassay.
An introduction to QR Codes: linking libraries and mobile patrons.
Hoy, Matthew B
2011-01-01
QR codes, or "Quick Response" codes, are two-dimensional barcodes that can be scanned by mobile smartphone cameras. These codes can be used to provide fast access to URLs, telephone numbers, and short passages of text. With the rapid adoption of smartphones, librarians are able to use QR codes to promote services and help library users find materials quickly and independently. This article will explain what QR codes are, discuss how they can be used in the library, and describe issues surrounding their use. A list of resources for generating and scanning QR codes is also provided.
Convolutional encoding of self-dual codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1994-01-01
There exist almost complete convolutional encodings of self-dual codes, i.e., block codes of rate 1/2 with weights w, w ≡ 0 (mod 4). The codes are of length 8m, with the convolutional portion of length 8m-2 and the nonsystematic information of length 4m-1. The last two bits are parity checks on the two (4m-1)-length parity sequences. The final information bit complements one of the extended parity sequences of length 4m. Solomon and van Tilborg have developed algorithms to generate these for the Quadratic Residue (QR) Codes of lengths 48 and beyond. For these codes and reasonable constraint lengths, there are sequential decodings for both hard and soft decisions. There are also possible Viterbi-type decodings that may be simple, as in a convolutional encoding/decoding of the extended Golay Code. In addition, the previously found constraint length K = 9 for the QR (48, 24; 12) Code is lowered here to K = 8.
ERIC Educational Resources Information Center
Chin, Kai-Yi; Lee, Ko-Fong; Chen, Yen-Lin
2015-01-01
This study developed a QR-based U-Learning Material Production System (QR-ULMPS) that provides teachers with an education tool to motivate college level students enrolled in a liberal arts course. QR-ULMPS was specifically designed to support the development of u-learning materials and create an engaging context-aware u-learning environment for…
Cytoadhesion to gC1qR through Plasmodium falciparum Erythrocyte Membrane Protein 1 in Severe Malaria
Magallón-Tejada, Ariel; Machevo, Sónia; Cisteró, Pau; Lavstsen, Thomas; Aide, Pedro; Jiménez, Alfons; Turner, Louise; Gupta, Himanshu; De Las Salas, Briegel; Mandomando, Inacio; Wang, Christian W.; Petersen, Jens E. V.; Muñoz, Jose; Gascón, Joaquim; Macete, Eusebio; Alonso, Pedro L.; Chitnis, Chetan E.
2016-01-01
Cytoadhesion of Plasmodium falciparum infected erythrocytes to gC1qR has been associated with severe malaria, but the parasite ligand involved is currently unknown. To assess if binding to gC1qR is mediated through the P. falciparum erythrocyte membrane protein 1 (PfEMP1) family, we analyzed by static binding assays and qPCR the cytoadhesion and var gene transcriptional profile of 86 P. falciparum isolates from Mozambican children with severe and uncomplicated malaria, as well as of a P. falciparum 3D7 line selected for binding to gC1qR (Pf3D7gC1qR). Transcript levels of DC8 correlated positively with cytoadhesion to gC1qR (rho = 0.287, P = 0.007), were higher in isolates from children with severe anemia than with uncomplicated malaria, as well as in isolates from Europeans presenting a first episode of malaria (n = 21) than Mozambican adults (n = 25), and were associated with an increased IgG recognition of infected erythrocytes by flow cytometry. Pf3D7gC1qR overexpressed the DC8 type PFD0020c (5.3-fold transcript levels relative to Seryl-tRNA-synthetase gene) compared to the unselected line (0.001-fold). DBLβ12 from PFD0020c bound to gC1qR in ELISA-based binding assays and polyclonal antibodies against this domain were able to inhibit binding to gC1qR of Pf3D7gC1qR and four Mozambican P. falciparum isolates by 50%. Our results show that DC8-type PfEMP1s mediate binding to gC1qR through conserved surface epitopes in DBLβ12 domain which can be inhibited by strain-transcending functional antibodies. This study supports a key role for gC1qR in malaria-associated endovascular pathogenesis and suggests the feasibility of designing interventions against severe malaria targeting this specific interaction. PMID:27835682
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Feng; Miyakawa, Takuya; Kataoka, Michihiko
2014-04-18
Highlights: • Crystal structure of AtQR has been determined at 1.72 Å. • NADH binding induces the formation of the substrate-binding site. • AtQR possesses a conserved hydrophobic wall for stereospecific binding of the substrate. • An additional residue, Glu197, is critical to the high binding affinity. - Abstract: (R)-3-Quinuclidinol, a useful compound for the synthesis of various pharmaceuticals, can be enantioselectively produced from 3-quinuclidinone by 3-quinuclidinone reductase. Recently, a novel NADH-dependent 3-quinuclidinone reductase (AtQR) was isolated from Agrobacterium tumefaciens and showed much higher substrate-binding affinity (>100-fold) than the reported 3-quinuclidinone reductase (RrQR) from Rhodotorula rubra. Here, we report the crystal structure of AtQR at 1.72 Å. Three NADH-bound protomers and one NADH-free protomer form a tetrameric structure in an asymmetric unit of the crystals. NADH not only acts as a proton donor, but also contributes to the stability of the α7 helix. This helix is a unique and functionally significant part of AtQR and helps form a deep catalytic cavity. AtQR, like RrQR, has all three catalytic residues of the short-chain dehydrogenase/reductase family and the hydrophobic wall for the enantioselective reduction of 3-quinuclidinone. An additional residue on the α7 helix, Glu197, exists near the active site of AtQR. This acidic residue is considered to form a direct interaction with the amine part of 3-quinuclidinone, which contributes to substrate orientation and enhancement of substrate-binding affinity. Mutational analyses also support that Glu197 is an indispensable residue for the activity.
NASA Astrophysics Data System (ADS)
Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan
2015-10-01
In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as a secret key and are shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akileswaran, L.; Brock, B.J.; Cereghino, J.L.
1999-02-01
A cDNA clone encoding a quinone reductase (QR) from the white rot basidiomycete Phanerochaete chrysosporium was isolated and sequenced. The cDNA consisted of 1,007 nucleotides and a poly(A) tail and encoded a deduced protein containing 271 amino acids. The experimentally determined eight-amino-acid N-terminal sequence of the purified QR protein from P. chrysosporium matched amino acids 72 to 79 of the predicted translation product of the cDNA. The Mr of the predicted translation product, beginning with Pro-72, was essentially identical to the experimentally determined Mr of one monomer of the QR dimer, and this finding suggested that QR is synthesized as a proenzyme. The results of in vitro transcription-translation experiments suggested that QR is synthesized as a proenzyme with a 71-amino-acid leader sequence. This leader sequence contains two potential KEX2 cleavage sites and numerous potential cleavage sites for dipeptidyl aminopeptidase. The QR activity in cultures of P. chrysosporium increased following the addition of 2-dimethoxybenzoquinone, vanillic acid, or several other aromatic compounds. An immunoblot analysis indicated that induction resulted in an increase in the amount of QR protein, and a Northern blot analysis indicated that this regulation occurs at the level of the qr mRNA.
QR code based noise-free optical encryption and decryption of a gray scale image
NASA Astrophysics Data System (ADS)
Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-03-01
In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.
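A minimal sketch of the QR-code "container" step only, assuming the third-party qrcode package and OpenCV are available: the payload is rendered as a QR image and read back with a standard detector. The optical encryption and decryption stages of the proposed scheme are represented by a placeholder comment.

```python
import numpy as np
import qrcode                      # third-party QR generator (assumed installed)
import cv2                         # OpenCV, used here only for its QR reader

payload = "compressed gray-scale image data or any short message"
qr_img = np.array(qrcode.make(payload).convert("L"), dtype=np.uint8)

# ... the optical encryption / decryption of qr_img would take place here ...

decoded, points, _ = cv2.QRCodeDetector().detectAndDecode(qr_img)
print(decoded == payload)          # True when the recovered code is noise-free
```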
24 CFR 902.35 - Financial condition scoring and thresholds.
Code of Federal Regulations, 2013 CFR
2013-04-01
... subindicators of financial condition indicator are: (1) Quick Ratio (QR). The QR compares quick assets to... include inventory. Current liabilities are those liabilities that are due within the next 12 months. A QR...
24 CFR 902.35 - Financial condition scoring and thresholds.
Code of Federal Regulations, 2014 CFR
2014-04-01
... subindicators of financial condition indicator are: (1) Quick Ratio (QR). The QR compares quick assets to... include inventory. Current liabilities are those liabilities that are due within the next 12 months. A QR...
24 CFR 902.35 - Financial condition scoring and thresholds.
Code of Federal Regulations, 2012 CFR
2012-04-01
... subindicators of financial condition indicator are: (1) Quick Ratio (QR). The QR compares quick assets to... include inventory. Current liabilities are those liabilities that are due within the next 12 months. A QR...
24 CFR 902.35 - Financial condition scoring and thresholds.
Code of Federal Regulations, 2011 CFR
2011-04-01
... subindicators of financial condition indicator are: (1) Quick Ratio (QR). The QR compares quick assets to... include inventory. Current liabilities are those liabilities that are due within the next 12 months. A QR...
Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains
NASA Astrophysics Data System (ADS)
Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao
2017-11-01
A new concept for a gyrator transform (GT) encryption scheme is proposed in this paper. We present a novel optical image encryption method using a quick response (QR) code and multilevel fingerprint keys in GT domains. In this method, an original image is first transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the cipher-text by using multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are the GTs' rotation angles and the multilevel fingerprints. Biometrics and cryptography are integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. The method of applying QR codes and fingerprints in GT domains has much potential for future information security applications.
Optical image encryption based on real-valued coding and subtracting with the help of QR code
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng
2015-08-01
A novel optical image encryption method based on real-valued coding and subtracting is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then the QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference, and the ratio between the intensities of the two decryption light beams.
Technique for Solving Electrically Small to Large Structures for Broadband Applications
NASA Technical Reports Server (NTRS)
Jandhyala, Vikram; Chowdhury, Indranil
2011-01-01
Fast iterative algorithms are often used for solving Method of Moments (MoM) systems, having a large number of unknowns, to determine current distribution and other parameters. The most commonly used fast methods include the fast multipole method (FMM), the precorrected fast Fourier transform (PFFT), and low-rank QR compression methods. These methods reduce the O(N^2) memory and time requirements to O(N log N) by compressing the dense MoM system so as to exploit the physics of Green's function interactions. FFT-based techniques for solving such problems are efficient for space-filling and uniform structures, but their performance degrades substantially for non-uniformly distributed structures due to the inherent need to employ a uniform global grid. FMM or QR techniques are better suited than FFT techniques; however, neither the FMM nor the QR technique can be used at all frequencies. This method has been developed to efficiently solve for a desired parameter of a system or device that can include both electrically large FMM elements and electrically small QR elements. The system or device is set up as an octree structure that can include regions of both the FMM type and the QR type. The system is enclosed in a cube at the 0th level, and this cube is split into eight child cubes, forming the cubes at the 1st level; the splitting process is repeated recursively for cubes at successive levels until a desired number of levels is created. For each cube that is thus formed, neighbor lists and interaction lists are maintained. An iterative solver is then used to determine a first matrix-vector product for any electrically large elements as well as a second matrix-vector product for any electrically small elements that are included in the structure. These matrix-vector products for the electrically large and small elements are combined, and a net delta for the combination of the matrix-vector products is determined. The iteration continues until a net delta is obtained that is within predefined limits. The matrix-vector products that were last obtained are used to solve for the desired parameter. The solution for the desired parameter is then presented to a user in a tangible form; for example, on a display.
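The sketch below is a toy, matrix-free version of the combined solve: a LinearOperator sums a dense "near-field" block and a compressed low-rank "far-field" block (standing in for FMM- or QR-compressed interactions), and GMRES iterates on the combined matrix-vector product. All matrices here are random stand-ins, not an actual MoM system.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 500
near = np.eye(n) + 0.001 * rng.random((n, n))     # stand-in near-field block
U = 0.0001 * rng.random((n, 10))                  # stand-in low-rank far field,
Vt = rng.random((10, n))                          #   e.g. from a rank-revealing QR

def matvec(x):
    # Apply both contributions without assembling the full dense matrix.
    return near @ x + U @ (Vt @ x)

A = LinearOperator((n, n), matvec=matvec)
b = rng.random(n)
x, info = gmres(A, b)
print(info, np.linalg.norm(matvec(x) - b))        # info == 0 on convergence
```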
Chamarthi, Bindu; Cincotta, Anthony H
2017-05-01
The concurrent use of an insulin sensitizer in type 2 diabetes mellitus (T2DM) patients with inadequate glycemic control on basal-bolus insulin may help improve glycemic control while limiting further insulin requirements. Bromocriptine-QR (B-QR), a quick-release, sympatholytic dopamine D2 receptor agonist therapy for T2DM, is a postprandial insulin sensitizer. This study evaluated the effect of B-QR on dysglycemia in T2DM subjects with suboptimal glycemic control on basal-bolus insulin plus metformin. The effect of once-daily morning administration of B-QR on dysglycemia was evaluated in 60 T2DM subjects derived from the Cycloset Safety Trial, with HbA1c >7% on basal-bolus insulin plus metformin at baseline, randomized to B-QR (N = 44) versus placebo (N = 16), who completed 12 weeks of study drug treatment. The analyses also included a subset of subjects on high-dose insulin (total daily insulin dose (TDID) ≥70 units; N = 36: 27 B-QR, 9 placebo). Subjects were well matched at baseline. After 12 weeks of B-QR treatment, mean HbA1c (%) decreased by 0.73 relative to baseline (p < 0.001) and by 1.13 relative to placebo (p < 0.001). In the high-dose insulin subset, B-QR therapy resulted in HbA1c (%) reductions of 0.95 and 1.49 relative to baseline (p < 0.001) and placebo (p = 0.001), respectively. Secondary analyses of the treatment effect at 24 and 52 weeks demonstrated similar influences of B-QR on HbA1c. The changes in fasting plasma glucose (FPG) and TDID within each treatment group were not significant. More subjects achieved HbA1c ≤7% at 12 weeks with B-QR relative to placebo (36.4% B-QR vs 0% placebo, Fisher's exact 2-sided p = 0.003 in the entire cohort; 37% vs 0%, 2-sided p = 0.039 in the high-dose insulin subset). B-QR therapy improves glycemic control in T2DM subjects whose glycemia is poorly controlled on metformin plus basal-bolus insulin, including individuals on high-dose basal-bolus insulin. This glycemic impact occurred without significant change in FPG, suggesting a postprandial glucose-lowering mechanism of action. Cycloset Safety Trial registration: ClinicalTrials.gov Identifier: NCT00377676.
Oxidative stress and neurodegeneration: The possible contribution of quinone reductase 2.
Cassagnes, Laure-Estelle; Chhour, Monivan; Pério, Pierre; Sudor, Jan; Gayon, Régis; Ferry, Gilles; Boutin, Jean A; Nepveu, Françoise; Reybier, Karine
2018-05-20
There is increasing evidence that oxidative stress is involved in the etiology and pathogenesis of neurodegenerative disorders. Overproduction of reactive oxygen species (ROS) is due in part to the reactivity of catecholamines, such as dopamine, adrenaline, and noradrenaline. These molecules are rapidly converted, chemically or enzymatically, into catechol-quinones and then into highly deleterious semiquinone radicals after 1-electron reduction in cells. Notably, the overexpression of dihydronicotinamide riboside:quinone oxidoreductase (QR2) in Chinese hamster ovary (CHO) cells increases the production of ROS, mainly superoxide radicals, when the cells are exposed to exogenous catechol-quinones (e.g. dopachrome, aminochrome, and adrenochrome). Here we used electron paramagnetic resonance analysis to demonstrate that the phenomenon observed in CHO cells is also seen in human leukemic cells (K562 cells) that naturally express QR2. Moreover, by manipulating the level of QR2 in neuronal cells, including immortalized neuroblast cells and ex vivo neurons isolated from QR2 knockout animals, we showed that there is a direct relationship between QR2-mediated quinone reduction and ROS overproduction. Supporting this result, the withdrawal of the QR2 co-factor (BNAH) or the addition of the specific QR2 inhibitor S29434 suppressed oxidative stress. Taken together, these data suggest that the overexpression of QR2 in brain cells in the presence of catechol-quinones might lead to ROS-induced cell death via the rapid conversion of superoxide radicals into hydrogen peroxide and then into highly reactive hydroxyl radicals. Thus, QR2 may be implicated in the early stages of neurodegenerative disorders. Copyright © 2018 Elsevier Inc. All rights reserved.
Investigating the use of quick response codes in the gross anatomy laboratory.
Traser, Courtney J; Hoffman, Leslie A; Seifert, Mark F; Wilson, Adam B
2015-01-01
The use of quick response (QR) codes within undergraduate university courses is on the rise, yet literature concerning their use in medical education is scant. This study examined student perceptions on the usefulness of QR codes as learning aids in a medical gross anatomy course, statistically analyzed whether this learning aid impacted student performance, and evaluated whether performance could be explained by the frequency of QR code usage. Question prompts and QR codes tagged on cadaveric specimens and models were available for four weeks as learning aids to medical (n = 155) and doctor of physical therapy (n = 39) students. Each QR code provided answers to posed questions in the form of embedded text or hyperlinked web pages. Students' perceptions were gathered using a formative questionnaire and practical examination scores were used to assess potential gains in student achievement. Overall, students responded positively to the use of QR codes in the gross anatomy laboratory as 89% (57/64) agreed the codes augmented their learning of anatomy. The users' most noticeable objection to using QR codes was the reluctance to bring their smartphones into the gross anatomy laboratory. A comparison between the performance of QR code users and non-users was found to be nonsignificant (P = 0.113), and no significant gains in performance (P = 0.302) were observed after the intervention. Learners welcomed the implementation of QR code technology in the gross anatomy laboratory, yet this intervention had no apparent effect on practical examination performance. © 2014 American Association of Anatomists.
In vivo prophylactic activity of QR-435 against H3N2 influenza virus infection.
Oxford, John S; Lambkin, Robert; Guralnik, Mario; Rosenbloom, Richard A; Petteruti, Michael P; Digian, Kelly; LeFante, Carolyn
2007-01-01
Prophylaxis against influenza infection can take several forms, none of which is totally effective at preventing the spread of the disease. QR-435, an all-natural compound of green-tea extract and other agents, has been developed to protect against a range of viral infections, including the influenza subtype H3N2. Several different QR-435 formulations were tested against the two influenza A H3N2 viruses (A/Sydney/5/97 and A/Panama/2007/99) in the ferret model. Most experiments included negative (phosphate-buffered saline) and positive (oseltamivir 5 mg/kg, twice daily) controls. QR-435 and the controls were administered 5 minutes after intranasal delivery of the virus as prophylaxis against infection resulting from exposure to infected but untreated ferrets, and for prevention of transmission from infected and treated ferrets to untreated animals. Effects of QR-435 on seroconversion, virus shedding, and systemic sequelae of infection (weight loss, fever, reduced activity) were evaluated. QR-435 prevented transmission and provided prophylaxis against influenza virus H3N2. Prophylaxis with QR-435 was significantly more effective than prophylaxis with oseltamivir in these experiments. Optimal in vivo efficacy of QR-435 requires a horseradish concentration of at least 50% of that in the original formulation, and the benefits of this preparation appear to be dose dependent. QR-435 is effective for both prevention of H3N2 viral transmission and prophylaxis. These preclinical results warrant further evaluation of its prophylactic properties against avian influenza virus infection in humans.
Valensi, Paul; Le Devehat, Claude; Richard, Jean-Louis; Farez, Cherifo; Khodabandehlou, Taraneh; Rosenbloom, Richard A; LeFante, Carolyn
2005-01-01
QR-333, a topical compound that contains quercetin, a flavonoid with aldose reductase inhibitor effects, ascorbyl palmitate, and vitamin D(3), was formulated to decrease the oxidative stress that contributes to peripheral diabetic neuropathy and thus alleviate its symptoms. This proof-of-principle study assessed the efficacy and safety of QR-333 against placebo in a small cohort of patients with diabetic neuropathy. This randomized, placebo-controlled, double-blind trial included 34 men and women (21-71 years of age) with Type 1 or 2 diabetes and diabetic neuropathy who applied QR-333 or placebo (2:1 ratio), three times daily for 4 weeks, to each foot where symptoms were experienced. Five-point scales were used to determine changes from baseline to endpoint in symptoms and quality of life (efficacy). Safety was assessed through concomitant medications, adverse events, laboratory evaluations, and physical examinations. QR-333 reduced the severity of numbness, jolting pain, and irritation from baseline values. Improvements were also seen in overall and specific quality-of-life measures. QR-333 was well tolerated. Eleven patients in the QR-333 group reported 23 adverse events (all mild or moderate); 4 in the placebo group reported 5 events (all moderate). One patient who applied QR-333 noted a pricking sensation twice, the only adverse event considered possibly related to study treatment. From this preliminary safety study, it appears that QR-333 may safely offer relief of symptoms of diabetic neuropathy and improve quality of life. These findings warrant further investigation of this topical compound.
Optical encryption and QR codes: secure and noise-free information retrieval.
Barrera, John Fredy; Mira, Alejandro; Torroba, Roberto
2013-03-11
We introduce for the first time the concept of an information "container" used before a standard optical encrypting procedure. The "container" selected is a QR code, which offers the main advantage of being tolerant to polluting speckle noise. Besides, the QR code can be read by smartphones, a massively used type of device. Additionally, the QR code adds another security step to the benefits the optical encrypting methods already provide. The QR code is generated by means of freely available software. The development of this concept proves that the speckle noise polluting the outcomes of normal optical encrypting procedures can be avoided, thus making the adoption of these techniques more attractive. Results collected with an actual smartphone are shown to validate our proposal.
Analysis of facial motion patterns during speech using a matrix factorization algorithm
Lucero, Jorge C.; Munhall, Kevin G.
2008-01-01
This paper presents an analysis of facial motion during speech to identify linearly independent kinematic regions. The data consists of three-dimensional displacement records of a set of markers located on a subject’s face while producing speech. A QR factorization with column pivoting algorithm selects a subset of markers with independent motion patterns. The subset is used as a basis to fit the motion of the other facial markers, which determines facial regions of influence of each of the linearly independent markers. Those regions constitute kinematic “eigenregions” whose combined motion produces the total motion of the face. Facial animations may be generated by driving the independent markers with collected displacement records. PMID:19062866
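The marker-selection step described above is, in essence, a rank-revealing, column-pivoted QR factorization. The following sketch (not the authors' code; the displacement data, marker count, and tolerance are invented for illustration) shows how such a selection could be performed with scipy.linalg.qr:

```python
import numpy as np
from scipy.linalg import qr

# Synthetic stand-in data: 20 marker trajectories driven by 4 underlying motion patterns
rng = np.random.default_rng(0)
patterns = rng.standard_normal((500, 4))           # 500 time samples, 4 independent patterns
mixing = rng.standard_normal((4, 20))
X = patterns @ mixing + 1e-3 * rng.standard_normal((500, 20))

# Column-pivoted QR orders the columns (markers) by how much new information each adds
Q, R, piv = qr(X, mode='economic', pivoting=True)
tol = np.abs(R[0, 0]) * 1e-6
rank = int(np.sum(np.abs(np.diag(R)) > tol))
independent = piv[:rank]                            # indices of ~linearly independent markers
print("selected markers:", independent)

# Remaining markers are then fitted as linear combinations of the selected subset
coeffs, *_ = np.linalg.lstsq(X[:, independent], X, rcond=None)
```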
Downdating a time-varying square root information filter
NASA Technical Reports Server (NTRS)
Muellerschoen, Ronald J.
1990-01-01
A new method to efficiently downdate an estimate and covariance generated by a discrete time Square Root Information Filter (SRIF) is presented. The method combines the QR factor downdating algorithm of Gill and the decentralized SRIF algorithm of Bierman. Efficient removal of either measurements or a priori information is possible without loss of numerical integrity. Moreover, the method includes features for detecting potential numerical degradation. Performance on a 300 parameter system with 5800 data points shows that the method can be used in real time and hence is a promising tool for interactive data analysis. Additionally, updating a time-varying SRIF filter with either additional measurements or a priori information proceeds analogously.
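The closing remark, that updating with additional measurements proceeds analogously, can be illustrated with a QR-based square-root information update. The sketch below shows only the update direction, not the Gill-style downdating step, and its interface and assumption of pre-whitened measurements are mine rather than the paper's:

```python
import numpy as np

def srif_measurement_update(R, z, H, y):
    """Fold new pre-whitened measurements y = H x + v (v ~ unit-variance white noise)
    into a square-root information pair (R, z) by stacking the prior rows on top of
    the new measurement rows and re-triangularizing with a QR factorization."""
    n = R.shape[0]
    stacked = np.vstack([np.hstack([R, z[:, None]]),
                         np.hstack([H, y[:, None]])])
    _, T = np.linalg.qr(stacked)      # upper-triangular factor of the stacked array
    return T[:n, :n], T[:n, n]        # updated (R, z)
```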
Sen, Gargi; Mukhopadhyay, Sibabrata; Ray, Manju; Biswas, Tuli
2008-05-01
The possibility of developing antileishmanial drugs was evaluated by intervention in the parasite's iron metabolism, utilizing quercetin (Qr) under in vivo conditions, and identifying the target of this lipophilic metal chelator against Leishmania donovani. The interaction between Qr and serum albumin (SA) was studied by using the intrinsic fluorescence of Qr as a probe. The effect of treatment with Qr and SA on the proliferation of amastigotes was determined by evaluating splenic parasite load. Disintegration of parasites in response to combination treatment was assessed by ultrastructural analysis using a transmission electron microscope. Quenching of the tyrosyl radical of ribonucleotide reductase (RR) in treated amastigotes was detected by an electron paramagnetic resonance study. Treatment with a combination of Qr and SA increased the bioavailability of the flavonoid and proved to be of major advantage in promoting the effectiveness of Qr, improving the repression of splenic parasite load from 75% (P < 0.01) to 95% (P < 0.002). Qr-mediated down-regulation of RR (P < 0.05), which catalyses the rate-limiting step of DNA synthesis in the pathogens, could be related to deprivation of the enzyme of iron, which in turn destabilized the critical tyrosyl radical required for its catalytic activity. The results have implications for improved leishmanicidal action of Qr in combination with SA targeting RR and suggest future drug design based on interference with the parasite's iron metabolism under in vivo conditions.
Loss of quinone reductase 2 function selectively facilitates learning behaviors.
Benoit, Charles-Etienne; Bastianetto, Stephane; Brouillette, Jonathan; Tse, YiuChung; Boutin, Jean A; Delagrange, Philippe; Wong, TakPan; Sarret, Philippe; Quirion, Rémi
2010-09-22
High levels of reactive oxygen species (ROS) are associated with deficits in learning and memory with age as well as in Alzheimer's disease. Using DNA microarray, we demonstrated the overexpression of quinone reductase 2 (QR2) in the hippocampus in two models of learning deficits, namely the aged memory impaired rats and the scopolamine-induced amnesia model. QR2 is a cytosolic flavoprotein that catalyzes the reduction of its substrate and enhances the production of damaging activated quinone and ROS. QR2-like immunostaining is enriched in cerebral structures associated with learning behaviors, such as the hippocampal formation and the temporofrontal cortex of rat, mouse, and human brains. In cultured rat embryonic hippocampal neurons, selective inhibitors of QR2, namely S26695 and S29434, protected against menadione-induced cell death by reversing its proapoptotic action. S26695 (8 mg/kg) also significantly inhibited scopolamine-induced amnesia. Interestingly, adult QR2 knock-out mice demonstrated enhanced learning abilities in various tasks, including Morris water maze, object recognition, and rotarod performance test. Other behaviors related to anxiety (elevated plus maze), depression (forced swim), and schizophrenia (prepulse inhibition) were not affected in QR2-deficient mice. Together, these data suggest a role for QR2 in cognitive behaviors with QR2 inhibitors possibly representing a novel therapeutic strategy toward the treatment of learning deficits especially observed in the aged brain.
Fast parallel algorithm for slicing STL based on pipeline
NASA Astrophysics Data System (ADS)
Ma, Xulong; Lin, Feng; Yao, Bo
2016-05-01
In the field of Additive Manufacturing, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm offers great advantages; however, traditional algorithms cannot make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread number and layer number are investigated in a series of experiments. The experimental results show that thread number and layer number are two significant factors for the speedup ratio. The trend of speedup versus thread number shows a positive relationship that agrees well with Amdahl's law, and the trend of speedup versus layer number also shows a positive relationship, in agreement with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel method for speedup. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the strong performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline-parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.
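Purely as an illustration of the pipeline idea, and with hypothetical slice_layer and write_contours callables (plus the caveat that Python threads only help when the slicing work releases the GIL), a minimal producer/worker/consumer sketch might look like this:

```python
import threading, queue

def pipeline_slice(triangles, layer_heights, slice_layer, write_contours, n_workers=4):
    """Toy pipeline: enqueue layer heights, slice them in worker threads, drain results."""
    tasks, results = queue.Queue(), queue.Queue()

    def worker():
        while True:
            z = tasks.get()
            if z is None:                        # poison pill: stop this worker
                return
            results.put((z, slice_layer(triangles, z)))

    workers = [threading.Thread(target=worker) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for z in layer_heights:
        tasks.put(z)
    for _ in workers:
        tasks.put(None)
    for w in workers:
        w.join()
    while not results.empty():
        z, contours = results.get()
        write_contours(z, contours)              # e.g. append contours to the output file
```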
A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix
NASA Technical Reports Server (NTRS)
Shroff, Gautam
1989-01-01
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
Zinc induces exposure of hydrophobic sites in the C-terminal domain of gC1q-R/p33.
Kumar, Rajeev; Peerschke, Ellinor I B; Ghebrehiwet, Berhane
2002-09-01
Endothelial cells and platelets are known to express gC1q-R on their surface. In addition to C1q, endothelial cell gC1q-R has been shown to bind high molecular weight kininogen (HK) and factor XII (FXII). However, unlike C1q, whose interaction with gC1q-R does not require divalent ions, the binding of HK to gC1q-R is absolutely dependent on the presence of zinc. The mechanism by which zinc modulates this interaction is not fully understood. To investigate the role of zinc, binding studies were done using the hydrophobic dye bis-ANS. The fluorescence intensity of bis-ANS greatly increases, and the emission maximum is blue-shifted from 525 to 485 nm, upon binding to hydrophobic sites on proteins. In this report, we show that a blue-shift in emission maximum is also observed when bis-ANS binds to gC1q-R in the presence, but not in the absence, of zinc, suggesting that zinc induces exposure of hydrophobic sites in the molecule. The binding of bis-ANS to gC1q-R is specific, dose-dependent, and reversible. In the presence of zinc, this binding is abrogated by monoclonal antibody 74.5.2 directed against gC1q-R residues 204-218. This segment of gC1q-R, which corresponds to the beta6 strand in the crystal structure, has been shown previously to be the binding site for HK. A similar trend in zinc-induced gC1q-R binding was also observed using the hydrophobic matrix octyl-Sepharose. Taken together, our data suggest that zinc can induce the exposure of hydrophobic sites in the C-terminal domain of gC1q-R involved in binding to HK/FXII.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calamini, Barbara; Santarsiero, Bernard D.; Boutin, Jean A.
Melatonin exerts its biological effects through at least two transmembrane G-protein-coupled receptors, MT1 and MT2, and a lower-affinity cytosolic binding site, designated MT3. MT3 has recently been identified as QR2 (quinone reductase 2) (EC 1.10.99.2), which is of significance since it links the antioxidant effects of melatonin to a mechanism of action. Initially, QR2 was believed to function analogously to QR1 in protecting cells from highly reactive quinones. However, recent studies indicate that QR2 may actually transform certain quinone substrates into more highly reactive compounds capable of causing cellular damage. Therefore it is hypothesized that inhibition of QR2 in certain cases may lead to protection of cells against these highly reactive species. Since melatonin is known to inhibit QR2 activity, but its binding site and mode of inhibition are not known, we determined the mechanism of inhibition of QR2 by melatonin and a series of melatonin and 5-hydroxytryptamine (serotonin) analogues, and we determined the X-ray structures of melatonin and 2-iodomelatonin in complex with QR2 to between 1.5 and 1.8 Å (1 Å = 0.1 nm) resolution. Finally, the thermodynamic binding constants for melatonin and 2-iodomelatonin were determined by ITC (isothermal titration calorimetry). The kinetic results indicate that melatonin is a competitive inhibitor against N-methyldihydronicotinamide (Ki = 7.2 μM) and uncompetitive against menadione (Ki = 92 μM), and the X-ray structures show that melatonin binds in multiple orientations within the active sites of the QR2 dimer, as opposed to an allosteric site. These results provide new insights into the binding mechanisms of melatonin and analogues to QR2.
Hong, Xia
2006-07-01
In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, using the RBF neural network to represent the transformed system output. Initially, a fixed, moderate-sized RBF model base is derived using a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced, using the Gauss-Newton algorithm to derive the required Box-Cox transformation based on a maximum likelihood estimator. The main contribution of this letter is to exploit the special structure of the proposed RBF neural network for computational efficiency by utilizing the block matrix inversion lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example, in comparison with support vector machine regression.
Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Brouwer, Randall Jay
1991-01-01
The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.
NASA Technical Reports Server (NTRS)
Heidergott, K. W.
1979-01-01
The computer program known as QR is described. Classical control systems analysis and synthesis (root locus, time response, and frequency response) can be performed using this program. Programming details of the QR program are presented.
Time series forecasting using ERNN and QR based on Bayesian model averaging
NASA Astrophysics Data System (ADS)
Pwasong, Augustine; Sathasivam, Saratha
2017-08-01
The Bayesian model averaging technique is a multi-model combination technique. The technique was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique. The amalgamation produced a hybrid technique known as the hybrid ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
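Purely as a sketch of the combination idea, and with a deliberate simplification (true Bayesian model averaging weights models by posterior model probabilities, whereas the toy weighting below uses exp(-MSE) on a validation window), two forecast series could be blended like this:

```python
import numpy as np

def combine_forecasts(y_val, forecasts):
    """Weight each model by exp(-MSE) on a validation window and blend its forecasts.
    `forecasts` is a dict such as {"ERNN": array, "QR": array} of equal-length series."""
    weights = {name: np.exp(-np.mean((y_val - f) ** 2)) for name, f in forecasts.items()}
    total = sum(weights.values())
    weights = {name: w / total for name, w in weights.items()}   # normalize to sum to 1
    blended = sum(w * forecasts[name] for name, w in weights.items())
    return blended, weights
```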
NASA Astrophysics Data System (ADS)
Lopez, Patricia; Verkade, Jan; Weerts, Albrecht; Solomatine, Dimitri
2014-05-01
Hydrological forecasting is subject to many sources of uncertainty, including those originating in initial state, boundary conditions, model structure and model parameters. Although uncertainty can be reduced, it can never be fully eliminated. Statistical post-processing techniques constitute an often used approach to estimate the hydrological predictive uncertainty, where a model of forecast error is built using a historical record of past forecasts and observations. The present study focuses on the use of the Quantile Regression (QR) technique as a hydrological post-processor. It estimates the predictive distribution of water levels using deterministic water level forecasts as predictors. This work aims to thoroughly verify uncertainty estimates using the implementation of QR that was applied in an operational setting in the UK National Flood Forecasting System, and to inter-compare forecast quality and skill in several differing configurations of QR. These configurations are (i) 'classical' QR, (ii) QR constrained by a requirement that quantiles do not cross, (iii) QR derived on time series that have been transformed into the Normal domain (Normal Quantile Transformation - NQT), and (iv) a piecewise linear derivation of QR models. The QR configurations are applied to fourteen hydrological stations on the Upper Severn River with different catchment characteristics. Results of each QR configuration are conditionally verified for progressively higher flood levels, in terms of commonly used verification metrics and skill scores. These include the Brier score (BS), the continuous ranked probability score (CRPS) and corresponding skill scores, as well as the Relative Operating Characteristic score (ROCS). Reliability diagrams are also presented and analysed. The results indicate that none of the four Quantile Regression configurations clearly outperforms the others.
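A minimal sketch of the 'classical' QR configuration (i), using synthetic data and the statsmodels package; it does not enforce non-crossing quantiles and omits the NQT and piecewise-linear variants described above:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
forecast = rng.uniform(0.5, 4.0, 300)                       # deterministic water level forecasts
observed = forecast + rng.normal(0, 0.2 + 0.1 * forecast)   # synthetic observed water levels

X = sm.add_constant(forecast)
quantiles = [0.05, 0.25, 0.50, 0.75, 0.95]
fits = {q: sm.QuantReg(observed, X).fit(q=q) for q in quantiles}

# Predictive quantiles for two new deterministic forecasts
X_new = sm.add_constant(np.array([2.5, 3.0]), has_constant='add')
predictive = {q: fit.predict(X_new) for q, fit in fits.items()}
print(predictive)
```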
A novel use of QR code stickers after orthopaedic cast application.
Gough, A T; Fieraru, G; Gaffney, Pav; Butler, M; Kincaid, R J; Middleton, R G
2017-07-01
INTRODUCTION We present a novel solution to ensure that information and contact details are always available to patients while in cast. An information sticker containing both telephone numbers and a Quick Response (QR) code is applied to the cast. When scanned with a smartphone, the QR code loads the plaster team's webpage. This contains information and videos about cast care, complications and enhancing recovery. METHODS A sticker was designed and applied to all synthetic casts fitted in our fracture clinic. On cast removal, patients completed a questionnaire about the sticker. A total of 101 patients were surveyed between November 2015 and February 2016. The questionnaire comprised ten binary choice questions. RESULTS The vast majority (97%) of patients had the sticker still on their cast when they returned to clinic for cast removal. Eighty-four per cent of all patients felt reassured by the presence of the QR code sticker. Nine per cent used the contact details on the cast to seek advice. Over half (56%) had a smartphone and a third (33%) of these scanned the QR code. Of those who scanned the code, 95% found the information useful. CONCLUSIONS This study indicates that use of a QR code reassures patients and is an effective tool in the proactive management of potential cast problems. The QR code sticker is now applied to all casts across our trust. In line with NHS England's Five Year Forward View calling for enhanced use of smartphone technology, our trust is continuing to expand its portfolio of patient information accessible via QR codes. Other branches of medicine may benefit from incorporating QR codes as portals to access such information.
Impact erosion model for gravity-dominated planetesimals
NASA Astrophysics Data System (ADS)
Genda, Hidenori; Fujita, Tomoaki; Kobayashi, Hiroshi; Tanaka, Hidekazu; Suetsugu, Ryo; Abe, Yutaka
2017-09-01
Disruptive collisions have been regarded as an important process for planet formation, while non-disruptive, small-scale collisions (hereafter called erosive collisions) have been underestimated or neglected by many studies. However, recent studies have suggested that erosive collisions are also important to the growth of planets, because they are much more frequent than disruptive collisions. Although the thresholds of the specific impact energy for disruptive collisions (QRD*) have been investigated well, there is no reliable model for erosive collisions. In this study, we systematically carried out impact simulations of gravity-dominated planetesimals for a wide range of specific impact energy (QR) from disruptive collisions (QR ∼ QRD*) to erosive ones (QR << QRD*) using the smoothed particle hydrodynamics method. We found that the ejected mass normalized by the total mass (Mej/Mtot) depends on the numerical resolution, the target radius (Rtar) and the impact velocity (vimp), as well as on QR, but that it can be nicely scaled by QRD* for the parameter ranges investigated (Rtar = 30-300 km, vimp = 2-5 km/s). This means that Mej/Mtot depends only on QR/QRD* in these parameter ranges. We confirmed that the collision outcomes for much less erosive collisions (QR < 0.01 QRD*) converge to the results of an impact onto a planar target for various impact angles (θ) and that Mej/Mtot ∝ QR/QRD* holds. For disruptive collisions (QR ∼ QRD*), the curvature of the target has a significant effect on Mej/Mtot. We also examined the angle-averaged value of Mej/Mtot and found that the numerically obtained relation between angle-averaged Mej/Mtot and QR/QRD* is very similar to the cases for θ = 45° impacts. We proposed a new erosion model based on our numerical simulations for future research on planet formation with collisional erosion.
Short rest between shift intervals increases the risk of sick leave: a prospective registry study.
Vedaa, Øystein; Pallesen, Ståle; Waage, Siri; Bjorvatn, Bjørn; Sivertsen, Børge; Erevik, Eilin; Svensen, Erling; Harris, Anette
2017-07-01
The purpose of this study was to use objective registry data to prospectively investigate the effects of quick returns (QR, <11 hours of rest between shifts) and night shifts on sick leave. A total of 1538 nurses (response rate 41.5%) answered questionnaires on demographics and personality and provided consent to link this information to registry data on shift work and sick leave from employers' records. A multilevel negative binomial model was used to investigate the predictive effect of exposure to night shifts and QR every month for 1 year on sick leave the following month. Exposure to QR the previous month increased the risk of sick leave days (incidence rate ratio (IRR)=1.066, 95% CI 1.022 to 1.108, p<0.01) and sick leave spells (IRR=1.059, 95% CI 1.025 to 1.097, p<0.001) the following month, whereas night shifts did not. Eighty-three per cent of the nurses experienced QR within a year, and on average they were exposed to 3.0 QR per month (SD=1.6). Personality characteristics associated with shift work tolerance (low on morningness, low on languidity and high on flexibility) were not associated with sick leave, and did not moderate the relationship between QR and sick leave. We found a positive linear relationship between QR and sick leave. Avoiding QR may help reduce workers' sick leave. The restricted recovery opportunity associated with QR may give little room for beneficial effects of individual characteristics usually associated with shift work tolerance.
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong
2018-07-01
We propose a binary image encryption method in a joint transform correlator (JTC) with the aid of run-length encoding (RLE) and the Quick Response (QR) code, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and the compressed binary image is then further scrambled using a chaos-based method. The compressed and scrambled binary image is then transformed into one QR code that is finally encrypted in the JTC. The proposed method successfully, for the first time to the best of our knowledge, encodes a binary image into a QR code of identical size, and may therefore open a new way to extend the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling and the QR code translation, add an additional security level to the JTC. We present digital results that confirm our approach.
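A small illustration of the run-length encoding stage only (my own sketch, not the authors' implementation; the chaos scrambling, QR code translation, and JTC encryption stages are not shown):

```python
import numpy as np

def rle_encode(binary_image):
    """Run-length encode a flattened binary image: store the first bit and the run lengths."""
    bits = np.asarray(binary_image, dtype=np.uint8).ravel()
    changes = np.flatnonzero(np.diff(bits)) + 1          # positions where the bit value flips
    boundaries = np.concatenate(([0], changes, [bits.size]))
    return int(bits[0]), np.diff(boundaries)

def rle_decode(first_bit, run_lengths, shape):
    bits, bit = [], first_bit
    for run in run_lengths:
        bits.extend([bit] * int(run))
        bit ^= 1                                         # runs alternate between 0 and 1
    return np.array(bits, dtype=np.uint8).reshape(shape)

img = (np.random.default_rng(2).random((32, 32)) > 0.7).astype(np.uint8)
first, runs = rle_encode(img)
assert np.array_equal(rle_decode(first, runs, img.shape), img)   # lossless round trip
```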
Hou, D X; Fukuda, M; Fujii, M; Fuke, Y
2000-12-20
Wasabi is a very popular pungent spice in Japan. This study examined the ability of 6-(methylsufinyl)hexyl isothiocyanate (6-MITC), an active principle of wasabi, to induce the cellular expression of nicotinamide adenine dinucleotide phosphate: quinone oxidoreductase (QR) in Hepa 1c1c7 cells. The cells were treated with various concentrations of 6-MITC, and were then assessed for cell growth, QR activity and QR mRNA expression. The induction of QR activity and QR mRNA expression was time- and dose-responsive over a narrow range of 0.1-5 microM, with declining induction at higher concentrations due to cell toxicity. Furthermore, transfection studies demonstrated that the induction of transcription of the QR gene by 6-MITC involved an antioxidant/electrophile-responsive element (ARE/EpRE) activation. Our results suggest a novel mechanism by which dietary wasabi 6-MITC may be implicated in cancer chemoprevention.
Improving performance of channel equalization in RSOA-based WDM-PON by QR decomposition.
Li, Xiang; Zhong, Wen-De; Alphones, Arokiaswami; Yu, Changyuan; Xu, Zhaowen
2015-10-19
In reflective semiconductor optical amplifier (RSOA)-based wavelength division multiplexed passive optical network (WDM-PON), the bit rate is limited by low modulation bandwidth of RSOAs. To overcome the limitation, we apply QR decomposition in channel equalizer (QR-CE) to achieve successive interference cancellation (SIC) for discrete Fourier transform spreading orthogonal frequency division multiplexing (DFT-S OFDM) signal. Using an RSOA with a 3-dB modulation bandwidth of only ~800 MHz, we experimentally demonstrate a 15.5-Gb/s over 20-km SSMF DFT-S OFDM transmission with QR-CE. The experimental results show that DFTS-OFDM with QR-CE attains much better BER performance than DFTS-OFDM and OFDM with conventional channel equalizers. The impacts of several parameters on QR-CE are investigated. It is found that 2 sub-bands in one OFDM symbol and 1 pilot in each sub-band are sufficient to achieve optimal performance and maintain the high spectral efficiency.
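A generic sketch of QR-decomposition-based successive interference cancellation on a toy linear model (synthetic channel matrix and QPSK alphabet assumed; this is not the paper's DFT-S OFDM receiver):

```python
import numpy as np

def qr_sic_detect(H, y, alphabet):
    """Detect x in y = H x + n layer by layer: QR-decompose H, rotate y by Q^H,
    then back-substitute from the last layer, slicing each estimate to the alphabet."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n = H.shape[1]
    x_hat = np.zeros(n, dtype=complex)
    for k in range(n - 1, -1, -1):
        residual = z[k] - R[k, k + 1:] @ x_hat[k + 1:]    # cancel already-detected layers
        x_hat[k] = alphabet[np.argmin(np.abs(alphabet - residual / R[k, k]))]
    return x_hat

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
rng = np.random.default_rng(3)
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
x = rng.choice(qpsk, 4)
y = H @ x + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
print(np.allclose(qr_sic_detect(H, y, qpsk), x))
```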
NASA Astrophysics Data System (ADS)
Markman, A.; Javidi, B.
2016-06-01
Quick-response (QR) codes are barcodes that can store information such as numeric data and hyperlinks. The QR code can be scanned using a QR code reader, such as those built into smartphone devices, revealing the information stored in the code. Moreover, the QR code is robust to noise, rotation, and illumination when scanning due to error correction built in the QR code design. Integral imaging is an imaging technique used to generate a three-dimensional (3D) scene by combining the information from two-dimensional (2D) elemental images (EIs) each with a different perspective of a scene. Transferring these 2D images in a secure manner can be difficult. In this work, we overview two methods to store and encrypt EIs in multiple QR codes. The first method uses run-length encoding with Huffman coding and the double-random-phase encryption (DRPE) to compress and encrypt an EI. This information is then stored in a QR code. An alternative compression scheme is to perform photon-counting on the EI prior to compression. Photon-counting is a non-linear transformation of data that creates redundant information thus improving image compression. The compressed data is encrypted using the DRPE. Once information is stored in the QR codes, it is scanned using a smartphone device. The information scanned is decompressed and decrypted and an EI is recovered. Once all EIs have been recovered, a 3D optical reconstruction is generated.
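A numerical sketch of the double-random-phase encryption step applied to a stand-in elemental image; this is the textbook Fourier-domain DRPE with made-up keys, and the photon-counting, Huffman coding, and QR code packaging stages are not shown:

```python
import numpy as np

def drpe_encrypt(img, key1, key2):
    """Double-random-phase encryption sketch: apply a random phase in the spatial domain,
    a second random phase in the Fourier domain, then transform back."""
    spectrum = np.fft.fft2(img * np.exp(1j * key1)) * np.exp(1j * key2)
    return np.fft.ifft2(spectrum)

def drpe_decrypt(cipher, key1, key2):
    spectrum = np.fft.fft2(cipher) * np.exp(-1j * key2)
    return np.abs(np.fft.ifft2(spectrum) * np.exp(-1j * key1))

rng = np.random.default_rng(0)
img = rng.integers(0, 2, (64, 64)).astype(float)        # stand-in for a compressed elemental image
k1, k2 = rng.uniform(0, 2 * np.pi, (2, 64, 64))         # the two random phase keys
recovered = drpe_decrypt(drpe_encrypt(img, k1, k2), k1, k2)
assert np.allclose(recovered, img)                      # decryption with both keys is lossless
```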
Quick Response codes for surgical safety: a prospective pilot study.
Dixon, Jennifer L; Smythe, William Roy; Momsen, Lara S; Jupiter, Daniel; Papaconstantinou, Harry T
2013-09-01
Surgical safety programs have been shown to reduce patient harm; however, there is variable compliance. The purpose of this study is to determine if innovative technology such as Quick Response (QR) codes can facilitate surgical safety initiatives. We prospectively evaluated the use of QR codes during the surgical time-out for 40 operations. Feasibility and accuracy were assessed. Perceptions of the current time-out process and the QR code application were evaluated through surveys using a 5-point Likert scale and binomial yes or no questions. At baseline (n = 53), survey results from the surgical team agreed or strongly agreed that the current time-out process was efficient (64%), easy to use (77%), and provided clear information (89%). However, 65% of surgeons felt that process improvements were needed. Thirty-seven of 40 (92.5%) QR codes scanned successfully, of which 100% were accurate. Three scan failures resulted from excessive curvature or wrinkling of the QR code label on the body. Follow-up survey results (n = 33) showed that the surgical team agreed or strongly agreed that the QR program was clearer (70%), easier to use (57%), and more accurate (84%). Seventy-four percent preferred the QR system to the current time-out process. QR codes accurately transmit patient information during the time-out procedure and are preferred to the current process by surgical team members. The novel application of this technology may improve compliance, accuracy, and outcomes. Copyright © 2013 Elsevier Inc. All rights reserved.
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms for assembling in parallel the sparse system of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering are developed. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed and results of this model for the algorithms are given.
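As a simple illustration of the preconditioned iterative solvers discussed above, here is a Jacobi (diagonal) preconditioned conjugate gradient sketch for a symmetric positive definite system; the parallel assembly and array-architecture aspects of the work are not represented:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner M = diag(A)."""
    M_inv = 1.0 / np.diag(A)            # inverse of the diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # update search direction
        rz = rz_new
    return x
```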
40 CFR Appendixes Q-R to Part 51 - [Reserved
Code of Federal Regulations, 2013 CFR
2013-07-01
40 Protection of Environment 2 2013-07-01 2013-07-01 false [Reserved] Q Appendixes Q-R to Part 51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS REQUIREMENTS FOR PREPARATION, ADOPTION, AND SUBMITTAL OF IMPLEMENTATION PLANS Appendixes Q-R to Part 51 [Reserved]
Through-wall image enhancement using fuzzy and QR decomposition.
Riaz, Muhammad Mohsin; Ghafoor, Abdul
2014-01-01
QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex compared to singular value decomposition. Fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze existing and proposed techniques.
A class of parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on the composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, achieving significantly higher efficiency.
Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook
2012-11-20
A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
QR code optical encryption using spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Cheremkhin, P. A.; Krasnov, V. V.; Rodin, V. G.; Starikov, R. S.
2017-02-01
Optical encryption is an actively developing field of science. The majority of encryption techniques use coherent illumination and suffer from speckle noise, which severely limits their applicability. The spatially incoherent encryption technique does not have this drawback, but its effectiveness is dependent on the Fourier spectrum properties of the image to be encrypted. The application of a quick response (QR) code in the capacity of a data container solves this problem, and the embedded error correction code also enables errorless decryption. The optical encryption of digital information in the form of QR codes using spatially incoherent illumination was implemented experimentally. The encryption is based on the optical convolution of the image to be encrypted with the kinoform point spread function, which serves as an encryption key. Two liquid crystal spatial light modulators were used in the experimental setup for the QR code and the kinoform imaging, respectively. The quality of the encryption and decryption was analyzed in relation to the QR code size. Decryption was conducted digitally. The successful decryption of encrypted QR codes of up to 129 × 129 pixels was demonstrated. A comparison with the coherent QR code encryption technique showed that the proposed technique has a signal-to-noise ratio that is at least two times higher.
What sets the minimum tokamak scrape-off layer width?
NASA Astrophysics Data System (ADS)
Joseph, Ilon
2016-10-01
The heat flux width of the tokamak scrape-off layer is on the order of the poloidal ion gyroradius, but the "heuristic drift" physics model is still not completely understood. In the absence of anomalous transport, neoclassical transport sets the minimum width. For plateau collisionality, the ion temperature width is set by qρi, while the electron temperature width scales as the geometric mean q(ρeρi)^(1/2) and is close to qρi in magnitude. The width is enhanced because electrons are confined by the sheath potential and have a much longer time to radially diffuse before escaping to the wall. In the Pfirsch-Schluter regime, collisional diffusion increases the width by the factor (qR/λ)^(1/2), where qR is the connection length and λ is the mean free path. This qualitatively agrees with the observed transition in the scaling law for detached plasmas. The radial width of the SOL electric field is determined by Spitzer parallel and "neoclassical" radial electric conductivity and has a similar scaling to that for thermal transport. Prepared under US DOE contract DE-AC52-07NA27344.
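To make the quoted scalings concrete, a small back-of-the-envelope evaluation follows; every parameter value below is an assumption chosen for illustration, not a value taken from the abstract:

```python
import numpy as np

# Assumed edge parameters (illustrative only)
e, m_p, m_e = 1.602e-19, 1.673e-27, 9.109e-31   # SI units; hydrogen ion mass assumed
T_eV = 100.0          # edge temperature in eV, taken equal for ions and electrons
B = 2.0               # magnetic field, T
q = 4.0               # edge safety factor
R = 1.7               # major radius, m
lam = 20.0            # parallel mean free path, m

T_J = T_eV * e
rho_i = np.sqrt(m_p * T_J) / (e * B)       # ion gyroradius
rho_e = np.sqrt(m_e * T_J) / (e * B)       # electron gyroradius

w_ion = q * rho_i                          # ion-channel width ~ q * rho_i
w_elec = q * np.sqrt(rho_e * rho_i)        # electron-channel width ~ q * sqrt(rho_e * rho_i)
w_PS = w_ion * np.sqrt(q * R / lam)        # Pfirsch-Schluter collisional enhancement
print(w_ion, w_elec, w_PS)                 # widths in metres
```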
Introducing parallelism to histogramming functions for GEM systems
NASA Astrophysics Data System (ADS)
Krawczyk, Rafał D.; Czarski, Tomasz; Kolasinski, Piotr; Pozniak, Krzysztof T.; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech
2015-09-01
This article is an assessment of the potential parallelization of histogramming algorithms in a GEM detector system. Histogramming and preprocessing algorithms written in MATLAB were analyzed with regard to adding parallelism. A preliminary implementation of parallel strip histogramming resulted in a speedup. An analysis of the algorithms' parallelizability is presented, and an overview of potential hardware and software support for implementing the parallel algorithm is discussed.
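The study analyzed MATLAB code; purely to illustrate why strip histogramming parallelizes well (partial histograms over disjoint chunks simply add), here is a sketch in Python with multiprocessing, using made-up bin counts and an assumed ADC range:

```python
import numpy as np
from multiprocessing import Pool

def _partial_hist(args):
    chunk, bins, value_range = args
    counts, _ = np.histogram(chunk, bins=bins, range=value_range)
    return counts

def parallel_histogram(samples, bins=256, value_range=(0, 4096), n_workers=4):
    """Histogram chunks of the sample array in separate processes and sum the results;
    this works because histogram counts over disjoint chunks are additive."""
    chunks = np.array_split(samples, n_workers)
    with Pool(n_workers) as pool:
        partials = pool.map(_partial_hist, [(c, bins, value_range) for c in chunks])
    return np.sum(partials, axis=0)

if __name__ == "__main__":                   # guard required when using multiprocessing
    data = np.random.default_rng(4).integers(0, 4096, 1_000_000)
    print(parallel_histogram(data)[:5])
```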
Mechanism and prognostic role of qR in V1 in patients with pulmonary arterial hypertension.
Waligóra, Marcin; Kopeć, Grzegorz; Jonas, Kamil; Tyrka, Anna; Sarnecka, Agnieszka; Miszalski-Jamka, Tomasz; Urbańczyk-Zawadzka, Małgorzata; Podolec, Piotr
The presence of a qR pattern in lead V1 of the 12-lead surface ECG has been proposed as a risk marker of death in patients with pulmonary arterial hypertension (PAH). We aimed to validate these findings in the modern era of PAH treatment and additionally to assess the relation of qR in V1 to PAH severity. We also investigated the possible mechanisms underlying this ECG sign. Consecutive patients with PAH, excluding patients with congenital heart defects, were recruited between February 2008 and January 2016. A 12-lead standard ECG was acquired and analyzed for the presence of qR in V1 and other potential prognostic patterns. Cardiac magnetic resonance and echocardiography were used for structural (masses and volumes) and functional (ejection fraction, eccentricity index) characterization of the left (LV) and right (RV) ventricles. Standard markers of PAH severity were also assessed. We enrolled 66 patients (19 males), aged 50.0 ± 15.7 years, with idiopathic PAH (n = 52) and PAH associated with connective tissue disease (n = 14). qR in V1 was present in 26 (39.4%) patients and was associated with worse functional capacity, hemodynamics and RV function. The main structural determinants of qR in V1 were the RV to LV volume ratio (OR: 3.99; 95% CI: 1.47-10.8, p = 0.007) and the diastolic eccentricity index (OR: 15.0; 95% CI: 1.29-175.5, p = 0.03). During an observation time of 30.5 ± 19.4 months, 20 (30.3%) patients died: 13 (50%) patients with and 7 (17.5%) patients without the qR pattern. Electrocardiographic determinants of survival were qR (HR: 3.06, 95% CI: 1.21-7.4; p = 0.02) and QRS duration (HR: 1.02, 95% CI: 1.01-1.04; p = 0.01). The presence of qR in V1 reflects RV dilation and diastolic interventricular septum flattening. It is a sign of advanced PAH and predicts the risk of death in this population. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Qin, Cheng-Zhi; Zhan, Lijun
2012-06-01
As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
NASA Technical Reports Server (NTRS)
Luke, Edward Allen
1993-01-01
Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
Mitrousia, Georgia K.; Sidique, Siti Nordahliawate M.; Qi, Aiming; Fitt, Bruce D. L.
2018-01-01
Using cultivar resistance against pathogens is one of the most economical and environmentally friendly methods for control of crop diseases. However, cultivar resistance can be easily rendered ineffective due to changes in pathogen populations or environments. To test the hypothesis that combining R gene-mediated resistance and quantitative resistance (QR) in one cultivar can provide more effective resistance than use of either type of resistance on its own, effectiveness of resistance in eight oilseed rape (Brassica napus) cultivars with different R genes and/or QR against Leptosphaeria maculans (phoma stem canker) was investigated in 13 different environments/sites over three growing seasons (2010/2011, 2011/2012 and 2012/2013). Cultivar Drakkar with no R genes and no QR was used as susceptible control and for sampling L. maculans populations. Isolates of L. maculans were obtained from the 13 sites in 2010/2011 to assess frequencies of avirulent alleles of different effector genes (AvrLm1, AvrLm4 or AvrLm7) corresponding to the resistance genes (Rlm1, Rlm4 or Rlm7) used in the field experiments. Results of field experiments showed that cultivars DK Cabernet (Rlm1 + QR) and Adriana (Rlm4 + QR) had significantly less severe phoma stem canker than cultivars Capitol (Rlm1) and Bilbao (Rlm4), respectively. Results of controlled environment experiments confirmed the presence of Rlm genes and/or QR in these four cultivars. Analysis of L. maculans populations from different sites showed that the mean frequencies of AvrLm1 (10%) and AvrLm4 (41%) were less than that of AvrLm7 (100%), suggesting that Rlm1 and Rlm4 gene-mediated resistances were partially rendered ineffective while Rlm7 resistance was still effective. Cultivar Excel (Rlm7 + QR) had less severe canker than cultivar Roxet (Rlm7), but the difference between them was not significant due to influence of the effective resistance gene Rlm7. For the two cultivars with only QR, Es-Astrid (QR) had less severe stem canker than NK Grandia (QR). Analysis of the relationship between severity of stem canker and weather data among the 13 sites in the three growing seasons showed that increased severity of stem canker was associated with increased rainfall during the phoma leaf spot development stage and increased temperature during the stem canker development stage. Further analysis of cultivar response to environmental factors showed that cultivars with both an Rlm gene and QR (e.g. DK Cabernet, Adriana and Excel) were less sensitive to a change in environment than cultivars with only Rlm genes (e.g. Capitol, Bilbao) or only QR (e.g. DK Grandia). These results suggest that combining R gene and QR can provide effective, stable control of phoma stem canker in different environments. PMID:29791484
Quick Response (QR) Codes for Audio Support in Foreign Language Learning
ERIC Educational Resources Information Center
Vigil, Kathleen Murray
2017-01-01
This study explored the potential benefits and barriers of using quick response (QR) codes as a means by which to provide audio materials to middle-school students learning Spanish as a foreign language. Eleven teachers of Spanish to middle-school students created transmedia materials containing QR codes linking to audio resources. Students…
Siderits, Richard; Yates, Stacy; Rodriguez, Arelis; Lee, Tina; Rimmer, Cheryl; Roche, Mark
2011-01-01
Quick Response (QR) Codes are standard in supply management and seen with increasing frequency in advertisements. They are now present regularly in healthcare informatics and education. These 2-dimensional square bar codes, originally designed by the Toyota car company, are free of license and have a published international standard. The codes can be generated by free online software and the resulting images incorporated into presentations. The images can be scanned by "smart" phones and tablets using either the iOS or Android platforms, which link the device with the information represented by the QR code (uniform resource locator or URL, online video, text, v-calendar entries, short message service [SMS] and formatted text). Once linked to the device, the information can be viewed at any time after the original presentation, saved in the device or to a Web-based "cloud" repository, printed, or shared with others via email or Bluetooth file transfer. This paper describes how we use QR codes in our tumor board presentations, discusses the benefits, the different QR codes from Web links and how QR codes facilitate the distribution of educational content.
He, Yangyong; Cai, Zeying; Shao, Jian; Xu, Li; She, Limin; Zheng, Yue; Zhong, Dingyong
2018-05-03
The self-assembly behavior of quaterrylene (QR) molecules on Ag(111) surfaces has been investigated by scanning tunneling microscopy (STM) and density functional theory (DFT) calculations. It is found that the QR molecules are highly mobile on the Ag(111) surface at 78 K. No ordered assembled structure is formed on the surface at sub-monolayer coverages up to 0.8 monolayer, owing to intermolecular repulsive interactions, whereas ordered molecular structures are observed at one monolayer coverage. According to our DFT calculations, charge transfer occurs between the substrate and the adsorbed QR molecule. As a result, out-of-plane dipoles appear at the interface, to which the repulsive dipole-dipole interactions between the QR molecules are ascribed. Furthermore, due to their planar geometry, the QR molecules exhibit relatively low diffusion barriers on Ag(111). By applying a voltage pulse across the tunneling gap, immobilization and aggregation of QR molecules take place, resulting in the formation of a triangle-shaped trimer. Our work demonstrates the ability to manipulate intermolecular repulsive and attractive interactions at the single molecular level.
Yuan, Yonglei; Ji, Long; Luo, Liping; Lu, Juan; Ma, Xiaoqiong; Ma, Zhongjun; Chen, Zhe
2012-12-01
In the present study, it was demonstrated that the petroleum extract of Andrographis paniculata (AP) had quinone reductase (QR) inducing activity, which might be attributed to the modification of key cysteine residues in Keap1 by the Michael addition acceptors (MAAs) it contains. To screen for MAAs in AP, glutathione (GSH) was employed and an LC/MS/MS method was applied. Three compounds, andrographoside, andrographolide and 14-deoxy-14,15-dehydroandrographolide, were shown to conjugate well with GSH. Then andrographolide, along with 4 new and 14 known compounds, was isolated for evaluation of QR induction, and the CD (the concentration required to double the activity of QR) value of andrographolide is 1.43 μM. The QR-inducing activity of andrographolide might be attributed to its targeting of multiple cysteine residues in Keap1; therefore, the alkylation of Keap1 by andrographolide was further studied, and the results showed that four cysteine residues, Cys77, Cys151, Cys273 and Cys368, were alkylated, which indicates that Keap1 is a potential target for the QR-inducing activity of andrographolide. Copyright © 2012 Elsevier B.V. All rights reserved.
Parallel Lattice Basis Reduction Using a Multi-threaded Schnorr-Euchner LLL Algorithm
NASA Astrophysics Data System (ADS)
Backes, Werner; Wetzel, Susanne
In this paper, we introduce a new parallel variant of the LLL lattice basis reduction algorithm. Our new, multi-threaded algorithm is the first to provide an efficient, parallel implementation of the Schnorr-Euchner algorithm for today's multi-processor, multi-core computer architectures. Experiments with sparse and dense lattice bases show a speed-up factor of about 1.8 for the 2-thread version and about 3.2 for the 4-thread version of our new parallel lattice basis reduction algorithm, in comparison to the traditional non-parallel algorithm.
NASA Technical Reports Server (NTRS)
Fijany, Amir
1993-01-01
In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur complement is derived as M^(-1) = C - B^* A^(-1) B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^(-1). For the closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix Λ and its inverse Λ^(-1) are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
ERIC Educational Resources Information Center
von Davier, Matthias
2016-01-01
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on machines that they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described. A Portable object-oriented parallel environment for CAD algorithms (ProperCAD) is being developed. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms using a general purpose platform for portable parallel programming called CARM is being developed and a C++ environment that is truly object-oriented and specialized for CAD applications is also being developed); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, a NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data for other applications that were developed are provided: namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
Braun, L; Ghebrehiwet, B; Cossart, P
2000-04-03
InlB is a Listeria monocytogenes protein that promotes entry of the bacterium into mammalian cells by stimulating tyrosine phosphorylation of the adaptor proteins Gab1, Cbl and Shc, and activation of phosphatidyl- inositol (PI) 3-kinase. Using affinity chromatography and enzyme-linked immunosorbent assay, we demonstrate a direct interaction between InlB and the mammalian protein gC1q-R, the receptor of the globular part of the complement component C1q. Soluble C1q or anti-gC1q-R antibodies impair InlB-mediated entry. Transient transfection of GPC16 cells, which are non-permissive to InlB-mediated entry, with a plasmid-expressing human gC1q-R promotes entry of InlB-coated beads. Furthermore, several experiments indicate that membrane recruitment and activation of PI 3-kinase involve an InlB-gC1q-R interaction and that gC1q-R associates with Gab1 upon stimulation of Vero cells with InlB. Thus, gC1q-R constitutes a cellular receptor involved in InlB-mediated activation of PI 3-kinase and tyrosine phosphorylation of the adaptor protein Gab1. After E-cadherin, the receptor for internalin, gC1q-R is the second identified mammalian receptor promoting entry of L. monocytogenes into mammalian cells.
DNA barcode goes two-dimensions: DNA QR code web server.
Liu, Chang; Shi, Linchun; Xu, Xiaolan; Li, Huan; Xing, Hang; Liang, Dong; Jiang, Kun; Pang, Xiaohui; Song, Jingyuan; Chen, Shilin
2012-01-01
The DNA barcoding technology uses a standard region of DNA sequence for species identification and discovery. At present, "DNA barcode" actually refers to DNA sequences, which are not amenable to information storage, recognition, and retrieval. Our aim is to identify the best symbology that can represent DNA barcode sequences in practical applications. A comprehensive set of sequences for five DNA barcode markers ITS2, rbcL, matK, psbA-trnH, and CO1 was used as the test data. Fifty-three different types of one-dimensional and ten two-dimensional barcode symbologies were compared based on different criteria, such as coding capacity, compression efficiency, and error detection ability. The quick response (QR) code was found to have the largest coding capacity and relatively high compression ratio. To facilitate the further usage of QR code-based DNA barcodes, a web server was developed and is accessible at http://qrfordna.dnsalias.org. The web server allows users to retrieve the QR code for a species of interest, convert a DNA sequence to and from a QR code, and perform species identification based on local and global sequence similarities. In summary, the first comprehensive evaluation of various barcode symbologies has been carried out. The QR code has been found to be the most appropriate symbology for DNA barcode sequences. A web server has also been constructed to allow biologists to utilize QR codes in practical DNA barcoding applications.
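As a minimal illustration of the sequence-to-symbol step (not the authors' web server), the sketch below uses the third-party Python qrcode package to turn a barcode marker sequence into a QR image; the sequence string and file name are placeholders:

```python
import qrcode  # third-party package: pip install qrcode[pil]

def dna_to_qr(sequence: str, path: str) -> None:
    """Encode a DNA barcode sequence (e.g., an ITS2 or rbcL fragment) into a QR image."""
    img = qrcode.make(sequence.upper())  # byte-mode payload; capacity is ample for marker-length sequences
    img.save(path)

# Placeholder sequence and file name, for illustration only
dna_to_qr("ATGGCTTCGTACCTGGGATCC", "barcode_qr.png")
```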
An In vitro evaluation of the reliability of QR code denture labeling technique.
Poovannan, Sindhu; Jain, Ashish R; Krishnan, Cakku Jalliah Venkata; Chandran, Chitraa R
2016-01-01
Positive identification of the dead after accidents and disasters through labeled dentures plays a key role in forensic scenarios. A number of denture labeling methods are available, and studies evaluating their reliability under drastic conditions are vital. This study was conducted to evaluate the reliability of QR (Quick Response) Codes labeled at various depths in heat-cured acrylic blocks after acid treatment, heat treatment (burns), and fracture in forensics. It was an in vitro study. This study included 160 specimens of heat-cured acrylic blocks (1.8 cm × 1.8 cm) and these were divided into 4 groups (40 samples per group). QR Codes were incorporated in the samples using clear acrylic sheet and they were assessed for reliability under various depths, acid, heat, and fracture. Data were analyzed using the Chi-square test and test of proportion. The QR Code inclusion technique was reliable under various depths of acrylic sheet, acid (sulfuric acid 99%, hydrochloric acid 40%) and heat (up to 370°C). Results were variable with fracture of QR Code labeled acrylic blocks. Within the limitations of the study, the results clearly indicated that the QR Code technique was reliable under various depths of acrylic sheet, acid, and heat (370°C). Effectiveness varied with fracture and depended on the level of distortion. This study thus suggests that the QR Code is an effective and simpler denture labeling method.
Kucher, Nils; Walpoth, Nazan; Wustmann, Kerstin; Noveanu, Markus; Gertsch, Marc
2003-06-01
To test the hypothesis that Qr in V1 is a predictor of pulmonary embolism, right ventricular strain, and adverse clinical outcome. ECGs from 151 patients with suspected pulmonary embolism were blindly interpreted by two observers. Echocardiography, troponin I, and pro-brain natriuretic peptide levels were obtained in 75 patients with pulmonary embolism. Qr in V1 (14 vs 0 in controls; p<0.0001) and ST elevation in V1 ≥1 mV (15 vs 1 in controls; p=0.0002) were more frequently present in patients with pulmonary embolism. The sensitivity and specificity of Qr in V1 and of T wave inversion in V2 for predicting right ventricular dysfunction were 31%/97% and 45%/94%, respectively. Three of five patients who died in hospital and 11 of 20 patients with a complicated course presented with Qr in V1. After adjustment for right ventricular strain markers, including ECG, echocardiography, pro-brain natriuretic peptide and troponin I levels, Qr in V1 (OR 8.7, 95% CI 1.4-56.7; p=0.02) remained an independent predictor of adverse outcome. Among the ECG signs seen in patients with acute pulmonary embolism, Qr in V1 is closely related to the presence of right ventricular dysfunction and is an independent predictor of adverse clinical outcome.
YE, MENG-FEI; LIU, ZHENG; LOU, SHU-FANG; CHEN, ZHEN-YONG; YU, AI-YUE; LIU, CHUN-YAN; YU, CHAO-YANG; ZHANG, HUA-FANG; ZHANG, JIAN
2015-01-01
Flos albiziae (FA) is reportedly used for treatment of insomnia and anxiety in traditional medicine. The hypnotic effect of an extract of FA (FAE) and its constituent quercetin [2-(3,4-dihydroxyphenyl)-3,5,7-trihydroxy-4H-chromen-4-one, QR] was examined in mice. QR is a widely distributed natural flavonoid abundant in FA flowers and other tissues. The possible mechanisms underlying the hypnotic effects of FAE and QR were investigated using behavioral pharmacology. FAE and QR significantly potentiated pentobarbital-induced [50 mg/kg, intraperitoneal (ip)] sleep (prolonged sleeping time; shortened sleep latency) in a dose-dependent manner, and these effects were augmented by administration of 5-hydroxytryptophan (5-HTP), a precursor of 5-hydroxytryptamine. With a sub-hypnotic dose of pentobarbital (28 mg/kg, ip), FAE and QR significantly increased the rate of sleep onset and were synergistic with 5-HTP (2.5 mg/kg, ip). Pretreatment with p-chlorophenylalanine, an inhibitor of tryptophan hydroxylase, significantly decreased sleeping time and prolonged sleep latency in pentobarbital-treated mice, whereas FAE and QR significantly reversed this effect. Data show that FAE and QR have hypnotic activity, possibly mediated by the serotonergic system. The present study offers a rationale for the use of FA in treating sleep disorders associated with serotonin system dysfunction. PMID:26623026
Multitasking TORT under UNICOS: Parallel performance models and measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, A.; Azmy, Y.Y.
1999-09-27
The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were then compared against measurements from applications of the code to two TORT standard test problems and a large production problem. The models agree well with the measured parallel overhead.
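The abstract does not reproduce the overhead model itself; purely as an illustration of how such a model is used, the toy sketch below predicts speedup from a serial runtime plus generic synchronization and communication terms (the functional form and constants are assumptions, not the TORT model):

```python
def predicted_speedup(t_serial, p, t_sync, t_comm):
    """Toy overhead model (not the TORT model): serial work divides over p processors,
    while synchronization grows with p and communication adds a fixed cost."""
    t_parallel = t_serial / p + t_sync * p + t_comm
    return t_serial / t_parallel

for p in (1, 2, 4, 8, 16):
    print(p, round(predicted_speedup(t_serial=100.0, p=p, t_sync=0.2, t_comm=1.0), 2))
```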
Pre-Service Teachers' Perception of Quick Response (QR) Code Integration in Classroom Activities
ERIC Educational Resources Information Center
Ali, Nagla; Santos, Ieda M.; Areepattamannil, Shaljan
2017-01-01
Quick Response (QR) codes have been discussed in the literature as adding value to teaching and learning. Despite their potential in education, more research is needed to inform practice and advance knowledge in this field. This paper investigated the integration of the QR code in classroom activities and the perceptions of the integration by…
A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes.
van Gennip, Yves; Athavale, Prashant; Gilles, Jérôme; Choksi, Rustum
2015-09-01
QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.
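The abstract does not detail the regularization scheme; as a generic, non-blind stand-in that illustrates frequency-domain deblurring of a module pattern when the kernel is known, a Wiener-style deconvolution can be written in a few lines of NumPy (illustrative only, not the authors' method):

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_power=1e-3):
    """Frequency-domain Wiener deconvolution (non-blind), for illustration only."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)   # regularized inverse filter
    return np.real(np.fft.ifft2(W * G))

# Toy usage: blur a random binary "module" pattern with a 5x5 box kernel, then restore it
rng = np.random.default_rng(1)
img = (rng.random((64, 64)) > 0.5).astype(float)
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
restored = wiener_deblur(blurred, kernel)
print(np.mean(restored.round() == img))   # fraction of modules recovered
```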
Supporting Situated Learning Based on QR Codes with Etiquetar App: A Pilot Study
ERIC Educational Resources Information Center
Camacho, Miguel Olmedo; Pérez-Sanagustín, Mar; Alario-Hoyos, Carlos; Soldani, Xavier; Kloos, Carlos Delgado; Sayago, Sergio
2014-01-01
EtiquetAR is an authoring tool for supporting the design and enactment of situated learning experiences based on QR tags. Practitioners use etiquetAR for creating, managing and personalizing collections of QR codes with special properties: (1) codes can have more than one link pointing at different multimedia resources, (2) codes can be updated…
ERIC Educational Resources Information Center
Yip, Tor; Melling, Louise; Shaw, Kirsty J.
2016-01-01
An online instructional database containing information on commonly used pieces of laboratory equipment was created. In order to make the database highly accessible and to promote its use, QR codes were utilized. The instructional materials were available anytime and accessed using QR codes located on the equipment itself and within undergraduate…
Blending Classroom Teaching and Learning with QR Codes
ERIC Educational Resources Information Center
Rikala, Jenni; Kankaanranta, Marja
2014-01-01
The aim of this case study was to explore the feasibility of the Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The interest was especially to explore how mobile devices and QR codes can enhance and blend teaching and learning. The data were collected with a teacher interview and pupil surveys. The learning…
The Role of Qualitative Research Methods in Discrete Choice Experiments
Vass, Caroline; Rigby, Dan; Payne, Katherine
2017-01-01
Background. The use of qualitative research (QR) methods is recommended as good practice in discrete choice experiments (DCEs). This study investigated the use and reporting of QR to inform the design and/or interpretation of healthcare-related DCEs and explored the perceived usefulness of such methods. Methods. DCEs were identified from a systematic search of the MEDLINE database. Studies were classified by the quantity of QR reported (none, basic, or extensive). Authors (n = 91) of papers reporting the use of QR were invited to complete an online survey eliciting their views about using the methods. Results. A total of 254 healthcare DCEs were included in the review; of these, 111 (44%) did not report using any qualitative methods; 114 (45%) reported “basic” information; and 29 (11%) reported or cited “extensive” use of qualitative methods. Studies reporting the use of qualitative methods used them to select attributes and/or levels (n = 95; 66%) and/or pilot the DCE survey (n = 26; 18%). Popular qualitative methods included focus groups (n = 63; 44%) and interviews (n = 109; 76%). Forty-four studies (31%) reported the analytical approach, with content (n = 10; 7%) and framework analysis (n = 5; 4%) most commonly reported. The survey identified that all responding authors (n = 50; 100%) found that qualitative methods added value to their DCE study, but many (n = 22; 44%) reported that journals were uninterested in the reporting of QR results. Conclusions. Despite recommendations that QR methods be used alongside DCEs, the use of QR methods is not consistently reported. The lack of reporting risks the inference that QR methods are of little use in DCE research, contradicting practitioners’ assessments. Explicit guidelines would enable more clarity and consistency in reporting, and journals should facilitate such reporting via online supplementary materials. PMID:28061040
Quantum-ring spin interference device tuned by quantum point contacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diago-Cisneros, Leo; Mireles, Francisco
2013-11-21
We introduce a spin-interference device that comprises a quantum ring (QR) with three embedded quantum point contacts (QPCs) and study theoretically its spin transport properties in the presence of Rashba spin-orbit interaction. Two of the QPCs form the lead-to-ring junctions while a third one is placed symmetrically in the upper arm of the QR. Using an appropriate scattering model for the QPCs and the S-matrix scattering approach, we analyze the role of the QPCs on the Aharonov-Bohm (AB) and Aharonov-Casher (AC) conductance oscillations of the QR device. Exact formulas are obtained for the spin-resolved conductances of the QR device as a function of the confinement of the QPCs and the AB/AC phases. Conditions for the appearance of resonances and anti-resonances in the spin conductance are derived and discussed. We predict very distinctive variations of the QR conductance oscillations not seen in previous QR proposals. In particular, we find that the interference pattern in the QR can be manipulated to a large extent by varying electrically the lead-to-ring topological parameters. The latter can be used to modulate the AB and AC phases by applying gate voltages only. We also show that the conductance oscillations exhibit a crossover to well-defined resonances as the lateral QPC confinement strength is increased, mapping the eigenenergies of the QR. In addition, unique features of the conductance arise by varying the aperture of the upper-arm QPC and the Rashba spin-orbit coupling. Our results may be of relevance for promising spin-orbitronics devices based on quantum interference mechanisms.
Use of quick response coding to create interactive patient and provider resources.
Bellot, Jennifer; Shaffer, Kathryn; Wang, Mary
2015-04-01
In the more than 20 years since their creation, Quick Response (QR) codes have proliferated tremendously. Little was found in the literature to support the innovative use of QR coding in the classroom or in health care provision. Thus, the authors created a doctoral-level practicum experience using QR coding to create interactive, individualized patient or provider resource guides. Short, descriptive surveys were used before and after implementation of the practicum experience to determine students' comfort level using QR technology, their knowledge base, ease of use, and overall satisfaction with the practicum. Students reported high levels of satisfaction with this exercise, and all agreed that use of QR coding could have important implications in the clinical environment. This practicum experience was a creative, practical, and valuable example of integrating emerging technology into individualized patient care.
[Induction of NAD(P)H: quinone reductase by anticarcinogenic ingredients of tea].
Qi, L; Han, C
1998-09-30
By assaying the activity of NAD(P)H: quinone reductase (QR) in Hep G2 cells exposed to a variety of tea ingredients as inducing agents, we compared their abilities to induce QR and prevent cancer. The results showed that tea polyphenols, tea pigments and mixed tea were all able to induce the activity of QR significantly. The single-component ingredients of tea polyphenols and tea pigments, including thearubigins, EGCG and ECG, also enhanced the activity of QR, but EGC, EC, theaflavins, tea polysaccharide and tea caffeine showed no apparent induction of QR. Among the tea ingredients studied, the multi-component ingredients were more effective than the single-component ones. These results suggest that the antioxidant and cancer-preventive abilities of tea depend on the combined effects of several kinds of active ingredients, mainly tea polyphenols and tea pigments.
ERIC Educational Resources Information Center
Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark
2012-01-01
A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
Use Them ... or Lose Them? The Case for and against Using QR Codes
ERIC Educational Resources Information Center
Cunningham, Chuck; Dull, Cassie
2011-01-01
A quick-response (QR) code is a two-dimensional, black-and-white square barcode that links directly to a URL of one's choice. When the code is scanned with a smartphone, it will automatically redirect the user to the designated URL. QR codes are popping up everywhere--billboards, magazines, posters, shop windows, TVs, computer screens, and more.…
Two-dimensional QR-coded metamaterial absorber
NASA Astrophysics Data System (ADS)
Sui, Sai; Ma, Hua; Wang, Jiafu; Pang, Yongqiang; Zhang, Jieqiu; Qu, Shaobo
2016-01-01
In this paper, the design of metamaterial absorbers is proposed based on QR coding and topology optimization. Such absorbers look like QR codes and can be recognized by decoding software as well as mobile phones. To verify the design, two lightweight wideband absorbers are designed, which can achieve wideband absorption above 90% in 6.68-19.30 and 7.00-19.70 GHz, respectively. More importantly, polarization-independent absorption over 90% can be maintained under incident angles within 55°. The QR code absorber not only can achieve wideband absorption, but also can carry information such as texts and Web sites. These absorbers are of important value in applications such as identification and electromagnetic protection.
β-carboline derivatives and diphenols from soy sauce are in vitro quinone reductase (QR) inducers.
Li, Ying; Zhao, Mouming; Parkin, Kirk L
2011-03-23
A murine hepatoma (Hepa 1c1c7) cellular bioassay was used to guide the isolation of phase II enzyme inducers from fermented soy sauce, using quinone reductase (QR) as a biomarker. A crude ethyl acetate extract, accounting for 8.7% of nonsalt soluble solids of soy sauce, was found to double relative QR specific activity at 25 μg/mL (concentration required to double was defined as a "CD value"). Further silica gel column fractionation yielded 17 fractions, 16 of which exhibited CD values for QR induction of <100 μg/mL. The four most potent fractions were subfractionated by column and preparative thin layer chromatography, leading to the isolation and identification of two phenolic compounds (catechol and daidzein) and two β-carbolines (flazin and perlolyrin), with respective CD values of 8, 35, 42, and 2 μM. Western blots confirmed that the increases in QR activity corresponded to dose-dependent increases in cellular levels of NAD[P]H:quinone oxidoreductase 1 protein by these four QR inducers. To the authors' knowledge, this is the first report on the ability of β-carboline-derived alkaloids to induce phase II enzymes.
Scalable Parallel Density-based Clustering and Applications
NASA Astrophysics Data System (ADS)
Patwary, Mostofa Ali
2014-04-01
Recently, density-based clustering algorithms (DBSCAN and OPTICS) have received significant attention from the scientific community due to their unique capability of discovering arbitrarily shaped clusters and eliminating noise data. These algorithms have several applications that require high performance computing, including finding halos and subhalos (clusters) in massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelization of these algorithms is extremely challenging as they exhibit an inherently sequential data access order and unbalanced workloads, resulting in low parallel efficiency. To break the data access sequentiality and to achieve high parallelism, we develop new parallel algorithms, for both DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups up to 27.5 on 40 cores on a shared memory architecture and speedups up to 5,765 using 8,192 cores on a distributed memory architecture. In our experiments, we found that while achieving this scalability, our algorithms produce clustering results with quality comparable to the classical algorithms.
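The connected-components view mentioned above can be sketched serially with a union-find structure over core points (illustrative only; the authors' implementation is distributed and far more scalable):

```python
import numpy as np
from scipy.spatial import cKDTree

def dbscan_union_find(points, eps, min_pts):
    """DBSCAN cast as connected components over core points, via union-find."""
    tree = cKDTree(points)
    neighbors = tree.query_ball_point(points, r=eps)
    core = np.array([len(nb) >= min_pts for nb in neighbors])

    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)

    for i in np.where(core)[0]:              # merge core points that are eps-neighbors
        for j in neighbors[i]:
            if core[j]:
                union(i, j)

    labels = np.full(len(points), -1)        # -1 marks noise
    roots = {}
    for i in range(len(points)):
        if core[i]:
            labels[i] = roots.setdefault(find(i), len(roots))
    for i in range(len(points)):             # attach border points to a neighboring core cluster
        if not core[i]:
            for j in neighbors[i]:
                if core[j]:
                    labels[i] = labels[find(j)]
                    break
    return labels

pts = np.vstack([np.random.default_rng(2).normal(c, 0.1, (50, 2)) for c in (0.0, 5.0)])
print(np.unique(dbscan_union_find(pts, eps=0.5, min_pts=5)))   # two clusters: labels 0 and 1
```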
[INVITED] Luminescent QR codes for smart labelling and sensing
NASA Astrophysics Data System (ADS)
Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.
2018-05-01
QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules on a white square background that can encode different types of information with high density and robustness, correcting errors and physical damage and thus keeping the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. A remaining challenge is to increase the storage capacity limit, even though QR codes can already store approximately 350 times more information than common barcodes and encode different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating a twofold increase of storage capacity per unit area through colour multiplexing, when compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour separation threshold, where a decision level is calculated through a maximum-likelihood criterion to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables simultaneous QR code colour multiplexing and temperature sensing (reproducibility higher than 93%), opening new fields of application for QR codes as smart labels for sensing.
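As a toy illustration of a maximum-likelihood colour-separation threshold (the abstract does not give the actual decision rule, so the Gaussian model and numbers below are assumptions), the decision level between two emission-colour classes can be computed by equating their likelihoods:

```python
import numpy as np

def ml_threshold(mu0, sigma0, mu1, sigma1):
    """Decision level between two Gaussian colour classes (equal priors) that minimizes
    the classification error -- an illustrative stand-in for a maximum-likelihood
    colour-separation criterion."""
    a = 1 / (2 * sigma1 ** 2) - 1 / (2 * sigma0 ** 2)
    b = mu0 / sigma0 ** 2 - mu1 / sigma1 ** 2
    c = mu1 ** 2 / (2 * sigma1 ** 2) - mu0 ** 2 / (2 * sigma0 ** 2) + np.log(sigma1 / sigma0)
    roots = np.roots([a, b, c])               # solutions of N(x; mu0, s0) = N(x; mu1, s1)
    return roots[(roots > min(mu0, mu1)) & (roots < max(mu0, mu1))][0]

# Hypothetical intensity statistics for the two emission colours
print(ml_threshold(mu0=0.2, sigma0=0.05, mu1=0.7, sigma1=0.08))
```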
Parallel consistent labeling algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, A.; Henderson, T.
Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, the authors give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency, and it is shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
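For readers unfamiliar with arc consistency, the sketch below shows the simpler sequential AC-3 formulation (not the AC-4 algorithm analyzed in the paper, and not its parallel version); each arc revision is the unit of work that the parallel algorithms distribute:

```python
from collections import deque

def ac3(domains, constraints):
    """Sequential AC-3 arc consistency (illustrative; the paper analyzes AC-4 and its
    parallelization).  `domains` maps variable -> set of labels; `constraints` maps
    (x, y) -> predicate(a, b) that is True when label a for x is compatible with b for y."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        revised = False
        for a in set(domains[x]):
            if not any(pred(a, b) for b in domains[y]):
                domains[x].discard(a)          # a has no support in y's domain
                revised = True
        if revised:                            # re-examine arcs pointing into x
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return domains

doms = {'x': {1, 2, 3}, 'y': {1, 2, 3}}
cons = {('x', 'y'): lambda a, b: a < b, ('y', 'x'): lambda a, b: a > b}
print(ac3(doms, cons))   # x loses 3 (no larger y value), y loses 1 (no smaller x value)
```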
Parallel CE/SE Computations via Domain Decomposition
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung
2000-01-01
This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.
Comparison of multihardware parallel implementations for a phase unwrapping algorithm
NASA Astrophysics Data System (ADS)
Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo
2018-04-01
Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, in particular, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a Gauss-Seidel-type iteration. Our algorithm optimizes the same cost function but, unlike the original work, is a parallel Jacobi-class method with alternated minimizations. This strategy is known as the chessboard type: red pixels can be updated in parallel within the same iteration since they are independent, and black pixels can likewise be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different architectures, namely multicore CPUs, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, the parallel algorithm outperforms the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
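A minimal sketch of the chessboard update pattern is shown below for a generic 5-point stencil (the actual cost function of the accumulation-of-residual-maps method is different; the stencil, boundary treatment and step are illustrative assumptions):

```python
import numpy as np

def red_black_sweep(u, rhs):
    """One chessboard (red-black) sweep for a generic 5-point stencil: pixels of one
    colour are mutually independent and can be updated simultaneously, then the other
    colour is updated in the alternate half-iteration (periodic boundaries assumed)."""
    for colour in (0, 1):                                    # 0 = red, 1 = black
        mask = np.indices(u.shape).sum(axis=0) % 2 == colour
        nbrs = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = 0.25 * (nbrs[mask] - rhs[mask])
    return u

u = np.zeros((8, 8))
rhs = np.zeros((8, 8)); rhs[2, 2], rhs[5, 5] = 1.0, -1.0     # zero-mean source term
for _ in range(50):
    u = red_black_sweep(u, rhs)
print(round(float(u[2, 2]), 3))
```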
A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations
NASA Technical Reports Server (NTRS)
Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw
2005-01-01
A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled of Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
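The asynchronous evaluation pattern can be sketched with Python's concurrent.futures: each particle is updated and resubmitted as soon as its own evaluation returns, with no iteration barrier (a simplified sketch with a placeholder objective; parameter values and the standard PSO velocity update are assumptions, not the paper's exact variant):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor, as_completed

def sphere(x):                               # placeholder objective function
    return float(np.sum(x ** 2))

def async_pso(f, dim=4, n_particles=8, n_evals=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.full(n_particles, np.inf)
    gbest, gbest_f = x[0].copy(), np.inf
    done = 0
    with ProcessPoolExecutor() as pool:
        futures = {pool.submit(f, x[i]): i for i in range(n_particles)}
        while futures and done < n_evals:
            for fut in as_completed(list(futures)):
                i = futures.pop(fut)
                fi = fut.result()
                done += 1
                if fi < pbest_f[i]:
                    pbest_f[i], pbest[i] = fi, x[i].copy()
                if fi < gbest_f:
                    gbest_f, gbest = fi, x[i].copy()
                # update and resubmit this particle immediately -- no iteration barrier
                r1, r2 = rng.random(dim), rng.random(dim)
                v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
                x[i] = x[i] + v[i]
                if done < n_evals:
                    futures[pool.submit(f, x[i])] = i
                break                         # re-enter as_completed with the refreshed future set
    return gbest, gbest_f

if __name__ == "__main__":
    print(async_pso(sphere))
```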
Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie
2014-01-01
It is very time consuming to solve fractional differential equations. The computational complexity of solving a two-dimensional fractional differential equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm agrees well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed memory cluster system. We believe that parallel computing will become a basic tool for computationally intensive fractional-calculus applications in the near future.
Mukherjee, Anupam; Pati, Kamalkishore; Liu, Rai-Shung
2009-08-21
We report here a convenient synthesis of tetrabenzo[de,hi,mn,qr]naphthacenes from 1,2-di(phenanthren-4-yl)ethynes through initial Brønsted acid-catalyzed hydroarylation, followed by FeCl3 oxidative coupling reactions. This new method is applicable to tetrabenzo[de,hi,mn,qr]naphthacenes bearing various alkyl substituents.
Chamarthi, Bindu; Ezrokhi, Michael; Rutty, Dean; Cincotta, Anthony H
2016-11-01
Type 2 diabetes mellitus (T2DM) is associated with a substantially increased risk of cardiovascular disease (CVD). Bromocriptine-QR (B-QR), a quick-release sympatholytic dopamine D2 receptor agonist, is an FDA-approved therapy for T2DM which may provide CVD risk reduction. Metformin is considered to be an agent with a potential cardioprotective benefit. This large placebo-controlled clinical study assessed the impact of adding B-QR to existing metformin therapy on CVD outcomes in T2DM subjects. 1791 subjects (1208 B-QR; 583 placebo) on metformin ± another anti-diabetes therapy at baseline, derived from the Cycloset Safety Trial, a 12-month, randomized, multicenter, placebo-controlled, double-blind study in T2DM, were included in this study. The primary CVD endpoint evaluated was treatment impact on the CVD event rate, prespecified as a composite of time to first myocardial infarction, stroke, coronary revascularization, or hospitalization for unstable angina/congestive heart failure. Impact on glycemic control was evaluated as a secondary analysis. The composite CVD endpoint occurred in 16/1208 B-QR treated (1.3%) and 18/583 placebo treated (3.1%) subjects, resulting in a 55% CVD hazard risk reduction (intention-to-treat, Cox regression analysis; HR: 0.45 [0.23-0.88], p = 0.028). Kaplan-Meier curves demonstrated a significantly lower cumulative incidence rate of the CVD endpoint in the B-QR treatment group (log-rank p = 0.017). In subjects with poor glycemic control (HbA1c ≥ 7.5) at baseline, B-QR therapy relative to placebo resulted in a significant mean %HbA1c reduction of -0.59 at week 12 and -0.51 at week 52 (p < 0.001 for both) and a 10-fold higher percentage of subjects achieving the HbA1c goal of ≤7% by week 52 (B-QR 30%, placebo 3%; p = 0.003). These findings suggest that in T2DM subjects on metformin, B-QR therapy may represent an effective strategy for reducing CVD risk. Cycloset Safety Trial registration: ClinicalTrials.gov Identifier: NCT00377676.
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is a system that uses vision algorithms from all levels of processing for a high level application (e.g., object recognition). An IVS normally involves algorithms from low level, intermediate level, and high level vision. Designing parallel architectures for vision systems is of tremendous interest to researchers, and several issues in parallel architectures and parallel algorithms for integrated vision systems are addressed.
Scanning Gate Microscopy on a Quantum Hall Interferometer
NASA Astrophysics Data System (ADS)
Martins, Frederico; Hackens, Benoit; Dutu, Augustin; Bayot, Vincent; Sellier, Hermann; Huant, Serge; Desplanque, Ludovic; Wallart, Xavier; Pala, Marco
2010-03-01
We perform scanning gate microscopy (SGM) experiments [1] at very low temperature (down to 100 mK) in the Quantum Hall regime on a mesoscopic quantum ring (QR) patterned in an InGaAs/InAlAs heterostructure. Close to integer filling factors ν=6, 8 and 10, the magnetoresistance of the QR is decorated with fast periodic oscillations, with a magnetic field period close to AB/ν, where AB is the Aharonov-Bohm period. We analyze the data in terms of electron tunneling between edge states trapped inside the QR and those transmitted through the QR openings [2]. SGM images reveal that the tip-induced perturbation of the electron confining potential gives rise to a rich pattern of narrow and wide concentric conductance fringes in the vicinity of the QR. [1] F. Martins et al. Phys. Rev. Lett. 99 136807 (2007); B. Hackens et al. Nat. Phys. 2 826 (2006). [2] B. Rosenow and B. I. Halperin, Phys. Rev. Lett. 98, 106801 (2007).
Hirose, K; Kawasaki, Y; Kotani, K; Abiko, K; Sato, H
2004-05-01
Quinolone-resistant (QR) mutants of Mycoplasma bovirhinis strain PG43 (type strain) were generated by stepwise selection in increasing concentrations of enrofloxacin (ENR). An alteration was found in the quinolone resistance-determining region (QRDR) of the parC gene coding for the ParC subunit of topoisomerase IV from these mutants, but not in the gyrA, gyrB, and parE genes coding for the GyrA and GyrB subunits of DNA gyrase and the ParE subunit of topoisomerase IV. Similarly, such an alteration in the QRDR of parC was found in field isolates of M. bovirhinis, which possessed various levels of QR. The substitution of leucine (Leu) by serine (Ser) at position 80 of the QRDR of ParC was observed in both QR mutants and QR isolates. This is the first report of QR based on a point mutation of the parC gene in M. bovirhinis.
Diffusion Forecasting Model with Basis Functions from QR-Decomposition
NASA Astrophysics Data System (ADS)
Harlim, John; Yang, Haizhao
2018-06-01
Diffusion forecasting is a nonparametric approach that provably solves the Fokker-Planck PDE corresponding to Itô diffusion without knowing the underlying equation. The key idea of this method is to approximate the solution of the Fokker-Planck equation with a discrete representation of the shift (Koopman) operator on a set of basis functions generated via the diffusion maps algorithm. While the choice of these basis functions is provably optimal under appropriate conditions, computing them is quite expensive since it requires the eigendecomposition of an N × N diffusion matrix, where N denotes the data size and can be very large. For large-scale forecasting problems, only a few leading eigenvectors are computationally achievable. To overcome this computational bottleneck, a new set of basis functions is proposed, constructed by orthonormalizing selected columns of the diffusion matrix together with its leading eigenvectors. This computation can be carried out efficiently via the unpivoted Householder QR factorization. The efficiency and effectiveness of the proposed algorithm are shown in both deterministically chaotic and stochastic dynamical systems; in the former case, the superiority of the proposed basis functions over eigenvectors alone is significant, while in the latter case the forecasting accuracy is improved relative to using only a small number of eigenvectors. Supporting arguments are provided on three- and six-dimensional chaotic ODEs, a three-dimensional SDE that mimics turbulent systems, and the two spatial modes associated with the boreal winter Madden-Julian Oscillation obtained by applying Nonlinear Laplacian Spectral Analysis to the measured Outgoing Longwave Radiation.
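A compact sketch of the basis construction is shown below: a few leading eigenvectors are augmented with selected columns of the diffusion matrix and orthonormalized with NumPy's Householder-based QR (the column-selection rule here is random and purely illustrative):

```python
import numpy as np

def qr_basis(diffusion_matrix, leading_eigvecs, n_cols, seed=0):
    """Orthonormal basis from a few leading eigenvectors plus selected columns of the
    diffusion matrix, via unpivoted Householder QR (np.linalg.qr).  Column selection
    here is random and purely illustrative."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(diffusion_matrix.shape[1], size=n_cols, replace=False)
    block = np.hstack([leading_eigvecs, diffusion_matrix[:, idx]])
    Q, _ = np.linalg.qr(block, mode='reduced')      # Householder-based QR in LAPACK
    return Q

# Toy usage with a row-stochastic stand-in for the diffusion matrix
K = np.random.default_rng(0).random((200, 200))
K /= K.sum(axis=1, keepdims=True)
vals, vecs = np.linalg.eig(K)
lead = np.real(vecs[:, np.argsort(-np.real(vals))[:5]])
Q = qr_basis(K, lead, n_cols=20)
print(Q.shape, np.allclose(Q.T @ Q, np.eye(Q.shape[1])))
```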
Efficient Parallel Kernel Solvers for Computational Fluid Dynamics Applications
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1997-01-01
Distributed-memory parallel computers dominate today's parallel computing arena. These machines, such as the Intel Paragon, IBM SP2, and Cray Origin 2000, have successfully delivered high performance computing power for solving some of the so-called "grand-challenge" problems. Despite initial success, parallel machines have not been widely accepted in production engineering environments due to the complexity of parallel programming. On a parallel computing system, a task has to be partitioned and distributed appropriately among processors to reduce communication cost and to attain load balance. More importantly, even with careful partitioning and mapping, the performance of an algorithm may still be unsatisfactory, since conventional sequential algorithms may be serial in nature and may not be implemented efficiently on parallel machines. In many cases, new algorithms have to be introduced to increase parallel performance. In order to achieve optimal performance, in addition to partitioning and mapping, a careful performance study should be conducted for a given application to find a good algorithm-machine combination. This process, however, is usually painful and elusive. The goal of this project is to design and develop efficient parallel algorithms for highly accurate Computational Fluid Dynamics (CFD) simulations and other engineering applications. The work plan is (1) to develop highly accurate parallel numerical algorithms, (2) to conduct preliminary testing to verify the effectiveness and potential of these algorithms, and (3) to incorporate newly developed algorithms into actual simulation packages. The work plan has been well achieved. Two highly accurate, efficient Poisson solvers have been developed and tested based on two different approaches: (1) adopting a mathematical geometry which has a better capacity to describe the fluid, and (2) using a compact scheme to gain high-order accuracy in the numerical discretization. The previously developed Parallel Diagonal Dominant (PDD) algorithm and Reduced Parallel Diagonal Dominant (RPDD) algorithm have been carefully studied on different parallel platforms for different applications, and a NASA simulation code developed by Man M. Rai and his colleagues has been parallelized and implemented based on data dependency analysis. These achievements are addressed in detail in the paper.
Seo, Ji Yeon; Lim, Soon Sung; Park, Jia; Lim, Ji-Sun; Kim, Hyo Jung; Kang, Hui Jung; Yoon Park, Jung Han
2010-01-01
Our previous study demonstrated that methanolic extract of Chrysanthemum zawadskii Herbich var. latilobum Kitamura (Compositae) has the potential to induce detoxifying enzymes such as NAD(P)H:(quinone acceptor) oxidoreductase 1 (EC 1.6.99.2) (NQO1, QR) and glutathione S-transferase (GST). In this study we further fractionated the methanolic extract of Chrysanthemum zawadskii and investigated the detoxifying enzyme-inducing potential of each fraction. The fraction (CZ-6) showing the highest QR-inducing activity was found to contain (+)-(3S,4S,5R,8S)-(E)-8-acetoxy-4-hydroxy-3-isovaleroyloxy-2-(hexa-2,4-diynyliden)-1,6-dioxaspiro [4,5] decane and increased QR enzyme activity in a dose-dependent manner. Furthermore, the CZ-6 fraction caused a dose-dependent enhancement of luciferase activity in HepG2-C8 cells generated by stably transfecting an antioxidant response element-luciferase gene construct, suggesting that it induces antioxidant/detoxifying enzymes through antioxidant response element (ARE)-mediated transcriptional activation of the relevant genes. Although the CZ-6 fraction failed to induce hepatic QR in mice over the control, it restored QR activity suppressed by CCl4 treatment to the control level. Hepatic injury induced by CCl4 was also slightly protected against by pretreatment with CZ-6. In conclusion, although CZ-6 fractionated from the methanolic extract of Chrysanthemum zawadskii did not cause a significant QR induction in mouse organs such as liver, kidney, and stomach, it showed a protective effect against liver damage caused by CCl4. PMID:20461196
Modeling energy expenditure in children and adolescents using quantile regression
Yang, Yunwen; Adolph, Anne L.; Puyau, Maurice R.; Vohra, Firoz A.; Zakeri, Issa F.
2013-01-01
Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in energy expenditure (EE). Study objective is to apply quantile regression (QR) to predict EE and determine quantile-dependent variation in covariate effects in nonobese and obese children. First, QR models will be developed to predict minute-by-minute awake EE at different quantile levels based on heart rate (HR) and physical activity (PA) accelerometry counts, and child characteristics of age, sex, weight, and height. Second, the QR models will be used to evaluate the covariate effects of weight, PA, and HR across the conditional EE distribution. QR and ordinary least squares (OLS) regressions are estimated in 109 children, aged 5–18 yr. QR modeling of EE outperformed OLS regression for both nonobese and obese populations. Average prediction errors for QR compared with OLS were not only smaller at the median τ = 0.5 (18.6 vs. 21.4%), but also substantially smaller at the tails of the distribution (10.2 vs. 39.2% at τ = 0.1 and 8.7 vs. 19.8% at τ = 0.9). Covariate effects of weight, PA, and HR on EE for the nonobese and obese children differed across quantiles (P < 0.05). The associations (linear and quadratic) between PA and HR with EE were stronger for the obese than nonobese population (P < 0.05). In conclusion, QR provided more accurate predictions of EE compared with conventional OLS regression, especially at the tails of the distribution, and revealed substantially different covariate effects of weight, PA, and HR on EE in nonobese and obese children. PMID:23640591
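As an illustration of the modeling approach (with synthetic data and illustrative variable names, not the study's dataset), quantile regression fits at several quantile levels can be compared against an ordinary least squares fit using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for minute-by-minute data: EE predicted from heart rate (HR) and
# activity counts (PA); variable names and coefficients are illustrative only.
rng = np.random.default_rng(0)
n = 500
HR = rng.uniform(60, 180, n)
PA = rng.uniform(0, 4000, n)
EE = 0.02 * HR + 0.0008 * PA + rng.normal(0.0, 0.5 + 0.002 * PA)   # heteroscedastic noise

X = sm.add_constant(np.column_stack([HR, PA]))
for tau in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(EE, X).fit(q=tau)          # conditional quantile fit at level tau
    print(f"tau={tau}: {np.round(fit.params, 4)}")

print("OLS:", np.round(sm.OLS(EE, X).fit().params, 4))   # conditional-mean fit for comparison
```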
NASA Astrophysics Data System (ADS)
Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh
2015-07-01
This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, when it comes to 3D data migration, the resource requirements of the algorithm grow as the data size increases. This challenges its practical implementation even on current-generation high performance computing systems, so a smart parallelization approach is essential to handle 3D data for migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements such as memory/storage and I/O. In the current research work, we target this area and develop an efficient parallel algorithm for poststack and prestack 3D Kirchhoff depth migration using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations for depth migrating data in a parallel imaging space, using optimized traveltime table computations. This concept gives the algorithm flexibility by migrating data in a number of depth iterations that depends upon the available node memory and the size of the data to be migrated at runtime. Furthermore, it minimizes the requirements for storage, I/O and inter-node communication, making it advantageous over conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM series supercomputer. Optimization, performance and scalability experiment results, along with the migration outcome, show the effectiveness of the parallel algorithm.
Parallel Algorithms for Least Squares and Related Computations.
1991-03-22
... for dense computations in linear algebra. The work has recently been published in a general reference book on parallel algorithms by SIAM. AFOSR ... written his Ph.D. dissertation with the principal investigator. (See publication 6.) • Parallel Algorithms for Dense Linear Algebra Computations. Our ... and describe and to put into perspective a selection of the more important parallel algorithms for numerical linear algebra. We give a major new ...
Genetic algorithms using SISAL parallel programming language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tejada, S.
1994-05-06
Genetic algorithms are a mathematical optimization technique developed by John Holland at the University of Michigan [1]. The SISAL programming language possesses many of the characteristics desired to implement genetic algorithms. SISAL is a deterministic, functional programming language which is inherently parallel. Because SISAL is functional and based on mathematical concepts, genetic algorithms can be efficiently translated into the language. Several of the steps involved in genetic algorithms, such as mutation, crossover, and fitness evaluation, can be parallelized using SISAL. In this paper I discuss the implementation and performance of parallel genetic algorithms in SISAL.
On the suitability of the connection machine for direct particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonard
1990-01-01
The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method is examined and reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel form, but several of the vectorizable algorithms have no direct data parallel equivalent. This requires the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation, and a master/slave algorithm is developed to minimize communication cost in large table look-ups. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure is provided of the performance of the Connection Machine for direct particle simulation. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms specifically of use for direct particle simulation but which also expand the data parallel diction.
Avoiding communication in the Lanczos bidiagonalization routine and associated Least Squares QR solver
Carson, Erin
2015-04-12
... throughout scientific codes, are often the bottlenecks in application performance due to a low computation/communication ratio. In this paper we develop ...
Optical information encryption based on incoherent superposition with the help of the QR code
NASA Astrophysics Data System (ADS)
Qin, Yi; Gong, Qiong
2014-01-01
In this paper, a novel optical information encryption approach is proposed with the help of the QR code. The method is based on the concept of incoherent superposition, which we introduce for the first time. The information to be encrypted is first transformed into the corresponding QR code, and the QR code is then encrypted analytically into two phase-only masks by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over the previous interference-based method, such as a higher security level, better robustness against noise attack, and a more relaxed working condition. Numerical simulation results and results collected with an actual smartphone are shown to validate our proposal.
Conda-Sheridan, Martin; Marler, Laura; Park, Eun-Jung; Kondratyuk, Tamara P.; Jermihov, Katherine; Mesecar, Andrew D.; Pezzuto, John M.; Asolkar, Ratnakar N.; Fenical, William; Cushman, Mark
2010-01-01
The isolation of 2-bromo-1-hydroxyphenazine from a marine Streptomyces sp., strain CNS284, and its activity against NFκB, suggested that a short and flexible route for the synthesis of this metabolite and a variety of phenazine analogues be developed. Numerous phenazines were subsequently prepared and evaluated as inducers of quinone reductase 1 (QR1) and inhibitors of quinone reductase 2 (QR2), NF-κB, and inducible nitric oxide synthase (iNOS). Several of the active phenazine derivatives displayed IC50 values vs. QR1 induction and QR2 inhibition in the nanomolar range, suggesting they may find utility as cancer chemopreventive agents. PMID:21105712
Runtime support for parallelizing data mining algorithms
NASA Astrophysics Data System (ADS)
Jin, Ruoming; Agrawal, Gagan
2002-03-01
With recent technological advances, shared memory parallel machines have become more scalable, and offer large main memories and high bus bandwidths. They are emerging as good platforms for data warehousing and data mining. In this paper, we focus on shared memory parallelization of data mining algorithms. We have developed a series of techniques for parallelization of data mining algorithms, including full replication, full locking, fixed locking, optimized full locking, and cache-sensitive locking. Unlike previous work on shared memory parallelization of specific data mining algorithms, all of our techniques apply to a large number of common data mining algorithms. In addition, we propose a reduction-object based interface for specifying a data mining algorithm. We show how our runtime system can apply any of the techniques we have developed, starting from a common specification of the algorithm.
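As a rough illustration of one of the techniques named above, the sketch below implements full replication for a frequency-counting reduction using Python threads: each thread scans its chunk with a private reduction object, and the copies are merged in a final combination phase. The interface and data are assumptions for illustration, not the authors' runtime system.

```python
# Minimal sketch of the "full replication" technique for a reduction-object
# style interface (frequency counting, as in association-rule mining).
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def local_reduction(chunk):
    # Each thread updates a private copy of the reduction object,
    # so no locking is needed while scanning its data chunk.
    counts = Counter()
    for transaction in chunk:
        for item in transaction:
            counts[item] += 1
    return counts

def parallel_item_counts(transactions, n_threads=4):
    chunks = [transactions[i::n_threads] for i in range(n_threads)]
    with ThreadPoolExecutor(max_workers=n_threads) as ex:
        partials = list(ex.map(local_reduction, chunks))
    # Global combination phase: merge the replicated reduction objects.
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

print(parallel_item_counts([["a", "b"], ["b", "c"], ["a", "c"], ["a"]]))
```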
NASA Astrophysics Data System (ADS)
Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.
2015-12-01
We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms differ from most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption functions to terminate simulation model runs early, prior to completely simulating the model calibration period for example, when intermediate results indicate the candidate solution is so poor that it will definitely have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems from multi-core desktops to a supercomputer system) and package these for future modellers within a model-independent calibration software package called Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
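A minimal sketch of the combination described above (asynchronous candidate generation plus deterministic model pre-emption) is given below, assuming a toy objective whose cost accumulates monotonically over simulation steps; it is not the Ostrich or DDS code, and all names and parameters are illustrative.

```python
# Hedged sketch: asynchronous parallel search with deterministic pre-emption.
import random
from concurrent.futures import ProcessPoolExecutor, FIRST_COMPLETED, wait

def preemptable_cost(x, best_so_far):
    # Toy "simulation": cost accumulates over time steps with non-negative
    # increments, so a run can be abandoned once it exceeds the incumbent.
    cost = 0.0
    for _ in range(100):
        cost += (x - 2.0) ** 2 / 100.0
        if cost > best_so_far:
            return float("inf")                 # deterministic pre-emption
    return cost

def perturb(x, r=0.2):
    return x + random.gauss(0.0, r)             # DDS-like neighbourhood sample

def async_search(n_workers=4, budget=200):
    best_x, best_f = 0.0, float("inf")
    evaluated = 0
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        future_to_x = {}
        for _ in range(n_workers):
            x = perturb(best_x)
            future_to_x[ex.submit(preemptable_cost, x, best_f)] = x
        while future_to_x:
            done, _ = wait(future_to_x, return_when=FIRST_COMPLETED)
            for fut in done:
                x = future_to_x.pop(fut)
                f = fut.result()
                evaluated += 1
                if f < best_f:
                    best_x, best_f = x, f
                # Asynchronous: refill immediately, with no generation barrier.
                if evaluated + len(future_to_x) < budget:
                    x_new = perturb(best_x)
                    future_to_x[ex.submit(preemptable_cost, x_new, best_f)] = x_new
    return best_x, best_f

if __name__ == "__main__":
    print(async_search())
```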
Parallel transformation of K-SVD solar image denoising algorithm
NASA Astrophysics Data System (ADS)
Liang, Youwen; Tian, Yu; Li, Mei
2017-02-01
The images obtained by observing the Sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise. Training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and to the complexity of the training algorithms. In this paper, OpenMP parallel programming is used to transform the serial algorithm into a parallel version. A data parallelism model is used to transform the algorithm; the biggest change is that multiple atoms, rather than one, are updated simultaneously. The denoising effect and acceleration performance are tested after completion of the parallel algorithm. The speedup of the program is 13.563 when 16 cores are used. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time, and is easily ported to multi-core platforms.
A parallel simulated annealing algorithm for standard cell placement on a hypercube computer
NASA Technical Reports Server (NTRS)
Jones, Mark Howard
1987-01-01
A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fijany, A.; Milman, M.; Redding, D.
1994-12-31
In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem, since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
Review of An Introduction to Parallel and Vector Scientific Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.; Lefton, Lew
2006-06-30
On one hand, the field of high-performance scientific computing is thriving beyond measure. Performance of leading-edge systems on scientific calculations, as measured say by the Top500 list, has increased by an astounding factor of 8000 during the 15-year period from 1993 to 2008, which is slightly faster even than Moore's Law. Even more importantly, remarkable advances in numerical algorithms, numerical libraries and parallel programming environments have led to improvements in the scope of what can be computed that are entirely on a par with the advances in computing hardware. And these successes have spread far beyond the confines of large government-operated laboratories: many universities, modest-sized research institutes and private firms now operate clusters that differ only in scale from the behemoth systems at the large-scale facilities. In the wake of these recent successes, researchers from fields that heretofore have not been part of the scientific computing world have been drawn into the arena. For example, at the recent SC07 conference, the exhibit hall, which long has hosted displays from leading computer systems vendors and government laboratories, featured some 70 exhibitors who had not previously participated. In spite of all these exciting developments, and in spite of the clear need to present these concepts to a much broader technical audience, there is a perplexing dearth of training material and textbooks in the field, particularly at the introductory level. Only a handful of universities offer coursework in the specific area of highly parallel scientific computing, and instructors of such courses typically rely on custom-assembled material. For example, the present reviewer and Robert F. Lucas relied on materials assembled in a somewhat ad-hoc fashion from colleagues and personal resources when presenting a course on parallel scientific computing at the University of California, Berkeley, a few years ago. Thus it is indeed refreshing to see the publication of the book An Introduction to Parallel and Vector Scientific Computing, written by Ronald W. Shonkwiler and Lew Lefton, both of the Georgia Institute of Technology. They have taken the bull by the horns and produced a book that appears to be entirely satisfactory as an introductory textbook for use in such a course. It is also of interest to the much broader community of researchers who are already in the field, laboring day by day to improve the power and performance of their numerical simulations. The book is organized into 11 chapters, plus an appendix. The first three chapters describe the basics of system architecture including vector, parallel and distributed memory systems, the details of task dependence and synchronization, and the various programming models currently in use - threads, MPI and OpenMP. Chapters four through nine provide a competent introduction to floating-point arithmetic, numerical error and numerical linear algebra. Some of the topics presented include Gaussian elimination, LU decomposition, tridiagonal systems, Givens rotations, QR decompositions, Gauss-Seidel iterations and Householder transformations. Chapters 10 and 11 introduce Monte Carlo methods and schemes for discrete optimization such as genetic algorithms.
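As a small taste of the numerical linear algebra material mentioned in the review, the following is an illustrative Householder QR factorization in NumPy (a textbook construction under the usual conventions, not code from the book).

```python
# Illustrative Householder QR: apply reflectors to reduce A to upper-triangular R
# while accumulating the orthogonal factor Q, so that A = Q @ R.
import numpy as np

def householder_qr(A):
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        v = x.copy()
        sign = 1.0 if x[0] >= 0 else -1.0
        v[0] += sign * np.linalg.norm(x)
        norm_v = np.linalg.norm(v)
        if norm_v == 0:
            continue
        v /= norm_v
        # Apply the reflector H = I - 2 v v^T to the trailing submatrix and to Q.
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

A = np.random.rand(5, 3)
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))
```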
Elbarbry, Fawzy; Ung, Aimy; Abdelkawy, Khaled
2018-01-01
Quercetin (QR) and thymoquinone (TQ) are herbal remedies that are currently extensively used by the general population to prevent and treat various chronic conditions. Therefore, investigating the potential of pharmacokinetic interactions caused by the concomitant use of these herbal remedies and conventional medicine is warranted to ensure patient safety. This study was conducted to determine the inhibitory effect of QR and TQ, two commonly used remedies, on the activities of selected cytochrome P450 (CYP) enzymes that play an important role in drug metabolism and/or toxicology. The in vitro studies were conducted using fluorescence-based high-throughput assays with human cDNA baculovirus-expressed CYP enzymes. For measuring CYP2E1 activity, a validated high-performance liquid chromatography (HPLC) assay was utilized to measure the formation of 6-hydroxychlorzoxazone. The half-maximum inhibitory concentration values obtained with known positive control inhibitors in this study were comparable to published values, indicating accurate experimental techniques. Although QR did not show any significant effect on CYP1A2 and CYP2E1, it exhibited a strong inhibitory effect against CYP2D6 and a moderate effect against CYP2C19 and CYP3A4. On the other hand, TQ demonstrated a strong and a moderate inhibitory effect against CYP3A4 and CYP2C19, respectively. The findings of this study may indicate that consumption of QR or TQ, in the form of food or dietary supplements, with drugs that are metabolized by CYP2C19, CYP2D6, or CYP3A4 may cause significant herb-drug interactions. In summary: neither QR nor TQ has any significant inhibitory effect on the activity of the CYP1A2 or CYP2E1 enzymes; both QR and TQ have a moderate to strong inhibitory effect on CYP3A4 activity; QR has a moderate inhibitory effect on CYP2C19 and a strong inhibitory effect on CYP2D6; and both QR and TQ are moderate inhibitors of CYP2C9 activity. Abbreviations used: ABT: Aminobenztriazole, BZF: 7,8 Benzoflavone, CYP: Cytochrome P450, GB: Gingko Biloba, IC50: Half-maximum inhibitory concentration, KTZ: Ketoconazole, QND: Quinidine, QR: Quercetin, TCP: Tranylcypromine, TQ: Thymoquinone.
2012-01-01
Background Public reporting of hospital quality is intended to enable providers, patients and the public to make comparisons regarding the quality of care and thus contribute to informed decisions. It stimulates quality improvement activities in hospitals and thus positively impacts treatment results. Hospitals often use publicly reported data for further internal or external purposes. As of 2005, German hospitals are obliged to publish structured quality reports (QR) every two years. This gives them the opportunity to demonstrate their performance by number, type and quality in a transparent way. However, it constitutes a major burden for hospitals to generate and publish the required data, and it is yet unknown whether hospitals feel adequately represented and at the same time consider the effort appropriate. This study assesses hospital leaders' judgement about the capability of QR to put legally defined aims effectively and efficiently into practice. It also explores the additional purposes hospitals use their QR for. Methods In a cross-sectional observational study, a representative random sample out of 2,064 German hospitals (N=748) was invited to assess QR via questionnaire; 333 hospitals participated. We recorded the suitability of QR for representing number, type and quality of services, the adequacy of costs and benefits (6-level Likert scales) and additional purposes QR are used for (free-text question). For representation purposes, the net sample was weighted for hospital size and hospital ownership (direct standardization). Data were analyzed descriptively, using inferential statistics (chi-squared test), or for the purpose of generating hypotheses. Results German hospitals rated the QR as suitable for representing the number of services but less so for the type and quality of services. The cost-benefit ratio was seen as inadequate. There were no significant differences between hospitals of different size or ownership. Public hospitals additionally used their reports for mostly internal purposes (e.g. comparison with competitors, quality management) whereas private ones used them externally (e.g. communication, marketing) (p=0.024, chi-squared test, hypothesis-generating level). Conclusions German hospitals consider the mandatory QR as only partially capable of putting the legally defined aims effectively and efficiently into practice. In order for public reporting to achieve its potentially positive effects, the QR must be more closely aligned to the needs of hospitals. PMID:23114403
Parallel Algorithms for Groebner-Basis Reduction
1987-09-25
Technical report from the Productivity Engineering in the UNIX Environment project: Parallel Algorithms for Groebner-Basis Reduction.
Parallel and fault-tolerant algorithms for hypercube multiprocessors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aykanat, C.
1988-01-01
Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multiprocessor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube-connected message-passing multiprocessor. Significant performance improvement is achieved by using these techniques. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that the hypercube topology is scalable for this class of FE problems. The SCG algorithm is also shown to be suitable for vectorization, and near-supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.
Hybrid massively parallel fast sweeping method for static Hamilton-Jacobi equations
NASA Astrophysics Data System (ADS)
Detrixhe, Miles; Gibou, Frédéric
2016-10-01
The fast sweeping method is a popular algorithm for solving a variety of static Hamilton-Jacobi equations. Fast sweeping algorithms for parallel computing have been developed, but are severely limited. In this work, we present a multilevel, hybrid parallel algorithm that combines the desirable traits of two distinct parallel methods. The fine and coarse grained components of the algorithm take advantage of heterogeneous computer architecture common in high performance computing facilities. We present the algorithm and demonstrate its effectiveness on a set of example problems including optimal control, dynamic games, and seismic wave propagation. We give results for convergence, parallel scaling, and show state-of-the-art speedup values for the fast sweeping method.
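For reference, a serial sketch of the underlying fast sweeping iteration for the 2D eikonal equation |∇u| = 1 is shown below; the grid, source location and boundary handling are illustrative choices, and the hybrid scheme above parallelizes the updates within each of the four sweep orderings.

```python
# Serial reference sketch of the fast sweeping method for |grad u| = 1
# (distance to a point source) on a uniform grid.
import numpy as np

def fast_sweep_eikonal(n=101, source=(50, 50), h=1.0, n_iter=8):
    u = np.full((n, n), 1e10)
    u[source] = 0.0
    orders = [(1, 1), (-1, 1), (1, -1), (-1, -1)]      # four sweep directions
    for _ in range(n_iter):
        for di, dj in orders:
            for i in range(n)[::di]:
                for j in range(n)[::dj]:
                    # Clamped (one-sided) neighbours at the boundary.
                    a = min(u[max(i - 1, 0), j], u[min(i + 1, n - 1), j])
                    b = min(u[i, max(j - 1, 0)], u[i, min(j + 1, n - 1)])
                    if abs(a - b) >= h:
                        ubar = min(a, b) + h
                    else:                               # solve the upwind quadratic
                        ubar = 0.5 * (a + b + np.sqrt(2 * h * h - (a - b) ** 2))
                    u[i, j] = min(u[i, j], ubar)
            u[source] = 0.0                             # keep the source fixed
    return u

u = fast_sweep_eikonal()
print(u[50, 0])   # approximately 50, the grid distance from the source to the boundary
```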
Location Based Service in Indoor Environment Using Quick Response Code Technology
NASA Astrophysics Data System (ADS)
Hakimpour, F.; Zare Zardiny, A.
2014-10-01
Today, with the extensive use of intelligent mobile phones, larger screens, and the enrichment of mobile phones with Global Positioning System (GPS) technology, location based services are used by the public more than ever. Based on the position of users, they can receive the desired information from different LBS providers. Any LBS system generally includes five main parts: mobile devices, a communication network, a positioning system, a service provider and a data provider. Many advances have been made in each of these parts; however, user positioning, especially in indoor environments, remains an essential and critical issue in LBS. It is well known that GPS performs too poorly inside buildings to provide usable indoor positioning. On the other hand, current indoor positioning technologies such as RFID or WiFi networks need different hardware and software infrastructures. In this paper, we propose a new method to overcome these challenges using Quick Response (QR) Code technology. A QR Code is a 2D encoded barcode with a matrix structure consisting of black modules arranged in a square grid. Scanning and retrieving data from a QR Code is possible with different camera-enabled mobile phones simply by installing barcode reader software. This paper reviews the capabilities of QR Code technology and then discusses the advantages of using QR Codes in an indoor LBS (ILBS) system in comparison to other technologies. Finally, some prospects of using QR Codes are illustrated through the implementation of a scenario. The most important advantages of using this technology in ILBS are easy implementation, lower cost, quick data retrieval, the possibility of printing the QR Code on different products, and no need for complicated hardware and software infrastructures.
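A minimal sketch of producing a printable indoor location tag is shown below, assuming the third-party Python package qrcode (with Pillow) is installed; the JSON payload format is an illustrative assumption rather than the paper's encoding.

```python
# Sketch: encode a location payload into a QR Code image for printing.
import json
import qrcode

tag = {"building": "B1", "floor": 2, "room": "207", "x": 12.4, "y": 3.8}
img = qrcode.make(json.dumps(tag))        # encode the hypothetical location payload
img.save("room_207_tag.png")              # print and mount the tag at that location
```

A camera-enabled phone can then scan the tag, decode the payload, and pass the position to the ILBS service, with no positioning hardware beyond the printed tag itself.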
NASA Astrophysics Data System (ADS)
Teal, Paul D.; Eccles, Craig
2015-04-01
The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
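The sketch below shows the flavor of a truncated rank-revealing (column-pivoted) QR solve on a toy exponential-kernel system; the kernel, tolerance and basic-solution truncation are illustrative assumptions, not the paper's adaptive scheme.

```python
# Hedged sketch: truncated column-pivoted QR solve of K f ≈ g on a toy
# ill-conditioned exponential kernel (not actual NMR data).
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(0)
t = np.linspace(0.01, 1.0, 200)[:, None]
T2 = np.linspace(0.01, 1.0, 100)[None, :]
K = np.exp(-t / T2)                                  # severely ill-conditioned kernel
f_true = np.exp(-((np.linspace(0.01, 1, 100) - 0.4) ** 2) / 0.01)
g = K @ f_true + 1e-4 * rng.standard_normal(200)

Q, R, piv = qr(K, mode="economic", pivoting=True)
# Truncate where the pivoted diagonal of R has decayed below a tolerance,
# then form the basic (subset) least-squares solution.
k = int(np.sum(np.abs(np.diag(R)) > 1e-8 * np.abs(R[0, 0])))
y = solve_triangular(R[:k, :k], Q[:, :k].T @ g)
f = np.zeros(K.shape[1])
f[piv[:k]] = y                                       # undo the column permutation
print(k, np.linalg.norm(K @ f - g))
```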
Snyder, Matthew J; Nguyen, Dana R; Womack, Jasmyne J; Bunt, Christopher W; Westerfield, Katie L; Bell, Adriane E; Ledford, Christy J W
2018-03-01
Collection of feedback regarding medical student clinical experiences for formative or summative purposes remains a challenge across clinical settings. The purpose of this study was to determine whether the use of a quick response (QR) code-linked online feedback form improves the frequency and efficiency of rater feedback. In 2016, we compared paper-based feedback forms, an online feedback form, and a QR code-linked online feedback form at 15 family medicine clerkship sites across the United States. Outcome measures included usability, number of feedback submissions per student, number of unique raters providing feedback, and timeliness of feedback provided to the clerkship director. The feedback method was significantly associated with usability, with QR code scoring the highest, and paper second. Accessing feedback via QR code was associated with the shortest time to prepare feedback. Across four rotations, separate repeated measures analyses of variance showed no effect of feedback system on the number of submissions per student or the number of unique raters. The results of this study demonstrate that preceptors in the family medicine clerkship rate QR code-linked feedback as a high usability platform. Additionally, this platform resulted in faster form completion than paper or online forms. An overarching finding of this study is that feedback forms must be portable and easily accessible. Potential implementation barriers and the social norm for providing feedback in this manner need to be considered.
The feasibility of QR-code prescription in Taiwan.
Lin, C-H; Tsai, F-Y; Tsai, W-L; Wen, H-W; Hu, M-L
2012-12-01
An ideal health care service is a service system that focuses on patients. Patients in Taiwan have the freedom to fill their prescriptions at any pharmacy contracted with National Health Insurance. Each of these pharmacies uses its own computer system; so far, there are at least ten different systems on the market in Taiwan. Transmitting prescription information from the hospital to the pharmacy accurately and efficiently is therefore a major issue. This study consisted of two-dimensional applications using a QR-code to capture patient identification and prescription information from the hospitals, together with a webcam to read the QR-code and transfer all data to the pharmacy computer system. Two hospitals and 85 community pharmacies participated in the study. During the trial, all participating pharmacies spoke highly of the accurate transmission of the prescription information. The contents of QR-code prescriptions from the Taipei area were picked up efficiently and accurately by pharmacies in the Taichung area (middle Taiwan) without software-system or area limitations. The QR-code device received a patent (No. M376844, March 2010) from the Intellectual Property Office, Ministry of Economic Affairs, China. Our trial has shown that QR-code prescriptions can provide community pharmacists with an efficient, accurate and inexpensive device to digitize prescription contents. Consequently, pharmacists can offer better quality pharmacy service to patients. © 2012 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Lombardo, Luigi; Saia, Sergio; Schillaci, Calogero; Mai, P. Martin; Huser, Raphaël
2018-05-01
Soil Organic Carbon (SOC) estimation is crucial for managing both natural and anthropic ecosystems and has recently been put under the magnifying glass after the 2016 Paris agreement due to its relationship with greenhouse gases. Statistical applications have dominated SOC stock mapping at the regional scale so far. However, the community has hardly ever attempted to implement Quantile Regression (QR) to spatially predict the SOC distribution. In this contribution, we test QR to estimate SOC stock (0-30 cm depth) in the agricultural areas of a highly variable semi-arid region (Sicily, Italy, around 25,000 km2) by using topographic and remotely sensed predictors. We also compare the results with those from available SOC stock measurements. The QR models produced robust performances and allowed us to recognize dominant effects among the predictors with respect to the considered quantile. This information, currently lacking, suggests that QR can discern predictor influences on SOC stock in specific sub-domains of each predictor. In this work, the predictive map generated at the median shows lower errors than the Joint Research Centre and International Soil Reference and Information Centre benchmarks. The results suggest the use of QR as a comprehensive and effective method to map SOC using legacy data in agro-ecosystems. The R code scripted in this study for QR is included.
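Although the study provides its own R code, a minimal Python illustration of fitting conditional quantiles with statsmodels is sketched below; the covariates and synthetic data are placeholders, not the Sicilian dataset.

```python
# Hedged sketch: quantile regression of a SOC-like response on two synthetic covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
elevation = rng.uniform(0, 1000, n)
ndvi = rng.uniform(0, 1, n)
soc = 20 + 0.01 * elevation + 30 * ndvi + rng.gamma(2.0, 5.0, n)   # skewed noise

X = sm.add_constant(np.column_stack([elevation, ndvi]))
for q in (0.25, 0.5, 0.75):
    res = sm.QuantReg(soc, X).fit(q=q)
    print(q, np.round(res.params, 3))       # covariate effects can vary by quantile
```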
Cummings, Timothy S; Guralnik, Mario; Rosenbloom, Richard A; Petteruti, Michael P; Digian, Kelly; Lefante, Carolyn
2007-01-01
The current study assessed the safety, tolerability, and palatability of the experimental drug QR-441(a) using three dose formulations and three routes of administration. A 4-day study was carried out using a total of 132 chickens. A total of 11 groups were formed (12 chickens per group) subjected to varying concentrations and routes of administration of QR-441(a). Chickens were given a high, medium, or low dose of QR-441(a) in either feed, water, or both for a period of 4 days. In addition, one group was dosed intranasally, one drop per nostril four times a day. Although no lesions were found to suggest toxicity or irritability, the medium- and high-dose water groups reduced their water intake. This reduction in water intake suggests that chickens may find the medium and high water doses unpalatable. There was no reduction in water intake in the low-dose water groups or in any of the formulated feed groups. There was also no evidence of toxicity or irritability in the nasal-dose group. These data support the use of the low, medium, and high doses in feed and the use of the low-dose concentration in water for the administration of QR-441(a). The data also suggest that QR-441(a) can be administered intranasally without the presence of any adverse events.
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the robustness of the retrieval results to interference. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distributions. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
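A small illustration of the retrieval step with SciPy's LSQR solver on a toy linear system is sketched below; the forward matrix, damping value and data are assumptions, since the real coefficient matrix would come from the ADA/Lambert-Beer model at the selected wavelengths.

```python
# Hedged sketch: damped LSQR solve of an underdetermined toy system A x = b.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
A = rng.random((30, 60))                 # measurements x size-bins (toy forward matrix)
x_true = np.exp(-((np.arange(60) - 25) ** 2) / 40.0)
b = A @ x_true + 1e-3 * rng.standard_normal(30)

x_est, istop, itn, normr = lsqr(A, b, damp=1e-2)[:4]   # damping stabilizes the retrieval
print(istop, itn, np.round(normr, 4))
```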
Simultaneous multiple non-crossing quantile regression estimation using kernel constraints
Liu, Yufeng; Wu, Yichao
2011-01-01
Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842
Waseem, Mohammad; Tabassum, Heena; Bhardwaj, Monica; Parvez, Suhel
2017-09-01
The present study aimed to investigate the hepatoprotective effects of the bioflavonoid quercetin (QR) on cisplatin (CP)‑induced mitochondrial oxidative stress in the livers of rats, to elucidate the role of mitochondria in CP‑induced hepatotoxicity, and its underlying mechanism. Isolated liver mitochondria were incubated with 100 µg/ml CP and/or 50 µM QR in vitro. CP treatment triggered a significant increase in membrane lipid peroxidation (LPO) levels, protein carbonyl (PC) contents, and a decrease in reduced glutathione (GSH) and non‑protein thiol (NP‑SH) levels. In addition, CP caused a marked decline in the activities of enzymatic antioxidants and mitochondrial complexes (I, II, III and V) in liver mitochondria. QR pre‑treatment significantly modulated the activities of enzymatic antioxidants and mitochondrial complex enzymes. Furthermore, QR reversed the alterations in LPO and PC levels, and GSH and NP‑SH contents in liver mitochondria. The results of the present study suggested that QR supplementation may suppress CP‑induced mitochondrial toxicity during chemotherapy, and provides a potential prophylactic and defensive candidate for anticancer agent‑induced oxidative stress.
Parallel Algorithms for Switching Edges in Heterogeneous Graphs.
Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav
2017-06-01
An edge switch is an operation on a graph (or network) where two edges are selected randomly and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and in studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
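A serial sketch of the basic edge-switch operation, with the simple-graph checks the parallel algorithms must preserve, is given below; the toy graph is an arbitrary example.

```python
# Serial sketch: edge switches that preserve the degree sequence and keep the
# graph simple (no self-loops, no parallel edges).
import random

def edge_switch(edges, n_switches=1000, seed=0):
    random.seed(seed)
    edge_set = set(frozenset(e) for e in edges)
    edges = [tuple(e) for e in edges]
    for _ in range(n_switches):
        i, j = random.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        new1, new2 = (a, d), (c, b)             # swap one endpoint of each edge
        if a == d or c == b:                    # would create a self-loop
            continue
        if frozenset(new1) in edge_set or frozenset(new2) in edge_set:
            continue                            # would create a parallel edge
        edge_set -= {frozenset(edges[i]), frozenset(edges[j])}
        edge_set |= {frozenset(new1), frozenset(new2)}
        edges[i], edges[j] = new1, new2
    return edges

print(edge_switch([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```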
Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj
2012-07-03
We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
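For context, the sequential update that these parallel variants distribute over coordinates is sketched below as cyclic coordinate descent for the lasso; the synthetic data and regularization value are illustrative assumptions.

```python
# Cyclic coordinate descent for min_w 0.5*||y - X w||^2 + lam*||w||_1,
# the per-coordinate update that Shotgun-style algorithms run in parallel.
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ w                                # residual
    for _ in range(n_iter):
        for j in range(d):
            if col_sq[j] == 0:
                continue
            r += X[:, j] * w[j]                  # remove coordinate j's contribution
            z = X[:, j] @ r                      # partial correlation
            w[j] = soft_threshold(z, lam) / col_sq[j]
            r -= X[:, j] * w[j]                  # add the updated contribution back
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(200)
print(np.round(lasso_cd(X, y, lam=10.0)[:6], 2))
```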
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Van Rosendale, John
1989-01-01
A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.
ERIC Educational Resources Information Center
Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik
2013-01-01
space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…
Gaziano, J Michael; Cincotta, Anthony H; Vinik, Aaron; Blonde, Lawrence; Bohannon, Nancy; Scranton, Richard
2012-10-01
Bromocriptine-QR (a quick-release formulation of bromocriptine mesylate), a dopamine D2 receptor agonist, is a US Food and Drug Administration-approved treatment for type 2 diabetes mellitus (T2DM). A 3070-subject randomized trial demonstrated a significant, 40% reduction in relative risk among bromocriptine-QR-treated subjects in a prespecified composite cardiovascular (CV) end point that included ischemic-related (myocardial infarction and stroke) and nonischemic-related (hospitalization for unstable angina, congestive heart failure [CHF], or revascularization surgery) end points, but did not include cardiovascular death as a component of this composite. The present investigation was undertaken to more critically evaluate the impact of bromocriptine-QR on cardiovascular outcomes in this study subject population by (1) including CV death in the above-described original composite analysis and then stratifying this new analysis on the basis of multiple demographic subgroups and (2) analyzing the influence of this intervention on only the "hard" CV end points of myocardial infarction, stroke, and CV death (major adverse cardiovascular events [MACEs]). Three thousand seventy T2DM subjects on stable doses of ≤2 antidiabetes medications (including insulin) with HbA1c ≤10.0 (average baseline HbA1c=7.0) were randomized 2:1 to bromocriptine-QR (1.6 to 4.8 mg/day) or placebo for a 52-week treatment period. Subjects with heart failure (New York Heart Classes I and II) and precedent myocardial infarction or revascularization surgery were allowed to participate in the trial. Study outcomes included time to first event for each of the 2 CV composite end points described above. The relative risk comparing bromocriptine-QR with the control for the cardiovascular outcomes was estimated as a hazard ratio with 95% confidence interval on the basis of Cox proportional hazards regression. The statistical significance of any between-group difference in the cumulative percentage of CV events over time (derived from a Kaplan-Meier curve) was determined by a log-rank test on the intention-to-treat population. Study subjects were in reasonable metabolic control, with an average baseline HbA1c of 7.0±1.1, blood pressure of 128/76±14/9, and total and LDL cholesterol of 179±42 and 98±32, respectively, with 88%, 77%, and 69% of subjects being treated with antidiabetic, antihypertensive, and antihyperlipidemic agents, respectively. Ninety-one percent of the expected person-year outcome ascertainment was obtained in this study. Regarding the CV-inclusive composite cardiovascular end point, there were 39 events (1.9%) among 2054 bromocriptine-QR-treated subjects versus 33 events (3.2%) among 1016 placebo subjects, yielding a significant, 39% reduction in relative risk in this end point with bromocriptine-QR exposure (P=0.0346; log-rank test) that was not influenced by age, sex, race, body mass index, duration of diabetes, or preexisting cardiovascular disease. In addition, regarding the MACE end point, there were 14 events (0.7%) among 2054 bromocriptine-QR-treated subjects and 15 events (1.5%) among 1016 placebo-treated subjects, yielding a significant, 52% reduction in relative risk in this end point with bromocriptine-QR exposure (P<0.05; log-rank test).
These findings reaffirm and extend the original observation of relative risk reduction in cardiovascular adverse events among type 2 diabetes subjects treated with bromocriptine-QR and suggest that further investigation into this impact of bromocriptine-QR is warranted. URL: http://clinicaltrials.gov. Unique Identifier: NCT00377676.
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform, combining the PSO algorithm with a parallel design. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.
Tao, Liang; Kwan, Hon Keung
2012-07-01
Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
An efficient parallel algorithm for matrix-vector multiplication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrickson, B.; Leland, R.; Plimpton, S.
The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
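A toy illustration of the row-block decomposition behind such a parallel matrix-vector product is sketched below in Python; real distributed implementations (for example on a hypercube) communicate only the needed pieces of x, whereas here x is simply copied to every worker.

```python
# Hedged sketch: row-block parallel computation of y = A x.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def block_matvec(args):
    A_block, x = args
    return A_block @ x                 # each worker computes its slice of y

def parallel_matvec(A, x, n_workers=4):
    blocks = np.array_split(A, n_workers, axis=0)
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        parts = list(ex.map(block_matvec, [(b, x) for b in blocks]))
    return np.concatenate(parts)

if __name__ == "__main__":
    A = np.random.rand(1000, 1000)
    x = np.random.rand(1000)
    print(np.allclose(parallel_matvec(A, x), A @ x))
```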
Optimal Design of Passive Power Filters Based on Pseudo-parallel Genetic Algorithm
NASA Astrophysics Data System (ADS)
Li, Pei; Li, Hongbo; Gao, Nannan; Niu, Lin; Guo, Liangfeng; Pei, Ying; Zhang, Yanyan; Xu, Minmin; Chen, Kerui
2017-05-01
The economic cost together with filter efficiency is taken as the target for optimizing the parameters of the passive filter. Furthermore, a method combining a pseudo-parallel genetic algorithm with an adaptive genetic algorithm is adopted in this paper. In the early stages the pseudo-parallel genetic algorithm is introduced to increase population diversity, and the adaptive genetic algorithm is used in the late stages to reduce the workload. At the same time, the migration rate of the pseudo-parallel genetic algorithm is improved so that it changes adaptively with population diversity. Simulation results show that the filter designed by the proposed method has a better filtering effect at lower economic cost, and can be used in engineering practice.
matK-QR classifier: a patterns based approach for plant species identification.
More, Ravi Prabhakar; Mane, Rupali Chandrashekhar; Purohit, Hemant J
2016-01-01
DNA barcoding is a widely used and highly efficient approach that facilitates rapid and accurate identification of plant species based on short standardized segments of the genome. The nucleotide sequences of the maturase K (matK) and ribulose-1,5-bisphosphate carboxylase (rbcL) marker loci are commonly used in plant species identification. Here, we present a new and highly efficient approach for identifying a unique set of discriminating nucleotide patterns to generate a signature (i.e. regular expression) for plant species identification. In order to generate molecular signatures, we used matK and rbcL loci datasets, which encompass 125 plant species in 52 genera reported by the CBOL plant working group. Initially, we performed Multiple Sequence Alignment (MSA) of all species followed by Position Specific Scoring Matrix (PSSM) analysis for both loci to achieve a percentage of discrimination among species. Further, we detected Discriminating Patterns (DP) at genus and species level using PSSM for the matK dataset. Combining DP and consecutive pattern distances, we generated molecular signatures for each species. Finally, we performed a comparative assessment of these signatures with existing methods including BLASTn, Support Vector Machines (SVM), Jrip-RIPPER, J48 (C4.5 algorithm), and the Naïve Bayes (NB) method against the NCBI-GenBank matK dataset. Due to the higher discrimination success obtained with matK as compared to rbcL, we selected the matK gene for signature generation. We generated signatures for 60 species based on identified discriminating patterns at genus and species level. Our comparative assessment results suggest that a total of 46 out of 60 species could be correctly identified using the generated signatures, followed by the BLASTn (34 species), SVM (18 species), C4.5 (7 species), NB (4 species) and RIPPER (3 species) methods. As a final outcome of this study, we converted the signatures into QR codes and developed a software tool, matK-QR Classifier (http://www.neeri.res.in/matk_classifier/index.htm), which searches for signatures in query matK gene sequences and predicts the corresponding plant species. This novel approach of employing pattern-based signatures opens new avenues for the classification of species. In addition to existing methods, we believe that matK-QR Classifier would be a valuable tool for molecular taxonomists, enabling precise identification of plant species.
Parallel Directionally Split Solver Based on Reformulation of Pipelined Thomas Algorithm
NASA Technical Reports Server (NTRS)
Povitsky, A.
1998-01-01
In this research an efficient parallel algorithm for 3-D directionally split problems is developed. The proposed algorithm is based on a reformulated version of the pipelined Thomas algorithm that starts the backward step computations immediately after the completion of the forward step computations for the first portion of lines. This algorithm has data available for other computational tasks while processors are idle in the Thomas algorithm. The proposed 3-D directionally split solver is based on static scheduling of processors, where local and non-local, data-dependent and data-independent computations are scheduled while processors are idle. A theoretical model of parallelization efficiency is used to define optimal parameters of the algorithm, to show an asymptotic parallelization penalty and to obtain an optimal cover of a global domain with subdomains. It is shown by computational experiments and by the theoretical model that the proposed algorithm reduces the parallelization penalty by about a factor of two relative to the basic algorithm for the range of the number of processors (subdomains) considered and the number of grid nodes per subdomain.
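For reference, the serial Thomas algorithm that the pipelined variant reorganizes is sketched below; the test system is an arbitrary example.

```python
# Serial Thomas algorithm for a tridiagonal system: forward elimination
# followed by backward substitution (the two steps the pipelined variant overlaps).
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] are unused)."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                        # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):               # backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n = 6
a = np.full(n, -1.0)
b = np.full(n, 2.0)
c = np.full(n, -1.0)
d = np.ones(n)
print(np.round(thomas(a, b, c, d), 3))
```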
Parallel optimization algorithms and their implementation in VLSI design
NASA Technical Reports Server (NTRS)
Lee, G.; Feeley, J. J.
1991-01-01
Two new parallel optimization algorithms based on the simplex method are described. They may be executed by a SIMD parallel processor architecture and be implemented in VLSI design. Several VLSI design implementations are introduced. An application example is reported to demonstrate that the algorithms are effective.
A parallel time integrator for noisy nonlinear oscillatory systems
NASA Astrophysics Data System (ADS)
Subber, Waad; Sarkar, Abhijit
2018-06-01
In this paper, we adapt a parallel time integration scheme to track the trajectories of noisy non-linear dynamical systems. Specifically, we formulate a parallel algorithm to generate the sample path of a nonlinear oscillator defined by stochastic differential equations (SDEs) using the so-called parareal method for ordinary differential equations (ODEs). The presence of the Wiener process in SDEs causes difficulties in the direct application of any numerical integration techniques for ODEs, including the parareal algorithm. The parallel implementation of the algorithm involves two SDE solvers, namely a fine-level scheme to integrate the system in parallel and a coarse-level scheme to generate and correct the required initial conditions to start the fine-level integrators. For the numerical illustration, a randomly excited Duffing oscillator is investigated in order to study the performance of the stochastic parallel algorithm with respect to a range of system parameters. The distributed implementation of the algorithm exploits the Message Passing Interface (MPI).
A parallel variable metric optimization algorithm
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1973-01-01
An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, then one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariant minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.
Security printing of covert quick response codes using upconverting nanoparticle inks
NASA Astrophysics Data System (ADS)
Meruga, Jeevan M.; Cross, William M.; May, P. Stanley; Luu, QuocAnh; Crawford, Grant A.; Kellar, Jon J.
2012-10-01
Counterfeiting costs governments and private industries billions of dollars annually due to loss of value in currency and other printed items. This research involves using lanthanide doped β-NaYF4 nanoparticles for security printing applications. Inks comprised of Yb3+/Er3+ and Yb3+/Tm3+ doped β-NaYF4 nanoparticles with oleic acid as the capping agent in toluene and methyl benzoate with poly(methyl methacrylate) (PMMA) as the binding agent were used to print quick response (QR) codes. The QR codes were made using an AutoCAD file and printed with Optomec direct-write aerosol jetting®. The printed QR codes are invisible under ambient lighting conditions, but are readable using a near-IR laser, and were successfully scanned using a smart phone. This research demonstrates that QR codes, which have been used primarily for information sharing applications, can also be used for security purposes. Higher levels of security were achieved by printing both green and blue upconverting inks, based on combinations of Er3+/Yb3+ and Tm3+/Yb3+, respectively, in a single QR code. The near-infrared (NIR)-to-visible upconversion luminescence properties of the two-ink QR codes were analyzed, including the influence of NIR excitation power density on perceived color, in terms of the CIE 1931 chromaticity index. It was also shown that this security ink can be optimized for line width, thickness and stability on different substrates.
Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che
2014-01-16
Background To improve on the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt automated reverse-engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with evolutionary algorithms, two important issues must be addressed: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, cloud computing is a promising solution; the most popular mechanism is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms to infer large gene networks. Results This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel population-based optimization method and the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way of inferring large networks. PMID:24428926
A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)
NASA Technical Reports Server (NTRS)
Straeter, T. A.; Markos, A. T.
1975-01-01
A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.
Rapid code acquisition algorithms employing PN matched filters
NASA Technical Reports Server (NTRS)
Su, Yu T.
1988-01-01
The performance of four algorithms using pseudonoise matched filters (PNMFs), for direct-sequence spread-spectrum systems, is analyzed. They are: parallel search with fixed dwell detector (PL-FDD), parallel search with sequential detector (PL-SD), parallel-serial search with fixed dwell detector (PS-FDD), and parallel-serial search with sequential detector (PS-SD). The operation characteristic for each detector and the mean acquisition time for each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMF are seen as special cases of the present algorithms.
Algorithms and programming tools for image processing on the MPP
NASA Technical Reports Server (NTRS)
Reeves, A. P.
1985-01-01
Topics addressed include: data mapping and rotational algorithms for the Massively Parallel Processor (MPP); Parallel Pascal language; documentation for the Parallel Pascal Development system; and a description of the Parallel Pascal language used on the MPP.
Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan
2016-01-01
A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987
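The central idea in the entry above, using PSO to choose good initial weights and thresholds before back-propagation training, can be illustrated with a compact single-machine sketch. The Python/NumPy code below is only an illustration under assumed settings (toy data set, tiny network, arbitrary swarm parameters) and is not the authors' Hadoop/MapReduce implementation.

import numpy as np

rng = np.random.default_rng(0)

# Toy data set standing in for image features; sizes are illustrative.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

n_in, n_hid, n_out = 8, 6, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out   # total weights + biases

def unpack(vec):
    """Split a flat particle position into the network's weight matrices."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = vec[i:i + n_out]
    return W1, b1, W2, b2

def mse(vec):
    """Fitness: mean squared error of the network defined by this particle."""
    W1, b1, W2, b2 = unpack(vec)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))        # sigmoid output layer
    return float(np.mean((out - y) ** 2))

# Standard global-best PSO over the flattened weight space.
n_particles, n_iter = 30, 100
w, c1, c2 = 0.7, 1.5, 1.5                             # inertia and acceleration coefficients
pos = rng.normal(scale=0.5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

# gbest now holds PSO-optimized initial weights; ordinary BP training would start from here.
print("PSO initial-weight fitness:", mse(gbest))

In the paper's setting, the per-particle fitness evaluations and the BP training itself are what get distributed with MapReduce; here they run serially to keep the sketch short.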
Applications and accuracy of the parallel diagonal dominant algorithm
NASA Technical Reports Server (NTRS)
Sun, Xian-He
1993-01-01
The Parallel Diagonal Dominant (PDD) algorithm is a highly efficient, ideally scalable tridiagonal solver. In this paper, a detailed study of the PDD algorithm is given. First the PDD algorithm is introduced. Then the algorithm is extended to solve periodic tridiagonal systems. A variant, the reduced PDD algorithm, is also proposed. Accuracy analysis is provided for a class of tridiagonal systems, the symmetric, and anti-symmetric Toeplitz tridiagonal systems. Implementation results show that the analysis gives a good bound on the relative error, and the algorithm is a good candidate for the emerging massively parallel machines.
Parallel Computing Strategies for Irregular Algorithms
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Shan, Hongzhang; Biegel, Bryan (Technical Monitor)
2002-01-01
Parallel computing promises several orders of magnitude increase in our ability to solve realistic computationally-intensive problems, but relies on their efficient mapping and execution on large-scale multiprocessor architectures. Unfortunately, many important applications are irregular and dynamic in nature, making their effective parallel implementation a daunting task. Moreover, with the proliferation of parallel architectures and programming paradigms, the typical scientist is faced with a plethora of questions that must be answered in order to obtain an acceptable parallel implementation of the solution algorithm. In this paper, we consider three representative irregular applications: unstructured remeshing, sparse matrix computations, and N-body problems, and parallelize them using various popular programming paradigms on a wide spectrum of computer platforms ranging from state-of-the-art supercomputers to PC clusters. We present the underlying problems, the solution algorithms, and the parallel implementation strategies. Smart load-balancing, partitioning, and ordering techniques are used to enhance parallel performance. Overall results demonstrate the complexity of efficiently parallelizing irregular algorithms.
Tian, Hua; Wang, Xueying; Shu, Gequn; Wu, Mingqiang; Yan, Nanhua; Ma, Xiaonan
2017-09-15
A mixture of hydrocarbon and carbon dioxide shows excellent cycle performance in the Organic Rankine Cycle (ORC) used for engine waste heat recovery, but unavoidable leakage in practical applications is a safety threat due to its flammability. In this work, a quantitative risk assessment system (QR-AS) is established that aims to provide a general method of risk assessment for flammable working fluid leakage. The QR-AS covers three main aspects: analysis of concentration distribution based on CFD simulations, explosive risk assessment based on the TNT equivalent method, and risk mitigation based on the evaluation results. A typical case of a propane/carbon dioxide mixture leaking from an ORC is investigated to illustrate the application of the QR-AS. According to the assessment results, a proper ventilation speed, safe mixture ratio and location of gas-detecting devices have been proposed to guarantee security in case of leakage. The results revealed that the presented QR-AS is reliable for practical application and that the evaluation results can provide valuable guidance for the design of mitigation measures to improve the safety performance of the ORC system. Copyright © 2017 Elsevier B.V. All rights reserved.
Experiencing teaching and learning quantitative reasoning in a project-based context
NASA Astrophysics Data System (ADS)
Muir, Tracey; Beswick, Kim; Callingham, Rosemary; Jade, Katara
2016-12-01
This paper presents the findings of a small-scale study that investigated the issues and challenges of teaching and learning about quantitative reasoning (QR) within a project-based learning (PjBL) context. Students and teachers were surveyed and interviewed about their experiences of learning and teaching QR in that context in contrast to teaching and learning mathematics in more traditional settings. The grade 9-12 student participants were characterised by a history of disengagement with mathematics and school in general, and the teacher participants were non-mathematics specialist teachers. Both students and teachers were new to the PjBL situation, which resulted in the teaching/learning relationship being a reciprocal one. The findings indicated that students and teachers viewed QR positively, particularly when compared with traditional mathematics teaching, yet tensions were identified for aspects such as implementation of curriculum and integration of relevant mathematics into projects. Both sets of participants identified situations where learning QR was particularly successful, along with concerns or difficulties about integrating QR into project work. The findings have implications for educators, who may need to examine their own approaches to mathematics teaching, particularly in terms of facilitating student engagement with the subject.
Sagar, Vatsala; Chaturvedi, Sumit K; Schuck, Peter; Wistow, Graeme
2017-07-05
Previous attempts to crystallize mammalian γS-crystallin were unsuccessful. Native L16 chicken γS crystallized avidly while the Q16 mutant did not. The X-ray structure for chicken γS at 2.3 Å resolution shows the canonical structure of the superfamily plus a well-ordered N arm aligned with a β sheet of a neighboring N domain. L16 is also in a lattice contact, partially shielded from solvent. Unexpectedly, the major lattice contact matches a conserved interface (QR) in the multimeric β-crystallins. QR shows little conservation of residue contacts, except for one between symmetry-related tyrosines, but molecular dipoles for the proteins with QR show striking similarities while other γ-crystallins differ. In γS, QR has few hydrophobic contacts and features a thin layer of tightly bound water. The free energy of QR is slightly repulsive and analytical ultracentrifugation confirms no dimerization in solution. The lattice contacts suggest how γ-crystallins allow close packing without aggregation in the crowded environment of the lens. Published by Elsevier Ltd.
Ramirez, Lisa Marie S; He, Muhan; Mailloux, Shay; George, Justin; Wang, Jun
2016-06-01
Microparticles carrying quick response (QR) barcodes are fabricated by J. Wang and co-workers on page 3259, using a massive coding of dissociated elements (MiCODE) technology. Each microparticle can bear a special custom-designed QR code that enables encryption or tagging with unlimited multiplexity, and the QR code can be easily read by cellphone applications. The utility of MiCODE particles in multiplexed DNA detection and microtagging for anti-counterfeiting is explored. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sublattice parallel replica dynamics.
Martínez, Enrique; Uberuaga, Blas P; Voter, Arthur F
2014-06-01
Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998)] by combining it with the synchronous sublattice approach of Shim and Amar [Phys. Rev. B 71, 125432 (2005)], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.
NASA Astrophysics Data System (ADS)
You, Minli; Lin, Min; Wang, Shurui; Wang, Xuemin; Zhang, Ge; Hong, Yuan; Dong, Yuqing; Jin, Guorui; Xu, Feng
2016-05-01
Medicine counterfeiting is a serious issue worldwide, involving potentially devastating health repercussions. Advanced anti-counterfeit technology for drugs has therefore aroused intensive interest. However, existing anti-counterfeit technologies are associated with drawbacks such as the high cost, complex fabrication process, sophisticated operation and incapability in authenticating drug ingredients. In this contribution, we developed a smart phone recognition based upconversion fluorescent three-dimensional (3D) quick response (QR) code for tracking and anti-counterfeiting of drugs. We firstly formulated three colored inks incorporating upconversion nanoparticles with RGB (i.e., red, green and blue) emission colors. Using a modified inkjet printer, we printed a series of colors by precisely regulating the overlap of these three inks. Meanwhile, we developed a multilayer printing and splitting technology, which significantly increases the information storage capacity per unit area. As an example, we directly printed the upconversion fluorescent 3D QR code on the surface of drug capsules. The 3D QR code consisted of three different color layers with each layer encoded by information of different aspects of the drug. A smart phone APP was designed to decode the multicolor 3D QR code, providing the authenticity and related information of drugs. The developed technology possesses merits in terms of low cost, ease of operation, high throughput and high information capacity, thus holds great potential for drug anti-counterfeiting. Electronic supplementary information (ESI) available: Calculating details of UCNP content per 3D QR code and decoding process of the 3D QR code. See DOI: 10.1039/c6nr01353h
Ku, Kang-Mo; Jeffery, Elizabeth H.; Juvik, John A.
2014-01-01
Methyl jasmonate (MeJA) spray treatments were applied to the kale varieties ‘Dwarf Blue Curled Vates’ and ‘Red Winter’ in replicated field plantings in 2010 and 2011 to investigate alteration of glucosinolate (GS) composition in harvested leaf tissue. Aqueous solutions of 250 µM MeJA were sprayed to saturation on aerial plant tissues four days prior to harvest at commercial maturity. The MeJA treatment significantly increased gluconasturtiin (56%), glucobrassicin (98%), and neoglucobrassicin (150%) concentrations in the apical leaf tissue of these genotypes over two seasons. Induction of quinone reductase (QR) activity, a biomarker for anti-carcinogenesis, was significantly increased by the extracts from the leaf tissue of these two cultivars. Extracts of apical leaf tissues had greater MeJA-mediated increases in phenolics, glucosinolate concentrations, GS hydrolysis products, and QR activity than extracts from basal leaf tissue samples. The concentration of the hydrolysis product of glucoraphanin, sulforaphane, was significantly increased in apical leaf tissue of the cultivar ‘Red Winter’ in both 2010 and 2011. There was an interaction between exogenous MeJA treatment and environmental conditions to induce endogenous JA. Correlation analysis revealed that indole-3-carbinol (I3C) generated from the hydrolysis of glucobrassicin significantly correlated with QR activity (r = 0.800, P<0.001). The concentration required to double the specific QR activity (CD value) of I3C was calculated at 230 µM, which is considerably weaker at induction than other isothiocyanates like sulforaphane. To confirm relationships between GS hydrolysis products and QR activity, a range of concentrations of MeJA sprays were applied to kale leaf tissues of both cultivars in 2011. Correlation analysis of these results indicated that sulforaphane, NI3C, neoascorbigen, I3C, and diindolylmethane were all significantly correlated with QR activity. Thus, increased QR activity may be due to combined increases in phenolics (quercetin and kaempferol) and GS hydrolysis product concentrations rather than by individual products alone. PMID:25084454
EGL-20/Wnt and MAB-5/Hox Act Sequentially to Inhibit Anterior Migration of Neuroblasts in C. elegans
Josephson, Matthew P.; Chai, Yongping; Ou, Guangshuo; Lundquist, Erik A.
2016-01-01
Directed neuroblast and neuronal migration is important in the proper development of nervous systems. In C. elegans the bilateral Q neuroblasts QR (on the right) and QL (on the left) undergo an identical pattern of cell division and differentiation but migrate in opposite directions (QR and descendants anteriorly and QL and descendants posteriorly). EGL-20/Wnt, via canonical Wnt signaling, drives the expression of MAB-5/Hox in QL but not QR. MAB-5 acts as a determinant of posterior migration, and mab-5 and egl-20 mutants display anterior QL descendant migrations. Here we analyze the behaviors of QR and QL descendants as they begin their anterior and posterior migrations, and the effects of EGL-20 and MAB-5 on these behaviors. The anterior and posterior daughters of QR (QR.a/p) after the first division immediately polarize and begin anterior migration, whereas QL.a/p remain rounded and non-migratory. After ~1 hour, QL.a migrates posteriorly over QL.p. We find that in egl-20/Wnt, bar-1/β-catenin, and mab-5/Hox mutants, QL.a/p polarize and migrate anteriorly, indicating that these molecules normally inhibit anterior migration of QL.a/p. In egl-20/Wnt mutants, QL.a/p immediately polarize and begin migration, whereas in bar-1/β-catenin and mab-5/Hox, the cells transiently retain a rounded, non-migratory morphology before anterior migration. Thus, EGL-20/Wnt mediates an acute inhibition of anterior migration independently of BAR-1/β-catenin and MAB-5/Hox, and a later, possible transcriptional response mediated by BAR-1/β-catenin and MAB-5/Hox. In addition to inhibiting anterior migration, MAB-5/Hox also cell-autonomously promotes posterior migration of QL.a (and QR.a in a mab-5 gain-of-function). PMID:26863303
NASA Astrophysics Data System (ADS)
Zahra, H.; Elmaghroui, D.; Fezai, I.; Jaziri, S.
2016-11-01
We theoretically investigate the energy transfer between a CdSe/CdS Quantum-dot/Quantum-rod (QD/QR) core/shell structure and a weakly doped graphene layer, separated by a dielectric spacer. A numerical method assuming the realistic shape of the type I and quasi-type II CdSe/CdS QD/QR is developed in order to calculate their energy structure. An electric field is applied for both types to manipulate the carrier localization and the exciton energy. Our evaluation for the isolated QD/QR shows that a quantum-confined Stark effect can be obtained with large negative electric fields, while only a small effect is observed with positive ones. Owing to the evolution of the carrier delocalization and the excitonic energy versus the electric field, both type I and quasi-type II QD/QR donors are suitable as sources of charge and energy. With a view to improving its absorption, the graphene sheet (acceptor) is placed at different distances from the QD/QR (donor). Using the random phase approximation and the massless Dirac fermion approximation, the quenching rate integral is exactly evaluated. This reveals that a high transfer rate can be obtained with the type I QD/QR, with no dependence on the electric field. In contrast, a strong dependence is obtained for the quasi-type II donor, with a high fluorescence rate from F = 80 kV/cm. Rather than the exciton energy, the transition dipole is found to be responsible for the evolution of the fluorescence rate. We also find that the fluorescence rate decreases with increasing spacer thickness and shows a power-law dependence. The QD/QR fluorescence quenching can be observed up to a large distance, which is estimated to depend only on the donor exciton energy.
[The QR code in society, economy and medicine--fields of application, options and chances].
Flaig, Benno; Parzeller, Markus
2011-01-01
2D codes like the QR Code ("Quick Response") are becoming more and more common in society and medicine. The application spectrum and benefits in medicine and other fields are described. 2D codes can be created free of charge on any computer with internet access without any previous knowledge. The codes can be easily used in publications, presentations, on business cards and posters. Editors choose between contact details, text or a hyperlink as information behind the code. At expert conferences, linkage by QR Code allows the audience to download presentations and posters quickly. The documents obtained can then be saved, printed, processed etc. Fast access to stored data in the internet makes it possible to integrate additional and explanatory multilingual videos into medical posters. In this context, a combination of different technologies (printed handout, QR Code and screen) may be reasonable.
Rekadwad, Bhagwan N; Khobragade, Chandrahasya N
2016-06-01
Microbiologists are routinely engaged in the isolation, identification and comparison of isolated bacteria to assess their novelty. 16S rRNA sequences of Bacillus pumilus were retrieved from the NCBI repository and used to generate QR codes for the sequences (FASTA format and full GenBank information). The 16S rRNA sequences were used to generate quick response (QR) codes for Bacillus pumilus isolates from Lonar Crater Lake (19° 58' N; 76° 31' E), India. The Bacillus pumilus 16S rRNA gene sequences were also used to generate CGR, FCGR and PCA plots, which can be used for visual comparison and evaluation, respectively. The hyperlinked QR codes, CGR, FCGR and PCA of all the isolates are made available to users on a portal: https://sites.google.com/site/bhagwanrekadwad/. This generated digital data helps to evaluate and compare any Bacillus pumilus strain, minimizes laboratory effort and avoids misinterpretation of the species.
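As a rough illustration of the kind of artifact described above, a 16S rRNA record can be packed into a QR code with a few lines of Python. This sketch assumes the third-party qrcode package (with Pillow) is installed; the isolate label and the truncated sequence are placeholders rather than data from the study, and a full-length 16S sequence (roughly 1.5 kb) still fits within a single QR symbol's capacity.

# Minimal sketch of encoding a 16S rRNA record as a QR code image.
# The accession label and sequence below are placeholders, not data from the study.
import qrcode

fasta_record = (
    ">B_pumilus_isolate_X1 16S ribosomal RNA, partial sequence\n"
    "AGAGTTTGATCCTGGCTCAGGACGAACGCTGGCGGCGTGCCTAATACATGCAAGTCGAGCGG..."
)

img = qrcode.make(fasta_record)      # build the QR symbol from the FASTA text
img.save("B_pumilus_16S_qr.png")     # scanning the image returns the record verbatim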
Parallel Algorithms and Patterns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robey, Robert W.
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
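For readers unfamiliar with the two patterns named above, the following is a minimal Python sketch of a parallel reduction and a two-phase parallel prefix scan over a process pool. The chunking scheme is the textbook one and is only illustrative; it is not taken from the presentation.

# Sketch of two patterns named above: a parallel reduction (sum) and a prefix scan.
from multiprocessing import Pool
from itertools import accumulate
import numpy as np

def chunk_sum(chunk):
    return chunk.sum()

def chunk_scan(args):
    offset, chunk = args
    return offset + np.cumsum(chunk)

if __name__ == "__main__":
    data = np.arange(1_000_000, dtype=np.int64)
    chunks = np.array_split(data, 8)

    with Pool(8) as pool:
        # Parallel reduction: each worker sums its chunk, the partial sums are combined.
        total = sum(pool.map(chunk_sum, chunks))

        # Parallel prefix scan: phase 1 computes per-chunk sums, phase 2 turns them
        # into per-chunk offsets, phase 3 scans each chunk locally with its offset.
        partial = pool.map(chunk_sum, chunks)
        offsets = [0] + list(accumulate(partial))[:-1]
        scan = np.concatenate(pool.map(chunk_scan, list(zip(offsets, chunks))))

    assert total == data.sum() and scan[-1] == data.sum()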
Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.
Bhandarkar, S M; Chirravuri, S; Arnold, J
1996-01-01
Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
Exact parallel algorithms for some members of the traveling salesman problem family
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pekny, J.F.
1989-01-01
The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.
Multivariable frequency domain identification via 2-norm minimization
NASA Technical Reports Server (NTRS)
Bayard, David S.
1992-01-01
The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
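The flavor of the SK-style initialization described above, a sequence of reweighted linear least-squares fits of a rational transfer function to frequency-response data, can be sketched for the much simpler single-input/single-output case. Everything below (the toy second-order system, the model orders, the frequency grid) is an assumption made for illustration; the paper's MIMO matrix-fraction parameterization, the Gauss-Newton refinement and the sparse QR solver are not reproduced.

import numpy as np

# Toy frequency-response "data": a known second-order system evaluated on a grid.
w = np.logspace(-1, 2, 200)
s = 1j * w
G = 5.0 / (s**2 + 2.0 * s + 5.0)

nb, na = 2, 2                        # numerator and denominator orders
weight = np.ones_like(w)             # |D(jw)| from the previous pass (1 on the first pass)

for _ in range(10):
    # Unknowns: [b_0..b_nb, a_1..a_na], with a_0 fixed to 1 so each pass is linear.
    num_cols = np.vstack([s**k for k in range(nb + 1)]).T            # N(s) columns
    den_cols = np.vstack([-G * s**k for k in range(1, na + 1)]).T    # -G*D(s) columns
    A = np.hstack([num_cols, den_cols]) / weight[:, None]
    rhs = G / weight
    # Stack real and imaginary parts to obtain a real least-squares problem,
    # solved here through an explicit QR factorization.
    Ar = np.vstack([A.real, A.imag])
    br = np.concatenate([rhs.real, rhs.imag])
    Q, R = np.linalg.qr(Ar)
    x = np.linalg.solve(R, Q.T @ br)
    b, a = x[:nb + 1], np.concatenate([[1.0], x[nb + 1:]])
    weight = np.abs(sum(a[k] * s**k for k in range(na + 1)))         # reweight for next pass

print("numerator coefficients:", np.round(b, 4))     # recover the toy system up to the
print("denominator coefficients:", np.round(a, 4))   # common scale fixed by a_0 = 1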
Efficient implementation of parallel three-dimensional FFT on clusters of PCs
NASA Astrophysics Data System (ADS)
Takahashi, Daisuke
2003-05-01
In this paper, we propose a high-performance parallel three-dimensional fast Fourier transform (FFT) algorithm on clusters of PCs. The three-dimensional FFT algorithm can be altered into a block three-dimensional FFT algorithm to reduce the number of cache misses. We show that the block three-dimensional FFT algorithm improves performance by utilizing the cache memory effectively. We use the block three-dimensional FFT algorithm to implement the parallel three-dimensional FFT algorithm. We succeeded in obtaining performance of over 1.3 GFLOPS on an 8-node dual Pentium III 1 GHz PC SMP cluster.
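The cache-blocking idea can be shown in a serial sketch: the 3-D transform is carried out as successive 1-D FFTs along each axis, processed a slab at a time so the working set stays small. The block size and array dimensions are illustrative, and the MPI distribution across cluster nodes used in the paper is not shown.

import numpy as np

def blocked_fft3d(a, block=8):
    """3-D FFT computed axis by axis, one slab of `block` planes at a time."""
    out = a.astype(np.complex128)
    n0, n1, n2 = out.shape
    for k in range(0, n0, block):              # FFT along axis 2, slab of axis-0 planes
        out[k:k + block] = np.fft.fft(out[k:k + block], axis=2)
    for k in range(0, n0, block):              # FFT along axis 1 on the same slabs
        out[k:k + block] = np.fft.fft(out[k:k + block], axis=1)
    for k in range(0, n2, block):              # FFT along axis 0, slab of axis-2 planes
        out[:, :, k:k + block] = np.fft.fft(out[:, :, k:k + block], axis=0)
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(64, 64, 64))
assert np.allclose(blocked_fft3d(x), np.fft.fftn(x))   # same result as the library call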
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithm and domain decomposition. The source code for many of these algorithms is available from NASA Langley.
Concurrent computation of attribute filters on shared memory parallel machines.
Wilkinson, Michael H F; Gao, Hui; Hesselink, Wim H; Jonker, Jan-Eppo; Meijster, Arnold
2008-10-01
Morphological attribute filters have not previously been parallelized, mainly because they are both global and non-separable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings and thickenings, based on Salembier's Max-Trees and Min-trees. The image or volume is first partitioned in multiple slices. We then compute the Max-trees of each slice using any sequential Max-Tree algorithm. Subsequently, the Max-trees of the slices can be merged to obtain the Max-tree of the image. A C-implementation yielded good speed-ups on both a 16-processor MIPS 14000 parallel machine, and a dual-core Opteron-based machine. It is shown that the speed-up of the parallel algorithm is a direct measure of the gain with respect to the sequential algorithm used. Furthermore, the concurrent algorithm shows a speed gain of up to 72 percent on a single-core processor, due to reduced cache thrashing.
Regional-scale calculation of the LS factor using parallel processing
NASA Astrophysics Data System (ADS)
Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong
2015-05-01
With the increase in data resolution and the increasing application of USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the algorithm characteristics, including a decomposition method for maintaining the integrity of the results, an optimized workflow for reducing the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy for improving communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
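A minimal mpi4py sketch of the row-block decomposition with a one-row halo exchange, applied to a "local algorithm" in the paper's sense (slope from a DEM), is given below. The synthetic DEM, the cell size and the edge handling are assumptions; this is not the authors' implementation.

# Run with e.g.:  mpiexec -n 4 python slope_mpi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
cell = 30.0                                    # assumed cell size in metres

# Rank 0 creates (or would load) the DEM and scatters contiguous row blocks.
dem = None
if rank == 0:
    y, x = np.mgrid[0:400, 0:400]
    dem = (0.05 * x + 20.0 * np.sin(y / 40.0)).astype(np.float64)
blocks = np.array_split(dem, size, axis=0) if rank == 0 else None
local = comm.scatter(blocks, root=0)

# Exchange one ghost row with neighbouring ranks so finite differences at
# block edges see the correct values; outer edges simply repeat the edge row.
up, down = rank - 1, rank + 1
top_ghost = comm.sendrecv(local[0], dest=up, source=up) if up >= 0 else local[0]
bot_ghost = comm.sendrecv(local[-1], dest=down, source=down) if down < size else local[-1]
padded = np.vstack([top_ghost, local, bot_ghost])

# Central-difference slope (degrees) on the interior of the padded block.
dzdy, dzdx = np.gradient(padded, cell)
slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))[1:-1]

result = comm.gather(slope, root=0)
if rank == 0:
    print("assembled slope grid:", np.vstack(result).shape)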
Gaziano, J. Michael; Cincotta, Anthony H.; Vinik, Aaron; Blonde, Lawrence; Bohannon, Nancy; Scranton, Richard
2012-01-01
Background Bromocriptine-QR (a quick-release formulation of bromocriptine mesylate), a dopamine D2 receptor agonist, is a US Food and Drug Administration–approved treatment for type 2 diabetes mellitus (T2DM). A 3070-subject randomized trial demonstrated a significant, 40% reduction in relative risk among bromocriptine-QR-treated subjects in a prespecified composite cardiovascular (CV) end point that included ischemic-related (myocardial infarction and stroke) and nonischemic-related (hospitalization for unstable angina, congestive heart failure [CHF], or revascularization surgery) end points, but did not include cardiovascular death as a component of this composite. The present investigation was undertaken to more critically evaluate the impact of bromocriptine-QR on cardiovascular outcomes in this study subject population by (1) including CV death in the above-described original composite analysis and then stratifying this new analysis on the basis of multiple demographic subgroups and (2) analyzing the influence of this intervention on only the "hard" CV end points of myocardial infarction, stroke, and CV death (major adverse cardiovascular events [MACEs]). Methods and Results Three thousand seventy T2DM subjects on stable doses of ≤2 antidiabetes medications (including insulin) with HbA1c ≤10.0 (average baseline HbA1c=7.0) were randomized 2:1 to bromocriptine-QR (1.6 to 4.8 mg/day) or placebo for a 52-week treatment period. Subjects with heart failure (New York Heart Classes I and II) and precedent myocardial infarction or revascularization surgery were allowed to participate in the trial. Study outcomes included time to first event for each of the 2 CV composite end points described above. The relative risk comparing bromocriptine-QR with the control for the cardiovascular outcomes was estimated as a hazard ratio with 95% confidence interval on the basis of Cox proportional hazards regression. The statistical significance of any between-group difference in the cumulative percentage of CV events over time (derived from a Kaplan–Meier curve) was determined by a log-rank test on the intention-to-treat population. Study subjects were in reasonable metabolic control, with an average baseline HbA1c of 7.0±1.1, blood pressure of 128/76±14/9, and total and LDL cholesterol of 179±42 and 98±32, respectively, with 88%, 77%, and 69% of subjects being treated with antidiabetic, antihypertensive, and antihyperlipidemic agents, respectively. Ninety-one percent of the expected person-year outcome ascertainment was obtained in this study. With respect to the CV-inclusive composite cardiovascular end point, there were 39 events (1.9%) among 2054 bromocriptine-QR-treated subjects versus 33 events (3.2%) among 1016 placebo subjects, yielding a significant, 39% reduction in relative risk in this end point with bromocriptine-QR exposure (P=0.0346; log-rank test) that was not influenced by age, sex, race, body mass index, duration of diabetes, or preexisting cardiovascular disease. In addition, regarding the MACE end point, there were 14 events (0.7%) among 2054 bromocriptine-QR-treated subjects and 15 events (1.5%) among 1016 placebo-treated subjects, yielding a significant, 52% reduction in relative risk in this end point with bromocriptine-QR exposure (P<0.05; log-rank test).
Conclusions These findings reaffirm and extend the original observation of relative risk reduction in cardiovascular adverse events among type 2 diabetes subjects treated with bromocriptine-QR and suggest that further investigation into this impact of bromocriptine-QR is warranted. Clinical Trial Registration URL: http://clinicaltrials.gov. Unique Identifier: NCT00377676 PMID:23316290
A new scheduling algorithm for parallel sparse LU factorization with static pivoting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigori, Laura; Li, Xiaoye S.
2002-08-20
In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed for driving the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU_DIST are reported after applying this algorithm on real world application matrices on an IBM SP RS/6000 distributed memory machine.
Parallel conjugate gradient algorithms for manipulator dynamic simulation
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheld, Robert E.
1989-01-01
Parallel conjugate gradient algorithms for the computation of multibody dynamics are developed for the specialized case of a robot manipulator. For an n-dimensional positive-definite linear system, the Classical Conjugate Gradient (CCG) algorithms are guaranteed to converge in n iterations, each with a computation cost of O(n); this leads to a total computational cost of O(n sq) on a serial processor. A conjugate gradient algorithm is presented that provides greater efficiency by using a preconditioner, which reduces the number of iterations required, and by exploiting parallelism, which reduces the cost of each iteration. Two Preconditioned Conjugate Gradient (PCG) algorithms are proposed which respectively use a diagonal and a tridiagonal matrix, composed of the diagonal and tridiagonal elements of the mass matrix, as preconditioners. Parallel algorithms are developed to compute the preconditioners and their inversions in O(log sub 2 n) steps using n processors. A parallel algorithm is also presented which, on the same architecture, achieves the computational time of O(log sub 2 n) for each iteration. Simulation results for a seven degree-of-freedom manipulator are presented. Variants of the proposed algorithms are also developed which can be efficiently implemented on the Robot Mathematics Processor (RMP).
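A serial sketch of the preconditioned conjugate gradient idea with the diagonal (Jacobi) preconditioner is given below; a random symmetric positive-definite matrix stands in for the manipulator mass matrix, and none of the O(log sub 2 n) parallel steps are modelled.

import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Preconditioned CG for A x = b with a diagonal preconditioner (given as its inverse)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                 # apply M^{-1}: elementwise for a diagonal M
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

rng = np.random.default_rng(2)
n = 7                                   # e.g. a seven-degree-of-freedom manipulator
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)             # symmetric positive-definite stand-in for the mass matrix
b = rng.normal(size=n)

x, iters = pcg(A, b, 1.0 / np.diag(A))
print("iterations:", iters, "residual:", np.linalg.norm(A @ x - b))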
GPU-completeness: theory and implications
NASA Astrophysics Data System (ADS)
Lin, I.-Jong
2011-01-01
This paper formalizes a major insight into a class of algorithms that relate parallelism and performance. The purpose of this paper is to define a class of algorithms that trades off parallelism for quality of result (e.g. visual quality, compression rate), and we propose a similar method for algorithmic classification based on NP-Completeness techniques, applied toward parallel acceleration. We will define this class of algorithms as "GPU-Complete" and will postulate the necessary properties of the algorithms for admission into this class. We will also formally relate this algorithmic space to the space of imaging algorithms. This concept is based upon our experience in the print production area where GPUs (Graphic Processing Units) have shown a substantial cost/performance advantage within the context of HP-delivered enterprise services and commercial printing infrastructure. While CPUs and GPUs are converging in their underlying hardware and functional blocks, their system behaviors are clearly distinct in many ways: memory system design, programming paradigms, and massively parallel SIMD architecture. There are applications that are clearly suited to each architecture: for CPU: language compilation, word processing, operating systems, and other applications that are highly sequential in nature; for GPU: video rendering, particle simulation, pixel color conversion, and other problems clearly amenable to massive parallelization. While GPUs are establishing themselves as a second, distinct computing architecture from CPUs, their end-to-end system cost/performance advantage in certain parts of computation informs the structure of algorithms and their efficient parallel implementations. While GPUs are merely one type of architecture for parallelization, we show that their introduction into the design space of printing systems demonstrates the trade-offs against competing multi-core, FPGA, and ASIC architectures. While each architecture has its own optimal application, we believe that the selection of architecture can be defined in terms of properties of GPU-Completeness. For a well-defined subset of algorithms, GPU-Completeness is intended to connect the parallelism, algorithms and efficient architectures into a unified framework to show that multiple layers of parallel implementation are guided by the same underlying trade-off.
Crashworthiness simulations with DYNA3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauer, D.A.; Hoover, C.G.; Kay, G.J.
1996-04-01
Current progress in parallel algorithm research and applications in vehicle crash simulation is described for the explicit, finite element algorithms in DYNA3D. Problem partitioning methods and parallel algorithms for contact at material interfaces are the two challenging algorithm research problems that are addressed. Two prototype parallel contact algorithms have been developed for treating the cases of local and arbitrary contact. Demonstration problems for local contact are crashworthiness simulations with 222 locally defined contact surfaces and a vehicle/barrier collision modeled with arbitrary contact. A simulation of crash tests conducted for a vehicle impacting a U-channel small sign post embedded in soil has been run on both the serial and parallel versions of DYNA3D. A significant reduction in computational time has been observed when running these problems on the parallel version. However, to achieve maximum efficiency, complex problems must be appropriately partitioned, especially when contact dominates the computation.
Rethinking mobile delivery: using Quick Response codes to access information at the point of need.
Lombardo, Nancy T; Morrow, Anne; Le Ber, Jeanne
2012-01-01
This article covers the use of Quick Response (QR) codes to provide instant mobile access to information, digital collections, educational offerings, library website, subject guides, text messages, videos, and library personnel. The array of uses and the value of using QR codes to push customized information to patrons are explained. A case is developed for using QR codes for mobile delivery of customized information to patrons. Applications in use at the Libraries of the University of Utah will be reviewed to provide readers with ideas for use in their library. Copyright © Taylor & Francis Group, LLC
Epitaxial growth of quantum rods with high aspect ratio and compositional contrast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L. H.; Patriarche, G.; Fiore, A.
2008-12-01
The epitaxial growth of quantum rods (QRs) on GaAs was investigated. It was found that the GaAs thickness in the GaAs/InAs superlattice used for QR formation plays a key role in improving the QR structural properties. Increasing the GaAs thickness results in both an increased In compositional contrast between the QRs and the surrounding layer, and an increased QR length. QRs with an aspect ratio of up to 10 were obtained, representing quasi-quantum wires in a GaAs matrix. Due to the modified confinement and strain potential, such nanostructures are promising for controlling gain polarization.
Line-drawing algorithms for parallel machines
NASA Technical Reports Server (NTRS)
Pang, Alex T.
1990-01-01
The fact that conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient codes is addressed. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it is sufficiently close to it) and intersection with a line (a point is on the line if it yields an intersection point). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.
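The "distance to a line" alternative lends itself to a direct vectorized sketch: every pixel (conceptually, every SIMD processor) tests whether it lies close enough to the ideal line. The half-pixel threshold and grid size below are illustrative choices, not taken from the report.

import numpy as np

def draw_line(width, height, x0, y0, x1, y1, thickness=0.5):
    ys, xs = np.mgrid[0:height, 0:width]           # one coordinate pair per pixel/processor
    dx, dy = x1 - x0, y1 - y0
    length = np.hypot(dx, dy)
    # Perpendicular distance of each pixel centre from the infinite line.
    dist = np.abs(dy * (xs - x0) - dx * (ys - y0)) / length
    # Restrict to the segment by projecting onto the line direction.
    t = ((xs - x0) * dx + (ys - y0) * dy) / (length * length)
    return (dist <= thickness) & (t >= 0.0) & (t <= 1.0)

img = draw_line(16, 8, 1, 1, 14, 6)
print("\n".join("".join("#" if p else "." for p in row) for row in img))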
NASA Astrophysics Data System (ADS)
Hoss, F.; Fischbeck, P. S.
2014-10-01
This study further develops the method of quantile regression (QR) to predict exceedance probabilities of flood stages by post-processing forecasts. Using data from the 82 river gages for which the National Weather Service's North Central River Forecast Center issues forecasts daily, this is the first QR application to US river gages. Archived forecasts for lead times up to six days from 2001-2013 were analyzed. Earlier implementations of QR used the forecast itself as the only independent variable (Weerts et al., 2011; López López et al., 2014). This study adds the rise rate of the river stage in the last 24 and 48 h and the forecast error 24 and 48 h ago to the QR model. Including those four variables significantly improved the forecasts, as measured by the Brier Skill Score (BSS). Mainly, the resolution increases, as the original QR implementation already delivered high reliability. Using the other four variables without the forecast itself results in much less favorable BSSs. Lastly, the forecast performance does not depend on the size of the training dataset, but on the year, the river gage, lead time and event threshold that are being forecast. We find that each event threshold requires a separate model configuration or at least calibration.
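A minimal sketch of post-processing a stage forecast with quantile regression, in the spirit of the entry above, is shown below using statsmodels' QuantReg. The synthetic data and the two extra predictors stand in for the paper's forecast archive; fitting a high quantile (for example 0.9) yields a stage exceeded about 10% of the time, which maps directly onto an exceedance probability.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
forecast = rng.uniform(2.0, 20.0, n)                  # raw model forecast of river stage (ft)
rise_24h = rng.normal(0.0, 1.0, n)                    # rise rate over the last 24 h
err_24h = rng.normal(0.0, 0.5, n)                     # forecast error 24 h ago
observed = forecast + 0.4 * rise_24h - 0.5 * err_24h + rng.normal(0.0, 1.0, n)

X = sm.add_constant(np.column_stack([forecast, rise_24h, err_24h]))
model = sm.QuantReg(observed, X)

# Fit several quantiles; each fitted quantile of the observed stage, conditional on the
# predictors, can be read as one point on the exceedance-probability curve.
for q in (0.1, 0.5, 0.9):
    res = model.fit(q=q)
    print(f"q={q}: coefficients {np.round(res.params, 3)}")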
Afifi, Nehal A; Ibrahim, Marwa A; Galal, Mona K
2018-06-01
Despite all the studies performed to date, therapy choices for liver injuries are very few. Therefore, the search for a new treatment that could safely and effectively block or reverse liver injuries remains a challenge. Quercetin (QR) and ellagic acid (EA) had potent antioxidant and anti-inflammatory activities. The current study aimed at evaluating the potential hepatoprotective influence of QR and EA against thioacetamide (TAA)-induced liver toxicity in rats and the underlying mechanism using silymarin as a reference drug. Fifty mature male rats were orally treated daily with EA and QR in separate groups for 45 consecutive days, and then were injected with TAA twice with 24 h intervals in the last 2 days of the experiment. Administration of TAA resulted in marked elevation of liver indices, alteration in oxidative stress parameters, and significant elevation in expression level of fibrosis-related genes (MMP9 and MMP2). Administration of QR and EA significantly attenuated the hepatic toxicity through reduction of liver biomarkers, improving the redox status of the tissue, as well as hampering the expression level of fibrosis-related genes. In this study, QR and EA were proved to attenuate the hepatotoxicity through their antioxidant, metal-chelating capacity, and anti-inflammatory effects.
Electrochemical Impedance Analysis of a PEDOT:PSS-Based Textile Energy Storage Device
Gokceoren, Argun Talat; Odhiambo, Sheilla Atieno; De Mey, Gilbert; Hertleer, Carla; Van Langenhove, Lieva
2017-01-01
A textile-based energy storage device with electroactive PEDOT:PSS (poly(3,4-ethylenedioxythiophene)/poly(4-styrenesulfonate)) polymer functioning as a solid-state polyelectrolyte has been developed. The device was fabricated on textile fabric with two plies of stainless-steel electroconductive yarn as the electrodes. In this study, cyclic voltammetry and electrochemical impedance analysis were used to investigate ionic and electronic activities in the bulk of the PEDOT:PSS and at its interfaces with the stainless steel yarn electrodes. Complex behavior of ionic and electronic origin was observed in the interfacial region between the conductive polymer and the electrodes. The migration and diffusion of the ions involved were confirmed by the presence of the Warburg element with a phase shift of 45° (n = 0.5). Two different equivalent circuit models were found by fitting the model to the experimental results: (QR)(QR)(QR) for uncharged and (QR)(QR)(Q(RW)) for charged samples. The analyses also showed that the further the distance between the electrodes, the lower the capacitance of the cell. The distribution of the polymer on the cell surface also played an important role in determining the capacitance of the device. The results of this work may lead to a better understanding of the mechanism and of how to improve the performance of the device. PMID:29283427
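The (QR)(QR)(QR) notation above denotes three series-connected parallel combinations of a constant-phase element Q and a resistor R, where a constant-phase element has impedance Z_Q = 1/(Q(jω)^n). The short sketch below evaluates such a model on a frequency grid; the element values are arbitrary placeholders rather than the fitted parameters from the study, and n = 0.5 corresponds to the Warburg-type behavior mentioned above.

import numpy as np

def qr_element(omega, Q, n, R):
    """Impedance of one parallel Q||R block; Z_Q = 1/(Q*(j*omega)**n)."""
    z_q = 1.0 / (Q * (1j * omega) ** n)
    return (z_q * R) / (z_q + R)

def qr_qr_qr(omega, params):
    """Series connection of three Q||R blocks: the (QR)(QR)(QR) model."""
    return sum(qr_element(omega, *p) for p in params)

freq = np.logspace(-2, 5, 200)              # Hz
omega = 2 * np.pi * freq
params = [(1e-4, 0.9, 50.0),                # (Q, n, R) for each block: placeholders
          (5e-4, 0.8, 200.0),
          (1e-3, 0.5, 1000.0)]              # n = 0.5 behaves like a Warburg-type element
Z = qr_qr_qr(omega, params)
print("low-frequency |Z|:", abs(Z[0]), "high-frequency |Z|:", abs(Z[-1]))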
NASA Astrophysics Data System (ADS)
Teng, Zhaojie; Zhang, Wenyan; Chen, Yiran; Pan, Hongmiao; Xiao, Tian; Wu, Long-Fei
2017-08-01
Magnetotactic bacteria are a group of Gram-negative bacteria that synthesize magnetic crystals, enabling them to navigate in relation to magnetic field lines. Morphologies of magnetotactic bacteria include spirillum, coccoid, rod, vibrio, and multicellular morphotypes. The coccoid shape is generally the most abundant morphotype among magnetotactic bacteria. Here we describe a species of giant rod-shaped magnetotactic bacteria (designated QR-1) collected from sediment in the low tide zone of Huiquan Bay (Yellow Sea, China). This morphotype accounted for 90% of the magnetotactic bacteria collected and was the only taxonomic group detected at the sampling site. Microscopy analysis revealed that QR-1 cells averaged (6.71±1.03)×(1.54±0.20) μm in size, and each cell contained 42-146 magnetosomes arranged in a bundle of one to four chains along the long axis of the cell. The QR-1 cells displayed axial magnetotaxis with an average velocity of 70±28 μm/s. Transmission electron microscopy-based analysis showed that QR-1 cells have two tufts of flagella at each end. Phylogenetic analysis of the 16S rRNA genes revealed that QR-1, together with three other rod-shaped uncultivated magnetotactic bacteria, clusters into a deep branch of the Alphaproteobacteria.
Integrating Quantitative Reasoning into STEM Courses Using an Energy and Environment Context
NASA Astrophysics Data System (ADS)
Myers, J. D.; Lyford, M. E.; Mayes, R. L.
2010-12-01
Many secondary and post-secondary science classes do not integrate math into their curriculum, while math classes commonly teach concepts without meaningful context. Consequently, students lack basic quantitative skills and the ability to apply them in real-world contexts. For the past three years, a Wyoming Department of Education funded Math Science Partnership at the University of Wyoming (UW) has brought together middle and high school science and math teachers to model how math and science can be taught together in a meaningful way. The UW QR-STEM project emphasizes the importance of Quantitative Reasoning (QR) to student success in Science, Technology, Engineering and Mathematics (STEM). To provide a social context, QR-STEM has focused on energy and the environment. In particular, the project has examined how QR and STEM concepts play critical roles in many of the current global challenges of energy and environment. During four 3-day workshops each summer and over several virtual and short face-to-face meetings during the academic year, UW and community college science and math faculty work with math and science teachers from middle and high schools across the state to improve QR instruction in math and science classes. During the summer workshops, faculty from chemistry, physics, earth sciences, biology and math lead sessions to: 1) improve the basic science content knowledge of teachers; 2) improve teacher understanding of math and statistical concepts, 3) model how QR can be taught by engaging teachers in sessions that integrate math and science in an energy and environment context; and 4) focus curricula using Understanding by Design to identify enduring understandings on which to center instructional strategies and assessment. In addition to presenting content, faculty work with teachers as they develop classroom lessons and larger units to be implemented during the school year. Teachers form interdisciplinary groups which often consist of math and science teachers from the same school or district. By jointly developing units focused on energy and environment, math and science curricula can be coordinated during the school year. During development, teams present their curricular ideas for peer-review. Throughout the school year, teachers implement their units and collect pre-post data on student learning. Ultimately, science teachers integrate math into their science courses, and math teachers integrate science content in their math courses. Following implementation, participants share their experiences with their peers and faculty. Of central interest during these presentations are: 1) How did the QR-STEM experience change teacher practices in the classroom?; and 2) How did the modification of their teaching practices impact student learning and their ability to successfully master QR? The UW QR-STEM has worked with Wyoming science and math teachers from across the state over the three year grant period.
Multiprocessing the Sieve of Eratosthenes
NASA Technical Reports Server (NTRS)
Bokhari, S.
1986-01-01
The Sieve of Eratosthenes for finding prime numbers in recent years has seen much use as a benchmark algorithm for serial computers while its intrinsically parallel nature has gone largely unnoticed. The implementation of a parallel version of this algorithm for a real parallel computer, the Flex/32, is described and its performance discussed. It is shown that the algorithm is sensitive to several fundamental performance parameters of parallel machines, such as spawning time, signaling time, memory access, and overhead of process switching. Because of the nature of the algorithm, it is impossible to get any speedup beyond 4 or 5 processors unless some form of dynamic load balancing is employed. We describe the performance of our algorithm with and without load balancing and compare it with theoretical lower bounds and simulated results. It is straightforward to understand this algorithm and to check the final results. However, its efficient implementation on a real parallel machine requires thoughtful design, especially if dynamic load balancing is desired. The fundamental operations required by the algorithm are very simple: this means that the slightest overhead appears prominently in performance data. The Sieve thus serves not only as a very severe test of the capabilities of a parallel processor but is also an interesting challenge for the programmer.
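One common way to parallelize the sieve, sketched below, is to find the base primes up to sqrt(N) serially and let each worker strike out composites in its own segment of the range. The process pool stands in for the Flex/32 processors; the static segmentation shown here is the simplest scheme and does not include the dynamic load balancing discussed above.

from multiprocessing import Pool
import numpy as np

def base_primes(limit):
    """Ordinary serial sieve up to `limit`."""
    mask = np.ones(limit + 1, dtype=bool)
    mask[:2] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if mask[p]:
            mask[p * p::p] = False
    return np.flatnonzero(mask)

def sieve_segment(args):
    """Mark composites in [lo, hi) using the precomputed base primes."""
    lo, hi, primes = args
    mask = np.ones(hi - lo, dtype=bool)
    for p in primes:
        start = max(p * p, ((lo + p - 1) // p) * p)   # first multiple of p in [lo, hi)
        if start < hi:
            mask[start - lo::p] = False
    return (lo + np.flatnonzero(mask)).tolist()

if __name__ == "__main__":
    N, workers = 1_000_000, 4
    primes = base_primes(int(N ** 0.5))
    bounds = np.linspace(2, N + 1, workers + 1, dtype=int)
    tasks = [(int(bounds[i]), int(bounds[i + 1]), primes) for i in range(workers)]
    with Pool(workers) as pool:
        counts = [len(seg) for seg in pool.map(sieve_segment, tasks)]
    print("primes below", N, ":", sum(counts))        # 78498 expected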
A Parallel Rendering Algorithm for MIMD Architectures
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.; Orloff, Tobias
1991-01-01
Applications such as animation and scientific visualization demand high performance rendering of complex three dimensional scenes. To deliver the necessary rendering rates, highly parallel hardware architectures are required. The challenge is then to design algorithms and software which effectively use the hardware parallelism. A rendering algorithm targeted to distributed memory MIMD architectures is described. For maximum performance, the algorithm exploits both object-level and pixel-level parallelism. The behavior of the algorithm is examined both analytically and experimentally. Its performance for large numbers of processors is found to be limited primarily by communication overheads. An experimental implementation for the Intel iPSC/860 shows increasing performance from 1 to 128 processors across a wide range of scene complexities. It is shown that minimal modifications to the algorithm will adapt it for use on shared memory architectures as well.
A highly efficient multi-core algorithm for clustering extremely large datasets
2010-01-01
Background In recent years, the demand for computational power in computational biology has increased due to rapidly growing data sets from microarray and other high-throughput technologies. This demand is likely to increase. Standard algorithms for analyzing data, such as cluster algorithms, need to be parallelized for fast processing. Unfortunately, most approaches for parallelizing algorithms largely rely on network communication protocols connecting and requiring multiple computers. One answer to this problem is to utilize the intrinsic capabilities in current multi-core hardware to distribute the tasks among the different cores of one computer. Results We introduce a multi-core parallelization of the k-means and k-modes cluster algorithms based on the design principles of transactional memory for clustering gene expression microarray type data and categorial SNP data. Our new shared memory parallel algorithms show to be highly efficient. We demonstrate their computational power and show their utility in cluster stability and sensitivity analysis employing repeated runs with slightly changed parameters. Computation speed of our Java based algorithm was increased by a factor of 10 for large data sets while preserving computational accuracy compared to single-core implementations and a recently published network based parallelization. Conclusions Most desktop computers and even notebooks provide at least dual-core processors. Our multi-core algorithms show that using modern algorithmic concepts, parallelization makes it possible to perform even such laborious tasks as cluster sensitivity and cluster number estimation on the laboratory computer. PMID:20370922
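The idea of farming the distance computations of k-means out to several cores can be sketched in a few lines. The Python fragment below is not the authors' Java/transactional-memory implementation; it parallelizes only the assignment step with a process pool and keeps the centroid update serial, and the toy data and names are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def assign_chunk(args):
    """Assign each row of the chunk to its nearest centroid (squared Euclidean distance)."""
    chunk, centroids = args
    dists = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def parallel_kmeans(data, k, iters=20, workers=4, seed=0):
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)].astype(float)
    chunks = np.array_split(data, workers)          # one chunk per worker process
    with Pool(workers) as pool:
        for _ in range(iters):
            labels = np.concatenate(
                pool.map(assign_chunk, [(c, centroids) for c in chunks]))
            # Serial update step: recompute each centroid as the mean of its members.
            for j in range(k):
                members = data[labels == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
    return centroids, labels

if __name__ == "__main__":
    pts = np.vstack([np.random.randn(200, 2) + [5, 5],
                     np.random.randn(200, 2) - [5, 5]])
    cents, labs = parallel_kmeans(pts, k=2)
    print(cents)   # roughly (5, 5) and (-5, -5)
```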
A sweep algorithm for massively parallel simulation of circuit-switched networks
NASA Technical Reports Server (NTRS)
Gaujal, Bruno; Greenberg, Albert G.; Nicol, David M.
1992-01-01
A new massively parallel algorithm is presented for simulating large asymmetric circuit-switched networks, controlled by a randomized-routing policy that includes trunk-reservation. A single instruction multiple data (SIMD) implementation is described, and corresponding experiments on a 16384 processor MasPar parallel computer are reported. A multiple instruction multiple data (MIMD) implementation is also described, and corresponding experiments on an Intel iPSC/860 parallel computer, using 16 processors, are reported. By exploiting parallelism, our algorithm increases the possible execution rate of such complex simulations by as much as an order of magnitude.
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chiou, Jin-Chern
1990-01-01
Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was carried out efficiently. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, for which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting
Azad, Ariful; Buluç, Aydın; Pothen, Alex
2016-03-24
It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single-source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.
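For contrast with the multi-source parallel scheme described above, the classical serial building block, repeated augmenting-path search from one free vertex at a time, can be written compactly. The sketch below uses a depth-first variant (Kuhn's algorithm) purely to illustrate what an augmenting path does; the paper's BFS-based, tree-grafting, NUMA-aware algorithm is considerably more involved.

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Maximum cardinality matching via repeated augmenting-path searches.

    adj[u] lists the right-side neighbours of left vertex u. Returns the
    matching size and match_right, where match_right[v] is the left vertex
    matched to right vertex v (or -1 if v is unmatched).
    """
    match_right = [-1] * n_right

    def try_augment(u, visited):
        # Depth-first search for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if visited[v]:
                continue
            visited[v] = True
            # v is free, or the vertex currently matched to v can be re-matched elsewhere.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matching_size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matching_size += 1
    return matching_size, match_right

if __name__ == "__main__":
    # Left vertices 0..3, right vertices 0..3.
    adj = [[0, 1], [0], [1, 2], [2, 3]]
    size, match = max_bipartite_matching(adj, 4, 4)
    print(size, match)   # 4 [1, 0, 2, 3] (one optimal matching)
```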
Fast parallel approach for 2-D DHT-based real-valued discrete Gabor transform.
Tao, Liang; Kwan, Hon Keung
2009-12-01
Two-dimensional fast Gabor transform algorithms are useful for real-time applications due to the high computational complexity of the traditional 2-D complex-valued discrete Gabor transform (CDGT). This paper presents two block time-recursive algorithms for the 2-D DHT-based real-valued discrete Gabor transform (RDGT) and its inverse transform and develops a fast parallel approach for the implementation of the two algorithms. The computational complexity of the proposed parallel approach is analyzed and compared with that of the existing 2-D CDGT algorithms. The results indicate that the proposed parallel approach is attractive for real-time image processing.
Communications oriented programming of parallel iterative solutions of sparse linear systems
NASA Technical Reports Server (NTRS)
Patrick, M. L.; Pratt, T. W.
1986-01-01
Parallel algorithms are developed for a class of scientific computational problems by partitioning the problems into smaller problems which may be solved concurrently. The effectiveness of the resulting parallel solutions is determined by the amount and frequency of communication and synchronization and the extent to which communication can be overlapped with computation. Three different parallel algorithms for solving the same class of problems are presented, and their effectiveness is analyzed from this point of view. The algorithms are programmed using a new programming environment. Run-time statistics and experience obtained from the execution of these programs assist in measuring the effectiveness of these algorithms.
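A concrete instance of the partitioning idea can be sketched under the assumption that a plain Jacobi iteration is an acceptable stand-in for the iterative solvers discussed in the paper: each block of unknowns is updated independently (the per-processor work), and sharing the new iterate between sweeps plays the role of the communication/synchronization step. The paper's programming environment and its overlap of communication with computation are not modeled here.

```python
import numpy as np

def jacobi_partitioned(A, b, parts=4, iters=200):
    """Jacobi iteration with the unknowns partitioned into blocks.

    Each block update uses only its own rows of A, mimicking per-processor work;
    exchanging the updated iterate between sweeps stands in for communication.
    """
    n = len(b)
    x = np.zeros(n)
    D = np.diag(A)
    blocks = np.array_split(np.arange(n), parts)
    for _ in range(iters):
        x_new = np.empty_like(x)
        for idx in blocks:                      # each block could run on its own processor
            r = b[idx] - A[idx] @ x + D[idx] * x[idx]
            x_new[idx] = r / D[idx]
        x = x_new                               # "communication": share the new iterate
    return x

if __name__ == "__main__":
    # Diagonally dominant tridiagonal test system.
    n = 100
    A = (np.diag(4.0 * np.ones(n))
         + np.diag(-1.0 * np.ones(n - 1), 1)
         + np.diag(-1.0 * np.ones(n - 1), -1))
    b = np.ones(n)
    x = jacobi_partitioned(A, b)
    print(np.max(np.abs(A @ x - b)))            # residual should be tiny
```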
Efficient Parallel Implementation of Active Appearance Model Fitting Algorithm on GPU
Wang, Jinwei; Ma, Xirong; Zhu, Yuanping; Sun, Jizhou
2014-01-01
The active appearance model (AAM) is one of the most powerful model-based object detecting and tracking methods which has been widely used in various situations. However, the high-dimensional texture representation causes very time-consuming computations, which makes the AAM difficult to apply to real-time systems. The emergence of modern graphics processing units (GPUs) that feature a many-core, fine-grained parallel architecture provides new and promising solutions to overcome the computational challenge. In this paper, we propose an efficient parallel implementation of the AAM fitting algorithm on GPUs. Our design idea is fine grain parallelism in which we distribute the texture data of the AAM, in pixels, to thousands of parallel GPU threads for processing, which makes the algorithm fit better into the GPU architecture. We implement our algorithm using the compute unified device architecture (CUDA) on the Nvidia's GTX 650 GPU, which has the latest Kepler architecture. To compare the performance of our algorithm with different data sizes, we built sixteen face AAM models of different dimensional textures. The experiment results show that our parallel AAM fitting algorithm can achieve real-time performance for videos even on very high-dimensional textures. PMID:24723812
Research on parallel algorithm for sequential pattern mining
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao
2008-03-01
Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover patterns of customer purchasing over time by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data involved in sequential pattern mining are characterized by massive volume and distributed storage, and most existing sequential pattern mining algorithms do not address these characteristics together. Motivated by these traits and drawing on parallel computing theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task constructs frequent item sets by applying the frequency concept and search-space partitioning, and the second task builds frequent sequences using depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Using a random data generation procedure and several designed information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm achieves excellent speedup and efficiency.
Parallel/distributed direct method for solving linear systems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit near-optimal performance and enjoy several important features: (1) For large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors as its performance grows monotonically with them; (2) It is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) It can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) This set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.
[QR-Code based patient tracking: a cost-effective option to improve patient safety].
Fischer, M; Rybitskiy, D; Strauß, G; Dietz, A; Dressler, C R
2013-03-01
Hospitals are implementing risk management systems to avoid patient or surgery mix-ups. The trend is to use preoperative checklists. This work deals specifically with a form of patient identification realized by storing patient data on a medium fixed to the patient. In 127 ENT surgeries, data relevant for patient identification were encoded in a 2D QR code. The code, supplied either as a separate document accompanying the patient chart or on a patient wristband, was decoded in the OR and the patient data were displayed visibly for all persons present. The decoding time, the agreement of the patient data, and the duration of patient identification were compared with traditional patient identification by inspection of the patient chart. A total of 125 QR codes were read. The time for decoding the QR code was 5.6 s, the time for the on-screen review for patient identification was 7.9 s, and, for a comparison group of 75 operations, traditional patient identification took 27.3 s. Overall, there were 6 relevant information errors in the two parts of the experiment, corresponding to a rate of 0.6% across the 8 relevant data classes encoded in each QR code. This work demonstrates a cost-effective way to technically support patient identification based on electronic patient data. It was shown that use in clinical routine is feasible. The disadvantage is potential misinformation from incorrect or missing information in the HIS, or from changes to the data after the code was created. QR-code-based patient tracking is seen as a useful complement to the already widely used identification wristband. © Georg Thieme Verlag KG Stuttgart · New York.
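A minimal sketch of the encode/decode workflow, assuming the third-party Python packages qrcode and pyzbar (neither is named in the study) and entirely hypothetical patient fields; in practice the payload would follow the hospital information system's data model rather than ad hoc JSON.

```python
import json

import qrcode                      # pip install qrcode[pil]
from pyzbar.pyzbar import decode   # pip install pyzbar (requires the zbar library)
from PIL import Image

# Hypothetical patient record; field names are illustrative only.
record = {"id": "P-000123", "name": "Doe, Jane", "dob": "1970-01-01",
          "procedure": "tympanoplasty", "side": "left"}

# Encode the record as JSON in a QR code and save it for the chart or wristband.
img = qrcode.make(json.dumps(record))
img.save("patient_qr.png")

# In the OR, decode the code and display the fields for verification.
decoded = decode(Image.open("patient_qr.png"))
data = json.loads(decoded[0].data.decode("utf-8"))
print(data["id"], data["name"], data["procedure"], data["side"])
```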
Chamarthi, Bindu; Gaziano, J. Michael; Blonde, Lawrence; Scranton, Richard E.; Ezrokhi, Michael; Rutty, Dean; Cincotta, Anthony H.
2015-01-01
Background. Type 2 diabetes (T2DM) patients, including those in good glycemic control, have an increased risk of cardiovascular disease (CVD). Maintaining good glycemic control may reduce long-term CVD risk. However, other risk factors such as elevated vascular sympathetic tone and/or endothelial dysfunction may be stronger potentiators of CVD. This study evaluated the impact of bromocriptine-QR, a sympatholytic dopamine D2 receptor agonist, on progression of metabolic disease and CVD in T2DM subjects in good glycemic control (HbA1c ≤7.0%). Methods. 1834 subjects (1219 bromocriptine-QR; 615 placebo) with baseline HbA1c ≤7.0% derived from the Cycloset Safety Trial (this trial is registered with ClinicalTrials.gov Identifier: NCT00377676), a 12-month, randomized, multicenter, placebo-controlled, double-blind study in T2DM, were evaluated. Treatment impact upon a prespecified composite CVD endpoint (first myocardial infarction, stroke, coronary revascularization, or hospitalization for angina/congestive heart failure) and the odds of losing glycemic control (HbA1c >7.0% after 52 weeks of therapy) were determined. Results. Bromocriptine-QR reduced the CVD endpoint by 48% (intention-to-treat; HR: 0.52 [0.28−0.98]) and 52% (on-treatment analysis; HR: 0.48 [0.24−0.95]). Bromocriptine-QR also reduced the odds of both losing glycemic control (OR: 0.63 (0.47−0.85), p = 0.002) and requiring treatment intensification to maintain HbA1c ≤7.0% (OR: 0.46 (0.31−0.69), p = 0.0002). Conclusions. Bromocriptine-QR therapy slowed the progression of CVD and metabolic disease in T2DM subjects in good glycemic control. PMID:26060823
Chamarthi, Bindu; Gaziano, J Michael; Blonde, Lawrence; Vinik, Aaron; Scranton, Richard E; Ezrokhi, Michael; Rutty, Dean; Cincotta, Anthony H
2015-01-01
Type 2 diabetes (T2DM) patients, including those in good glycemic control, have an increased risk of cardiovascular disease (CVD). Maintaining good glycemic control may reduce long-term CVD risk. However, other risk factors such as elevated vascular sympathetic tone and/or endothelial dysfunction may be stronger potentiators of CVD. This study evaluated the impact of bromocriptine-QR, a sympatholytic dopamine D2 receptor agonist, on progression of metabolic disease and CVD in T2DM subjects in good glycemic control (HbA1c ≤ 7.0%). 1834 subjects (1219 bromocriptine-QR; 615 placebo) with baseline HbA1c ≤ 7.0% derived from the Cycloset Safety Trial (this trial is registered with ClinicalTrials.gov Identifier: NCT00377676), a 12-month, randomized, multicenter, placebo-controlled, double-blind study in T2DM, were evaluated. Treatment impact upon a prespecified composite CVD endpoint (first myocardial infarction, stroke, coronary revascularization, or hospitalization for angina/congestive heart failure) and the odds of losing glycemic control (HbA1c >7.0% after 52 weeks of therapy) were determined. Bromocriptine-QR reduced the CVD endpoint by 48% (intention-to-treat; HR: 0.52 [0.28-0.98]) and 52% (on-treatment analysis; HR: 0.48 [0.24-0.95]). Bromocriptine-QR also reduced the odds of both losing glycemic control (OR: 0.63 (0.47-0.85), p = 0.002) and requiring treatment intensification to maintain HbA1c ≤ 7.0% (OR: 0.46 (0.31-0.69), p = 0.0002). Bromocriptine-QR therapy slowed the progression of CVD and metabolic disease in T2DM subjects in good glycemic control.
On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms.
Chen, Chunlei; He, Li; Zhang, Huixiang; Zheng, Hao; Wang, Lei
2017-01-01
Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, the incremental clustering algorithm is facing a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions.
NASA Technical Reports Server (NTRS)
Krosel, S. M.; Milner, E. J.
1982-01-01
The application of predictor-corrector integration algorithms developed for the digital parallel processing environment is investigated. The algorithms are implemented and evaluated through the use of a software simulator which provides an approximate representation of the parallel processing hardware. Test cases which focus on the use of the algorithms are presented and a specific application using a linear model of a turbofan engine is considered. Results are presented showing the effects of integration step size and the number of processors on simulation accuracy. Real-time performance, interprocessor communication, and algorithm startup are also discussed.
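The predictor-corrector idea itself is easy to state in code. The sketch below is a serial two-step Adams-Bashforth predictor with a trapezoidal corrector applied to a small linear model; it is only a stand-in for the parallel-processor algorithms studied in the report, and the matrix is a toy example, not the turbofan model.

```python
import numpy as np

def pece_integrate(A, x0, dt, steps):
    """Integrate dx/dt = A @ x with an AB2 predictor / trapezoidal corrector (PECE)."""
    x = np.array(x0, dtype=float)
    f_prev = A @ x                     # derivative carried over from the previous step
    history = [x.copy()]
    for _ in range(steps):
        f_curr = A @ x
        # Predict with the two-step Adams-Bashforth formula.
        x_pred = x + dt * (1.5 * f_curr - 0.5 * f_prev)
        # Evaluate at the prediction, then correct with the trapezoidal rule.
        x_corr = x + 0.5 * dt * (f_curr + A @ x_pred)
        f_prev = f_curr
        x = x_corr
        history.append(x.copy())
    return np.array(history)

if __name__ == "__main__":
    # Toy stable linear model (a stand-in for a linearized engine model).
    A = np.array([[-1.0, 0.5], [0.0, -2.0]])
    traj = pece_integrate(A, x0=[1.0, 1.0], dt=0.01, steps=500)
    print(traj[-1])   # state after 5 seconds of simulated time
```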
Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Gatski, Thomas B.
1997-01-01
A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.
NASA Astrophysics Data System (ADS)
Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.
2017-12-01
This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
MULTIOBJECTIVE PARALLEL GENETIC ALGORITHM FOR WASTE MINIMIZATION
In this research we have developed an efficient multiobjective parallel genetic algorithm (MOPGA) for waste minimization problems. This MOPGA integrates PGAPack (Levine, 1996) and NSGA-II (Deb, 2000) with novel modifications. PGAPack is a master-slave parallel implementation of a...
A sample implementation for parallelizing Divide-and-Conquer algorithms on the GPU.
Mei, Gang; Zhang, Jiayin; Xu, Nengxiong; Zhao, Kunyang
2018-01-01
The strategy of Divide-and-Conquer (D&C) is one of the most frequently used programming patterns for designing efficient algorithms in computer science; it has been parallelized on both shared memory and distributed memory systems. Tzeng and Owens specifically developed a generic paradigm for parallelizing D&C algorithms on modern Graphics Processing Units (GPUs). In this paper, by following the generic paradigm proposed by Tzeng and Owens, we provide a new and publicly available GPU implementation of the famous D&C algorithm, QuickHull, to give a sample and guide for parallelizing D&C algorithms on the GPU. The experimental results demonstrate the practicality of our sample GPU implementation. Our research objective in this paper is to present a sample GPU implementation of a classical D&C algorithm to help interested readers to develop their own efficient GPU implementations with less effort.
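To make the divide-and-conquer structure concrete, here is a short serial QuickHull sketch in Python; the paper's contribution is mapping exactly this recursion onto GPU kernels following Tzeng and Owens' paradigm, which the sketch does not attempt.

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive if b lies left of the ray o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_side(pts, a, b):
    """Hull points strictly left of segment a->b (the divide step)."""
    left = [p for p in pts if cross(a, b, p) > 0]
    if not left:
        return []
    far = max(left, key=lambda p: cross(a, b, p))   # farthest point from the line a-b
    # Conquer the two subproblems on either side of the farthest point.
    return hull_side(left, a, far) + [far] + hull_side(left, far, b)

def quickhull(points):
    pts = sorted(set(map(tuple, points)))
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]                          # extreme points split the set in two
    return [a] + hull_side(pts, a, b) + [b] + hull_side(pts, b, a)

if __name__ == "__main__":
    print(quickhull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0.5)]))
    # prints the four hull corners: [(0, 0), (0, 2), (2, 2), (2, 0)]
```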
Data communications in a parallel active messaging interface of a parallel computer
Davis, Kristan D.; Faraj, Daniel A.
2014-07-22
Algorithm selection for data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI, including associating in the PAMI data communications algorithms and ranges of message sizes so that each algorithm is associated with a separate range of message sizes; receiving in an origin endpoint of the PAMI a data communications instruction, the instruction specifying transmission of a data communications message from the origin endpoint to a target endpoint, the data communications message characterized by a message size; selecting, from among the associated algorithms and ranges, a data communications algorithm in dependence upon the message size; and transmitting, according to the selected data communications algorithm from the origin endpoint to the target endpoint, the data communications message.
Data communications in a parallel active messaging interface of a parallel computer
Davis, Kristan D; Faraj, Daniel A
2013-07-09
Algorithm selection for data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI, including associating in the PAMI data communications algorithms and ranges of message sizes so that each algorithm is associated with a separate range of message sizes; receiving in an origin endpoint of the PAMI a data communications instruction, the instruction specifying transmission of a data communications message from the origin endpoint to a target endpoint, the data communications message characterized by a message size; selecting, from among the associated algorithms and ranges, a data communications algorithm in dependence upon the message size; and transmitting, according to the selected data communications algorithm from the origin endpoint to the target endpoint, the data communications message.
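The core selection step described in these records, associating message-size ranges with data communications algorithms and dispatching on the size of each outgoing message, can be illustrated with a small table lookup. The algorithm names and size thresholds below are purely hypothetical and are not taken from the patents.

```python
import bisect

class AlgorithmTable:
    """Map message-size ranges to data-communications algorithms.

    Ranges are (upper_bound_exclusive, algorithm) pairs; the final entry uses
    None as its bound and covers all larger messages.
    """
    def __init__(self, ranges):
        self.bounds = [b for b, _ in ranges[:-1]]
        self.algos = [a for _, a in ranges]

    def select(self, message_size):
        # Find the first range whose upper bound exceeds the message size.
        return self.algos[bisect.bisect_right(self.bounds, message_size)]

def send_eager(msg):      return f"eager send of {len(msg)} bytes"
def send_rendezvous(msg): return f"rendezvous send of {len(msg)} bytes"
def send_rdma(msg):       return f"RDMA send of {len(msg)} bytes"

if __name__ == "__main__":
    table = AlgorithmTable([(4096, send_eager),
                            (1 << 20, send_rendezvous),
                            (None, send_rdma)])
    for size in (64, 65536, 8 << 20):
        msg = b"x" * size
        # Select the algorithm from the message size, then transmit with it.
        print(table.select(len(msg))(msg))
```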
Ji, Long; Yuan, Yonglei; Ma, Zhongjun; Chen, Zhe; Gan, Lishe; Ma, Xiaoqiong; Huang, Dongsheng
2013-09-01
In the present study, it was demonstrated that the dichloromethane extract of Physalis pubescens L. (DEPP) had weak potential quinone reductase (QR) inducing activity, but an UPLC-ESI-MS method with glutathione (GSH) as the substrate revealed that the DEPP had electrophiles (with an α,β-unsaturated ketone moiety). These electrophiles could induce quinone reductase (QR) activity, which might be attributed to the modification of the highly reactive cysteine residues in Keap1. Herein, four withanolides, including three new compounds physapubescin B (2), physapubescin C (3), physapubescin D (4), together with one known steroidal compound physapubescin (1) were isolated. Structures of these compounds were determined by spectroscopic analysis and that of physapubescin C (3) was confirmed by a combination of molecular modeling and quantum chemical DFT-GIAO calculations. Evaluation of the QR inducing activities of all withanolides indicated potent activities of compounds 1 and 2, which had a common α,β-unsaturated ketone moiety. Copyright © 2013 Elsevier Ltd. All rights reserved.
Ding, Hui; Hu, Zhijuan; Yu, Liyan; Ma, Zhongjun; Ma, Xiaoqiong; Chen, Zhe; Wang, Dan; Zhao, Xiaofeng
2014-08-01
In the present study, the EtOAc extract of the persistent calyx of Physalis angulata L. var. villosa Bonati (PA) was tested for its potential quinone reductase (QR) inducing activity with glutathione (GSH) as the substrate using an UPLC-ESI-MS method. The result revealed that the PA had electrophiles that could induce quinone reductase (QR) activity, which might be attributed to the modification of the highly reactive cysteine residues in Keap1. Herein, three new withanolides, compounds 3, 6 and 7, together with four known withanolides, compounds 1, 2, 4 and 5 were isolated from PA extract. Their structures were determined by spectroscopic techniques, including (1)H-, (13)C NMR (DEPT), and 2D-NMR (HMBC, HMQC, (1)H, (1)H-COSY, NOESY) experiments, as well as by HR-MS. All the seven compounds were tested for their QR induction activities towards mouse hepa 1c1c7 cells. Copyright © 2014 Elsevier Inc. All rights reserved.
Rekadwad, Bhagwan N.; Khobragade, Chandrahasya N.
2016-01-01
Microbiologists are routinely engaged in the isolation, identification and comparison of bacterial isolates to assess their novelty. 16S rRNA sequences of Bacillus pumilus were retrieved from the NCBI repository and QR codes were generated for the sequences (FASTA format and full GenBank records). The 16S rRNA sequences were used to generate quick response (QR) codes for Bacillus pumilus strains isolated from Lonar Crater Lake (19° 58′ N; 76° 31′ E), India. The Bacillus pumilus 16S rRNA gene sequences were also used to generate CGR, FCGR and PCA, which can be used for visual comparison and evaluation, respectively. The hyperlinked QR codes, CGR, FCGR and PCA of all the isolates are made available to users on a portal, https://sites.google.com/site/bhagwanrekadwad/. These generated digital data help to evaluate and compare any Bacillus pumilus strain, minimize laboratory effort and avoid misinterpretation of the species. PMID:27141529
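The chaos game representation (CGR) mentioned above is simple to reproduce. The following sketch maps each base to a corner of the unit square and walks half-way toward it, then bins the points into a grid as a basic frequency CGR; it illustrates the general technique, not the authors' pipeline, and the sequence shown is a toy string rather than a real 16S rRNA gene.

```python
import numpy as np

# Corners of the unit square assigned to the four bases (a common CGR convention).
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_points(sequence):
    """Chaos game representation: step half-way toward the corner of each base."""
    x, y = 0.5, 0.5
    pts = []
    for base in sequence.upper():
        if base not in CORNERS:          # skip ambiguous bases such as N
            continue
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
        pts.append((x, y))
    return np.array(pts)

def fcgr_matrix(sequence, k=4):
    """Frequency CGR: count CGR points falling into a 2^k x 2^k grid."""
    pts = cgr_points(sequence)
    n = 2 ** k
    grid = np.zeros((n, n), dtype=int)
    for x, y in pts:
        grid[min(int(y * n), n - 1), min(int(x * n), n - 1)] += 1
    return grid

if __name__ == "__main__":
    toy_seq = "ACGTACGTGGGCCATTAACGGT"   # illustrative sequence only
    print(fcgr_matrix(toy_seq, k=2))
```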
Graphical Representation of Parallel Algorithmic Processes
1990-12-01
The goal of this study is to develop an algorithm animation facility for parallel processes executing on different architectures, from multiprocessor ... The facility interfaces with the AAARF main process; the source code for the AAARF class-common library is in the common subdirectory and consists of the following files... (AFIT/GCE/ENG/90D-07; approved for public release, distribution unlimited.)
The openGL visualization of the 2D parallel FDTD algorithm
NASA Astrophysics Data System (ADS)
Walendziuk, Wojciech
2005-02-01
This paper presents a visualization of a two-dimensional version of a parallel FDTD algorithm. The visualization module was created on the basis of the OpenGL graphics standard using the GLUT interface. In addition, the work reports the efficiency of the parallel algorithm in the form of speedup charts.
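For readers unfamiliar with the method being visualized, the FDTD update itself is a simple leapfrog stencil. The sketch below is a serial one-dimensional version in normalized units with a hard source; the paper's two-dimensional parallel implementation and its OpenGL/GLUT visualization are not reproduced here.

```python
import numpy as np

def fdtd_1d(nx=200, steps=400):
    """1-D FDTD (Yee) update with a hard sinusoidal source; returns the final Ez field."""
    ez = np.zeros(nx)          # electric field samples
    hy = np.zeros(nx)          # magnetic field samples (staggered half a cell)
    for n in range(steps):
        # Update H from the spatial difference of E (free space, Courant number 0.5).
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # Update E from the spatial difference of H.
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # Hard source in the middle of the grid.
        ez[nx // 2] = np.sin(0.1 * n)
    return ez

if __name__ == "__main__":
    field = fdtd_1d()
    print(field[90:110].round(3))   # snapshot of the field around the source
```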
Industry funding and the reporting quality of large long-term weight loss trials
Thomas, Olivia; Thabane, Lehana; Douketis, James; Chu, Rong; Westfall, Andrew O.; Allison, David B.
2009-01-01
Background Quality of reporting (QR) in industry-funded research is a concern of the scientific community. Greater scrutiny of industry-sponsored research reporting has been suggested, although differences in QR by sponsorship type have not been evaluated in weight loss interventions. Objective To evaluate the association of funding source and QR of long-term obesity randomized clinical trials. Methods We analyzed papers that reported long-term weight loss trials. Articles were obtained through searches of MEDLINE, HealthStar, and the Cochrane Controlled Trials Register between the years 1966–2003. QR scores were determined for each study based upon expanded criteria from the Consolidated Standards of Reporting Trials (CONSORT) checklist for a maximum score of 44 points. Studies were coded by category of industry support (0=no industry support, 1=industry support, 2=in-kind contribution from industry and 3=duality of interest reported). Individual CONSORT reporting criteria were tabulated by funding type. An independent samples t-test compared differences in QR scores by funding source and the Wilcoxon-Mann-Whitney test and generalized estimating equations (GEE) were used for sensitivity analyses. Results Of the 63 RCTs evaluated, 67% were industry-supported trials. Industry funding was associated with higher QR score in long-term weight loss trials compared to non-industry funded studies (Mean QR (SD): Industry = 27.9 (4.1), Non-Industry = 23.4 (4.1); p < 0.0005). The Wilcoxon-Mann-Whitney test confirmed this result (p<0.0005). Controlling for the year of publication and whether the paper was published before the CONSORT statement was released in a GEE regression analysis, the direction and magnitude of effect were similar and statistically significant (p=0.035). Of the individual criteria that prior research has associated with biases, industry funding was associated with greater reporting of intent-to-treat analysis (p=0.0158), but was not different from non-industry studies in reporting of treatment allocation and blinding. Conclusion Our findings suggest that efforts to improve reporting quality be directed at all obesity RCTs irrespective of funding source. PMID:18711388
Industry funding and the reporting quality of large long-term weight loss trials.
Thomas, O; Thabane, L; Douketis, J; Chu, R; Westfall, A O; Allison, D B
2008-10-01
Quality of reporting (QR) in industry-funded research is a concern of the scientific community. Greater scrutiny of industry-sponsored research reporting has been suggested, although differences in QR by sponsorship type have not been evaluated in weight loss interventions. To evaluate the association of funding source and QR of long-term obesity randomized clinical trials (RCT). We analysed papers that reported long-term weight loss trials. Articles were obtained through searches of Medline, HealthStar, and the Cochrane Controlled Trials Register between the years 1966 and 2003. QR scores were determined for each study based upon expanded criteria from the Consolidated Standards of Reporting Trials (CONSORT) checklist for a maximum score of 44 points. Studies were coded by category of industry support (0=no industry support, 1=industry support, 2=in-kind contribution from industry and 3=duality of interest reported). Individual CONSORT reporting criteria were tabulated by funding type. An independent samples t-test compared the differences in QR scores by funding source and the Wilcoxon-Mann-Whitney test and generalised estimating equations (GEE) were used for sensitivity analyses. Of the 63 RCTs evaluated, 67% were industry-supported trials. Industry funding was associated with higher QR score in long-term weight loss trials compared with nonindustry-funded studies (mean QR (s.d.): industry=27.9 (4.1), nonindustry=23.4 (4.1); P<0.0005). The Wilcoxon-Mann-Whitney test confirmed this result (P<0.0005). Controlling for the year of publication and whether the paper was published before the CONSORT statement was released in the GEE regression analysis, the direction and magnitude of effect were similar and statistically significant (P=0.035). Of the individual criteria that prior research has associated with biases, industry funding was associated with greater reporting of intent-to-treat analysis (P=0.0158), but was not different from nonindustry studies in reporting of treatment allocation and blinding. Our findings suggest that the efforts to improve reporting quality be directed to all obesity RCTs, irrespective of funding source.
Silachev, Denis N; Isaev, Nikolay K; Pevzner, Irina B; Zorova, Ljubava D; Stelmashook, Elena V; Novikova, Svetlana V; Plotnikov, Egor Y; Skulachev, Vladimir P; Zorov, Dmitry B
2012-01-01
Many ischemia-induced neurological pathologies including stroke are associated with high oxidative stress. Mitochondria-targeted antioxidants could rescue the ischemic organ by providing specific delivery of antioxidant molecules to the mitochondrion, which potentially suffers from oxidative stress more than non-mitochondrial cellular compartments. Besides direct antioxidative activity, these compounds are believed to activate numerous protective pathways. Endogenous anti-ischemic defense may involve the very powerful neuroprotective agent erythropoietin, which is mainly produced by the kidney in a redox-dependent manner, indicating an important role of the kidney in regulation of brain ischemic damage. The goal of this study is to track the relations between the kidney and the brain in terms of the amplification of defense mechanisms during SkQR1 treatment and remote renal preconditioning and provide evidence that the kidney can generate signals inducing a tolerance to oxidative stress-associated brain pathologies. We used the cationic plastoquinone derivative, SkQR1, as a mitochondria-targeted antioxidant to alleviate the deleterious consequences of stroke. A single injection of SkQR1 before cerebral ischemia in a dose-dependent manner reduces infarction and improves functional recovery. Concomitantly, an increase in the levels of erythropoietin in urine and phosphorylated glycogen synthase kinase-3β (GSK-3β) in the brain was detected 24 h after SkQR1 injection. However, protective effects of SkQR1 were not observed in rats with bilateral nephrectomy and in those treated with the nephrotoxic antibiotic gentamicin, indicating the protective role of humoral factor(s) which are released from functional kidneys. Renal preconditioning also induced brain protection in rats accompanied by an increased erythropoietin level in urine and kidney tissue and P-GSK-3β in brain. Co-cultivation of SkQR1-treated kidney cells with cortical neurons resulted in enchanced phosphorylation of GSK-3β in neuronal cells. The results indicate that renal preconditioning and SkQR1-induced brain protection may be mediated through the release of EPO from the kidney.
Cao, Jianfang; Chen, Lichao; Wang, Min; Tian, Yun
2018-01-01
The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up processing by approximately 3.4 times when handling large-scale datasets, which demonstrates the clear superiority of our method. The proposed algorithm in this study demonstrates both better edge detection performance and improved runtime performance.
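The Otsu step that tunes Canny's dual thresholds can be sketched serially in a few lines of numpy. Choosing the low threshold as half the high one is a common heuristic and an assumption here, not something specified in the paper, and the MapReduce/Hadoop parallelization is not shown.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the grey level that maximizes between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    levels = np.arange(256)
    omega = np.cumsum(prob)                    # cumulative class probability
    mu = np.cumsum(prob * levels)              # cumulative mean
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold (guard against 0/0).
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic bimodal image: dark background plus a bright central patch.
    img = rng.normal(60, 10, (128, 128))
    img[32:96, 32:96] = rng.normal(180, 10, (64, 64))
    t = otsu_threshold(np.clip(img, 0, 255).astype(np.uint8))
    high, low = t, t // 2                      # illustrative choice of Canny thresholds
    print(t, high, low)                        # t should fall between the two modes
```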
A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor
NASA Technical Reports Server (NTRS)
Rao, Hariprasad Nannapaneni
1989-01-01
The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.
Suzuki, Yusuke; Yamada, Kohei; Watanabe, Kentaro; Kochi, Takuya; Ie, Yutaka; Aso, Yoshio; Kakiuchi, Fumitoshi
2017-07-21
A convenient method for the syntheses of dibenzo[h,rst]pentaphenes and dibenzo[fg,qr]pentacenes via the ruthenium-catalyzed chemoselective C-O arylation of 1,4- and 1,5-dimethoxyanthraquinones is described. Dimethoxyanthraquinones reacted selectively with arylboronates at the ortho C-O bonds to give diarylation products. An efficient two-step procedure consisting of a Corey-Chaykovsky reaction and subsequent dehydrative aromatization afforded derivatives of dibenzo[h,rst]pentaphenes and dibenzo[fg,qr]pentacenes. Hole-transporting characteristics were observed for a device with a bottom-contact configuration that was fabricated from one of these polycyclic aromatic hydrocarbons.
Performance of low-rank QR approximation of the finite element Biot-Savart law
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, D A; Fasenfest, B J
2006-01-12
We are concerned with the computation of magnetic fields from known electric currents in the finite element setting. In finite element eddy current simulations it is necessary to prescribe the magnetic field (or potential, depending upon the formulation) on the conductor boundary. In situations where the magnetic field is due to a distributed current density, the Biot-Savart law can be used, eliminating the need to mesh the nonconducting regions. Computation of the Biot-Savart law can be significantly accelerated using a low-rank QR approximation. We review the low-rank QR method and report performance on selected problems.
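The block-compression idea can be illustrated with SciPy's column-pivoted QR: a smooth far-interaction block is factored and truncated once the diagonal of R falls below a tolerance. This is a generic sketch of low-rank QR compression, not the authors' finite element code, and the 1/|x - y| kernel is only a stand-in for a well-separated Biot-Savart block.

```python
import numpy as np
from scipy.linalg import qr

def compress_block(block, tol=1e-8):
    """Low-rank QR approximation of a dense interaction block.

    Returns (Q, R, perm) with block[:, perm] ~= Q @ R, where the rank is chosen
    so that discarded diagonal entries of R fall below tol * |R[0, 0]|.
    """
    Q, R, perm = qr(block, mode="economic", pivoting=True)
    diag = np.abs(np.diag(R))
    rank = int(np.sum(diag > tol * diag[0]))
    return Q[:, :rank], R[:rank, :], perm

if __name__ == "__main__":
    # Smooth (hence numerically low-rank) far-field-style kernel block 1/|x - y|.
    x = np.linspace(0.0, 1.0, 300)[:, None]
    y = np.linspace(10.0, 11.0, 300)[None, :]
    block = 1.0 / np.abs(x - y)

    Q, R, perm = compress_block(block, tol=1e-10)
    approx = np.empty_like(block)
    approx[:, perm] = Q @ R                      # undo the column permutation
    print(Q.shape[1], np.max(np.abs(approx - block)))   # small rank, small error
```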
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David
2006-05-01
The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has quickly introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. Several applications exist, however, where having the desired information calculated quickly enough for practical use is highly desirable. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques. Techniques include four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in selection of parallel hyperspectral algorithms for specific applications.
ERIC Educational Resources Information Center
Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien
2013-01-01
This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…
NASA Astrophysics Data System (ADS)
Work, Paul R.
1991-12-01
This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.
Vinik, Aaron I; Cincotta, Anthony H; Scranton, Richard E; Bohannon, Nancy; Ezrokhi, Michael; Gaziano, J Michael
2012-01-01
To investigate the effect of Bromocriptine-QR on glycemic control in patients with type 2 diabetes whose glycemia is poorly controlled on one or two oral anti-diabetes agents. Five hundred fifteen Type 2 Diabetes Mellitus (T2DM) subjects (ages 18 to 80 and average body mass index [BMI] of 32.7) with baseline HbA1c ≥ 7.5 and on one or two oral anti-diabetes (OAD) medications (metformin, sulfonylurea, and/or thiazolidinediones) were randomized 2:1 to bromocriptine-QR (1.6 to 4.8 mg/day) or placebo for a 24 week treatment period. Study investigators were allowed to adjust, if necessary, subject anti-diabetes medications during the study to attempt to achieve glycemic control in case of glycemic deterioration. The impact of bromocriptine-QR treatment intervention on glycemic control was assessed in subjects on any one or two OADs (ALL treatment category) (N = 515), or on metformin with or without another OAD (Met/OAD treatment category) (N = 356), or on metformin plus a sulfonylurea (Met/SU treatment category) (N = 245) 1) by examining the between group difference in change from baseline a) concomitant OAD medication changes during the study, and b) HbA1c and 2) by determining the odds of reaching HbA1c of ≤ 7.0% on bromocriptine-QR versus placebo. Significantly more patients (approximately 1.5 to 2-fold more; P<.05) intensified concomitant anti-diabetes medication therapy during the study in the placebo versus the bromocriptine-QR arm. In subjects that did not change the intensity of the baseline diabetes therapy (72%), and that were on any one or two OADs (ALL), or on metformin with or without another OAD (Met/OAD), or on metformin plus sulfonylurea (Met/SU), the HbA1c change for bromocriptine-QR versus placebo was -0.47 versus +0.22 (between group delta of -0.69, P<.0001), -0.55 versus +0.26 (between group delta of -0.81, P<.0001) and -0.63 versus +0.20 (between group delta of -0.83, P<.0001) respectively, after 24 weeks on therapy. The odds ratio of reaching HbA1c of ≤ 7.0% was 6.50, 12.03 and 11.45 (P<.0002) for these three groups, respectively. In T2DM subjects whose hyperglycemia is poorly controlled on one or two oral agents, bromocriptine-QR therapy for 24 weeks can provide significant added improvement in glycemic control relative to adding placebo.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chrisochoides, N.; Sukup, F.
In this paper we present a parallel implementation of the Bowyer-Watson (BW) algorithm using the task-parallel programming model. The BW algorithm constitutes an ideal mesh refinement strategy for implementing a large class of unstructured mesh generation techniques on both sequential and parallel computers, by obviating the need for global mesh refinement. Its implementation on distributed memory multicomputers using the traditional data-parallel model has been proven very inefficient due to excessive synchronization needed among processors. In this paper we demonstrate that with the task-parallel model we can tolerate synchronization costs inherent to data-parallel methods by exploiting concurrency at the processor level. Our preliminary performance data indicate that the task-parallel approach: (i) is almost four times faster than the existing data-parallel methods, (ii) scales linearly, and (iii) introduces minimum overheads compared to the "best" sequential implementation of the BW algorithm.
Plotnikov, E Y; Chupyrkina, A A; Jankauskas, S S; Pevzner, I B; Silachev, D N; Skulachev, V P; Zorov, D B
2011-01-01
Oxidative stress-related renal pathologies apparently include rhabdomyolysis and ischemia/reperfusion phenomenon. These two pathologies were chosen for study in order to develop a proper strategy for protection of the kidney. Mitochondria were found to be a key player in these pathologies, being both the source and the target for excessive production of reactive oxygen species (ROS). A mitochondria-targeted compound which is a conjugate of a positively charged rhodamine molecule with plastoquinone (SkQR1) was found to rescue the kidney from the deleterious effect of both pathologies. Intraperitoneal injection of SkQR1 before the onset of pathology not only normalized the level of ROS and lipid peroxidized products in kidney mitochondria but also decreased the level of cytochrome c in the blood, restored normal renal excretory function and significantly lowered mortality among animals having a single kidney exposed to ischemia/reperfusion. The SkQR1-derivative missing plastoquinone (C12R1) possessed some, although limited nephroprotective properties and enhanced animal survival after ischemia/reperfusion. SkQR1 was found to induce some elements of nephroprotective pathways providing ischemic tolerance such as an increase in erythropoietin levels and phosphorylation of glycogen synthase kinase 3β in the kidney. SkQR1 also normalized renal erythropoietin level lowered after kidney ischemia/reperfusion and injection of a well-known nephrotoxic agent gentamicin. Copyright © 2010 Elsevier B.V. All rights reserved.
Computational mechanics analysis tools for parallel-vector supercomputers
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning
1993-01-01
Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.
A unifying framework for rigid multibody dynamics and serial and parallel computational issues
NASA Technical Reports Server (NTRS)
Fijany, Amir; Jain, Abhinandan
1989-01-01
A unifying framework for various formulations of the dynamics of open-chain rigid multibody systems is discussed. Their suitability for serial and parallel processing is assessed. The framework is based on the derivation of intrinsic, i.e., coordinate-free, equations of the algorithms, which provides a suitable abstraction and permits a distinction to be made between the computational redundancy in the intrinsic and extrinsic equations. A set of spatial notation is used which allows the derivation of the various algorithms in a common setting and thus clarifies the relationships among them. The three classes of algorithms, viz., O(n), O(n^2) and O(n^3), for the solution of the dynamics problem are investigated. Researchers begin with the derivation of O(n^3) algorithms based on the explicit computation of the mass matrix, which provides insight into the underlying basis of the O(n) algorithms. From a computational perspective, the optimal choice of a coordinate frame for the projection of the intrinsic equations is discussed and the serial computational complexity of the different algorithms is evaluated. The three classes of algorithms are also analyzed for suitability for parallel processing. It is shown that the problem belongs to the class NC and that the time and processor bounds are O(log^2 n) and O(n^4), respectively. However, the algorithm that achieves these bounds is not stable. Researchers show that the fastest stable parallel algorithm achieves a computational complexity of O(n) with O(n^2) processors, and results from the parallelization of the O(n^3) serial algorithm.
NASA Astrophysics Data System (ADS)
Wang, Yue; Yu, Jingjun; Pei, Xu
2018-06-01
A new forward kinematics algorithm for the mechanism of 3-RPS (R: Revolute; P: Prismatic; S: Spherical) parallel manipulators is proposed in this study. This algorithm is primarily based on the special geometric conditions of the 3-RPS parallel mechanism, and it eliminates the errors produced by parasitic motions to improve and ensure accuracy. Specifically, the errors can be less than 10^-6. In this method, only the group of solutions that is consistent with the actual situation of the platform is obtained rapidly. This algorithm substantially improves calculation efficiency because the selected initial values are reasonable, and all the formulas in the calculation are analytical. This novel forward kinematics algorithm is well suited for real-time and high-precision control of the 3-RPS parallel mechanism.
Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs.
Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal
2010-11-15
Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories - based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model and in this case it has an optimal I/O complexity of Θ((n/B) log(n/B)/log(M/B)) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on a SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster--both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. The bi-directed de Bruijn graph is a fundamental data structure for any sequence assembly program based on Eulerian approach. Our algorithms for constructing Bi-directed de Bruijn graphs are efficient in parallel and out of core settings. These algorithms can be used in building large scale bi-directed de Bruijn graphs. Furthermore, our algorithms do not employ any all-to-all communications in a parallel setting and perform better than the prior algorithms. Finally our out-of-core algorithm is extremely memory efficient and can replace the existing graph construction algorithm in VELVET.
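To make the sorting-based construction concrete, the sketch below builds a small bi-directed de Bruijn graph sequentially: every (k+1)-mer of a read yields an edge between the canonical forms of its two k-mers, and the edge list is then sorted and deduplicated. This is only an illustration of the idea, not the authors' implementation; in the parallel or out-of-core settings described above, the in-memory sort would be replaced by a parallel or external sort, which is why the communication cost reduces to that of sorting.

```python
from typing import Iterable, List, Tuple

def canonical(kmer: str) -> str:
    """Return the lexicographically smaller of a k-mer and its reverse complement,
    which is how nodes of a bi-directed de Bruijn graph are usually identified."""
    comp = str.maketrans("ACGT", "TGCA")
    rc = kmer.translate(comp)[::-1]
    return min(kmer, rc)

def debruijn_edges(reads: Iterable[str], k: int) -> List[Tuple[str, str]]:
    """Enumerate edges as (k+1)-mers, map both endpoints to canonical k-mers,
    then sort and deduplicate. In a parallel or out-of-core setting the sort
    would be a parallel/external sort; everything else stays the same."""
    edges = []
    for read in reads:
        for i in range(len(read) - k):
            u = canonical(read[i:i + k])
            v = canonical(read[i + 1:i + k + 1])
            edges.append((u, v))
    edges.sort()                      # the only global operation
    return [e for j, e in enumerate(edges) if j == 0 or e != edges[j - 1]]

if __name__ == "__main__":
    print(debruijn_edges(["ACGTACGT", "CGTACGTT"], k=3))
```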
NASA Astrophysics Data System (ADS)
Akil, Mohamed
2017-05-01
Real-time processing is becoming more and more important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis. As a consequence, many different approaches for image segmentation have been proposed. The watershed transform is a well-known image segmentation tool. The watershed transform is a very data-intensive task. To achieve acceleration and obtain real-time processing of watershed algorithms, parallel architectures and programming models for multicore computing have been developed. This paper focuses on the survey of the approaches for parallel implementation of sequential watershed algorithms on multicore general purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we give a comparison of various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on the performance of the parallel implementations. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models. Thus, we compare OpenMP (an application programming interface for multi-processing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
Parallel solution of sparse one-dimensional dynamic programming problems
NASA Technical Reports Server (NTRS)
Nicol, David M.
1989-01-01
Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.
Soft-output decoding algorithms in iterative decoding of turbo codes
NASA Technical Reports Server (NTRS)
Benedetto, S.; Montorsi, G.; Divsalar, D.; Pollara, F.
1996-01-01
In this article, we present two versions of a simplified maximum a posteriori decoding algorithm. The algorithms work in a sliding window form, like the Viterbi algorithm, and can thus be used to decode continuously transmitted sequences obtained by parallel concatenated codes, without requiring code trellis termination. A heuristic explanation is also given of how to embed the maximum a posteriori algorithms into the iterative decoding of parallel concatenated codes (turbo codes). The performances of the two algorithms are compared on the basis of a powerful rate 1/3 parallel concatenated code. Basic circuits to implement the simplified a posteriori decoding algorithm using lookup tables, and two further approximations (linear and threshold), with a very small penalty, to eliminate the need for lookup tables are proposed.
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared-nothing parallel database architecture, which distributes data homogeneously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data.
Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provide a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
On the Accuracy and Parallelism of GPGPU-Powered Incremental Clustering Algorithms
He, Li; Zheng, Hao; Wang, Lei
2017-01-01
Incremental clustering algorithms play a vital role in various applications such as massive data analysis and real-time data processing. Typical application scenarios of incremental clustering raise high demand on computing power of the hardware platform. Parallel computing is a common solution to meet this demand. Moreover, General Purpose Graphic Processing Unit (GPGPU) is a promising parallel computing device. Nevertheless, the incremental clustering algorithm is facing a dilemma between clustering accuracy and parallelism when they are powered by GPGPU. We formally analyzed the cause of this dilemma. First, we formalized concepts relevant to incremental clustering like evolving granularity. Second, we formally proved two theorems. The first theorem proves the relation between clustering accuracy and evolving granularity. Additionally, this theorem analyzes the upper and lower bounds of different-to-same mis-affiliation. Fewer occurrences of such mis-affiliation mean higher accuracy. The second theorem reveals the relation between parallelism and evolving granularity. Smaller work-depth means superior parallelism. Through the proofs, we conclude that accuracy of an incremental clustering algorithm is negatively related to evolving granularity while parallelism is positively related to the granularity. Thus the contradictory relations cause the dilemma. Finally, we validated the relations through a demo algorithm. Experiment results verified theoretical conclusions. PMID:29123546
Parallelization strategies for continuum-generalized method of moments on the multi-thread systems
NASA Astrophysics Data System (ADS)
Bustamam, A.; Handhika, T.; Ernastuti; Kerami, D.
2017-07-01
The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the Maximum Likelihood estimator, by using a continuum set of moment conditions in the GMM framework. However, this computation takes a very long time because of the optimization of the regularization parameter. Unfortunately, these calculations are processed sequentially, whereas all modern computers are now supported by hierarchical memory systems and hyperthreading technology, which allow for parallel computing. This paper aims to speed up the calculation process of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected for the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. Furthermore, this parallel algorithm is implemented with the standard shared-memory application programming interface, Open Multi-Processing (OpenMP). The experiment shows that the outer-loop parallelization is the best strategy for any number of observations.
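As an illustration of the outer-loop strategy, the hedged sketch below parallelizes a grid search over the regularization parameter with a process pool; each candidate value is scored independently, mirroring an OpenMP parallel-for over the outer loop. The function cgmm_objective is a hypothetical placeholder, not the authors' C-GMM criterion.

```python
from multiprocessing import Pool
import numpy as np

def cgmm_objective(alpha: float, data: np.ndarray) -> float:
    """Hypothetical stand-in for the C-GMM criterion evaluated at one
    regularization parameter alpha; the real computation would build the
    continuum of moment conditions and invert the regularized operator."""
    return float(np.mean((data - alpha) ** 2) + alpha)  # placeholder cost

def best_alpha(data: np.ndarray, grid: np.ndarray, workers: int = 4) -> float:
    """Outer-loop parallelization: each candidate alpha is scored independently,
    mirroring the 'parallel for' over the outer loop described in the paper."""
    with Pool(workers) as pool:
        scores = pool.starmap(cgmm_objective, [(a, data) for a in grid])
    return float(grid[int(np.argmin(scores))])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(best_alpha(rng.normal(1.0, 1.0, 10_000), np.linspace(0.0, 2.0, 41)))
```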
Parallel-SymD: A Parallel Approach to Detect Internal Symmetry in Protein Domains.
Jha, Ashwani; Flurchick, K M; Bikdash, Marwan; Kc, Dukka B
2016-01-01
Internally symmetric proteins are proteins that have a symmetrical structure in their monomeric single-chain form. Around 10-15% of the protein domains can be regarded as having some sort of internal symmetry. In this regard, we previously published SymD (symmetry detection), an algorithm that determines whether a given protein structure has internal symmetry by attempting to align the protein to its own copy after the copy is circularly permuted by all possible numbers of residues. SymD has proven to be a useful algorithm to detect symmetry. In this paper, we present a new parallelized algorithm called Parallel-SymD for detecting symmetry of proteins on clusters of computers. The achieved speedup of the new Parallel-SymD algorithm scales well with the number of computing processors. Scaling is better for proteins with a larger number of residues. For a protein of 509 residues, a speedup of 63 was achieved on a parallel system with 100 processors.
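The core idea, aligning a protein against circularly permuted copies of itself, can be illustrated with a toy sequence version: each permutation is scored independently, which is precisely the work that Parallel-SymD distributes across processors. The sketch below uses a naive identity score rather than SymD's structural alignment and is only meant to show the shape of the computation.

```python
def permutation_scores(seq: str) -> dict:
    """Score the trivial alignment of seq against each circular permutation of
    itself (fraction of identical positions). SymD does this with a full
    structural alignment; the independence of the permutations is what makes
    the computation easy to distribute across processors."""
    n = len(seq)
    scores = {}
    for shift in range(1, n):                 # skip the identity permutation
        permuted = seq[shift:] + seq[:shift]
        scores[shift] = sum(a == b for a, b in zip(seq, permuted)) / n
    return scores

if __name__ == "__main__":
    s = "ABCDABCDABCD"                        # an obviously 4-periodic toy "protein"
    best = max(permutation_scores(s).items(), key=lambda kv: kv[1])
    print("best non-trivial shift:", best)    # expect shift 4 or 8 with score 1.0
```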
Evolutionary profiles from the QR factorization of multiple sequence alignments
Sethi, Anurag; O'Donoghue, Patrick; Luthey-Schulten, Zaida
2005-01-01
We present an algorithm to generate complete evolutionary profiles that represent the topology of the molecular phylogenetic tree of the homologous group. The method, based on the multidimensional QR factorization of numerically encoded multiple sequence alignments, removes redundancy from the alignments and orders the protein sequences by increasing linear dependence, resulting in the identification of a minimal basis set of sequences that spans the evolutionary space of the homologous group of proteins. We observe a general trend that these smaller, more evolutionarily balanced profiles have comparable and, in many cases, better performance in database searches than conventional profiles containing hundreds of sequences, constructed in an iterative and computationally intensive procedure. For more diverse families or superfamilies, with sequence identity <30%, structural alignments, based purely on the geometry of the protein structures, provide better alignments than pure sequence-based methods. Merging the structure and sequence information allows the construction of accurate profiles for distantly related groups. These structure-based profiles outperformed other sequence-based methods for finding distant homologs and were used to identify a putative class II cysteinyl-tRNA synthetase (CysRS) in several archaea that eluded previous annotation studies. Phylogenetic analysis showed the putative class II CysRSs to be a monophyletic group and homology modeling revealed a constellation of active site residues similar to that in the known class I CysRS. PMID:15741270
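A minimal sketch of the underlying linear-algebra step, under the assumption of a simple one-hot encoding (the paper's encoding and gap handling are more elaborate): each aligned sequence becomes a column of a matrix, and QR factorization with column pivoting orders the columns by increasing linear dependence, so the leading pivots identify a non-redundant basis set of sequences.

```python
import numpy as np
from scipy.linalg import qr

ALPHABET = "ACDEFGHIKLMNPQRSTVWY-"

def encode(seq: str) -> np.ndarray:
    """One-hot encode an aligned sequence (a simplification of the paper's encoding)."""
    v = np.zeros((len(seq), len(ALPHABET)))
    for i, ch in enumerate(seq):
        v[i, ALPHABET.index(ch)] = 1.0
    return v.ravel()

def rank_sequences(alignment: list[str]) -> list[int]:
    """Columns of A are encoded sequences; column-pivoted QR orders them by
    increasing linear dependence, so the leading indices form a minimal,
    non-redundant basis set spanning the alignment."""
    A = np.column_stack([encode(s) for s in alignment])
    _, _, piv = qr(A, pivoting=True, mode="economic")
    return list(piv)

if __name__ == "__main__":
    msa = ["ACDE-", "ACDE-", "ACDEF", "GHIKL"]   # two identical sequences are redundant
    print(rank_sequences(msa))                    # the redundant copy is ranked last
```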
A Fast parallel tridiagonal algorithm for a class of CFD applications
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Sun, Xian-He
1996-01-01
The parallel diagonal dominant (PDD) algorithm is an efficient tridiagonal solver. This paper presents for study a variation of the PDD algorithm, the reduced PDD algorithm. The new algorithm maintains the minimum communication provided by the PDD algorithm, but has a reduced operation count. The PDD algorithm also has a smaller operation count than the conventional sequential algorithm for many applications. Accuracy analysis is provided for the reduced PDD algorithm for symmetric Toeplitz tridiagonal (STT) systems. Implementation results on Langley's Intel Paragon and IBM SP2 show that both the PDD and reduced PDD algorithms are efficient and scalable.
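For reference, the conventional sequential solver that PDD-type algorithms are measured against is the Thomas algorithm (forward elimination followed by back substitution). The sketch below shows that baseline; a PDD-style solver runs essentially this kernel on each processor's partition and then corrects the few interface unknowns through a small reduced system, which is where the minimal communication comes from. This is an illustrative baseline under that assumption, not the paper's implementation.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c,
    and right-hand side d (all length n; a[0] and c[-1] are unused). This is the
    O(n) sequential baseline; PDD-style solvers run it on each processor's
    partition and then correct the interface unknowns with a small reduced system."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

if __name__ == "__main__":
    n = 6
    a = np.full(n, -1.0); b = np.full(n, 2.0); c = np.full(n, -1.0)
    x = thomas(a, b, c, np.ones(n))
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    print(np.allclose(A @ x, np.ones(n)))   # True
```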
Parallel language constructs for tensor product computations on loosely coupled architectures
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Vanrosendale, John
1989-01-01
Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low level programming environment, and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is considered first, and then how such parallel kernels can be combined to form parallel tensor product algorithms is examined.
Parallel processing in finite element structural analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1987-01-01
A brief review is made of the fundamental concepts and basic issues of parallel processing. Discussion focuses on parallel numerical algorithms, performance evaluation of machines and algorithms, and parallelism in finite element computations. A computational strategy is proposed for maximizing the degree of parallelism at different levels of the finite element analysis process including: 1) formulation level (through the use of mixed finite element models); 2) analysis level (through additive decomposition of the different arrays in the governing equations into the contributions to a symmetrized response plus correction terms); 3) numerical algorithm level (through the use of operator splitting techniques and application of iterative processes); and 4) implementation level (through the effective combination of vectorization, multitasking and microtasking, whenever available).
Parallel Algorithms for the Exascale Era
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robey, Robert W.
New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
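The reproducibility issue with global sums comes from the non-associativity of floating-point addition: summing the same values in a different, processor-dependent order gives slightly different answers. The short example below demonstrates the effect and shows that an order-independent exact reduction (here Python's math.fsum, standing in for the techniques studied in these projects) removes the run-to-run variation.

```python
import math
import random

random.seed(2024)
# values spanning many orders of magnitude make the rounding differences visible
values = [random.uniform(-1.0, 1.0) * 10.0 ** random.randint(-8, 8) for _ in range(100_000)]

shuffled = values[:]
random.shuffle(shuffled)                 # a different reduction order, as on another run

naive_a = sum(values)
naive_b = sum(shuffled)
exact_a = math.fsum(values)
exact_b = math.fsum(shuffled)

print("naive sums differ by:", abs(naive_a - naive_b))    # typically nonzero
print("fsum results differ by:", abs(exact_a - exact_b))  # exactly 0.0
```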
NASA Astrophysics Data System (ADS)
Roche-Lima, Abiel; Thulasiram, Ruppa K.
2012-02-01
Finite automata, in which each transition is augmented with an output label in addition to the familiar input label, are considered finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been proposed for pairwise alignments of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation. It is calculated using techniques such as pair-database creation, normalization (with Maximum-Likelihood normalization) and parameter optimization (with Expectation-Maximization - EM). These techniques are intrinsically costly to compute, and even more so when applied to bioinformatics, because database sizes are large. In this work, we describe a parallel implementation of an algorithm to learn conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were conducted using the parallel and sequential algorithms on WestGrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable, because execution times are reduced considerably when the data size parameter is increased. Another experiment varied the precision parameter; in this case, we obtained smaller execution times using the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied. In this last experiment, speedup increased considerably as more threads were used; however, it converged for 16 or more threads.
NASA Technical Reports Server (NTRS)
Reif, John H.
1987-01-01
A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
The remote sensing image segmentation mean shift algorithm parallel processing based on MapReduce
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhou, Liqing
2015-12-01
With the development of satellite remote sensing technology and the growth of remote sensing image data, traditional remote sensing image segmentation technology cannot meet the processing and storage requirements of massive remote sensing images. This article applies cloud computing and parallel computing technology to the remote sensing image segmentation process and builds a cheap and efficient computer cluster that parallelizes the mean shift segmentation algorithm for remote sensing images based on the MapReduce model, which not only ensures the quality of remote sensing image segmentation but also improves segmentation speed and better meets real-time requirements. The MapReduce-based parallel mean shift algorithm for remote sensing image segmentation thus shows practical significance and value.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chao; Pouransari, Hadi; Rajamanickam, Sivasankaran
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
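The property the solver exploits can be seen in a small example: a block coupling two well-separated point sets through a smooth kernel has rapidly decaying singular values, so it can be stored and applied as a thin low-rank factorization at a chosen tolerance. The sketch below compresses one such block with a truncated SVD; this is only an illustration of the low-rank structure, not the solver's hierarchical machinery.

```python
import numpy as np

def low_rank_factor(block: np.ndarray, tol: float = 1e-8):
    """Compress a matrix block as U @ V.T with the rank chosen so the dropped
    singular values fall below tol (relative to the largest). Hierarchical
    solvers store far-field blocks this way instead of densely."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return U[:, :r] * s[:r], Vt[:r, :].T    # (m, r) and (n, r) factors

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 300)          # points in one subdomain
    y = np.linspace(5.0, 6.0, 300)          # points in a well-separated subdomain
    block = 1.0 / np.abs(x[:, None] - y[None, :])   # smooth far-field interaction
    U, V = low_rank_factor(block)
    print("rank:", U.shape[1], "max error:", np.max(np.abs(block - U @ V.T)))
```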
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Moitra, Stuti
1996-01-01
Various tridiagonal solvers have been proposed in recent years for different parallel platforms. In this paper, the performance of three tridiagonal solvers, namely, the parallel partition LU algorithm, the parallel diagonal dominant algorithm, and the reduced diagonal dominant algorithm, is studied. These algorithms are designed for distributed-memory machines and are tested on Intel Paragon and IBM SP2 machines. Measured results are reported in terms of execution time and speedup. Analytical studies are conducted for different communication topologies and for different tridiagonal systems. The measured results match the analytical results closely. In addition to addressing implementation issues, performance considerations such as problem sizes and models of speedup are also discussed.
Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo
NASA Astrophysics Data System (ADS)
Khosravi, Ebrahim
1998-12-01
This dissertation solves a fundamental problem of isolating the real roots of nonlinear systems of equations by Monte Carlo, as published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics or other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small set of variables, making it unfeasible for large systems of equations. Also, a computational technique was needed for investigating a methodology for preventing the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of the parallel processing of the program in comparison to sequential processing are discussed. The message passing model was used for this parallel processing, and it is presented and implemented on an Intel i860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high energy physics experiment: this algorithm has been used to track neutrinos in a Super-Kamiokande detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
Wang, Min; Tian, Yun
2018-01-01
The Canny operator is widely used to detect edges in images. However, as the size of the image dataset increases, the edge detection performance of the Canny operator decreases and its runtime becomes excessive. To improve the runtime and edge detection performance of the Canny operator, in this paper, we propose a parallel design and implementation for an Otsu-optimized Canny operator using a MapReduce parallel programming model that runs on the Hadoop platform. The Otsu algorithm is used to optimize the Canny operator's dual threshold and improve the edge detection performance, while the MapReduce parallel programming model facilitates parallel processing for the Canny operator to solve the processing speed and communication cost problems that occur when the Canny edge detection algorithm is applied to big data. For the experiments, we constructed datasets of different scales from the Pascal VOC2012 image database. The proposed parallel Otsu-Canny edge detection algorithm performs better than other traditional edge detection algorithms. The parallel approach reduced the running time by approximately 67.2% on a Hadoop cluster architecture consisting of 5 nodes with a dataset of 60,000 images. Overall, our approach speeds up the system by approximately 3.4 times when processing large-scale datasets, which demonstrates the obvious superiority of our method. The proposed algorithm in this study demonstrates both better edge detection performance and improved time performance. PMID:29861711
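The per-image kernel being parallelized can be sketched with standard OpenCV primitives: Otsu's method picks a global threshold from the histogram, and that value is used to set Canny's dual thresholds (here low = 0.5 * high, a common heuristic that may differ from the exact rule used in the paper). In the MapReduce setting, each map task would simply apply this function to one image.

```python
import cv2
import numpy as np

def otsu_canny(gray: np.ndarray) -> np.ndarray:
    """Edge map with Canny thresholds derived from Otsu's threshold.
    The mapping low = 0.5 * high is a common heuristic, not necessarily the
    exact rule used in the paper."""
    otsu_t, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.Canny(gray, 0.5 * otsu_t, otsu_t)

if __name__ == "__main__":
    # synthetic test image: a bright square on a dark background
    img = np.zeros((128, 128), dtype=np.uint8)
    img[32:96, 32:96] = 200
    edges = otsu_canny(img)
    print("edge pixels:", int(np.count_nonzero(edges)))
```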
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
GPU-based parallel algorithm for blind image restoration using midfrequency-based methods
NASA Astrophysics Data System (ADS)
Xie, Lang; Luo, Yi-han; Bao, Qi-liang
2013-08-01
GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data parallel computing and the GPU execution model of single instruction, multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on the GPU. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain after the optimization of data access and the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to ensure the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.
Synchronization Of Parallel Discrete Event Simulations
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S.
1992-01-01
Adaptive, parallel, discrete-event-simulation-synchronization algorithm, Breathing Time Buckets, developed in Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) operating system. Algorithm allows parallel simulations to process events optimistically in fluctuating time cycles that naturally adapt while simulation in progress. Combines best of optimistic and conservative synchronization strategies while avoiding major disadvantages. Algorithm processes events optimistically in time cycles adapting while simulation in progress. Well suited for modeling communication networks, for large-scale war games, for simulated flights of aircraft, for simulations of computer equipment, for mathematical modeling, for interactive engineering simulations, and for depictions of flows of information.
Characterization of robotics parallel algorithms and mapping onto a reconfigurable SIMD machine
NASA Technical Reports Server (NTRS)
Lee, C. S. G.; Lin, C. T.
1989-01-01
The kinematics, dynamics, Jacobian, and their corresponding inverse computations are six essential problems in the control of robot manipulators. Efficient parallel algorithms for these computations are discussed and analyzed. Their characteristics are identified and a scheme on the mapping of these algorithms to a reconfigurable parallel architecture is presented. Based on the characteristics including type of parallelism, degree of parallelism, uniformity of the operations, fundamental operations, data dependencies, and communication requirement, it is shown that most of the algorithms for robotic computations possess highly regular properties and some common structures, especially the linear recursive structure. Moreover, they are well-suited to be implemented on a single-instruction-stream multiple-data-stream (SIMD) computer with a reconfigurable interconnection network. The model of a reconfigurable dual network SIMD machine with internal direct feedback is introduced. A systematic procedure to map these computations to the proposed machine is presented. A new scheduling problem for SIMD machines is investigated and a heuristic algorithm, called neighborhood scheduling, that reorders the processing sequence of subtasks to reduce the communication time is described. Mapping results of a benchmark algorithm are illustrated and discussed.
Efficient sequential and parallel algorithms for record linkage.
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
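Two of the ideas named above, cheap blocking before expensive comparisons and linking matched records through connected components, can be illustrated with a small sketch: records are grouped by a blocking key, pairs within a block are compared by edit distance, and matches are merged with union-find. The full pipeline in the paper (hierarchical clustering, radix sorting, parallel execution) is considerably more involved; this is only a minimal illustration of those two steps.

```python
from collections import defaultdict

def edit_distance(a: str, b: str) -> int:
    """Standard dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def link_records(names: list[str], max_dist: int = 1) -> list[int]:
    """Block on the first character, compare only within blocks, and merge
    matches with union-find; returns a cluster id per record."""
    parent = list(range(len(names)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    blocks = defaultdict(list)
    for idx, name in enumerate(names):
        blocks[name[:1].lower()].append(idx)
    for block in blocks.values():
        for i in range(len(block)):
            for j in range(i + 1, len(block)):
                if edit_distance(names[block[i]], names[block[j]]) <= max_dist:
                    parent[find(block[i])] = find(block[j])
    return [find(i) for i in range(len(names))]

if __name__ == "__main__":
    print(link_records(["smith", "smyth", "jones", "jonas", "smith"]))
```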
New Parallel Algorithms for Landscape Evolution Model
NASA Astrophysics Data System (ADS)
Jin, Y.; Zhang, H.; Shi, Y.
2017-12-01
Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of the drainage area for each node, which requires a huge amount of communication if run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that both are adequate for handling large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
Parallel digital forensics infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, Lorie M.; Duggan, David Patrick
2009-10-01
This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the parallel digital forensics (PDF) infrastructure architecture and implementation.
Enhancing PC Cluster-Based Parallel Branch-and-Bound Algorithms for the Graph Coloring Problem
NASA Astrophysics Data System (ADS)
Taoka, Satoshi; Takafuji, Daisuke; Watanabe, Toshimasa
A branch-and-bound algorithm (BB for short) is the most general technique to deal with various combinatorial optimization problems. Even when it is used, computation time is likely to increase exponentially, so we consider parallelization to reduce it. It has been reported that the computation time of a parallel BB heavily depends upon node-variable selection strategies. In the case of a parallel BB, it is also necessary to prevent increases in communication time, so it is important to pay attention to how many and what kind of nodes are to be transferred (called the sending-node selection strategy). In this paper, for the graph coloring problem, we propose several sending-node selection strategies for a parallel BB algorithm, adopting MPI for parallelization, and experimentally evaluate how these strategies affect the computation time of a parallel BB on a PC cluster network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
Algorithm selection for data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI, including associating in the PAMI data communications algorithms and bit masks; receiving in an origin endpoint of the PAMI a collective instruction, the instruction specifying transmission of a data communications message from the origin endpoint to a target endpoint; constructing a bit mask for the received collective instruction; selecting, from among the associated algorithms and bit masks, a data communications algorithm in dependence upon the constructed bit mask; and executing the collective instruction, transmitting, according to the selected data communications algorithm from the origin endpoint to the target endpoint, the data communications message.
Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica
2017-06-29
The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store operations. For this reason, the island model is more suitable for PGAs than the global and grid model, also in terms of costs when executed on a commercial cloud provider.
What Are Those Checkerboard Things?: How QR Codes Can Enrich Student Projects
ERIC Educational Resources Information Center
Tucker, Al
2011-01-01
Students enrolled in a commercial arts program design and publish their school's yearbook. For the 2010-2011 school year, the students applied Quick Response (QR) code technology to include links to events that occurred after the yearbook's print deadline, including graduation. The technology has many applications in the school setting, and the…
Qualitative Reasoning methods for CELSS modeling.
Guerrin, F; Bousson, K; Steyer JPh; Trave-Massuyes, L
1994-11-01
Qualitative Reasoning (QR) is a branch of Artificial Intelligence that arose from research on engineering problem solving. This paper describes the major QR methods and techniques, which, we believe, are capable of addressing some of the problems that are emphasized in the literature and posed by CELSS modeling, simulation, and control at the supervisory level.
Using QR Codes to Differentiate Learning for Gifted and Talented Students
ERIC Educational Resources Information Center
Siegle, Del
2015-01-01
QR codes are two-dimensional square patterns that are capable of coding information that ranges from web addresses to links to YouTube videos. The codes save time typing and eliminate errors in entering addresses incorrectly. These codes make learning with technology easier for students and motivationally engage them in new ways.
QR Codes in the Library: "It's Not Your Mother's Barcode!"
ERIC Educational Resources Information Center
Dobbs, Cheri
2011-01-01
Barcode scanning has become more than just fun. Now libraries and businesses are leveraging barcode technology as an innovative tool to market their products and ideas. Developed and popularized in Japan, these Quick Response (QR) or two-dimensional barcodes allow marketers to provide interactive content in an otherwise static environment. In this…
The Potential for Teaching Quantitative Reasoning across the Curriculum: Empirical Evidence
ERIC Educational Resources Information Center
Grawe, Nathan D.
2011-01-01
Educational theorists have argued that effective instruction in quantitative reasoning (QR) should extend across the curriculum. While a noble goal, it is not immediately evident that this is even possible. To assess the feasibility of this approach to QR instruction, I examine papers written by undergraduates for submission to a sophomore writing…
QR in Child Grammar: Evidence from Antecedent-Contained Deletion
ERIC Educational Resources Information Center
Syrett, Kristen; Lidz, Jeffrey
2009-01-01
We show that 4-year-olds assign the correct interpretation to antecedent-contained deletion (ACD) sentences because they have the correct representation of these structures. This representation involves Quantifier Raising (QR) of a Quantificational Noun Phrase (QNP) that must move out of the site of the verb phrase in which it is contained to…
Principles of Quantile Regression and an Application
ERIC Educational Resources Information Center
Chen, Fang; Chalhoub-Deville, Micheline
2014-01-01
Newer statistical procedures are typically introduced to help address the limitations of those already in practice or to deal with emerging research needs. Quantile regression (QR) is introduced in this paper as a relatively new methodology, which is intended to overcome some of the limitations of least squares mean regression (LMR). QR is more…
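A minimal example of the contrast between QR and LMR, using statsmodels with illustrative variable names: on heteroscedastic data the single least-squares line describes only the conditional mean, while quantile regression fits separate lines for, say, the 10th and 90th percentiles and so captures how the spread of the response changes with the predictor.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
# heteroscedastic noise: the spread of y grows with x, which a single
# conditional-mean line cannot describe but conditional quantiles can
y = 2.0 + 1.5 * x + rng.normal(0, 0.5 + 0.3 * x, 500)
data = pd.DataFrame({"x": x, "y": y})

ols = smf.ols("y ~ x", data).fit()
q10 = smf.quantreg("y ~ x", data).fit(q=0.10)
q90 = smf.quantreg("y ~ x", data).fit(q=0.90)

print("OLS slope:", round(ols.params["x"], 2))
print("10th percentile slope:", round(q10.params["x"], 2))
print("90th percentile slope:", round(q90.params["x"], 2))  # steeper than the 10th
```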
Update on Development of Mesh Generation Algorithms in MeshKit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Rajeev; Vanderzee, Evan; Mahadevan, Vijay
2015-09-30
MeshKit uses a graph-based design for coding all its meshing algorithms, which includes the Reactor Geometry (and mesh) Generation (RGG) algorithms. This report highlights the developmental updates of all the algorithms, results and future work. Parallel versions of algorithms, documentation and performance results are reported. RGG GUI design was updated to incorporate new features requested by the users; boundary layer generation and parallel RGG support were added to the GUI. Key contributions to the release, upgrade and maintenance of other SIGMA libraries (CGM and MOAB) were made. Several fundamental meshing algorithms for creating a robust parallel meshing pipeline in MeshKit are under development. Results and current status of automated, open-source and high quality nuclear reactor assembly mesh generation algorithms such as trimesher, quadmesher, interval matching and multi-sweeper are reported.
Fluorescent Nanocrystals Reveal Regulated Portals of Entry into and Between the Cells of Hydra
Tortiglione, Claudia; Quarta, Alessandra; Malvindi, Maria Ada; Tino, Angela; Pellegrino, Teresa
2009-01-01
Initially viewed as innovative carriers for biomedical applications, with unique photophysical properties and great versatility to be decorated at their surface with suitable molecules, nanoparticles can also play active roles in mediating biological effects, suggesting the need to deeply investigate the mechanisms underlying cell-nanoparticle interaction and to identify the molecular players. Here we show that the cell uptake of fluorescent CdSe/CdS quantum rods (QRs) by Hydra vulgaris, a simple model organism at the base of metazoan evolution, can be tuned by modifying nanoparticle surface charge. At acidic pH, amino-PEG coated QRs, showing positive surface charge, are actively internalized by tentacle and body ectodermal cells, while negatively charged nanoparticles are not uptaken. In order to identify the molecular factors underlying QR uptake at acidic pH, we provide functional evidence of annexins involvement and explain the QR uptake as the combined result of QR positive charge and annexin membrane insertion. Moreover, tracking QR labelled cells during development and regeneration allowed us to uncover novel intercellular trafficking and cell dynamics underlying the remarkable plasticity of this ancient organism. PMID:19888325
Yuan, Mingquan; Jiang, Qisheng; Liu, Keng-Ku; Singamaneni, Srikanth; Chakrabartty, Shantanu
2018-06-01
This paper addresses two key challenges toward an integrated forward error-correcting biosensor based on our previously reported self-assembled quick-response (QR) code. The first challenge involves the choice of the paper substrate for printing and self-assembling the QR code. We have compared four different substrates that includes regular printing paper, Whatman filter paper, nitrocellulose membrane and lab synthesized bacterial cellulose. We report that out of the four substrates bacterial cellulose outperforms the others in terms of probe (gold nanorods) and ink retention capability. The second challenge involves remote activation of the analyte sampling and the QR code self-assembly process. In this paper, we use light as a trigger signal and a graphite layer as a light-absorbing material. The resulting change in temperature due to infrared absorption leads to a temperature gradient that then exerts a diffusive force driving the analyte toward the regions of self-assembly. The working principle has been verified in this paper using assembled biosensor prototypes where we demonstrate higher sample flow rate due to light induced thermal gradients.
Yuan, Mingquan; Liu, Keng-Ku; Singamaneni, Srikanth; Chakrabartty, Shantanu
2016-10-01
This paper extends our previous work on silver-enhancement based self-assembling structures for designing reliable, self-powered biosensors with forward error correcting (FEC) capability. At the core of the proposed approach is the integration of paper-based microfluidics with quick response (QR) codes that can be optically scanned using a smart-phone. The scanned information is first decoded to obtain the location of a web-server which further processes the self-assembled QR image to determine the concentration of target analytes. The integration substrate for the proposed FEC biosensor is polyethylene and the patterning of the QR code on the substrate has been achieved using a combination of low-cost ink-jet printing and a regular ballpoint dispensing pen. A paper-based microfluidics channel has been integrated underneath the substrate for acquiring, mixing and flowing the sample to areas on the substrate where different parts of the code can self-assemble in presence of immobilized gold nanorods. In this paper we demonstrate the proof-of-concept detection using prototypes of QR encoded FEC biosensors.
Effect of drying temperatures on starch-related functional and thermal properties of acorn flours.
Correia, P R; Beirão-da-Costa, M L
2011-03-01
The application of starchy flours from different origins in food systems depends greatly on information about the chemical and functional properties of such food materials. Acorns are important forestry resources in the central and southern regions of Portugal. To preserve these fruits and to optimize their use, techniques like drying are needed. The effects of different drying temperatures on starch-related functional properties of acorn flours obtained from dried fruits of Quercus rotundifolia (QR) and Quercus suber (QS) were evaluated. Flours were characterized for amylose and resistant starch (RS) contents, swelling ability, and gelatinization properties. Drying temperature mainly affected amylose content and viscoamylographic properties. Amylograms of flours from fruits dried at 60 °C displayed higher consistency (2102 B.U. and 1560 B.U., respectively, for QR and QS). The transition temperatures and enthalpy were less affected by drying temperature, suggesting few modifications in starch structure during drying. QR flours presented different functional properties to those obtained from QS acorn flours. The effects of drying temperature were more evident in QR.
Drug-laden 3D biodegradable label using QR code for anti-counterfeiting of drugs.
Fei, Jie; Liu, Ran
2016-06-01
Wiping out counterfeit drugs is a great task for public health care around the world. The proliferation of these drugs makes treatment potentially harmful or even lethal. In this paper, a biodegradable drug-laden QR code label for anti-counterfeiting of drugs is proposed that provides non-fluorescence recognition and high capacity. It is fabricated by laser cutting, which produces varying roughness across the surface and hence differences in gray level on the translucent material that forms the QR code pattern, and by a micro-molding process to obtain the drug-laden biodegradable label. We screened biomaterials that satisfy the relevant conditions and the further requirements of the package. The drug-laden microlabel is placed on the surface of the troches or the bottom of the capsule and can be read by a simple smartphone QR code reader application. Labeling the pill directly and decoding the information successfully means more convenient and simpler operation, with non-fluorescence recognition and high capacity in contrast to the traditional methods. Copyright © 2016 Elsevier B.V. All rights reserved.
Parallel algorithms for computation of the manipulator inertia matrix
NASA Technical Reports Server (NTRS)
Amin-Javaheri, Masoud; Orin, David E.
1989-01-01
The development of an O(log2N) parallel algorithm for the manipulator inertia matrix is presented. It is based on the most efficient serial algorithm which uses the composite rigid body method. Recursive doubling is used to reformulate the linear recurrence equations which are required to compute the diagonal elements of the matrix. It results in O(log2N) levels of computation. Computation of the off-diagonal elements involves N linear recurrences of varying size, and a new method, which avoids redundant computation of position and orientation transforms for the manipulator, is developed. The O(log2N) algorithm is presented in both equation and graphic forms which clearly show the parallelism inherent in the algorithm.
On the impact of communication complexity in the design of parallel numerical algorithms
NASA Technical Reports Server (NTRS)
Gannon, D.; Vanrosendale, J.
1984-01-01
This paper describes two models of the cost of data movement in parallel numerical algorithms. One model is a generalization of an approach due to Hockney, and is suitable for shared memory multiprocessors where each processor has vector capabilities. The other model is applicable to highly parallel nonshared memory MIMD systems. In the second model, algorithm performance is characterized in terms of the communication network design. Techniques used in VLSI complexity theory are also brought in, and algorithm independent upper bounds on system performance are derived for several problems that are important to scientific computation.
An efficient parallel algorithm for the solution of a tridiagonal linear system of equations
NASA Technical Reports Server (NTRS)
Stone, H. S.
1971-01-01
Tridiagonal linear systems of equations are solved on conventional serial machines in a time proportional to N, where N is the number of equations. The conventional algorithms do not lend themselves directly to parallel computations on computers of the ILLIAC IV class, in the sense that they appear to be inherently serial. An efficient parallel algorithm is presented in which computation time grows as log2 N. The algorithm is based on recursive doubling solutions of linear recurrence relations, and can be used to solve recurrence relations of all orders.
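The recursive-doubling idea behind this solver can be illustrated on a first-order recurrence x[i] = a[i]*x[i-1] + b[i]: coefficient pairs are composed in log2 N sweeps, and each sweep is a data-parallel vector update. The following is a minimal NumPy sketch of that composition (a serial loop over the sweeps, with illustrative names; it is not the paper's ILLIAC IV implementation).

```python
import numpy as np

def recursive_doubling(a, b):
    """Solve x[i] = a[i]*x[i-1] + b[i] with x[-1] = 0, for all i at once.

    After k sweeps, (A[i], B[i]) composes the 2**k most recent steps, so
    log2(N) sweeps suffice; each sweep is a data-parallel vector update.
    """
    A, B = a.copy(), b.copy()
    n, shift = len(a), 1
    while shift < n:
        A_new, B_new = A.copy(), B.copy()
        # combine element i with element i-shift (elements below the boundary are done)
        A_new[shift:] = A[shift:] * A[:-shift]
        B_new[shift:] = A[shift:] * B[:-shift] + B[shift:]
        A, B, shift = A_new, B_new, shift * 2
    return B          # once every prefix has been folded in, x[i] = B[i]

a = np.array([0.5, 0.2, 0.1, 0.3, 0.7, 0.4, 0.9, 0.6])
b = np.ones(8)
x_serial = np.zeros(8)
for i in range(8):                      # reference serial recurrence
    x_serial[i] = a[i] * (x_serial[i - 1] if i else 0.0) + b[i]
assert np.allclose(recursive_doubling(a, b), x_serial)
```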
Massively Parallel Solution of Poisson Equation on Coarse Grain MIMD Architectures
NASA Technical Reports Server (NTRS)
Fijany, A.; Weinberger, D.; Roosta, R.; Gulati, S.
1998-01-01
In this paper a new algorithm, designated as Fast Invariant Imbedding algorithm, for solution of Poisson equation on vector and massively parallel MIMD architectures is presented. This algorithm achieves the same optimal computational efficiency as other Fast Poisson solvers while offering a much better structure for vector and parallel implementation. Our implementation on the Intel Delta and Paragon shows that a speedup of over two orders of magnitude can be achieved even for moderate size problems.
Just in time? Using QR codes for multi-professional learning in clinical practice.
Jamu, Joseph Tawanda; Lowi-Jones, Hannah; Mitchell, Colin
2016-07-01
Clinical guidelines and policies are widely available on the hospital intranet or from the internet, but can be difficult to access at the required time and place. Clinical staff with smartphones could use Quick Response (QR) codes for contemporaneous access to relevant information to support the Just in Time Learning (JIT-L) paradigm. There are several studies that advocate the use of smartphones to enhance learning amongst medical students and junior doctors in the UK. However, these participants are already technologically orientated. There are limited studies that explore the use of smartphones in nursing practice. QR Codes were generated for each topic and positioned at relevant locations on a medical ward. Support and training were provided for staff. Website analytics were collected and semi-structured interviews performed to evaluate the efficacy, acceptability and feasibility of using QR codes to facilitate Just in Time learning. Use was intermittently high but not sustained. Thematic analysis of interviews revealed a positive assessment of the Just in Time learning paradigm and context-sensitive clinical information. However, there were notable barriers to acceptance, including usability of QR codes and appropriateness of smartphone use in a clinical environment. The use of Just in Time learning for education and reference may be beneficial to healthcare professionals. However, alternative methods of access for less technologically literate users and a change in culture of mobile device use in clinical areas may be needed. Copyright © 2016 Elsevier Ltd. All rights reserved.
An embedded barcode for "connected" malaria rapid diagnostic tests.
Scherr, Thomas F; Gupta, Sparsh; Wright, David W; Haselton, Frederick R
2017-03-29
Many countries are shifting their efforts from malaria control to disease elimination. New technologies will be necessary to meet the more stringent demands of elimination campaigns, including improved quality control of malaria diagnostic tests, as well as an improved means for communicating test results among field healthcare workers, test manufacturers, and national ministries of health. In this report, we describe and evaluate an embedded barcode within standard rapid diagnostic tests as one potential solution. This information-augmented diagnostic test operates on the familiar principles of traditional lateral flow assays and simply replaces the control line with a control grid patterned in the shape of a QR (quick response) code. After the test is processed, the QR code appears on both positive and negative tests. In this report we demonstrate how this multipurpose code can be used not only to fulfill the control line role of test validation, but also to embed test manufacturing details, serve as a trigger for image capture, enable registration for image analysis, and correct for lighting effects. An accompanying mobile phone application automatically captures an image of the test when the QR code is recognized, decodes the QR code, performs image processing to determine the concentration of the malarial biomarker histidine-rich protein 2 at the test line, and transmits the test results and QR code payload to a secure web portal. This approach blends automated, sub-nanomolar biomarker detection with near real-time reporting to provide quality assurance data that will help to achieve malaria elimination.
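The phone-side flow described above (recognize the QR control grid, decode its payload, then read intensity at the test-line region) can be sketched with OpenCV's QRCodeDetector. This is only an illustrative sketch: the region-of-interest coordinates, file name, and signal definition are assumptions, not details from the paper.

```python
import cv2

def read_rdt(image_path, test_line_roi=(120, 40, 200, 60)):
    """Decode the embedded QR control grid and sample the test-line intensity.

    test_line_roi = (x, y, w, h) is an assumed, pre-registered location of the
    test line relative to the QR code; a real app would derive it from the
    corner points returned by the detector.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)

    detector = cv2.QRCodeDetector()
    payload, points, _ = detector.detectAndDecode(img)
    if not payload:                      # no valid control grid -> test not readable
        return None

    x, y, w, h = test_line_roi
    roi = img[y:y + h, x:x + w].astype(float)
    signal = 255.0 - roi.mean()          # darker test line -> stronger signal
    return {"qr_payload": payload, "test_line_signal": signal}

# result = read_rdt("rdt_photo.png")    # hypothetical captured image
```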
A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.
Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F
2018-03-01
Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high-dynamic range or floating point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that the current parallel algorithms perform poorly already with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute efficiently the final max-tree and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance both on simulated and actual 2D images and 3D volumes. Execution times improve on those of the fastest sequential algorithm, and speed-up continues to increase up to 64 threads.
Empirical study of parallel LRU simulation algorithms
NASA Technical Reports Server (NTRS)
Carr, Eric; Nicol, David M.
1994-01-01
This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex, but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third SIMD algorithm presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
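The serial baseline that these parallel algorithms reimplement is the classic LRU stack-distance computation: maintain a stack of reference tags ordered by recency, and record the depth at which each reference hits; hit ratios for every cache size then fall out of the distance histogram. A minimal sketch, with an illustrative trace (not from the paper):

```python
from collections import Counter

def lru_stack_distances(trace):
    """Return the stack distance of each reference (None for cold misses)."""
    stack, dists = [], []                 # stack[0] is the most recently used tag
    for tag in trace:
        if tag in stack:
            d = stack.index(tag)          # depth in the LRU stack == stack distance
            stack.pop(d)
            dists.append(d)
        else:
            dists.append(None)            # first touch: compulsory miss
        stack.insert(0, tag)
    return dists

trace = ["a", "b", "c", "a", "b", "d", "a"]
hist = Counter(d for d in lru_stack_distances(trace) if d is not None)
# references with distance < C hit in a fully associative LRU cache of size C
for size in (1, 2, 3, 4):
    hits = sum(n for d, n in hist.items() if d < size)
    print(f"cache size {size}: {hits}/{len(trace)} hits")
```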
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between two non-adjacent vertices of the cycle. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
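A serial sketch of the augmentation idea, assuming networkx is available: start from a spanning forest (which is trivially chordal) and repeatedly add any remaining edge whose insertion keeps the subgraph chordal, stopping when no single edge can be added. This only illustrates the maximality invariant; it is not the paper's parallel algorithm, and the chordality test it leans on is networkx's `is_chordal`.

```python
import networkx as nx

def maximal_chordal_subgraph(G):
    """Greedily augment a spanning chordal subgraph of G until it is maximal."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(nx.minimum_spanning_edges(G, data=False))  # forests are chordal

    remaining = set(G.edges()) - set(H.edges())
    changed = True
    while changed:                       # repeat until no single edge can be added
        changed = False
        for u, v in list(remaining):
            H.add_edge(u, v)
            if nx.is_chordal(H):
                remaining.discard((u, v))
                changed = True
            else:
                H.remove_edge(u, v)      # adding this edge would create a chordless cycle
    return H

G = nx.gnp_random_graph(30, 0.3, seed=1)
H = maximal_chordal_subgraph(G)
print(G.number_of_edges(), "->", H.number_of_edges(), "edges, chordal:", nx.is_chordal(H))
```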
Gorini, Giuseppe; Ameglio, Matteo; Martini, Andrea; Bosi, Sandra; Laezza, Maurizio
2013-01-01
To evaluate differences in terms of smokers' attendance to National Health System (NHS) Stop-Smoking Services with a prevalent individual approach (SSSi), and to those with a prevalent group approach (SSSg). To identify predictive characteristics of success, in terms of quit rates at the end of treatment (QR0) and after 6 months (QR1), according to SSS type (SSSi/SSSg), treatment (individual/group counseling with/without pharmacologic treatments), 5 SSS scores: type of structure (S), number and hours per week of SSS health professionals (P), SSS involvement in local tobacco control networks (N), and type of smokers' assessment (A); and 3 principal components of SSS characteristics. Survey to 19 SSSs, and survey to smokers attending these SSSs, with a six month follow-up. 1,276 smokers attending 19 SSSs (664 at 7 SSSi; 612 at 12 SSSg) in 9 months in the period 2008-2010. Smokers' attendance to scheduled sessions; QR0; QR1. Even though SSSi treated more smokers per month (12 vs. 8 in SSSg), SSSi scheduled fewer treatment sessions (7 vs. 9 sessions) in a wider treatment period (3 months vs. 2 in SSSg). SSSg recorded lower P and higher A scores. Four out of 5 smokers attending SSSg and 2/5 of smokers attending SSSi completed treatment protocols. Considering all smokers, QR1 in both types of SSS was around 36%. Smokers treated with pharmacotherapy, those more motivated and with high self-efficacy, and those not living with smokers were more likely to record higher QR1. The most relevant interventions in order to increase the number of smokers treated at SSS and to improve cessation rates among them were: for SSSi, increasing completion to treatment protocol; for SSSg, improving the P scores to increase the number of treated smokers; for all SSS, increasing the use of pharmacotherapy in combination with individual/group counseling to sustain abstinence.
Minkoff, Benjamin B.; Stecker, Kelly E.; Sussman, Michael R.
2015-01-01
Abscisic acid (ABA)1 is a plant hormone that controls many aspects of plant growth, including seed germination, stomatal aperture size, and cellular drought response. ABA interacts with a unique family of 14 receptor proteins. This interaction leads to the activation of a family of protein kinases, SnRK2s, which in turn phosphorylate substrates involved in many cellular processes. The family of receptors appears functionally redundant. To observe a measurable phenotype, four of the fourteen receptors have to be mutated to create a multilocus loss-of-function quadruple receptor (QR) mutant, which is much less sensitive to ABA than wild-type (WT) plants. Given these phenotypes, we asked whether or not a difference in ABA response between the WT and QR backgrounds would manifest on a phosphorylation level as well. We tested WT and QR mutant ABA response using isotope-assisted quantitative phosphoproteomics to determine what ABA-induced phosphorylation changes occur in WT plants within 5 min of ABA treatment and how that phosphorylation pattern is altered in the QR mutant. We found multiple ABA-induced phosphorylation changes that occur within 5 min of treatment, including three SnRK2 autophosphorylation events and phosphorylation on SnRK2 substrates. The majority of robust ABA-dependent phosphorylation changes observed were partially diminished in the QR mutant, whereas many smaller ABA-dependent phosphorylation changes observed in the WT were not responsive to ABA in the mutant. A single phosphorylation event was increased in response to ABA treatment in both the WT and QR mutant. A portion of the discovery data was validated using selected reaction monitoring-based targeted measurements on a triple quadrupole mass spectrometer. These data suggest that different subsets of phosphorylation events depend upon different subsets of the ABA receptor family to occur. Altogether, these data expand our understanding of the model by which the family of ABA receptors directs rapid phosphoproteomic changes. PMID:25693798
Farm Mapping to Assist, Protect, and Prepare Emergency Responders: Farm MAPPER.
Reyes, Iris; Rollins, Tami; Mahnke, Andrea; Kadolph, Christopher; Minor, Gerald; Keifer, Matthew
2014-01-01
Responders such as firefighters and emergency medical technicians who respond to farm emergencies often face complex and unknown environments. They may encounter hazards such as fuels, solvents, pesticides, caustics, and exploding gas storage cylinders. Responders may be unaware of dirt roads within the farm that can expedite their arrival at critical sites or snow-covered manure pits that act as hidden hazards. A response to a farm, unless guided by someone familiar with the operation, may present a risk to responders and pose a challenge in locating the victim. This project explored the use of a Web-based farm-mapping application optimized for tablets and accessible via on-site matrix barcodes, or quick response (QR) codes, to provide emergency responders with hazard and resource information for agricultural operations. Secured portals were developed for both farmers and responders, allowing both parties to populate and customize farm maps with icons. Data were stored online and linked to QR codes attached to mailbox posts where emergency responders may read them with a mobile device. Mock responses were conducted on dairy farms to test QR code linking efficacy, Web site security, and field usability. Findings from farmer usability tests showed willingness to enter data as well as ease of Web site navigation and data entry even with farmers who had limited computer knowledge. Usability tests with emergency responders showed ease of QR code connectivity to the farm maps and ease of Web site navigation. Further research is needed to improve data security as well as assess the program's applicability to nonfarm environments and integration with existing emergency response systems. The next phases of this project will expand the program for regional and national use, develop QR code-linked, Web-based extrication guidance for farm machinery for victim entrapment rescue, and create QR code-linked online training videos and materials for limited English proficient immigrant farm workers.
Ho, ThienLuan; Oh, Seung-Rohk
2017-01-01
Approximate string matching with k-differences has a number of practical applications, ranging from pattern recognition to computational biology. This paper proposes an efficient memory-access algorithm for parallel approximate string matching with k-differences on Graphics Processing Units (GPUs). In the proposed algorithm, all threads in the same GPU warp share data using warp-shuffle operations instead of accessing the shared memory. Moreover, we implement the proposed algorithm by exploiting the memory structure of GPUs to optimize its performance. Experimental results for real DNA packages revealed that the proposed algorithm and its implementation achieved speedups of up to 122.64 and 1.53 times over a sequential algorithm on a CPU and a previous parallel approximate string matching algorithm on GPUs, respectively. PMID:29016700
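The computation being parallelized here is the classic k-differences dynamic program (Sellers' algorithm): a match may start at any text position, and a match with at most k edits ends wherever the last row of the table is at most k. A serial reference sketch follows (illustrative names; GPU versions assign anti-diagonals or text chunks to threads rather than looping):

```python
def k_difference_matches(pattern, text, k):
    """Sellers' dynamic program: report (end_position, distance) pairs for every
    text position where the pattern matches with at most k insertions,
    deletions, or substitutions."""
    m = len(pattern)
    prev = list(range(m + 1))          # column for the empty text prefix
    hits = []
    for j, t in enumerate(text, start=1):
        cur = [0] * (m + 1)            # a match may start at any text position
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == t else 1
            cur[i] = min(prev[i] + 1,          # deletion from the pattern
                         cur[i - 1] + 1,       # insertion into the pattern
                         prev[i - 1] + cost)   # substitution / exact match
        if cur[m] <= k:
            hits.append((j, cur[m]))
        prev = cur
    return hits

print(k_difference_matches("ACGT", "TTACGATACGTT", k=1))
```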
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Wang, Wei; Tan, He-Ping
2015-11-01
A hybrid least-square QR decomposition (LSQR)-particle swarm optimization (LSQR-PSO) algorithm was developed to estimate the three-dimensional (3D) temperature distributions and absorption coefficients simultaneously. The outgoing radiative intensities at the boundary surface of the absorbing media were simulated by the line-of-sight (LOS) method, which served as the input for the inverse analysis. The retrieval results showed that the 3D temperature distributions of the participating media with known radiative properties could be retrieved accurately using the LSQR algorithm, even with noisy data. For the participating media with unknown radiative properties, the 3D temperature distributions and absorption coefficients could be retrieved accurately using the LSQR-PSO algorithm even with measurement errors. It was also found that the temperature field could be estimated more accurately than the absorption coefficients. In order to gain insight into the effects on the accuracy of temperature distribution reconstruction, the selection of the detection direction and the angle between two detection directions was also analyzed. Project supported by the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), the National Natural Science Foundation of China (Grant No. 51476043), and the Fund of Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation University of China.
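For the linear part of such a retrieval, an LSQR solve can be sketched with SciPy. The forward operator mapping unknown temperatures to boundary intensities is stood in for by a random matrix here; the matrix, sizes, and noise level are purely illustrative assumptions, not the paper's radiative transfer model.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Stand-in linearized forward model: intensities = A @ temperature_field
n_detectors, n_voxels = 200, 120
A = rng.random((n_detectors, n_voxels))
t_true = 300.0 + 50.0 * rng.random(n_voxels)        # "true" temperature field
intensities = A @ t_true
intensities += 0.01 * intensities.std() * rng.standard_normal(n_detectors)  # noise

# LSQR iteratively minimizes ||A t - intensities||_2 without forming A^T A
t_est, istop, itn = lsqr(A, intensities, atol=1e-8, btol=1e-8)[:3]
print("iterations:", itn, " max abs error:", np.abs(t_est - t_true).max())
```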
Azad, Ariful; Buluç, Aydın
2016-05-16
We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, the cardinality of the matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations, these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of three maximal matching algorithms using hybrid OpenMP-MPI and evaluate the performance of these algorithms using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200 × speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs where our algorithms show good scaling on up to 16,384 cores.
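The round structure of such matrix-algebraic maximal matching can be sketched serially with a dense adjacency matrix: every unmatched row vertex proposes an unmatched neighbouring column, each column accepts one proposer, and rounds repeat until no proposals remain. This is only an illustration of the idea, with illustrative names; the paper's distributed sparse-matrix kernels are not modeled.

```python
import numpy as np

def maximal_matching(adj):
    """Round-based maximal matching on a bipartite adjacency matrix (rows x cols)."""
    n_rows, n_cols = adj.shape
    row_match = -np.ones(n_rows, dtype=int)
    col_match = -np.ones(n_cols, dtype=int)
    while True:
        free_rows = np.where(row_match < 0)[0]
        free_cols = col_match < 0
        # each free row proposes its first free neighbour (a masked per-row selection)
        candidates = adj[free_rows] * free_cols
        proposals = {}                       # column -> proposing row (last proposer wins)
        for r, row in zip(free_rows, candidates):
            cols = np.flatnonzero(row)
            if cols.size:
                proposals[cols[0]] = r
        if not proposals:
            break                            # no free row can still be matched: maximal
        for c, r in proposals.items():       # conflict resolution: one row per column
            row_match[r], col_match[c] = c, r
    return row_match

adj = np.array([[1, 1, 0],
                [1, 0, 1],
                [0, 1, 0],
                [0, 0, 1]])
print(maximal_matching(adj))   # column matched to each row, -1 = unmatched
```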
Parallel grid generation algorithm for distributed memory computers
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Moitra, Anutosh
1994-01-01
A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.
Parallel processing considerations for image recognition tasks
NASA Astrophysics Data System (ADS)
Simske, Steven J.
2011-01-01
Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, then, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition [OCR], document classification and barcode reading, to parallel pipelines. This can substantially decrease time to completion for the document tasks. For this approach, each parallel pipeline is generally performing a different task. Parallel processing by image region allows a larger imaging task to be sub-divided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
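Parallel processing by image region maps naturally onto a worker pool: each worker runs the same analysis on its own tile and a reducer merges the results. A small map-reduce style sketch with Python's multiprocessing follows; the per-tile "bright pixel" detector is an illustrative stand-in for skew or face detection.

```python
import numpy as np
from multiprocessing import Pool

def analyze_tile(args):
    """Map step: analyze one image region (here, count bright pixels)."""
    (row0, col0), tile = args
    return (row0, col0), int((tile > 200).sum())

def split_into_tiles(image, tile=256):
    for r in range(0, image.shape[0], tile):
        for c in range(0, image.shape[1], tile):
            yield (r, c), image[r:r + tile, c:c + tile]

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(1024, 1024), dtype=np.uint8)
    with Pool(processes=4) as pool:                       # one task per region
        per_tile = pool.map(analyze_tile, split_into_tiles(image))
    total = sum(count for _, count in per_tile)           # reduce: merge tile results
    print(f"{len(per_tile)} tiles analyzed, {total} bright pixels")
```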
A privacy-preserving parallel and homomorphic encryption scheme
NASA Astrophysics Data System (ADS)
Min, Zhaoe; Yang, Geng; Shi, Jingqi
2017-04-01
In order to protect data privacy whilst allowing efficient access to data in multi-node cloud environments, a parallel homomorphic encryption (PHE) scheme is proposed based on the additive homomorphism of the Paillier encryption algorithm. In this paper we propose a PHE algorithm, in which plaintext is divided into several blocks and the blocks are encrypted in parallel. Experimental results demonstrate that the encryption algorithm can reach a speed-up ratio of about 7.1 in the MapReduce environment with 16 cores and 4 nodes.
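The additive property the scheme relies on can be sketched with a toy textbook Paillier implementation (insecure parameter sizes, purely for illustration; real deployments use at least 2048-bit moduli). Each plaintext block is encrypted independently, which is exactly the map step that a MapReduce cluster would farm out to workers.

```python
import random
from math import gcd

# --- toy Paillier keypair (tiny primes, illustration only) ---
p, q = 1789, 1867
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                           # valid because g = n + 1 (Python >= 3.8)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# split the plaintext into blocks; each block is an independent encryption,
# so this list comprehension is the step a PHE scheme runs in parallel
blocks = [17, 250, 3, 99]
ciphers = [encrypt(b) for b in blocks]

# additive homomorphism: multiplying ciphertexts adds the plaintexts
c_sum = 1
for c in ciphers:
    c_sum = (c_sum * c) % n2
assert decrypt(c_sum) == sum(blocks) % n
print("homomorphic sum:", decrypt(c_sum))
```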
NASA Astrophysics Data System (ADS)
Byun, Hye Suk; El-Naggar, Mohamed Y.; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2017-10-01
Kinetic Monte Carlo (KMC) simulations are used to study long-time dynamics of a wide variety of systems. Unfortunately, the conventional KMC algorithm is not scalable to larger systems, since its time scale is inversely proportional to the simulated system size. A promising approach to resolving this issue is the synchronous parallel KMC (SPKMC) algorithm, which makes the time scale size-independent. This paper introduces a formal derivation of the SPKMC algorithm based on local transition-state and time-dependent Hartree approximations, as well as its scalable parallel implementation based on a dual linked-list cell method. The resulting algorithm has achieved a weak-scaling parallel efficiency of 0.935 on 1024 Intel Xeon processors for simulating biological electron transfer dynamics in a 4.2 billion-heme system, as well as decent strong-scaling parallel efficiency. The parallel code has been used to simulate a lattice of cytochrome complexes on a bacterial-membrane nanowire, and it is broadly applicable to other problems such as computational synthesis of new materials.
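The serial KMC kernel that SPKMC parallelizes can be written in a few lines: pick the next event with probability proportional to its rate and advance time by an exponentially distributed increment. A toy sketch with illustrative rates (not the paper's electron-transfer model):

```python
import numpy as np

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step: returns (chosen event index, time increment)."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)   # event chosen proportionally to rate
    dt = rng.exponential(1.0 / total)                 # residence time of the current state
    return event, dt

rng = np.random.default_rng(42)
rates = np.array([0.1, 2.0, 0.5, 0.5])   # e.g. hopping rates out of one site
t, counts = 0.0, np.zeros(len(rates), dtype=int)
for _ in range(10_000):
    e, dt = kmc_step(rates, rng)
    counts[e] += 1
    t += dt
print("simulated time:", t, " event frequencies:", counts / counts.sum())
```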
NASA Technical Reports Server (NTRS)
Sargent, Jeff Scott
1988-01-01
A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing equivalent quality placement. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.
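The move-acceptance rule that all such placement annealers share is the Metropolis test: always accept moves that lower the cost and accept uphill moves with probability exp(-delta/T). A serial toy sketch for one-dimensional cell placement minimizing total net span follows; the netlist, cost function, and cooling schedule are illustrative assumptions, and the paper's parallel move control is not modeled.

```python
import math
import random

def wirelength(positions, nets):
    """Cost: total span of each net over the cells it connects."""
    return sum(max(positions[c] for c in net) - min(positions[c] for c in net)
               for net in nets)

def anneal(n_cells, nets, t0=10.0, cooling=0.95, sweeps=200, seed=0):
    rng = random.Random(seed)
    pos = list(range(n_cells))                     # initial placement: cell i in slot i
    cost, temp = wirelength(pos, nets), t0
    for _ in range(sweeps):
        for _ in range(n_cells):
            a, b = rng.randrange(n_cells), rng.randrange(n_cells)
            pos[a], pos[b] = pos[b], pos[a]        # propose: swap two cells
            delta = wirelength(pos, nets) - cost
            # Metropolis criterion: accept improvements, sometimes accept uphill moves
            if delta <= 0 or rng.random() < math.exp(-delta / temp):
                cost += delta
            else:
                pos[a], pos[b] = pos[b], pos[a]    # reject: undo the swap
        temp *= cooling                            # annealing schedule
    return pos, cost

nets = [(0, 5), (1, 2, 7), (3, 4), (6, 7, 0)]      # illustrative netlist
print(anneal(8, nets))
```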
Parallel algorithms for mapping pipelined and parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1988-01-01
Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
Student Growth Percentiles Based on MIRT: Implications of Calibrated Projection. CRESST Report 842
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li; Choi, Kilchan
2014-01-01
This research concerns a new proposal for calculating student growth percentiles (SGP, Betebenner, 2009). In Betebenner (2009), quantile regression (QR) is used to estimate the SGPs. However, measurement error in the score estimates, which always exists in practice, leads to bias in the QR-based estimates (Shang, 2012). One way to address this…
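The QR step itself can be sketched with statsmodels: fit conditional quantiles of the current-year score given prior scores, then report the percentile at which the student's observed score falls as the SGP. The data, percentile grid, and variable names below are illustrative assumptions, and the measurement-error correction discussed in the report is not included.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
prior = rng.normal(500, 50, size=2000)                    # prior-year scale scores
current = 0.8 * prior + rng.normal(100, 30, size=2000)    # current-year scale scores

X = sm.add_constant(prior)
taus = np.arange(0.05, 1.00, 0.05)
# one quantile-regression fit per conditional percentile of current given prior
fits = [sm.QuantReg(current, X).fit(q=t) for t in taus]

def sgp(prior_score, current_score):
    """Highest conditional percentile whose fitted score lies below the observed score."""
    preds = [f.predict([[1.0, prior_score]])[0] for f in fits]
    below = [t for t, p in zip(taus, preds) if p <= current_score]
    return int(round(100 * max(below))) if below else 1

print(sgp(prior_score=520, current_score=530))
```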
Teacher Candidates Implementing Universal Design for Learning: Enhancing Picture Books with QR Codes
ERIC Educational Resources Information Center
Grande, Marya; Pontrello, Camille
2016-01-01
The purpose of this study was to investigate if teacher candidates could gain knowledge of the principles of Universal Design for Learning by enhancing traditional picture books with Quick Response (QR) codes and to determine if the process of making these enhancements would impact teacher candidates' comfort levels with using technology on both…
Evaluating QR Code Case Studies Using a Mobile Learning Framework
ERIC Educational Resources Information Center
Rikala, Jenni
2014-01-01
The aim of this study was to evaluate the feasibility of Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The feasibility was analyzed through a mobile learning framework, which includes the core characteristics of mobile learning. The study is part of a larger research where the aim is to develop a…
Investigating the Use of Quick Response Codes in the Gross Anatomy Laboratory
ERIC Educational Resources Information Center
Traser, Courtney J.; Hoffman, Leslie A.; Seifert, Mark F.; Wilson, Adam B.
2015-01-01
The use of quick response (QR) codes within undergraduate university courses is on the rise, yet literature concerning their use in medical education is scant. This study examined student perceptions on the usefulness of QR codes as learning aids in a medical gross anatomy course, statistically analyzed whether this learning aid impacted student…
QR Codes: Taking Collections Further
ERIC Educational Resources Information Center
Ahearn, Caitlin
2014-01-01
With some thought and direction, QR (quick response) codes are a great tool to use in school libraries to enhance access to information. From March through April 2013, Caitlin Ahearn interned at Sanborn Regional High School (SRHS) under the supervision of Pam Harland. As a result of Harland's un-Deweying of the nonfiction collection at SRHS,…
Experiencing Teaching and Learning Quantitative Reasoning in a Project-Based Context
ERIC Educational Resources Information Center
Muir, Tracey; Beswick, Kim; Callingham, Rosemary; Jade, Katara
2016-01-01
This paper presents the findings of a small-scale study that investigated the issues and challenges of teaching and learning about quantitative reasoning (QR) within a project-based learning (PjBL) context. Students and teachers were surveyed and interviewed about their experiences of learning and teaching QR in that context in contrast to…
ERIC Educational Resources Information Center
Kashyap, Upasana; Mathew, Santhosh
2017-01-01
The purpose of this study was to compare students' performances in a freshmen level quantitative reasoning course (QR) under three different instructional models. A cohort of 155 freshmen students was placed in one of the three models: needing a prerequisite course, corequisite (students enroll simultaneously in QR course and a course that…
A quantum rings based on multiple quantum wells for 1.2-2.8 THz detection
NASA Astrophysics Data System (ADS)
Mobini, Alireza; Solaimani, M.
2018-07-01
In this paper, the optical properties of a new QR based on multiple quantum wells (MQWs) are investigated for detection in the THz range. The QR is composed of periodic effective quantum sites, each of which is treated as a quantum well in the theta direction. Using the tight-binding method, the eigenvalue problem for a QR with a circumference of 100 nm and different numbers of wells (2, 4, 6, and 8) is solved and the absorption spectra are calculated. The results show that the absorption has its maximum in the range of 1.2-2.88 THz, which can be used for THz detection. Finally, it is found that increasing the number of wells also increases the number of absorption lines.
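In a tight-binding picture, the eigenvalue problem for such a ring of coupled wells reduces to diagonalizing a small cyclic Hamiltonian; the spacing of the lowest eigenvalues sets the absorption lines. A NumPy sketch with illustrative on-site energies and hopping strength (not the paper's parameters):

```python
import numpy as np

def ring_hamiltonian(n_sites, onsite=0.0, hopping=-1.0):
    """Tight-binding Hamiltonian of a ring: on-site energies on the diagonal,
    nearest-neighbour hopping on the off-diagonals, periodic boundary term."""
    H = np.diag(np.full(n_sites, onsite))
    for i in range(n_sites):
        j = (i + 1) % n_sites            # periodic boundary closes the ring
        H[i, j] = H[j, i] = hopping
    return H

for n_wells in (2, 4, 6, 8):             # numbers of wells considered in the paper
    energies = np.linalg.eigvalsh(ring_hamiltonian(n_wells))
    gaps = np.diff(np.unique(np.round(energies, 9)))
    print(f"{n_wells} wells: levels {np.round(energies, 3)}, distinct gaps {np.round(gaps, 3)}")
```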
Experimental scrambling and noise reduction applied to the optical encryption of QR codes.
Barrera, John Fredy; Vélez, Alejandro; Torroba, Roberto
2014-08-25
In this contribution, we implement two techniques to reinforce optical encryption, which we restrict in particular to the QR codes, but could be applied in a general encoding situation. To our knowledge, we present the first experimental-positional optical scrambling merged with an optical encryption procedure. The inclusion of an experimental scrambling technique in an optical encryption protocol, in particular dealing with a QR code "container", adds more protection to the encoding proposal. Additionally, a nonlinear normalization technique is applied to reduce the noise over the recovered images besides increasing the security against attacks. The opto-digital techniques employ an interferometric arrangement and a joint transform correlator encrypting architecture. The experimental results demonstrate the capability of the methods to accomplish the task.
Efficient sequential and parallel algorithms for record linkage
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Background and objective: Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods: Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results: Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions: We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
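The "link similar records, then take connected components" step can be sketched with a union-find structure. The blocking key and similarity test below are illustrative stand-ins for the radix-sort and edit-distance machinery described in the abstract, and the records are made up for the example.

```python
from collections import defaultdict
from difflib import SequenceMatcher

records = [
    {"id": 0, "name": "john smith",  "dob": "1980-02-01"},
    {"id": 1, "name": "jon smith",   "dob": "1980-02-01"},
    {"id": 2, "name": "mary jones",  "dob": "1975-07-12"},
    {"id": 3, "name": "marie jones", "dob": "1975-07-12"},
]

parent = list(range(len(records)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]       # path halving
        i = parent[i]
    return i
def union(i, j):
    parent[find(i)] = find(j)

# blocking: only compare records that share a cheap key (here, date of birth)
blocks = defaultdict(list)
for r in records:
    blocks[r["dob"]].append(r["id"])

for ids in blocks.values():
    for a in ids:
        for b in ids:
            if a < b and SequenceMatcher(None, records[a]["name"], records[b]["name"]).ratio() > 0.8:
                union(a, b)                  # similar enough: treat as the same entity

clusters = defaultdict(list)
for r in records:
    clusters[find(r["id"])].append(r["id"])
print(list(clusters.values()))               # connected components = linked records
```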
A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database
Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; ...
2013-01-01
Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n number of ghost layers up to a point where the whole partitioned mesh can be ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.
Parallelization of a blind deconvolution algorithm
NASA Astrophysics Data System (ADS)
Matson, Charles L.; Borelli, Kathy J.
2006-09-01
Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moryakov, A. V., E-mail: sailor@orc.ru
2016-12-15
An algorithm for solving the linear Cauchy problem for large systems of ordinary differential equations is presented. The algorithm for systems of first-order differential equations is implemented in the EDELWEISS code with the possibility of parallel computations on supercomputers employing the MPI (Message Passing Interface) standard for the data exchange between parallel processes. The solution is represented by a series of orthogonal polynomials on the interval [0, 1]. The algorithm is characterized by simplicity and the possibility to solve nonlinear problems with a correction of the operator in accordance with the solution obtained in the previous iterative process.
Parallel, stochastic measurement of molecular surface area.
Juba, Derek; Varshney, Amitabh
2008-08-01
Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy.
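The stochastic estimate can be illustrated serially: sample points uniformly on each atom sphere (a union-of-spheres molecular model), keep the points not buried inside any other atom, and scale by the sphere areas. The coordinates below are illustrative; the loop over atoms (or over samples) is what the GPU version runs in parallel.

```python
import numpy as np

def surface_area(centers, radii, samples_per_atom=20_000, seed=0):
    """Monte Carlo estimate of the exposed area of a union of spheres."""
    rng = np.random.default_rng(seed)
    area = 0.0
    for i, (c, r) in enumerate(zip(centers, radii)):
        v = rng.normal(size=(samples_per_atom, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)       # uniform directions
        pts = c + r * v                                      # points on sphere i
        exposed = np.ones(samples_per_atom, dtype=bool)
        for j, (cj, rj) in enumerate(zip(centers, radii)):
            if j != i:                                       # buried if inside another atom
                exposed &= np.linalg.norm(pts - cj, axis=1) >= rj
        area += 4.0 * np.pi * r**2 * exposed.mean()          # exposed fraction x sphere area
    return area

centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])      # two overlapping "atoms"
radii = np.array([1.0, 1.0])
print(surface_area(centers, radii))   # analytic value for this pair is 7*pi ~ 21.99
```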
NASA Technical Reports Server (NTRS)
Weeks, Cindy Lou
1986-01-01
Experiments were conducted at NASA Ames Research Center to define multi-tasking software requirements for multiple-instruction, multiple-data stream (MIMD) computer architectures. The focus was on specifying solutions for algorithms in the field of computational fluid dynamics (CFD). The program objectives were to allow researchers to produce usable parallel application software as soon as possible after acquiring MIMD computer equipment, to provide researchers with an easy-to-learn and easy-to-use parallel software language which could be implemented on several different MIMD machines, and to enable researchers to list preferred design specifications for future MIMD computer architectures. Analysis of CFD algorithms indicated that extensions of an existing programming language, adaptable to new computer architectures, provided the best solution to meeting program objectives. The CoFORTRAN Language was written in response to these objectives and to provide researchers a means to experiment with parallel software solutions to CFD algorithms on machines with parallel architectures.
Parallel algorithm for determining motion vectors in ice floe images by matching edge features
NASA Technical Reports Server (NTRS)
Manohar, M.; Ramapriyan, H. K.; Strong, J. P.
1988-01-01
A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on-board the SEASAT spacecraft. Researchers describe a parallel algorithm which is implemented on the MPP for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.
A Parallel Point Matching Algorithm for Landmark Based Image Registration Using Multicore Platform
Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.
2013-01-01
Point matching is crucial for many computer vision applications. Establishing the correspondence between a large number of data points is a computationally intensive process. Some point matching related applications, such as medical image registration, require real time or near real time performance if applied to critical clinical applications like image assisted surgery. In this paper, we report a new multicore platform based parallel algorithm for fast point matching in the context of landmark based medical image registration. We introduced a non-regular data partition algorithm which utilizes the K-means clustering algorithm to group the landmarks based on the number of available processing cores, which optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. The results demonstrated a significant speed-up over its sequential implementation. The proposed data partition and parallelization algorithm, though tested only on one multicore platform, is generic by its design. Therefore the parallel algorithm can be extended to other computing platforms, as well as other point matching related applications. PMID:24308014
Implementation and analysis of a Navier-Stokes algorithm on parallel computers
NASA Technical Reports Server (NTRS)
Fatoohi, Raad A.; Grosch, Chester E.
1988-01-01
The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.
Solving very large, sparse linear systems on mesh-connected parallel computers
NASA Technical Reports Server (NTRS)
Opsahl, Torstein; Reif, John
1987-01-01
The implementation of Pan and Reif's Parallel Nested Dissection (PND) algorithm on mesh connected parallel computers is described. This is the first known algorithm that allows very large, sparse linear systems of equations to be solved efficiently in polylog time using a small number of processors. How the processor bound of PND can be matched to the number of processors available on a given parallel computer by slowing down the algorithm by constant factors is described. Also, for the important class of problems where G(A) is a grid graph, a unique memory mapping that reduces the inter-processor communication requirements of PND to those that can be executed on mesh connected parallel machines is detailed. A description of an implementation on the Goodyear Massively Parallel Processor (MPP), located at Goddard is given. Also, a detailed discussion of data mappings and performance issues is given.
Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho
2014-01-01
The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM), used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains with the goal to eventually make it usable in a clinical setting. PMID:27081299
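The MLEM update itself is a handful of matrix-vector products, which is why it maps onto GraphX-style sparse linear algebra. A dense NumPy sketch follows, with a random system matrix standing in for the SPECT projector; the sizes and data are illustrative assumptions.

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    """Maximum-likelihood expectation-maximization image reconstruction.

    A      : system matrix (detector bins x image voxels), nonnegative
    counts : measured projection data (detector bins)
    """
    x = np.ones(A.shape[1])                    # flat initial image
    sensitivity = A.sum(axis=0)                # A^T 1, the normalization term
    for _ in range(n_iter):
        expected = A @ x                       # forward projection
        ratio = counts / np.maximum(expected, 1e-12)
        x *= (A.T @ ratio) / sensitivity       # multiplicative EM update
    return x

rng = np.random.default_rng(1)
A = rng.random((400, 100))
x_true = rng.random(100)
counts = rng.poisson(A @ x_true * 50) / 50.0   # noisy projection data
x_rec = mlem(A, counts)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```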
A Parallel Saturation Algorithm on Shared Memory Architectures
NASA Technical Reports Server (NTRS)
Ezekiel, Jonathan; Siminiceanu
2007-01-01
Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor dual core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.
2009-08-01
Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application-tasks with dependences. These applications exhibit both task- and data-parallelism, and combining these two (also called mixed parallelism) has been shown to be an effective model for their execution. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks and the inter-task data communication volumes. A locality conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications as well as synthetic graphs shows that our algorithm consistently generates schedules with lower makespan as compared to CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules that have lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.
A Parallel Compact Multi-Dimensional Numerical Algorithm with Aeroacoustics Applications
NASA Technical Reports Server (NTRS)
Povitsky, Alex; Morris, Philip J.
1999-01-01
In this study we propose a novel method to parallelize high-order compact numerical algorithms for the solution of three-dimensional PDEs (Partial Differential Equations) in a space-time domain. For this numerical integration most of the computer time is spent in computation of spatial derivatives at each stage of the Runge-Kutta temporal update. The most efficient direct method to compute spatial derivatives on a serial computer is a version of Gaussian elimination for narrow linear banded systems known as the Thomas algorithm. In a straightforward pipelined implementation of the Thomas algorithm processors are idle due to the forward and backward recurrences of the Thomas algorithm. To utilize processors during this time, we propose to use them for either non-local data independent computations, solving lines in the next spatial direction, or local data-dependent computations by the Runge-Kutta method. To achieve this goal, control of processor communication and computations by a static schedule is adopted. Thus, our parallel code is driven by a communication and computation schedule instead of the usual "creative programming" approach. The obtained parallelization speed-up of the novel algorithm is about twice that of the standard pipelined algorithm and close to that of the explicit DRP algorithm.
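The serial kernel whose pipeline idle time the paper hides is the Thomas algorithm. A reference NumPy implementation of that tridiagonal solve follows (the paper's scheduling of many such lines across processors is not shown); the forward and backward loops are exactly the recurrences that keep a naive pipeline idle.

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system; lower/upper are the sub-/super-diagonals."""
    n = len(diag)
    c, d = np.empty(n), np.empty(n)
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):                      # forward elimination (recurrence)
        denom = diag[i] - lower[i - 1] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / denom
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):             # back substitution (recurrence)
        x[i] = d[i] - c[i] * x[i + 1]
    return x

n = 6
lower = np.full(n - 1, -1.0)
upper = np.full(n - 1, -1.0)
diag = np.full(n, 2.0)
rhs = np.ones(n)
x = thomas(lower, diag, upper, rhs)
A = np.diag(diag) + np.diag(lower, -1) + np.diag(upper, 1)
assert np.allclose(A @ x, rhs)
```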
Relation of Parallel Discrete Event Simulation algorithms with physical models
NASA Astrophysics Data System (ADS)
Shchur, L. N.; Shchur, L. V.
2015-09-01
We extend concept of local simulation times in parallel discrete event simulation (PDES) in order to take into account architecture of the current hardware and software in high-performance computing. We shortly review previous research on the mapping of PDES on physical problems, and emphasise how physical results may help to predict parallel algorithms behaviour.
Highly parallel sparse Cholesky factorization
NASA Technical Reports Server (NTRS)
Gilbert, John R.; Schreiber, Robert
1990-01-01
Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.
Mining algorithm for association rules in big data based on Hadoop
NASA Astrophysics Data System (ADS)
Fu, Chunhua; Wang, Xiaojing; Zhang, Lijun; Qiao, Liying
2018-04-01
Traditional association rule mining algorithms cannot meet the efficiency and scalability demands of mining large amounts of data. Taking FP-Growth as an example, the algorithm is parallelized based on the Hadoop framework and the MapReduce model. On this basis, it is further improved using a transaction-reduction method to enhance mining efficiency. Experiments on a Hadoop cluster verify the parallel mining results, compare the efficiency of the serial and parallel versions, and examine how mining time varies with the number of nodes and with the amount of data. The experiments show that the parallelized FP-Growth algorithm accurately mines frequent itemsets with better performance and scalability, and can thus better meet the requirements of big data mining, efficiently extracting frequent itemsets and association rules from large datasets.
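The MapReduce decomposition can be illustrated with the first (and simplest) phase of FP-Growth: counting item supports over partitions of the transaction database, merging the partial counts, and then pruning infrequent items from every transaction, which is the transaction-reduction step the paper adds. The sketch below is a single-machine stand-in using multiprocessing; the transactions and threshold are made up for the example.

```python
from collections import Counter
from multiprocessing import Pool

transactions = [
    ["bread", "milk"],
    ["bread", "diapers", "beer", "eggs"],
    ["milk", "diapers", "beer", "cola"],
    ["bread", "milk", "diapers", "beer"],
    ["bread", "milk", "diapers", "cola"],
]
MIN_SUPPORT = 3

def count_items(chunk):
    """Map step: local item counts for one partition of the transactions."""
    local = Counter()
    for t in chunk:
        local.update(t)
    return local

if __name__ == "__main__":
    chunks = [transactions[:2], transactions[2:]]          # database partitions
    with Pool(2) as pool:
        partials = pool.map(count_items, chunks)
    totals = sum(partials, Counter())                      # reduce step: merge counts
    frequent = {item for item, n in totals.items() if n >= MIN_SUPPORT}
    # transaction reduction: drop infrequent items before building the FP-tree
    reduced = [[i for i in t if i in frequent] for t in transactions]
    print(sorted(frequent), reduced)
```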
A heuristic for suffix solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilgory, A.; Gajski, D.D.
1986-01-01
The suffix problem has appeared in solutions of recurrence systems for parallel and pipelined machines and more recently in the design of gate and silicon compilers. In this paper the authors present two algorithms. The first algorithm generates parallel suffix solutions with minimum cost for a given length, time delay, availability of initial values, and fanout. This algorithm generates a minimal solution for any length n and depth range log2(n) to n. The second algorithm reduces the size of the solutions generated by the first algorithm.
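The recurrence systems mentioned above are the classic setting for parallel prefix/suffix computation; the sketch below shows the generic recursive-doubling scan (executed serially here), not the authors' minimum-cost construction, and the associative operator is an assumption chosen for illustration.

    def prefix_scan(values, op=lambda x, y: x + y):
        """Recursive-doubling (Hillis-Steele) prefix computation: after
        round k every position holds the combination of the 2**k
        preceding inputs, so about log2(n) rounds suffice.  Executed
        serially here; on a parallel machine each round is one
        data-parallel step.  A suffix solution is obtained by scanning
        the reversed sequence."""
        x = list(values)
        n, step = len(x), 1
        while step < n:
            x = [x[i] if i < step else op(x[i - step], x[i]) for i in range(n)]
            step *= 2
        return x

    print(prefix_scan([1, 2, 3, 4, 5]))   # [1, 3, 6, 10, 15]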
Chen, C-Y Oliver; Blumberg, Jeffrey B
2008-06-25
Observational studies and clinical trials suggest nut intake, including almonds, is associated with an enhancement in antioxidant defense and a reduction in the risk of cancer and cardiovascular disease. Almond skins are rich in polyphenols (ASP) that may contribute to these putative benefits. To assess their potential mechanisms of action, we tested the in vitro effect of ASP extracted with methanol (M) or a gastrointestinal juice mimic (GI) alone or in combination with vitamins C (VC) or E (VE) (1-10 micromol/L) on scavenging free radicals and inducing quinone reductase (QR). Flavonoid profiles from ASP-M and -GI extracts were different from one another. ASP-GI was more potent in scavenging HOCl and ONOO− radicals than ASP-M. In contrast, ASP-M increased and ASP-GI decreased QR activity in Hepa1c1c7 cells. Adding VC or VE to ASP produced a combination- and dose-dependent action on radical scavenging and QR induction. In comparison to their independent actions, ASP-M plus VC were less potent in scavenging DPPH, HOCl, ONOO−, and O2•−. However, the interaction between ASP-GI plus VC promoted their radical scavenging activity. Combining ASP-M plus VC resulted in a synergistic interaction, inducing QR activity, but ASP-GI plus VC had an antagonistic effect. On the basis of their total phenolic content, the measures of total antioxidant activity of ASP-M and -GI were comparable. Thus, in vitro, ASP act as antioxidants and induce QR activity, but these actions are dependent upon their dose, method of extraction, and interaction with antioxidant vitamins.
Roe, Erin D; Chamarthi, Bindu; Raskin, Philip
2015-01-01
The concurrent use of a postprandial insulin sensitizing agent, such as bromocriptine-QR, a quick release formulation of bromocriptine, a dopamine D2 receptor agonist, may offer a strategy to improve glycemic control and limit/reduce insulin requirement in type 2 diabetes (T2DM) patients on high-dose insulin. This open label pilot study evaluated this potential utility of bromocriptine-QR. Ten T2DM subjects on metformin (1-2 gm/day) and high-dose (TDID ≥ 65 U/day) basal-bolus insulin were enrolled to receive once daily (morning) bromocriptine-QR (1.6-4.8 mg/day) for 24 weeks. Subjects with at least one postbaseline HbA1c measurement (N = 8) were analyzed for change from baseline HbA1c, TDID, and postprandial glucose area under the curve of a four-hour mixed meal tolerance test (MMTT). Compared to the baseline, average HbA1c decreased 1.76% (9.74 ± 0.56 to 7.98 ± 0.36, P = 0.01), average TDID decreased 27% (199 ± 33 to 147 ± 31, P = 0.009), and MMTT AUC(60-240) decreased 32% (P = 0.04) over the treatment period. The decline in HbA1c and TDID was observed at 8 weeks and sustained over the remaining 16-week study duration. In this study, bromocriptine-QR therapy improved glycemic control and meal tolerance while reducing insulin requirement in T2DM subjects poorly controlled on high-dose insulin therapy.
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and the beat classification algorithm is optimized with K-Nearest Neighbor (K-NN). To support high-performance beat classification on the system, the beat classification algorithm is parallelized with CUDA so that it executes on virtualized GPU devices on the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while our algorithm runs 2.5 times faster than the CPU-only detection algorithm.
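A minimal NumPy sketch of the K-NN beat classifier that the paper offloads to CUDA is shown below; the feature dimension, k, and the synthetic beats are placeholders, and the vectorized distance matrix stands in for the GPU kernel with one thread per (test, train) pair.

    import numpy as np

    def knn_classify(train_x, train_y, test_x, k=3):
        """Classify each test beat by majority vote among its k nearest
        training beats (Euclidean distance).  The full distance matrix is
        computed in one vectorized step -- the part that maps naturally
        onto a CUDA kernel."""
        d2 = ((test_x[:, None, :] - train_x[None, :, :]) ** 2).sum(axis=2)
        nearest = np.argsort(d2, axis=1)[:, :k]
        votes = train_y[nearest]
        return np.array([np.bincount(v).argmax() for v in votes])

    rng = np.random.default_rng(0)
    train_x = rng.normal(size=(200, 16))          # 200 beats, 16 features each
    train_y = rng.integers(0, 2, size=200)        # two illustrative beat classes
    test_x = rng.normal(size=(10, 16))
    print(knn_classify(train_x, train_y, test_x))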
Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack
NASA Astrophysics Data System (ADS)
Nalegaev, S. S.; Petrov, N. V.
Known techniques for breaking Double Random Phase Encoding (DRPE) that bypass the resource-intensive brute-force method require at least two conditions: the attacker knows the encryption algorithm, and there is access to pairs of source and encoded images. Our numerical results show that for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments with optical data encryption by DRPE with digital holography, we propose four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR codes) or color images are used as a source.
Breaking the Code: The Creative Use of QR Codes to Market Extension Events
ERIC Educational Resources Information Center
Hill, Paul; Mills, Rebecca; Peterson, GaeLynn; Smith, Janet
2013-01-01
The use of smartphones has drastically increased in recent years, heralding an explosion in the use of QR codes. The black and white square barcodes that link the physical and digital world are everywhere. These simple codes can provide many opportunities to connect people in the physical world with many of Extension online resources. The…
QR encoded smart oral dosage forms by inkjet printing.
Edinger, Magnus; Bar-Shalom, Daniel; Sandler, Niklas; Rantanen, Jukka; Genina, Natalja
2018-01-30
The use of inkjet printing (IJP) technology enables the flexible manufacturing of personalized medicine with the doses tailored for each patient. In this study we demonstrate, for the first time, the applicability of IJP in the production of edible dosage forms in the pattern of a quick response (QR) code. This printed pattern contains the drug itself and encoded information relevant to the patient and/or healthcare professionals. IJP of the active pharmaceutical ingredient (API)-containing ink in the pattern of QR code was performed onto a newly developed porous and flexible, but mechanically stable substrate with a good absorption capacity. The printing did not affect the mechanical properties of the substrate. The actual drug content of the printed dosage forms was in accordance with the encoded drug content. The QR encoded dosage forms had a good print definition without significant edge bleeding. They were readable by a smartphone even after storage in harsh conditions. This approach of efficient data incorporation and data storage combined with the use of smart devices can lead to safer and more patient-friendly drug products in the future.
PCR-RFLP genotypes associated with quinolone resistance in isolates of Flavobacterium psychrophilum.
Izumi, S; Ouchi, S; Kuge, T; Arai, H; Mito, T; Fujii, H; Aranishi, F; Shimizu, A
2007-03-01
A novel genotyping method for epizootiological studies of bacterial cold-water disease caused by Flavobacterium psychrophilum and associated with quinolone resistance was developed. Polymerase chain reaction followed by restriction fragment length polymorphism (PCR-RFLP) was performed on 244 F. psychrophilum isolates from various fish species. PCR was performed with primer pair GYRA-FP1F and GYRA-FP1R, amplifying the A subunit of the DNA gyrase (GyrA) gene, which contains the quinolone resistance determining region. Digestion of PCR products with the restriction enzyme Mph1103I showed two genotypes, QR and QS. The difference between these genotypes was an amino acid substitution at position 83 of GyrA (Escherichia coli numbering). The genotype QR indicated an alanine residue at this position, associated with quinolone resistance in F. psychrophilum isolates. Of the 244 isolates tested in this study, the number of QR genotype isolates was 153 (62.7%). In isolates from ayu (n=177), 146 (82.5%) were genotype QR. In combination with previously reported PCR-RFLP genotyping, eight genotypes were observed in F. psychrophilum isolates. Using this genotyping system, the relationships between genotype and host fish species, or locality of isolation, were analysed and are discussed.
Wang, Xiangming; Zhou, Fanli; Lv, Sijing; Yi, Peishan; Zhu, Zhiwen; Yang, Yihong; Feng, Guoxin; Li, Wei; Ou, Guangshuo
2013-01-01
Directional cell migration is a fundamental process in neural development. In Caenorhabditis elegans, Q neuroblasts on the left (QL) and right (QR) sides of the animal generate cells that migrate in opposite directions along the anteroposterior body axis. The homeobox (Hox) gene lin-39 promotes the anterior migration of QR descendants (QR.x), whereas the canonical Wnt signaling pathway activates another Hox gene, mab-5, to ensure the QL descendants’ (QL.x) posterior migration. However, the regulatory targets of LIN-39 and MAB-5 remain elusive. Here, we showed that MIG-13, an evolutionarily conserved transmembrane protein, cell-autonomously regulates the asymmetric distribution of the actin cytoskeleton in the leading migratory edge. We identified mig-13 as a cellular target of LIN-39 and MAB-5. LIN-39 establishes QR.x anterior polarity by binding to the mig-13 promoter and promoting mig-13 expression, whereas MAB-5 inhibits QL.x anterior polarity by associating with the lin-39 promoter and downregulating lin-39 and mig-13 expression. Thus, MIG-13 links the Wnt signaling and Hox genes that guide migrations, to the actin cytoskeleton, which executes the motility response in neuronal migration. PMID:23784779
Parallel Clustering Algorithm for Large-Scale Biological Data Sets
Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang
2014-01-01
Background The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtimes are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long time, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. A shared-memory architecture is used to construct the similarity matrix, and a distributed system is used for the affinity propagation algorithm because of its large memory size and great computing capacity. An appropriate data partition and reduction scheme is designed in our method in order to minimize the global communication cost among processes. Results A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene (microarray) data and detecting families in large protein superfamilies. PMID:24705246
NASA Astrophysics Data System (ADS)
Shoemaker, C. A.; Pang, M.; Akhtar, T.; Bindel, D.
2016-12-01
New parallel surrogate global optimization algorithms are developed and applied to objective functions that are expensive simulations (possibly with multiple local minima). The algorithms can be applied to most geophysical simulations, including those with nonlinear partial differential equations. The optimization does not require the simulations to be parallelized. Asynchronous (and synchronous) parallel execution is available in the optimization toolbox "pySOT". The parallel algorithms are modified from their serial versions to eliminate fine-grained parallelism. The optimization is computed with the open-source software pySOT, a Surrogate Global Optimization Toolbox that allows the user to pick the type of surrogate (or ensembles), the search procedure on the surrogate, and the type of parallelism (synchronous or asynchronous). pySOT also allows the user to develop new algorithms by modifying parts of the code. In the applications here, the objective function takes up to 30 minutes for one simulation, and serial optimization can take over 200 hours. Results from the Yellowstone (NSF) and NCSS (Singapore) supercomputers are given for groundwater contaminant hydrology simulations, with applications to model parameter estimation and decontamination management. All results are compared with alternatives. The first results are for optimization of pumping at many wells to reduce the cost of decontaminating groundwater at a superfund site. The optimization runs with up to 128 processors. Superlinear speedup is obtained for up to 16 processors, and efficiency with 64 processors is over 80%. Each evaluation of the objective function requires the solution of nonlinear partial differential equations to describe the impact of spatially distributed pumping and model parameters on model predictions for the spatial and temporal distribution of groundwater contaminants. The second application uses asynchronous parallel global optimization for groundwater quality model calibration. The time for a single objective function evaluation varies unpredictably, so efficiency is improved with asynchronous parallel calculations to improve load balancing. The third application (done at NCSS) incorporates new global surrogate multi-objective parallel search algorithms into pySOT and applies them to a large watershed calibration problem.
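The surrogate-optimization loop described above can be illustrated with a generic synchronous-batch sketch; this is not the pySOT API, just the underlying idea (fit a cheap surrogate, search it for promising candidates, evaluate a batch, refit), with an RBF surrogate from SciPy and an invented test objective standing in for the expensive simulation.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def expensive_objective(x):
        """Placeholder for a simulation that takes minutes per run;
        here a cheap multimodal test function."""
        return np.sum(x ** 2) + 2.0 * np.sin(3.0 * x).sum()

    rng = np.random.default_rng(3)
    dim, n_init, batch = 2, 8, 4
    X = rng.uniform(-3, 3, size=(n_init, dim))          # initial design
    y = np.array([expensive_objective(x) for x in X])   # evaluated (in parallel in practice)

    for it in range(10):
        surrogate = RBFInterpolator(X, y)                # fit RBF surrogate to all data so far
        cand = rng.uniform(-3, 3, size=(2000, dim))      # cheap candidate search on the surrogate
        picks = cand[np.argsort(surrogate(cand))[:batch]]
        new_y = np.array([expensive_objective(x) for x in picks])  # one synchronous batch
        X, y = np.vstack([X, picks]), np.concatenate([y, new_y])

    print(X[np.argmin(y)], y.min())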
Parallel-Processing Test Bed For Simulation Software
NASA Technical Reports Server (NTRS)
Blech, Richard; Cole, Gary; Townsend, Scott
1996-01-01
Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).
Integrating the ODI-PPA scientific gateway with the QuickReduce pipeline for on-demand processing
NASA Astrophysics Data System (ADS)
Young, Michael D.; Kotulla, Ralf; Gopu, Arvind; Liu, Wilson
2014-07-01
As imaging systems improve, the size of astronomical data has continued to grow, making the transfer and processing of data a significant burden. To solve this problem for the WIYN Observatory One Degree Imager (ODI), we developed the ODI-Portal, Pipeline, and Archive (ODI-PPA) science gateway, integrating the data archive, data reduction pipelines, and a user portal. In this paper, we discuss the integration of the QuickReduce (QR) pipeline into PPA's Tier 2 processing framework. QR is a set of parallelized, stand-alone Python routines accessible to all users, and operators who can create master calibration products and produce standardized calibrated data, with a short turn-around time. Upon completion, the data are ingested into the archive and portal, and made available to authorized users. Quality metrics and diagnostic plots are generated and presented via the portal for operator approval and user perusal. Additionally, users can tailor the calibration process to their specific science objective(s) by selecting custom datasets, applying preferred master calibrations or generating their own, and selecting pipeline options. Submission of a QuickReduce job initiates data staging, pipeline execution, and ingestion of output data products all while allowing the user to monitor the process status, and to download or further process/analyze the output within the portal. User-generated data products are placed into a private user-space within the portal. ODI-PPA leverages cyberinfrastructure at Indiana University including the Big Red II supercomputer, the Scholarly Data Archive tape system and the Data Capacitor shared file system.
Customizing FP-growth algorithm to parallel mining with Charm++ library
NASA Astrophysics Data System (ADS)
Puścian, Marek
2017-08-01
This paper presents a frequent item mining algorithm that was customized to handle growing data repositories. The proposed solution applies a master-slave scheme to the frequent pattern growth technique. Efficient utilization of the available computation units is achieved by dynamic reallocation of tasks. Conditional frequent trees are assigned to parallel workers based on their workload. The proposed enhancements have been successfully implemented using the Charm++ library. This paper discusses the performance of the parallelized FP-growth algorithm on different datasets. The approach is illustrated with many experiments and measurements performed on a multiprocessor, multithreaded computer.
Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.
Higginson, J S; Neptune, R R; Anderson, F C
2005-09-01
Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
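A serial sketch of the SA core that SPAN distributes over a neighborhood of processors is given below; the quadratic test objective, step size, and cooling schedule are illustrative and not taken from the paper.

    import math, random

    def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.95, iters=2000):
        """Standard serial SA loop: propose a random neighbour, always
        accept improvements, accept worse moves with probability
        exp(-dE/T).  SPAN evaluates many such neighbours concurrently on
        different processors before synchronizing."""
        x, fx = list(x0), f(x0)
        best, fbest, t = list(x), fx, t0
        for i in range(iters):
            cand = [xi + random.uniform(-step, step) for xi in x]
            fc = f(cand)
            if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            if (i + 1) % 100 == 0:           # cool every 100 proposals
                t *= cooling
        return best, fbest

    quadratic = lambda v: sum((vi - 1.0) ** 2 for vi in v)   # simple test problem
    print(simulated_annealing(quadratic, [5.0, -3.0]))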
A GPU-paralleled implementation of an enhanced face recognition algorithm
NASA Astrophysics Data System (ADS)
Chen, Hao; Liu, Xiyang; Shao, Shuai; Zan, Jiguo
2013-03-01
Face recognition based on compressed sensing and sparse representation has attracted considerable attention in recent years. This scheme increases the recognition rate as well as the anti-noise capability. However, the computational cost is high and has become a main restricting factor for real-world applications. In this paper, we introduce a GPU-accelerated hybrid variant of the face recognition algorithm, named the parallel face recognition algorithm (pFRA). We describe how to carry out a parallel optimization design to take full advantage of the many-core structure of a GPU. The pFRA is tested and compared with several other implementations under different data sample sizes. Finally, our pFRA, implemented on an NVIDIA GPU with the Compute Unified Device Architecture (CUDA) programming model, achieves a significant speedup over traditional CPU implementations.
Conjugate-Gradient Algorithms For Dynamics Of Manipulators
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheid, Robert E.
1993-01-01
Algorithms for serial and parallel computation of forward dynamics of multiple-link robotic manipulators by conjugate-gradient method developed. Parallel algorithms have potential for speedup of computations on multiple linked, specialized processors implemented in very-large-scale integrated circuits. Such processors used to simulate dynamics, possibly faster than in real time, for purposes of planning and control.
Rekadwad, Bhagwan N; Khobragade, Chandrahasya N
2016-03-01
16S rRNA sequences of 21 morphologically and biochemically identified thermophilic bacteria isolated from the Unkeshwar hot springs (19°85'N and 78°25'E), Dist. Nanded (India), have been deposited in the NCBI repository. The 16S rRNA gene sequences were used to generate QR codes for the sequences (FASTA format and full GenBank information). Diversity among the isolates was compared with known isolates and evaluated using CGR, FCGR and PCA, i.e. visual comparison and evaluation, respectively. Considerable biodiversity was observed among the identified bacteria isolated from the Unkeshwar hot springs. The hyperlinked QR codes, CGR, FCGR and PCA of all the isolates are made available to users on a portal: https://sites.google.com/site/bhagwanrekadwad/.
NASA Astrophysics Data System (ADS)
Bylaska, Eric J.; Weare, Jonathan Q.; Weare, John H.
2013-08-01
Parallel in time simulation algorithms are presented and applied to conventional molecular dynamics (MD) and ab initio molecular dynamics (AIMD) models of realistic complexity. Assuming that a forward time integrator f (e.g., the Verlet algorithm) is available to propagate the system from time t_i (trajectory positions and velocities x_i = (r_i, v_i)) to time t_(i+1) (x_(i+1)) by x_(i+1) = f_i(x_i), the dynamics problem spanning an interval from t_0 to t_M can be transformed into a root finding problem, F(X) = [x_i - f(x_(i-1))]_(i=1,...,M) = 0, for the trajectory variables. The root finding problem is solved using a variety of root finding techniques, including quasi-Newton and preconditioned quasi-Newton schemes that are all unconditionally convergent. The algorithms are parallelized by assigning a processor to each time-step entry in the columns of F(X). The relation of this approach to other recently proposed parallel in time methods is discussed, and the effectiveness of various approaches to solving the root finding problem is tested. We demonstrate that more efficient dynamical models based on simplified interactions or coarsened time-steps provide preconditioners for the root finding problem. However, for MD and AIMD simulations, such preconditioners are not required to obtain reasonable convergence and their cost must be considered in the performance of the algorithm. The parallel in time algorithms developed are tested by applying them to MD and AIMD simulations of size and complexity similar to those encountered in present day applications. These include a 1000 Si atom MD simulation using Stillinger-Weber potentials, and a HCl + 4H2O AIMD simulation at the MP2 level. The maximum speedup (serial execution time/parallel execution time) obtained by parallelizing the Stillinger-Weber MD simulation was nearly 3.0. For the AIMD MP2 simulations, the algorithms achieved speedups of up to 14.3. The parallel in time algorithms can be implemented in a distributed computing environment using very slow transmission control protocol/Internet protocol networks. Scripts written in Python that make calls to a precompiled quantum chemistry package (NWChem) are demonstrated to provide an actual speedup of 8.2 for a 2.5 ps AIMD simulation of HCl + 4H2O at the MP2/6-31G* level. Implemented in this way these algorithms can be used for long time high-level AIMD simulations at a modest cost using machines connected by very slow networks such as WiFi, or in different time zones connected by the Internet. The algorithms can also be used with programs that are already parallel. Using these algorithms, we are able to reduce the cost of a MP2/6-311++G(2d,2p) simulation that had reached its maximum possible speedup in the parallelization of the electronic structure calculation from 32 s/time step to 6.9 s/time step.
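The root-finding formulation above can be sketched for a toy harmonic oscillator with a velocity-Verlet propagator; the plain fixed-point sweep used here is only a stand-in for the paper's quasi-Newton solvers (by itself it needs up to M sweeps, i.e. no speedup), and all parameters are illustrative.

    import numpy as np

    def verlet_step(x, dt=0.05):
        """One velocity-Verlet step for a unit-mass harmonic oscillator
        (force = -r); the state is x = (r, v)."""
        r, v = x
        a = -r
        r_new = r + dt * v + 0.5 * dt * dt * a
        v_new = v + 0.5 * dt * (a + (-r_new))
        return np.array([r_new, v_new])

    def residual(X, x0):
        """F(X)_i = x_i - f(x_{i-1}); the true trajectory is the root F(X) = 0."""
        prev = np.vstack([x0, X[:-1]])
        return X - np.array([verlet_step(p) for p in prev])

    M = 200                                    # number of time steps
    x0 = np.array([1.0, 0.0])
    X = np.tile(x0, (M, 1))                    # crude guess for the whole trajectory
    # Plain fixed-point sweep: every time step is updated from the previous
    # iterate simultaneously, so each row could be handled by a different
    # processor.  This simple iteration needs up to M sweeps to converge;
    # the paper replaces it with unconditionally convergent (preconditioned)
    # quasi-Newton iterations so far fewer parallel sweeps are required.
    for sweep in range(M):
        X_new = np.array([verlet_step(p) for p in np.vstack([x0, X[:-1]])])
        if np.allclose(X_new, X, atol=1e-14):
            break
        X = X_new
    print(np.abs(residual(X, x0)).max())       # ~0 once converged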
A parallel algorithm for generation and assembly of finite element stiffness and mass matrices
NASA Technical Reports Server (NTRS)
Storaasli, O. O.; Carmona, E. A.; Nguyen, D. T.; Baddourah, M. A.
1991-01-01
A new algorithm is proposed for parallel generation and assembly of the finite element stiffness and mass matrices. The proposed assembly algorithm is based on a node-by-node approach rather than the more conventional element-by-element approach. The new algorithm's generality and computation speed-up when using multiple processors are demonstrated for several practical applications on multi-processor Cray Y-MP and Cray 2 supercomputers.
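The node-by-node idea can be sketched for 1D two-node bar elements: each global matrix row is owned and assembled by one node, so rows can be formed independently without the write conflicts of element-by-element assembly. The element topology and unit stiffness below are illustrative only.

    import numpy as np

    def assemble_by_node(n_nodes, elements, ke):
        """Node-by-node assembly: each global row is built by the node
        that owns it, by summing contributions of the elements attached
        to that node.  Rows are independent, so the outer loop
        parallelizes without write conflicts.  'elements' is a list of
        (i, j) node pairs and 'ke' a 2x2 element stiffness matrix."""
        attached = [[] for _ in range(n_nodes)]
        for e, (i, j) in enumerate(elements):
            attached[i].append((e, 0))           # this node is local DOF 0 of element e
            attached[j].append((e, 1))           # this node is local DOF 1 of element e
        K = np.zeros((n_nodes, n_nodes))
        for node in range(n_nodes):              # parallel loop over rows
            for e, local in attached[node]:
                i, j = elements[e]
                K[node, i] += ke[local, 0]
                K[node, j] += ke[local, 1]
        return K

    ke = np.array([[1.0, -1.0], [-1.0, 1.0]])    # unit-stiffness bar element
    elements = [(0, 1), (1, 2), (2, 3)]
    print(assemble_by_node(4, elements, ke))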
Parallel Implementation of the Wideband DOA Algorithm on the IBM Cell BE Processor
2010-05-01
The Multiple Signal Classification (MUSIC) algorithm is a powerful technique for determining the Direction of Arrival (DOA) of signals ... Broadband Engine Processor (Cell BE). The process of adapting the serial MUSIC algorithm to the Cell BE is analyzed in terms of parallelism and ... using the MUSIC algorithm [4]; computation of the focus matrix; computation of the number of sources; separation of signal ...
On Parallel Push-Relabel based Algorithms for Bipartite Maximum Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langguth, Johannes; Azad, Md Ariful; Halappanavar, Mahantesh
2014-07-01
We study multithreaded push-relabel based algorithms for computing maximum cardinality matching in bipartite graphs. Matching is a fundamental combinatorial (graph) problem with applications in a wide variety of problems in science and engineering. We are motivated by its use in the context of sparse linear solvers for computing maximum transversal of a matrix. We implement and test our algorithms on several multi-socket multicore systems and compare their performance to state-of-the-art augmenting path-based serial and parallel algorithms using a test set comprised of a wide range of real-world instances. Building on several heuristics for enhancing performance, we demonstrate good scaling for the parallel push-relabel algorithm. We show that it is comparable to the best augmenting path-based algorithms for bipartite matching. To the best of our knowledge, this is the first extensive study of multithreaded push-relabel based algorithms. In addition to a direct impact on the applications using matching, the proposed algorithmic techniques can be extended to preflow-push based algorithms for computing maximum flow in graphs.
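For context, the augmenting-path baseline the push-relabel codes are compared against can be sketched compactly (Kuhn's algorithm); the push-relabel variant itself is more involved and is not reproduced here, and the small test graph is invented.

    def max_bipartite_matching(adj, n_right):
        """Maximum cardinality matching by repeated augmenting-path
        search (Kuhn's algorithm).  adj[u] lists the right-side vertices
        reachable from left vertex u.  This is the serial baseline that
        the parallel push-relabel algorithms are compared against."""
        match_right = [-1] * n_right          # right vertex -> matched left vertex

        def try_augment(u, visited):
            for v in adj[u]:
                if v in visited:
                    continue
                visited.add(v)
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
            return False

        matched = sum(try_augment(u, set()) for u in range(len(adj)))
        return matched, match_right

    adj = [[0, 1], [0], [1, 2]]                # 3 left vertices, 3 right vertices
    print(max_bipartite_matching(adj, 3))      # (3, [1, 0, 2]) -- a perfect matching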
Implementation of a parallel protein structure alignment service on cloud.
Hung, Che-Lun; Lin, Yaw-Ling
2013-01-01
Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.
NASA Astrophysics Data System (ADS)
Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran
2017-03-01
In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented on a state-of-the-art multicore CPU-based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demand for compute time, memory, storage and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are traveltime calculation and migration summation, which exhibit an inherent trade-off between compute time and other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and their feeding mechanism to the migration process. The presented work is an extension of our previous work, wherein a 3D Kirchhoff depth migration application for multicore CPU-based parallel systems had been developed. Recently, we have worked on improving the parallel performance of this application by redesigning the parallelization approach. The new algorithm is capable of efficiently migrating both prestack and poststack 3D data. It exhibits flexibility for migrating a large number of traces within the available node memory and with minimal requirements for storage, I/O and inter-node communication. The resulting application is tested using 3D Overthrust data on PARAM Yuva II, a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. The parallel performance of the algorithm is studied using different numerical experiments, and the scalability results show a striking improvement over its previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data and a 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.
The dynamics of z ~ 1 clusters of galaxies from the GCLASS survey
NASA Astrophysics Data System (ADS)
Biviano, A.; van der Burg, R. F. J.; Muzzin, A.; Sartoris, B.; Wilson, G.; Yee, H. K. C.
2016-10-01
Context. The dynamics of clusters of galaxies and its evolution provide information on their formation and growth, on the nature of dark matter and on the evolution of the baryonic components. Poor observational constraints exist so far on the dynamics of clusters at redshift z > 0.8. Aims: We aim to constrain the internal dynamics of clusters of galaxies at redshift z ~ 1, namely their mass profile M(r), velocity anisotropy profile β(r), and pseudo-phase-space density profiles Q(r) and Qr(r), obtained from the ratio between the mass density profile and the third power of the (total and, respectively, radial) velocity dispersion profiles of cluster galaxies. Methods: We used the spectroscopic and photometric data-set of 10 clusters at 0.87 < z < 1.34 from the Gemini Cluster Astrophysics Spectroscopic Survey (GCLASS). We determined the individual cluster masses from their velocity dispersions, then stack the clusters in projected phase-space. We investigated the internal dynamics of this stack cluster, using the spatial and velocity distribution of its member galaxies. We determined the stack cluster M(r) using the MAMPOSSt method, and its β(r) by direct inversion of the Jeans equation. The procedures used to determine the two aforementioned profiles also allowed us to determine Q(r) and Qr(r). Results: Several M(r) models are statistically acceptable for the stack cluster (Burkert, Einasto, Hernquist, NFW). The stack cluster total mass concentration, c ≡ r_200/r_-2 = 4.0 (+1.0, -0.6), is in agreement with theoretical expectations. The total mass distribution is less concentrated than both the cluster stellar-mass and the cluster galaxies distributions. The stack cluster β(r) indicates that galaxy orbits are isotropic near the cluster center and become increasingly radially elongated with increasing cluster-centric distance. Passive and star-forming galaxies have similar β(r). The observed β(r) is similar to that of dark matter particles in simulated cosmological halos. Q(r) and Qr(r) are almost power-law relations with slopes similar to those predicted from numerical simulations of dark matter halos. Conclusions: Comparing our results with those obtained for lower-redshift clusters, we conclude that the evolution of the concentration-total mass relation and pseudo-phase-space density profiles agree with the expectations from ΛCDM cosmological simulations. The fact that Q(r) and Qr(r) already follow the theoretical expectations in z ~ 1 clusters suggests these profiles are the result of rapid dynamical relaxation processes, such as violent relaxation. The different concentrations of the total and stellar mass distribution, and their subsequent evolution, can be explained by merging processes of central galaxies leading to the formation of the brightest cluster galaxy. The orbits of passive cluster galaxies appear to become more isotropic with time, while those of star-forming galaxies do not evolve, presumably because star-formation is quenched on a shorter timescale than that required for orbital isotropization.
NETRA: A parallel architecture for integrated vision systems. 1: Architecture and organization
NASA Technical Reports Server (NTRS)
Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra
1989-01-01
Computer vision is regarded as one of the most complex and computationally intensive problems. An integrated vision system (IVS) is considered to be a system that uses vision algorithms from all levels of processing for a high level application (such as object recognition). A model of computation is presented for parallel processing for an IVS. Using the model, desired features and capabilities of a parallel architecture suitable for IVSs are derived. Then a multiprocessor architecture (called NETRA) is presented. This architecture is highly flexible without the use of complex interconnection schemes. The topology of NETRA is recursively defined and hence is easily scalable from small to large systems. Homogeneity of NETRA permits fault tolerance and graceful degradation under faults. It is a recursively defined tree-type hierarchical architecture where each of the leaf nodes consists of a cluster of processors connected with a programmable crossbar with selective broadcast capability to provide for desired flexibility. A qualitative evaluation of NETRA is presented. Then general schemes are described to map parallel algorithms onto NETRA. Algorithms are classified according to their communication requirements for parallel processing. An extensive analysis of inter-cluster communication strategies in NETRA is presented, and parameters affecting performance of parallel algorithms when mapped on NETRA are discussed. Finally, a methodology to evaluate performance of algorithms on NETRA is described.
Applications of Parallel Computation in Micro-Mechanics and Finite Element Method
NASA Technical Reports Server (NTRS)
Tan, Hui-Qian
1996-01-01
This project discusses the application of parallel computation to material analysis. Briefly speaking, we analyze a material by element computations. We call an element a cell here. A cell is divided into a number of subelements called subcells, and all subcells in a cell have an identical structure. The detailed structure will be given later in this paper. The problem is clearly "well-structured", so a SIMD machine is a good choice. In this paper we try to look into the potential of SIMD machines in dealing with finite element computation by developing appropriate algorithms on MasPar, a SIMD parallel machine. In section 2, the architecture of MasPar will be discussed; a brief review of the parallel programming language MPL is also given in that section. In section 3, some general parallel algorithms which might be useful to the project will be proposed and, in combination with the algorithms, some features of MPL will be discussed in more detail. In section 4, the computational structure of the cell/subcell model will be given, and the idea behind the design of the parallel algorithm for the model will be demonstrated. Finally, in section 5, a summary will be given.
Eigensolution of finite element problems in a completely connected parallel architecture
NASA Technical Reports Server (NTRS)
Akl, F.; Morel, M.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm is successfully implemented on a tightly coupled MIMD parallel processor. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm is investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18, and 3.61 are achieved on two, four, six, and eight processors, respectively.
Data decomposition method for parallel polygon rasterization considering load balancing
NASA Astrophysics Data System (ADS)
Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun
2015-12-01
It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
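A loose sketch of the decomposition idea is given below: a per-polygon complexity score combining boundary vertex count and the pixel count of the minimum bounding rectangle (the exact weighting in the paper may differ), followed by a generic longest-processing-time greedy assignment to processes.

    import heapq

    def polygon_complexity(boundary_pts, cell_size):
        """Complexity metric combining boundary vertex count and the
        number of raster pixels covered by the polygon's minimum bounding
        rectangle (an assumption of the exact weighting in the paper)."""
        xs = [p[0] for p in boundary_pts]
        ys = [p[1] for p in boundary_pts]
        pixels = ((max(xs) - min(xs)) / cell_size) * ((max(ys) - min(ys)) / cell_size)
        return len(boundary_pts) + pixels

    def balanced_partition(complexities, n_procs):
        """Greedy longest-processing-time assignment: give the next most
        complex polygon to the currently least loaded process."""
        heap = [(0.0, p) for p in range(n_procs)]
        heapq.heapify(heap)
        assignment = [None] * len(complexities)
        order = sorted(range(len(complexities)), key=lambda i: -complexities[i])
        for i in order:
            load, p = heapq.heappop(heap)
            assignment[i] = p
            heapq.heappush(heap, (load + complexities[i], p))
        return assignment

    complexities = [120.0, 80.0, 75.0, 40.0, 10.0, 5.0]
    print(balanced_partition(complexities, 2))   # [0, 1, 1, 0, 1, 0]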
Optical polarization properties of InAs/InP quantum dot and quantum rod nanowires.
Anufriev, Roman; Barakat, Jean-Baptiste; Patriarche, Gilles; Letartre, Xavier; Bru-Chevallier, Catherine; Harmand, Jean-Christophe; Gendry, Michel; Chauvin, Nicolas
2015-10-02
The emission polarization of single InAs/InP quantum dot (QD) and quantum rod (QR) nanowires is investigated at room temperature. Whereas the emission of the QRs is mainly polarized parallel to the nanowire axis, the opposite behavior is observed for the QDs. These optical properties can be explained by a combination of dielectric effects related to the nanowire geometry and to the configuration of the valence band in the nanostructure. A theoretical model and finite difference in time domain calculations are presented to describe the impact of the nanowire and the surroundings on the optical properties of the emitter. Using this model, the intrinsic degree of linear polarization of the two types of emitters is extracted. The strong polarization anisotropies indicate a valence band mixing in the QRs but not in the QDs.
A Massively Parallel Computational Method of Reading Index Files for SOAPsnv.
Zhu, Xiaoqian; Peng, Shaoliang; Liu, Shaojie; Cui, Yingbo; Gu, Xiang; Gao, Ming; Fang, Lin; Fang, Xiaodong
2015-12-01
SOAPsnv is the software used for identifying single nucleotide variation in cancer genes. However, its performance does not yet match the massive amount of data to be processed. Experiments reveal that the main performance bottleneck of the SOAPsnv software is the pileup algorithm. The original pileup algorithm's I/O is time-consuming and reads input files inefficiently. Moreover, the scalability of the pileup algorithm is also poor. Therefore, we designed a new algorithm, named BamPileup, aiming to improve the performance of sequential reads, and the new pileup algorithm implements a parallel read mode based on an index. Using this method, each thread can directly read the data starting from a specific position. The results of experiments on the Tianhe-2 supercomputer show that, when reading data in a multi-threaded parallel I/O way, the processing time of the algorithm is reduced to 3.9 s and the application achieves a speedup of up to 100×. Moreover, the scalability of the new algorithm is also satisfying.
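The index-based parallel read pattern can be sketched as follows; the byte ranges, file name, and record format are placeholders rather than SOAPsnv's actual BAM index layout, and Python threads stand in for the implementation's parallel readers.

    import threading

    def read_chunk(path, start, end, results, slot):
        """Each worker opens its own handle and seeks straight to its
        assigned region, so workers never contend for a shared file
        position -- the essence of index-based parallel input."""
        with open(path, "rb") as f:
            f.seek(start)
            results[slot] = f.read(end - start)

    def parallel_read(path, offsets):
        """offsets: list of (start, end) byte ranges taken from a
        precomputed index (a placeholder for the BAM-style index used by
        BamPileup)."""
        results = [None] * len(offsets)
        threads = [threading.Thread(target=read_chunk,
                                    args=(path, s, e, results, i))
                   for i, (s, e) in enumerate(offsets)]
        for t in threads: t.start()
        for t in threads: t.join()
        return b"".join(results)

    # usage sketch: split a file into four equal byte ranges
    # size = os.path.getsize("reads.dat")
    # bounds = [(i * size // 4, (i + 1) * size // 4) for i in range(4)]
    # data = parallel_read("reads.dat", bounds)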
QR Codes as Mobile Learning Tools for Labor Room Nurses at the San Pablo Colleges Medical Center
ERIC Educational Resources Information Center
Del Rosario-Raymundo, Maria Rowena
2017-01-01
Purpose: The purpose of this paper is to explore the use of QR codes as mobile learning tools and examine factors that impact on their usefulness, acceptability and feasibility in assisting the nurses' learning. Design/Methodology/Approach: Study participants consisted of 14 regular, full-time, board-certified LR nurses. Over a two-week period,…
ERIC Educational Resources Information Center
Gao, Yuan; Liu, Tzu-Chien; Paas, Fred
2016-01-01
This study compared the effects of effortless selection of target plants using quick response (QR) code technology to effortful manual search and selection of target plants on learning about plants in a mobile device supported learning environment. In addition, it was investigated whether the effectiveness of the 2 selection methods was…
2013-06-01
[Report front matter: table-of-contents and acronym-list fragments referring to a SWOT analysis of using QR codes with the NDC and with EHRs; acronyms listed include EHR, OTC (Over-the-Counter), PHI (Personal Health Information), QR (Quick Response), and SWOT.]
A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.
Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao
2016-01-01
The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to the good flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results especially for a large amount of medical image data. To address the issue, a parallel nonrigid registration algorithm based on B-spline is proposed in this paper. First, the Logarithm Squared Difference (LSD) is considered as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of three time-consuming steps including B-splines interpolation, LSD computation, and the analytic gradient computation of LSD, is efficiently reduced, for the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results of registration quality and execution efficiency on the large amount of medical images show that our algorithm achieves a better registration accuracy in terms of the differences between the best deformation fields and ground truth and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of Graphics Processing Unit (GPU).
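One of the lookup-table ideas can be illustrated in 1D: precompute the four cubic B-spline basis weights at a fixed set of fractional offsets so that interpolation inside the registration loop becomes table lookups and dot products. The table resolution and test data below are illustrative; the 3D FFD case applies the same weights along each axis.

    import numpy as np

    def bspline_basis(u):
        """The four uniform cubic B-spline basis values at fractional
        offset u in [0, 1)."""
        return np.array([(1 - u) ** 3 / 6.0,
                         (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
                         (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
                         u ** 3 / 6.0])

    def build_lut(resolution=256):
        """Precompute basis weights for 'resolution' fractional offsets.
        Inside the (parallel) registration loop each voxel then maps its
        offset to a row of this table instead of re-evaluating cubics."""
        return np.vstack([bspline_basis(i / resolution) for i in range(resolution)])

    LUT = build_lut()

    def interp1d_bspline(coeffs, x):
        """Evaluate a 1-D cubic B-spline with control values 'coeffs' at
        position x (grid spacing 1), using the lookup table."""
        i = int(np.floor(x))
        w = LUT[int((x - i) * len(LUT))]
        return np.dot(w, coeffs[i - 1:i + 3])

    coeffs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    print(interp1d_bspline(coeffs, 2.5))    # 2.5: linear data is reproduced exactly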
A scalable parallel algorithm for multiple objective linear programs
NASA Technical Reports Server (NTRS)
Wiecek, Malgorzata M.; Zhang, Hong
1994-01-01
This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLP's). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLP's, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLP's are also included.
Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)
DOE Office of Scientific and Technical Information (OSTI.GOV)
The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
Automated Handling of Garments for Pressing
1991-09-30
[Report front matter: table-of-contents fragments listing "Parallel Algorithms for 2D Kalman Filtering" (D. J. Potter and M. P. Cline), "Hash Table and Sorted Array: A Case Study of ... Kalman Filtering on the Connection Machine" (M. A. Palis and D. K. Krecker), "Parallel Sorting of Large Arrays on the MasPar", and sections on algorithms for seam sensing, Karel algorithms, and image filtering.]
Efficient Scalable Median Filtering Using Histogram-Based Operations.
Green, Oded
2018-05-01
Median filtering is a smoothing technique for noise removal in images. While there are various implementations of median filtering for a single-core CPU, there are few implementations for accelerators and multi-core systems. Many parallel implementations of median filtering use a sorting algorithm for rearranging the values within a filtering window and taking the median of the sorted values. While using sorting algorithms allows for simple parallel implementations, the cost of the sorting becomes prohibitive as the filtering windows grow. This makes such algorithms, sequential and parallel alike, inefficient. In this work, we introduce the first software parallel median filtering that is non-sorting-based. The new algorithm uses efficient histogram-based operations. These reduce the computational requirements of the new algorithm while also accessing the image fewer times. We show an implementation of our algorithm for both the CPU and NVIDIA's CUDA-supported graphics processing unit (GPU). The new algorithm is compared with several other leading CPU and GPU implementations. The CPU implementation shows near-perfect linear scaling on a quad-core system. The GPU implementation is several orders of magnitude faster than the other GPU implementations for mid-size median filters. For small kernels, comparison-based approaches remain preferable, as fewer operations are required. Lastly, the new algorithm is open-source and can be found in the OpenCV library.
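The histogram idea can be sketched for a 1-D, 8-bit signal (the 2-D image case updates the histogram by whole columns per step); the kernel size and test data are illustrative, and this is a plain serial sketch rather than the paper's parallel CPU/GPU implementation.

    import numpy as np

    def median_filter_1d(signal, k=5):
        """Sliding-window median for 8-bit data using a 256-bin histogram.
        Moving the window updates two bins and the median is found by
        walking the histogram, so the cost per output sample is O(256)
        rather than O(k log k) for sorting -- the advantage grows with k."""
        half = k // 2
        padded = np.pad(signal, half, mode="edge").astype(np.int64)
        hist = np.zeros(256, dtype=np.int64)
        for v in padded[:k]:
            hist[v] += 1
        out = np.empty(len(signal), dtype=np.int64)
        target = k // 2 + 1
        for i in range(len(signal)):
            c, m = 0, 0
            while c < target:                 # walk histogram to the median
                c += hist[m]
                m += 1
            out[i] = m - 1
            if i + 1 < len(signal):           # slide window: one value out, one in
                hist[padded[i]] -= 1
                hist[padded[i + k]] += 1
        return out

    sig = np.array([10, 200, 12, 11, 250, 13, 14, 9, 255, 8], dtype=np.uint8)
    print(median_filter_1d(sig, k=3))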
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1989-01-01
The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
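The ranking step can be sketched with a counting-sort style computation of per-cell offsets and destination slots; NumPy primitives stand in for the Connection Machine's data-parallel scan and sort, and the cell indices below are invented.

    import numpy as np

    def rank_by_cell(cell_ids, n_cells):
        """Return, for each particle, its destination index in a layout
        where particles of cell 0 come first, then cell 1, and so on.
        counts -> exclusive prefix sum gives each cell's start offset;
        a stable sort over cell ids gives each particle's final slot."""
        counts = np.bincount(cell_ids, minlength=n_cells)
        starts = np.concatenate(([0], np.cumsum(counts)[:-1]))
        order = np.argsort(cell_ids, kind="stable")
        ranks = np.empty(len(cell_ids), dtype=np.int64)
        ranks[order] = np.arange(len(cell_ids))
        return ranks, starts

    cells = np.array([2, 0, 1, 2, 0, 1, 1])
    ranks, starts = rank_by_cell(cells, 3)
    print(ranks)    # destination slot of each particle
    print(starts)   # first slot belonging to each cell: [0 2 5]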
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amount of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
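The serial kernel being parallelized can be sketched in 1D: visit the grid along a random path and draw each node from the Gaussian conditional on previously simulated nodes (simple kriging). The exponential covariance, correlation length, and unconditional setting are illustrative simplifications of the GSLIB-style codes discussed above.

    import numpy as np

    def sgs_1d(n, corr_len=5.0, seed=0):
        """Unconditional sequential Gaussian simulation on n grid nodes.
        Nodes are visited along a random path; each node gets a value
        drawn from the Gaussian conditional on all previously simulated
        nodes (simple kriging with an exponential covariance)."""
        rng = np.random.default_rng(seed)
        cov = lambda h: np.exp(-np.abs(h) / corr_len)
        path = rng.permutation(n)
        values = np.full(n, np.nan)
        done = []                                   # indices already simulated
        for idx in path:
            if not done:
                mean, var = 0.0, 1.0
            else:
                d = np.array(done)
                C = cov(d[:, None] - d[None, :])    # data-to-data covariance
                c = cov(d - idx)                    # data-to-target covariance
                w = np.linalg.solve(C, c)           # simple kriging weights
                mean = w @ values[d]
                var = max(1.0 - w @ c, 1e-10)
            values[idx] = rng.normal(mean, np.sqrt(var))
            done.append(idx)
        return values

    print(sgs_1d(20).round(2))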
Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying
2013-12-01
Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using a graphics-processing-unit parallel framework, the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9 with a GTX 580 graphics card using the improved method.
Khan, Nazeer; Siddiqui, Junaid S; Baig-Ansari, Naila
2018-01-01
Background Growth charts are essential tools used by pediatricians as well as public health researchers in assessing and monitoring the well-being of pediatric populations. Development of these growth charts, especially for children above five years of age, is challenging and requires current anthropometric data and advanced statistical analysis. These growth charts are generally presented as a series of smooth centile curves. A number of modeling approaches are available for generating growth charts, and applying these to national datasets is important for generating country-specific reference growth charts. Objective To demonstrate that quantile regression (QR) is a viable statistical approach for constructing growth reference charts, and to assess the applicability of the World Health Organization (WHO) 2007 growth standards to a large Pakistani population of school-going children. Methodology This is a secondary data analysis using anthropometric data of 9,515 students from a Pakistani survey conducted between 2007 and 2014 in four cities of Pakistan. Growth reference charts were created using QR as well as the LMS (Box-Cox transformation (L), the median (M), and the generalized coefficient of variation (S)) method and then compared with WHO 2007 growth standards. Results Centile values estimated by the LMS method and the QR procedure had few differences. The centile values attained from the QR procedure for BMI-for-age, weight-for-age, and height-for-age of Pakistani children were lower than the standard WHO 2007 centiles. Conclusion QR should be considered as an alternative method for developing growth charts, for its simplicity and because it does not require data transformation. WHO 2007 standards are not suitable for Pakistani children. PMID:29632748
Iftikhar, Sundus; Khan, Nazeer; Siddiqui, Junaid S; Baig-Ansari, Naila
2018-02-02
Background Growth charts are essential tools used by pediatricians as well as public health researchers in assessing and monitoring the well-being of pediatric populations. Development of these growth charts, especially for children above five years of age, is challenging and requires current anthropometric data and advanced statistical analysis. These growth charts are generally presented as a series of smooth centile curves. A number of modeling approaches are available for generating growth charts, and applying these to national datasets is important for generating country-specific reference growth charts. Objective To demonstrate that quantile regression (QR) is a viable statistical approach for constructing growth reference charts, and to assess the applicability of the World Health Organization (WHO) 2007 growth standards to a large Pakistani population of school-going children. Methodology This is a secondary data analysis using anthropometric data of 9,515 students from a Pakistani survey conducted between 2007 and 2014 in four cities of Pakistan. Growth reference charts were created using QR as well as the LMS (Box-Cox transformation (L), the median (M), and the generalized coefficient of variation (S)) method and then compared with WHO 2007 growth standards. Results Centile values estimated by the LMS method and the QR procedure had few differences. The centile values attained from the QR procedure for BMI-for-age, weight-for-age, and height-for-age of Pakistani children were lower than the standard WHO 2007 centiles. Conclusion QR should be considered as an alternative method for developing growth charts, for its simplicity and because it does not require data transformation. WHO 2007 standards are not suitable for Pakistani children.
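Centile curves of the kind described can be fitted with off-the-shelf quantile regression; the sketch below uses statsmodels with a cubic-in-age design and synthetic data, so the model form, quantile levels, and numbers are illustrative rather than those of the study (which used QR alongside the LMS method).

    import numpy as np
    import statsmodels.api as sm

    # Synthetic height-for-age data standing in for the survey sample.
    rng = np.random.default_rng(42)
    age = rng.uniform(5, 16, 2000)
    height = 80 + 6.0 * age + rng.normal(0, 4 + 0.4 * age)

    # Cubic polynomial in age as an illustrative smooth design matrix.
    X = sm.add_constant(np.column_stack([age, age ** 2, age ** 3]))

    # One quantile-regression fit per centile gives the reference curves.
    centiles = {}
    for q in (0.03, 0.10, 0.50, 0.90, 0.97):
        centiles[q] = sm.QuantReg(height, X).fit(q=q).params

    # Evaluate the fitted centile curves on an age grid.
    ages = np.linspace(5, 16, 5)
    grid = sm.add_constant(np.column_stack([ages, ages ** 2, ages ** 3]))
    for q, beta in centiles.items():
        print(q, np.round(grid @ beta, 1))   # predicted centile values by age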
Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes
2016-01-01
Background The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients’ true scores followed by a standard normal distribution. The CAT was compared to two other scenarios of answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access. PMID:26935793
Gruppen, Tonia; Smith, Molly; Ganss, Andrea
2012-01-01
In the National Athletic Trainers' Association position statement, "Acute Management of the Cervical Spine-Injured Athlete," the technique recommended for face-mask (FM) removal is one that "creates the least head and neck motion, is performed most quickly, is the least difficult, and carries the least chance of failure." Industrial and technological advances in football helmet design and FM attachment systems might influence the efficacy of emergency FM removal. To examine the removal times and success rates of the Quick Release (QR) Face Guard Attachment System (Riddell Sports, Inc, Elyria, OH) throughout and at the conclusion of 1 season of play by a National Collegiate Athletic Association Division III football team competing in the Midwest. Controlled laboratory study. College laboratory. A total of 69 randomly selected Revolution IQ (Riddell Sports, Inc) football helmets fitted with the QR system were used. Each helmet was secured to a spine board, and investigators attempted to remove both of the QR side clips from the helmet with the Riddell insertion tool. Dependent variables included total time for removal of both QR side clips from the FM and success rate for removal of both side clips. The overall success rate for removal of both clips was 94.8% (164/173), whereas the mean times for removal of both clips ranged from 9.92 ± 12.06 seconds to 16.65 ± 20.97 seconds over 4 trial sessions. We found no differences among mean times for trial sessions throughout the season of play among the same helmets or among different helmets (P > .05). Removal time and success rate of the Riddell QR were satisfactory during and after 1 season of play despite use in various temperatures and precipitation.
Roe, Erin D.; Chamarthi, Bindu; Raskin, Philip
2015-01-01
Background. The concurrent use of a postprandial insulin-sensitizing agent, such as bromocriptine-QR, a quick-release formulation of the dopamine D2 receptor agonist bromocriptine, may offer a strategy to improve glycemic control and limit or reduce insulin requirements in type 2 diabetes (T2DM) patients on high-dose insulin. This open-label pilot study evaluated the potential utility of bromocriptine-QR. Methods. Ten T2DM subjects on metformin (1–2 g/day) and high-dose (total daily insulin dose (TDID) ≥ 65 U/day) basal-bolus insulin were enrolled to receive once-daily (morning) bromocriptine-QR (1.6–4.8 mg/day) for 24 weeks. Subjects with at least one postbaseline HbA1c measurement (N = 8) were analyzed for change from baseline in HbA1c, TDID, and postprandial glucose area under the curve of a four-hour mixed meal tolerance test (MMTT). Results. Compared to baseline, average HbA1c decreased by 1.76% (9.74 ± 0.56 to 7.98 ± 0.36, P = 0.01), average TDID decreased by 27% (199 ± 33 to 147 ± 31, P = 0.009), and MMTT AUC60–240 decreased by 32% (P = 0.04) over the treatment period. The decline in HbA1c and TDID was observed at 8 weeks and sustained over the remaining 16-week study duration. Conclusion. In this study, bromocriptine-QR therapy improved glycemic control and meal tolerance while reducing insulin requirements in T2DM subjects poorly controlled on high-dose insulin therapy. PMID:26060825
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Yong Pil; Han, Eun Hee; Choi, Jae Ho
2008-05-01
1-Furan-2-yl-3-pyridin-2-yl-propenone (FPP-3) is a recently synthesized anti-inflammatory agent with a propenone moiety. In this study, we examined the chemopreventive effect of FPP-3 on 7,12-dimethylbenz[a]anthracene (DMBA)-induced genotoxicity in MCF-7 cells. FPP-3 reduced the formation of the DMBA-DNA adduct. DMBA-induced CYP1A1 and CYP1B1 gene expression and enzyme activity were inhibited by FPP-3. It inhibited DMBA-induced aryl hydrocarbon receptor (AhR) transactivation and DMBA-inducible nuclear localization of the AhR. Induction of detoxifying phase II genes by chemopreventive agents represents a coordinated protective response against oxidative stress and the neoplastic effects of carcinogens. The transcription factor NF-E2-related factor 2 (Nrf2) regulates the antioxidant response element (ARE) of phase II detoxifying and antioxidant enzymes, such as glutathione S-transferase (GST) and NAD(P)H:quinone oxidoreductase (QR). FPP-3 increased the expression and enzymatic activity of GST and QR. Moreover, FPP-3 increased the transcriptional activity of GST and QR. GST and QR induction and Nrf2 translocation by FPP-3 were blocked by the PKC inhibitor Gö6983 and the p38 inhibitor SB203580. These results reflect a partial role of PKCδ and p38 signaling in FPP-3-mediated GSTA and QR induction through nuclear translocation of Nrf2. Classically, chemopreventive agents either inhibit CYP metabolizing enzymes or induce phase II detoxifying enzymes. These results suggest that FPP-3 has a potent protective effect against DMBA-induced genotoxicity through modulating phase I and II enzymes and that it has potential as a chemopreventive agent.
Parallel Algorithms for Image Analysis.
1982-06-01
Technical report TR-1180 by Azriel Rosenfeld, supported under AFOSR grant 77-3271. Keywords: image processing; image analysis; parallel processing; cellular computers.
Stable and efficient retrospective 4D-MRI using non-uniformly distributed quasi-random numbers
NASA Astrophysics Data System (ADS)
Breuer, Kathrin; Meyer, Cord B.; Breuer, Felix A.; Richter, Anne; Exner, Florian; Weng, Andreas M.; Ströhle, Serge; Polat, Bülent; Jakob, Peter M.; Sauer, Otto A.; Flentje, Michael; Weick, Stefan
2018-04-01
The purpose of this work is the development of a robust and reliable three-dimensional (3D) Cartesian imaging technique for fast and flexible retrospective 4D abdominal MRI during free breathing. To this end, a non-uniform quasi-random (NU-QR) reordering of the phase encoding (ky–kz) lines was incorporated into the 3D Cartesian acquisition. The proposed sampling scheme allocates more phase encoding points near the k-space origin while reducing the sampling density in the outer part of k-space. Respiratory self-gating in combination with SPIRiT reconstruction is used for the reconstruction of abdominal data sets in different respiratory phases (4D-MRI). Six volunteers and three patients were examined at 1.5 T during free breathing. Additionally, data sets with conventional two-dimensional (2D) linear and 2D quasi-random phase encoding order were acquired for the volunteers for comparison. A quantitative evaluation of image quality versus scan time (from 70 s to 626 s) for the given sampling schemes was obtained by calculating the normalized mutual information (NMI) for all volunteers. Motion estimation was accomplished by calculating the maximum derivative of a signal intensity profile across a transition (e.g. tumor or diaphragm). The 2D non-uniform quasi-random distribution of phase encoding lines in Cartesian 3D MRI yields more efficient undersampling patterns for parallel imaging compared to conventional uniform quasi-random and linear sampling. Median NMI values of NU-QR sampling are the highest for all scan times; therefore, within the same scan time, 4D imaging could be performed with improved image quality. The proposed method allows for the reconstruction of motion-artifact-reduced 4D data sets with an isotropic spatial resolution of 2.1 × 2.1 × 2.1 mm³ in a short scan time, e.g. 10 respiratory phases in only 3 min. Cranio-caudal tumor displacements between 23 and 46 mm could be observed. NU-QR sampling enables stable 4D-MRI with high temporal and spatial resolution within a short scan time for visualization of organ or tumor motion during free breathing. Further studies, e.g. the application of the method for radiotherapy planning, are needed to investigate the clinical applicability and diagnostic value of the approach.
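The sketch below illustrates one way such a non-uniformly distributed quasi-random (ky, kz) ordering could be generated: a 2D low-discrepancy sequence warped by a power law so that the sampling density is higher near the k-space center. The particular sequence and density function are assumptions for illustration, not the authors' exact sampling scheme.

```python
# Sketch: a variable-density quasi-random ordering of (ky, kz) phase-encoding
# lines -- more samples near the k-space center, fewer in the periphery.
# The additive-recurrence (R2) sequence and the power-law density warp are
# illustrative choices, not the exact scheme of the paper.
import numpy as np

def nu_qr_pattern(n_lines, ny=128, nz=64, density_power=2.0):
    # 2D low-discrepancy sequence on [0, 1)^2 (additive recurrence / R2 sequence).
    g = 1.32471795724474602596          # plastic number, basis of the R2 sequence
    a1, a2 = 1.0 / g, 1.0 / g**2
    idx = np.arange(1, n_lines + 1)
    u = (0.5 + a1 * idx) % 1.0
    v = (0.5 + a2 * idx) % 1.0

    # Warp uniform samples toward the center: map [0, 1) to a signed radius with
    # a power law, so the sampling density decays toward the k-space edge.
    def warp(x):
        s = 2.0 * x - 1.0               # to [-1, 1)
        return np.sign(s) * np.abs(s) ** density_power

    ky = np.clip(((warp(u) + 1) / 2 * ny).astype(int), 0, ny - 1)
    kz = np.clip(((warp(v) + 1) / 2 * nz).astype(int), 0, nz - 1)
    return np.stack([ky, kz], axis=1)   # acquisition order of phase-encoding lines

lines = nu_qr_pattern(n_lines=5000)
print(lines[:5], lines.shape)
```

Because the ordering is quasi-random rather than linear, any contiguous chunk of acquired lines (e.g. the lines falling into one respiratory phase after self-gating) already forms a reasonably incoherent, center-weighted undersampling pattern suitable for parallel-imaging reconstruction.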
Efficient parallel resolution of the simplified transport equations in mixed-dual formulation
NASA Astrophysics Data System (ADS)
Barrault, M.; Lathuilière, B.; Ramet, P.; Roman, J.
2011-03-01
A reactivity computation consists of computing the highest eigenvalue of a generalized eigenvalue problem, for which an inverse power algorithm is commonly used. Very fine models are difficult to handle with our sequential solver, based on the simplified transport equations, in terms of both memory consumption and computational time. A first implementation of a Lagrangian-based domain decomposition method leads to poor parallel efficiency because of an increase in the number of power iterations [1]. In order to obtain high parallel efficiency, we improve the parallelization scheme by changing the location of the loop over the subdomains in the overall algorithm and by exploiting the characteristics of the Raviart-Thomas finite element. The new parallel algorithm still allows us to locally adapt the numerical scheme (mesh, finite element order); moreover, it can be significantly optimized for the matching-grid case. The good behavior of the new parallelization scheme is demonstrated for the matching-grid case on several hundred nodes for computations based on a pin-by-pin discretization.
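For reference, the outer inverse power iteration the abstract refers to, i.e. finding the largest eigenvalue k of a generalized problem A x = (1/k) F x, can be sketched as follows. Dense random NumPy matrices stand in here for the domain-decomposed simplified-transport operators, so this is a minimal illustration of the iteration structure rather than the authors' solver.

```python
# Sketch: the outer "inverse power" iteration of a reactivity computation,
# i.e. the largest eigenvalue k of the generalized problem  A x = (1/k) F x.
# Dense NumPy matrices stand in for the domain-decomposed transport operators.
import numpy as np

def inverse_power(A, F, tol=1e-10, max_iter=500):
    n = A.shape[0]
    x = np.ones(n)
    k = 1.0
    for _ in range(max_iter):
        # One power iteration on A^{-1} F; in the real solver this linear solve
        # is the expensive, parallelized multigroup transport sweep.
        y = np.linalg.solve(A, F @ x)
        k_new = np.linalg.norm(y) / np.linalg.norm(x)
        x = y / np.linalg.norm(y)
        if abs(k_new - k) < tol * abs(k_new):
            return k_new, x
        k = k_new
    return k, x

rng = np.random.default_rng(2)
M = rng.random((50, 50))
A = M @ M.T + 50 * np.eye(50)       # symmetric positive definite "loss" operator
F = rng.random((50, 50))            # nonnegative "production" operator
k_eff, mode = inverse_power(A, F)
print(f"dominant eigenvalue k = {k_eff:.6f}")
```

The parallel-efficiency question in the abstract concerns where the loop over subdomains sits relative to this outer loop: moving it inside the linear solve avoids inflating the number of outer power iterations.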
Scalable Domain Decomposed Monte Carlo Particle Transport
NASA Astrophysics Data System (ADS)
O'Brien, Matthew Joseph
In this dissertation, we present the parallel algorithms necessary to run domain-decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation itself. The main algorithms we consider are: • Domain decomposition of constructive solid geometry: enables extremely large calculations in which the background geometry is too large to fit in the memory of a single computational node. • Load balancing: keeps the workload per processor as even as possible so the calculation runs efficiently. • Global particle find: if particles end up on the wrong processor, globally route them to the correct processor based on their coordinates and the background domain decomposition. • Supporting algorithms: visualizing constructive solid geometry, sourcing particles, determining when particle streaming communication has completed, and spatial redecomposition. These are among the most important parallel algorithms required for domain-decomposed Monte Carlo particle transport. We demonstrate that our previous algorithms were not scalable, prove that our new algorithms are scalable, and run some of the algorithms on up to 2 million MPI processes on the Sequoia supercomputer.
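The global-particle-find step can be illustrated with a minimal sketch that maps particle coordinates to the owning domain of a uniform brick decomposition of the problem box. The uniform decomposition and serial setting are simplifying assumptions; the actual code handles general constructive solid geometry and performs the exchange over MPI.

```python
# Sketch of the "global particle find" idea: given a particle's coordinates,
# determine which spatial domain (and hence which rank) owns it, assuming a
# uniform brick decomposition of the problem box. The real code supports
# general constructive solid geometry and runs over MPI; this is serial.
import numpy as np

def owning_domain(coords, box_min, box_max, domains_per_axis):
    """Map particle coordinates to the index of the owning spatial domain."""
    coords = np.atleast_2d(coords)
    frac = (coords - box_min) / (box_max - box_min)          # position in [0, 1)^3
    ijk = np.clip((frac * domains_per_axis).astype(int),
                  0, np.asarray(domains_per_axis) - 1)       # per-axis domain index
    nx, ny, nz = domains_per_axis
    return ijk[:, 0] + nx * (ijk[:, 1] + ny * ijk[:, 2])     # flatten to a rank id

box_min = np.array([0.0, 0.0, 0.0])
box_max = np.array([10.0, 10.0, 10.0])
particles = np.random.default_rng(3).uniform(0.0, 10.0, size=(5, 3))
ranks = owning_domain(particles, box_min, box_max, (4, 4, 2))
print(ranks)   # each misplaced particle would then be sent to its owning rank
```

Because each processor can evaluate this mapping locally, misplaced particles can be routed to their owners without any global lookup table, which is what makes the step scalable to millions of ranks.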
Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua
2011-01-01
A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity, and attitude updating operations are carried out in a single-speed structure: all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Unlike existing algorithms, the updating rates of the coning and sculling compensations are unrelated to the number of gyro incremental-angle samples and the number of accelerometer incremental-velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental-angle and accelerometer incremental-velocity samples, in order to improve the accuracy of the system. Then, in order to implement the new strapdown algorithm on a single FPGA chip, the parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the XC6VLX550T FPGA hardware platform on the basis of fighter aircraft data. It is shown that this parallel strapdown algorithm on the FPGA platform can greatly decrease the execution time of the algorithm to meet the real-time and high-precision requirements of the system in a highly dynamic environment, relative to the existing implementation on a DSP platform. PMID:22164058
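As background on coning compensation, the sketch below shows the classical two-sample correction built from gyro incremental-angle samples; the 2/3 cross-product coefficient is the standard two-sample (Miller-type) result. The plain serial NumPy setting, rather than the paper's FPGA-parallelized single-speed structure, is an illustrative simplification.

```python
# Sketch: a classical two-sample coning correction from gyro incremental angles.
# The attitude increment over one update interval is approximated by the sum of
# the two incremental-angle samples plus a cross-product coning term with the
# standard 2/3 coefficient. Illustrative code, not the paper's FPGA algorithm.
import numpy as np

def attitude_increment_two_sample(dtheta1, dtheta2):
    """Rotation vector over one interval from two gyro incremental-angle samples."""
    coning = (2.0 / 3.0) * np.cross(dtheta1, dtheta2)   # coning compensation term
    return dtheta1 + dtheta2 + coning

# Example: a small coning-like motion with rotation about two different axes.
dtheta1 = np.array([1.0e-3, 0.0, 0.2e-3])   # rad, first half of the interval
dtheta2 = np.array([1.0e-3, 0.3e-3, 0.0])   # rad, second half of the interval
phi = attitude_increment_two_sample(dtheta1, dtheta2)
print(phi)
```

Without the cross-product term, the non-commutativity of the two sub-rotations would be ignored and the attitude error would grow rapidly under high-frequency angular oscillation, which is exactly the rectification effect the abstract's high-rate single-speed structure is designed to capture.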