Sample records for affine projection algorithm

  1. An improved affine projection algorithm for active noise cancellation

    NASA Astrophysics Data System (ADS)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm (APA) is a signal-reuse algorithm that converges faster than traditional adaptive filtering algorithms. Two factors govern its performance: the step-size factor and the projection order. In this paper, we propose a new variable step-size affine projection algorithm (VSS-APA) that dynamically adjusts the step size according to certain rules, achieving a smaller steady-state error and faster convergence. Simulation results show that its performance is superior to that of the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm achieves very good results.
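
The standard APA recursion that these records build on can be sketched in NumPy as follows. This is a minimal illustration of the textbook update (not the VSS variant of this paper); the function name and parameter values are our own:

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    """One affine projection step.

    w  : (L,)   current filter coefficients
    X  : (L, M) columns are the M most recent length-L input regressors
    d  : (M,)   corresponding desired samples
    mu : step size; delta : regularization of the Gram matrix
    """
    e = d - X.T @ w                              # a priori error vector
    G = X.T @ X + delta * np.eye(X.shape[1])     # regularized Gram matrix
    return w + mu * X @ np.linalg.solve(G, e)    # project error back onto taps
```

For M = 1 this reduces to NLMS; larger projection orders M reuse more past regressors and speed up convergence for colored inputs at higher cost.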

  2. A low complexity reweighted proportionate affine projection algorithm with memory and row action projection

    NASA Astrophysics Data System (ADS)

    Liu, Jianming; Grant, Steven L.; Benesty, Jacob

    2015-12-01

    A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures and demonstrates performance similar to mu-law and l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and performs similarly to l0 PAPA and mu-law PAPA in terms of convergence speed and tracking ability. At the same time, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, which makes it very appealing for real-time implementation.

  3. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved method, as well as a complexity-reduction algorithm, for the APA with the data-selective method. Experimental results show that the proposed algorithm achieves lower misalignment and a lower condition number of the input data matrix than both the conventional APA and the APA with the previous data-selective method.

  4. Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

    PubMed Central

    2014-01-01

    We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm incorporates a smooth approximation of the l0 norm (SL0) as a penalty on the coefficients in the standard APA cost function. This gives rise to a zero attractor that promotes the sparsity of the channel taps, thereby accelerating convergence and reducing the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware variants in terms of both convergence speed and steady-state behavior on a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than previously proposed sparsity-aware algorithms as the number of nonzero taps in the sparse channel increases. PMID:24790588
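
The zero-attractor idea can be illustrated with one common smooth-l0 surrogate, sum(1 - exp(-w^2 / (2 sigma^2))): its gradient pulls small taps strongly toward zero while leaving large (active) taps essentially untouched. This is our own sketch under that standard surrogate, not the authors' exact algorithm; all names and parameter values are assumptions:

```python
import numpy as np

def sl0_attractor(w, sigma=0.05):
    # Gradient of sum(1 - exp(-w^2 / (2 sigma^2))): large pull for |w| << sigma,
    # vanishing pull for |w| >> sigma, so active taps are not biased.
    return (w / sigma**2) * np.exp(-w**2 / (2 * sigma**2))

def sl0_apa_update(w, X, d, mu=0.5, delta=1e-6, rho=1e-4, sigma=0.05):
    # Standard APA step plus a small zero-attraction term on the coefficients.
    e = d - X.T @ w
    G = X.T @ X + delta * np.eye(X.shape[1])
    return w + mu * X @ np.linalg.solve(G, e) - rho * sl0_attractor(w, sigma)
```

The weight rho trades steady-state sparsity enforcement against bias on the active taps.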

  5. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) that uses grouping-based selection of input vectors. To improve on the conventional APA, the proposed algorithm adjusts the number of input vectors via two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. In the selection procedure, the few input vectors that carry enough information for the coefficient update are then selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. Experimental results show that the proposed algorithm achieves smaller steady-state estimation errors than existing algorithms.

  6. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    PubMed

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning-rate control scheme is also applied to prevent signal cancellation and system instability. The new scheme determines the step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verify the efficiency of the proposed algorithm.

  7. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective-partial-update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective-regressor APA (VSS-SR-APA). In the VSS-SPU algorithms, the filter coefficients are only partially updated, which reduces the computational complexity. In VSS-SR-APA, an optimal selection of input regressors is performed during adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate their good performance through several simulations in a system identification scenario.

  8. 2D Affine and Projective Shape Analysis.

    PubMed

    Bryner, Darshan; Klassen, Eric; Huiling Le; Srivastava, Anuj

    2014-05-01

    Current techniques for shape analysis tend to seek invariance to similarity transformations (rotation, translation, and scale), but certain imaging situations require invariance to larger groups, such as affine or projective groups. Here we present a general Riemannian framework for shape analysis of planar objects where metrics and related quantities are invariant to affine and projective groups. Highlighting two possibilities for representing object boundaries, ordered points (or landmarks) and parameterized curves, we study different combinations of these representations (points and curves) and transformations (affine and projective). Specifically, we provide solutions to three out of four situations and develop algorithms for computing geodesics and intrinsic sample statistics, leading up to Gaussian-type statistical models, and classifying test shapes using such models learned from training data. In the case of parameterized curves, we also achieve the desired goal of invariance to re-parameterizations. The geodesics are constructed by particularizing the path-straightening algorithm to geometries of current manifolds and are used, in turn, to compute shape statistics and Gaussian-type shape models. We demonstrate these ideas using a number of examples from shape and activity recognition.

  9. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. As soon as the filter reaches the steady state, the update is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity, and low update complexity for colored input signals.

  10. A Variable Step-Size Proportionate Affine Projection Algorithm for Identification of Sparse Impulse Response

    NASA Astrophysics Data System (ADS)

    Liu, Ligang; Fukumoto, Masahiro; Saiki, Sachio; Zhang, Shiyong

    2009-12-01

    Proportionate adaptive algorithms have recently been proposed to accelerate convergence in the identification of sparse impulse responses. When the excitation signal is colored, especially for speech, proportionate NLMS algorithms converge slowly. The proportionate affine projection algorithm (PAPA) is expected to solve this problem by using more of the information in the input signals. However, its steady-state performance is limited by the constant step-size parameter. In this article we propose a variable step-size PAPA that cancels the a posteriori estimation error. This yields fast convergence with a large step size when the identification error is large, and then considerably decreases the steady-state misalignment with a small step size after the adaptive filter has converged. Simulation results show that the proposed approach greatly improves the steady-state misalignment without sacrificing the fast convergence of PAPA.
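
The "proportionate" idea underlying PAPA can be sketched with the classic PNLMS gain rule: each tap receives a step-size gain proportional to its magnitude, so the large taps of a sparse response adapt fastest. This is our own illustration of plain PNLMS under the standard gain formula, not the variable step-size PAPA proposed here; names and parameter values are assumptions:

```python
import numpy as np

def pnlms_update(w, u, d, mu=0.5, rho=0.01, delta_p=0.01, eps=1e-6):
    """One proportionate NLMS step.

    u : (L,) input regressor, d : scalar desired sample.
    rho and delta_p floor the gains so inactive taps still adapt.
    """
    gamma = np.maximum(rho * max(delta_p, np.max(np.abs(w))), np.abs(w))
    g = gamma / np.mean(gamma)               # per-tap gains, mean 1
    e = d - u @ w                            # a priori error
    return w + mu * e * (g * u) / (u @ (g * u) + eps)
```

With g fixed to all-ones this collapses back to NLMS; the gain floor (rho, delta_p) keeps zero-valued taps from being frozen.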

  11. Object-Image Correspondence for Algebraic Curves under Projections

    NASA Astrophysics Data System (ADS)

    Burdis, Joseph M.; Kogan, Irina A.; Hong, Hoon

    2013-03-01

    We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of the number of real parameters that need to be eliminated in order to establish existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of the signature construction that has been used to solve equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

  12. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  13. Reflection symmetry detection using locally affine invariant edge correspondence.

    PubMed

    Wang, Zhaozhong; Tang, Zesheng; Zhang, Xiao

    2015-04-01

    Reflection symmetry detection has received increasing attention in recent years. State-of-the-art algorithms mainly match intensity-based features (such as SIFT) within a single image to find symmetry axes. This paper proposes a novel approach that establishes correspondences between locally affine invariant edge-based features, which are superior to intensity-based features in that they are insensitive to illumination variations and applicable to textureless objects. The locally affine invariance is achieved with simple linear algebra for efficient and robust computation, making the algorithm suitable for detection under object distortions such as perspective projection. Commonly used edge detectors and a voting process are used, respectively, before and after the edge description and matching steps to form a complete reflection detection pipeline. Experiments are performed on synthetic and real-world images with both multiple and single reflection symmetry axes. The test results are compared with existing algorithms to validate the proposed method.

  14. An improved ASIFT algorithm for indoor panorama image matching

    NASA Astrophysics Data System (ADS)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

    The generation of 3D models of indoor objects and scenes is an attractive tool for digital city, virtual reality, and SLAM applications. Panoramic images are becoming increasingly common in such applications because they capture the complete environment in a single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm for these tasks. Compared with SIFT, ASIFT generates more feature points and matches them more accurately, even for panoramic images with obvious distortions. However, the algorithm is time-consuming because of its complex operations, and it performs poorly for some indoor scenes with poor light or few textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images; secondly, the original ASIFT algorithm is simplified from affine transformations of both tilt and rotation to tilt-only affine transformations; finally, the results are re-projected into the panoramic image space. Experiments in different environments show that this method not only preserves the precision of feature point extraction and matching but also greatly reduces the computing time.

  15. Face recognition based on two-dimensional discriminant sparse preserving projection

    NASA Astrophysics Data System (ADS)

    Zhang, Dawei; Zhu, Shanan

    2018-04-01

    In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. To accurately model the manifold structure of the data, 2DDSPP constructs within-class and between-class affinity graphs by constrained least squares (LS) and an l1-norm minimization problem, respectively. Operating directly on image matrices, 2DDSPP integrates graph embedding (GE) with the Fisher criterion. The obtained projection subspace preserves the within-class neighborhood geometry of samples while keeping samples from different classes apart. Experimental results on the PIE and AR face databases show that 2DDSPP achieves better recognition performance.

  16. Performance study of LMS based adaptive algorithms for unknown system identification

    NASA Astrophysics Data System (ADS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-01

    Adaptive filtering techniques have gained much popularity in modeling the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS: the normalized LMS (NLMS), LMS-Newton, transform-domain LMS (TDLMS), and the affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved LMS variants on their robustness and misalignment.
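
The two simplest members of the family compared above can be sketched side by side. The NLMS update is the LMS update with the step normalized by the instantaneous input power, which removes the dependence of the stable step-size range on the input scale. A minimal sketch; function names and parameter values are our own:

```python
import numpy as np

def lms_update(w, u, d, mu=0.01):
    e = d - u @ w
    return w + mu * e * u                      # raw gradient-descent step

def nlms_update(w, u, d, mu=0.5, eps=1e-6):
    e = d - u @ w
    return w + mu * e * u / (u @ u + eps)      # step normalized by input power
```

On the same noiseless system-identification run, NLMS typically reaches a far smaller misalignment than LMS in the same number of samples, which is the kind of comparison this study reports.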

  17. Performance study of LMS based adaptive algorithms for unknown system identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Javed, Shazia; Ahmad, Noor Atinah

    Adaptive filtering techniques have gained much popularity in modeling the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS: the normalized LMS (NLMS), LMS-Newton, transform-domain LMS (TDLMS), and the affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved LMS variants on their robustness and misalignment.

  18. An O(√n L) primal-dual affine scaling algorithm for linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Siming

    1994-12-31

    We present a new primal-dual affine scaling algorithm for linear programming. The search direction of the algorithm is a combination of the classical affine scaling direction of Dikin and a recent affine scaling direction of Jansen, Roos, and Terlaky. The algorithm has an iteration complexity of O(√n L), compared to the O(nL) complexity of the algorithm of Jansen, Roos, and Terlaky.
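
The classical Dikin direction referenced here is simple to state: rescale the problem by the current iterate, take the projected steepest-descent direction in the scaled space, and step a fixed fraction of the way to the boundary. The sketch below is our own illustration of the basic primal affine scaling method (min c·x subject to Ax = b, x > 0, started from a strictly feasible interior point), not the primal-dual algorithm of this record:

```python
import numpy as np

def affine_scaling_lp(A, b, c, x, gamma=0.9, iters=50):
    # Primal affine scaling (Dikin). x must satisfy A x = b with x > 0.
    for _ in range(iters):
        X2 = np.diag(x**2)                                # scaling by iterate
        y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)     # dual estimate
        r = c - A.T @ y                                   # reduced costs
        dx = -X2 @ r                   # steepest descent in the scaled space
        neg = dx < 0
        if not neg.any():
            break                      # no blocking component: stop the sketch
        alpha = gamma * np.min(x[neg] / -dx[neg])         # stay interior
        x = x + alpha * dx
    return x
```

Since A @ dx = 0 by construction, feasibility Ax = b is preserved at every step while the objective decreases.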

  19. Fast Estimation of Defect Profiles from the Magnetic Flux Leakage Signal Based on a Multi-Power Affine Projection Algorithm

    PubMed Central

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-01-01

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of long-range railway track or oil pipelines, a fast and effective defect profile estimation method based on a multi-power affine projection algorithm (MAPA) is proposed, in which the depth of a sampling point is related not only to the MFL signals before it but also to those after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter, and estimating the defect profile with that filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method is fast while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demands of accurate online inspection. PMID:25192314

  20. Fast estimation of defect profiles from the magnetic flux leakage signal based on a multi-power affine projection algorithm.

    PubMed

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-09-04

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of long-range railway track or oil pipelines, a fast and effective defect profile estimation method based on a multi-power affine projection algorithm (MAPA) is proposed, in which the depth of a sampling point is related not only to the MFL signals before it but also to those after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter, and estimating the defect profile with that filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method is fast while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demands of accurate online inspection.

  1. Formulating face verification with semidefinite programming.

    PubMed

    Yan, Shuicheng; Liu, Jianzhuang; Tang, Xiaoou; Huang, Thomas S

    2007-11-01

    This paper presents a unified solution to three unsolved problems existing in face verification with subspace learning techniques: selection of verification threshold, automatic determination of subspace dimension, and deducing feature fusing weights. In contrast to previous algorithms which search for the projection matrix directly, our new algorithm investigates a similarity metric matrix (SMM). With a certain verification threshold, this matrix is learned by a semidefinite programming approach, along with the constraints of the kindred pairs with similarity larger than the threshold, and inhomogeneous pairs with similarity smaller than the threshold. Then, the subspace dimension and the feature fusing weights are simultaneously inferred from the singular value decomposition of the derived SMM. In addition, the weighted and tensor extensions are proposed to further improve the algorithmic effectiveness and efficiency, respectively. Essentially, the verification is conducted within an affine subspace in this new algorithm and is, hence, called the affine subspace for verification (ASV). Extensive experiments show that the ASV can achieve encouraging face verification accuracy in comparison to other subspace algorithms, even without the need to explore any parameters.

  2. A Centered Projective Algorithm for Linear Programming

    DTIC Science & Technology

    1988-02-01

    Karmarkar's algorithm iterates this procedure. An alternative method, the so-called affine variant, was first proposed by Dikin in 1967 (I.I. Dikin, "Iterative solution of problems of linear and quadratic programming," Soviet Mathematics Doklady 8 (1967), 674-675). [Remainder of this scanned abstract is garbled in the source.]

  3. Interior point techniques for LP and NLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evtushenko, Y.

    By using a surjective mapping, the initial constrained optimization problem is transformed into a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After the inverse transformation to the initial space, we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm, and a generalized primal-dual interior-point linear programming algorithm.

  4. Practical image registration concerns overcome by the weighted and filtered mutual information metric

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Saber, Eli; Rhody, Harvey; Savakis, Andreas; Raj, Jeffrey

    2012-04-01

    Contemporary research in automated panorama creation utilizes camera calibration or extensive knowledge of camera locations and relations to each other to achieve successful results. Research in image registration attempts to restrict these same camera parameters or apply complex point-matching schemes to overcome the complications found in real-world scenarios. This paper presents a novel automated panorama creation algorithm by developing an affine transformation search based on maximized mutual information (MMI) for region-based registration. Standard MMI techniques have been limited to applications with airborne/satellite imagery or medical images. We show that a novel MMI algorithm can approximate an accurate registration between views of realistic scenes of varying depth distortion. The proposed algorithm has been developed using stationary, color, surveillance video data for a scenario with no a priori camera-to-camera parameters. This algorithm is robust for strict- and nearly-affine-related scenes, while providing a useful approximation for the overlap regions in scenes related by a projective homography or a more complex transformation, allowing for a set of efficient and accurate initial conditions for pixel-based registration.
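
The maximized mutual information (MMI) criterion used here can be estimated directly from the joint intensity histogram of two images: well-registered images have a concentrated joint histogram and hence high mutual information. A minimal sketch of that estimator (our own; the function name, bin count, and use of natural-log units are assumptions, and the transformation search of the paper is not shown):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Estimate MI (in nats) between two equally sized grayscale images
    # from their joint intensity histogram.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A registration search would evaluate this score over candidate affine transformations of one image and keep the maximizer.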

  5. Correction of projective distortion in long-image-sequence mosaics without prior information

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. 
The proposed method is shown to be effective and suitable for real-time implementation.

  6. Image authentication using distributed source coding.

    PubMed

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  7. Robust non-rigid registration algorithm based on local affine registration

    NASA Astrophysics Data System (ADS)

    Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng

    2018-04-01

    To address the low precision and slow convergence of traditional point-set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative method to perform point-set non-rigid registration from coarse to fine. In each iteration, the sub data point sets and sub model point sets are divided and the shape control points of each sub point set are updated. A control-point-guided affine ICP algorithm then solves the local affine transformation between corresponding sub point sets, and this transformation is used to update the sub data point sets and their shape control points. When the algorithm reaches the maximum iteration level K, the loop ends and the updated sub data point sets are output. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point-set non-rigid registration algorithms.

  8. Face verification with balanced thresholds.

    PubMed

    Yan, Shuicheng; Xu, Dong; Tang, Xiaoou

    2007-01-01

    The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved as follows. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction; hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is iteratively optimized to improve threshold balance and classification capability. Unlike most face verification algorithms, which are transplanted directly from the face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between the two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms state-of-the-art subspace techniques for face verification.

  9. Parallel Clustering Algorithm for Large-Scale Biological Data Sets

    PubMed Central

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    Background: The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods: Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. A memory-shared architecture is used to construct the similarity matrix, and a distributed system is taken for the affinity propagation algorithm because of its large memory size and great computing capacity. An appropriate scheme of data partition and reduction is designed in our method to minimize the global communication cost among processes. Results: A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene (microarray) data and detecting families in large protein superfamilies. PMID:24705246
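The message-passing updates that the paper parallelizes are compact enough to sketch serially. Below is a toy single-process version of the responsibility/availability iterations of affinity propagation; the damping value, iteration count, and `preference` diagonal are illustrative choices, not the paper's settings:

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Toy serial affinity propagation. S is an n x n similarity matrix whose
    diagonal holds each point's 'preference' for becoming an exemplar."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(iters):
        # Responsibilities: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        top = AS.argmax(axis=1)
        first = AS[np.arange(n), top]
        AS[np.arange(n), top] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * Rnew
        # Availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    return (A + R).argmax(axis=1)  # each point's chosen exemplar index
```

The column sums in the availability update are exactly the reductions the paper distributes across processes.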

  10. Cost Function Network-based Design of Protein-Protein Interactions: predicting changes in binding affinity.

    PubMed

    Viricel, Clément; de Givry, Simon; Schiex, Thomas; Barbe, Sophie

    2018-02-20

    Accurate and economic methods to predict change in protein binding free energy upon mutation are imperative to accelerate the design of proteins for a wide range of applications. Free energy is defined by enthalpic and entropic contributions. Following recent progress in Artificial Intelligence-based algorithms for guaranteed NP-hard energy optimization and partition function computation, it becomes possible to quickly compute minimum energy conformations and to reliably estimate the entropic contribution of side-chains to the change in free energy of large protein interfaces. Using guaranteed Cost Function Network algorithms, Rosetta energy functions and Dunbrack's rotamer library, we developed and assessed EasyE and JayZ, two methods for binding affinity estimation that respectively ignore or include conformational entropic contributions, on a large benchmark of experimental binding affinity measures. While both approaches outperform most established tools, we observe that side-chain conformational entropy brings little or no improvement on most systems but becomes crucial in some rare cases. Available as open-source Python/C++ code at sourcesup.renater.fr/projects/easy-jayz. Contact: thomas.schiex@inra.fr and sophie.barbe@insa-toulouse.fr. Supplementary data are available at Bioinformatics online.

  11. Hardware architecture for projective model calculation and false match refining using random sample consensus algorithm

    NASA Astrophysics Data System (ADS)

    Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid

    2016-11-01

    The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and refining false matches using random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
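The RANSAC loop that the proposed architecture implements in hardware can be sketched in software as follows. For brevity this sketch fits a plain 2-D affine model to minimal samples rather than the paper's four-submodel projective decomposition, and all names and parameter values are illustrative:

```python
import numpy as np

def ransac_affine(src, dst, iters=500, thresh=0.05, rng=None):
    """RANSAC sketch: repeatedly fit a 2-D affine model to 3 random
    correspondences and keep the model with the largest inlier set."""
    rng = np.random.default_rng(rng)
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])  # homogeneous [x y 1] rows
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        pick = rng.choice(n, size=3, replace=False)
        try:
            P = np.linalg.solve(X[pick], dst[pick])  # minimal-sample model, 3 x 2
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        resid = np.linalg.norm(X @ P - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine: least-squares refit on all inliers of the best model.
    P, *_ = np.linalg.lstsq(X[best_inliers], dst[best_inliers], rcond=None)
    return P, best_inliers
```

Each iteration only needs a small fixed-size solve and a residual test per correspondence, which is what makes the loop amenable to a fixed-point hardware pipeline.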

  12. Superlinear variant of the dual affine scaling algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luz, C.; Cardosa, D.

    1994-12-31

    The affine scaling methods introduced by Dikin are generally considered the most computationally efficient interior point algorithms. However, it remains an open question whether there is a polynomial affine scaling algorithm. This fact has motivated many investigation efforts and led to several convergence results, such as those recently obtained by Tsuchiya, Tseng and Luo, and Tsuchiya and Muramatsu, which, unlike Dikin's pioneering convergence result, do not require any nondegeneracy assumption. This paper presents a new variant of the dual affine scaling algorithm for linear programming that determines a primal-dual pair of optimal solutions in a finite number of iterations. Superlinear convergence of the variant is also shown without requiring any nondegeneracy assumption.

  13. Reinforce: An Ensemble Approach for Inferring PPI Network from AP-MS Data.

    PubMed

    Tian, Bo; Duan, Qiong; Zhao, Can; Teng, Ben; He, Zengyou

    2017-05-17

    Affinity Purification-Mass Spectrometry (AP-MS) is one of the most important technologies for constructing protein-protein interaction (PPI) networks. In this paper, we propose an ensemble method, Reinforce, for inferring PPI network from AP-MS data set. The new algorithm named Reinforce is based on rank aggregation and false discovery rate control. Under the null hypothesis that the interaction scores from different scoring methods are randomly generated, Reinforce follows three steps to integrate multiple ranking results from different algorithms or different data sets. The experimental results show that Reinforce can get more stable and accurate inference results than existing algorithms. The source codes of Reinforce and data sets used in the experiments are available at: https://sourceforge.net/projects/reinforce/.

  14. Fast-match on particle swarm optimization with variant system mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Yuehuang; Fang, Xin; Chen, Jie

    2018-03-01

    Fast-Match is a fast and effective algorithm for approximate template matching under 2D affine transformations; it can match the target with maximum similarity without knowing the target's pose. It relies on the minimum sum of absolute differences (SAD) error to obtain the best affine transformation, and is widely used for image matching because of its speed and robustness. In this paper, our approach is to search for an approximate affine transformation with the particle swarm optimization (PSO) algorithm. We treat each potential transformation as a particle that possesses a memory function. Each particle is given a random velocity and flows through the 2D affine transformation space. To accelerate the algorithm and improve its ability to find the global optimum, we introduce a variant system mechanism on this basis. The benefit is that we avoid matching against a huge number of potential transformations and falling into local optima, so a few transformations suffice to approximate the optimal solution. Experimental results show that our method is faster and more accurate while searching a smaller affine transformation space.

  15. Low complexity adaptive equalizers for underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Soflaei, Masoumeh; Azmi, Paeiz

    2014-08-01

    Interference due to scattering from the surface and reflection from the bottom is one of the most important obstacles to reliable communication in shallow-water channels. One of the best ways to address this problem is to use adaptive equalizers, whose performance depends strongly on the convergence rate and misadjustment error of the underlying adaptive algorithms. In this paper, the affine projection algorithm (APA), selective regressor APA (SR-APA), the family of selective partial update (SPU) algorithms, the family of set-membership (SM) algorithms, and the selective partial update selective regressor APA (SPU-SR-APA) are compared with conventional algorithms such as least mean squares (LMS) in underwater acoustic communications. We apply experimental data from the Strait of Hormuz to demonstrate the efficiency of the proposed methods over a shallow-water channel. We observe that the steady-state mean square error (MSE) of the SR-APA, SPU-APA, SPU-normalized least mean square (SPU-NLMS), SPU-SR-APA, SM-APA and SM-NLMS algorithms decreases in comparison with the LMS algorithm. These algorithms also have better convergence rates than LMS-type algorithms.
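The affine projection algorithm compared above reuses the last few input vectors at every update, which is what buys its faster convergence over LMS. A minimal numpy sketch for system identification (a generic APA update, not the equalizer structure of the paper; step size, projection order and regularization are illustrative):

```python
import numpy as np

def apa_identify(x, d, taps=4, order=4, mu=0.5, delta=1e-4):
    """Affine projection algorithm sketch: identify a FIR system from
    input x and desired output d, reusing the last `order` input vectors."""
    w = np.zeros(taps)
    X = np.zeros((order, taps))  # stacked recent input vectors (regressors)
    D = np.zeros(order)          # matching desired samples
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]      # newest-first input vector
        X = np.vstack([u, X[:-1]])
        D = np.concatenate(([d[n]], D[:-1]))
        e = D - X @ w
        # Regularized projection step: w += mu * X^T (X X^T + delta I)^-1 e
        w = w + mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(order), e)
    return w
```

With `order=1` this reduces to NLMS; larger projection orders trade computation for convergence speed, the trade-off the SPU/SR variants in the paper aim to soften.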

  16. The Maximum Likelihood Estimation of Signature Transformation /MLEST/ algorithm. [for affine transformation of crop inventory data

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.

  17. An algebraic structure of discrete-time biaffine systems

    NASA Technical Reports Server (NTRS)

    Tarn, T.-J.; Nonoyama, S.

    1979-01-01

    New results on the realization of finite-dimensional, discrete-time, internally biaffine systems are presented in this paper. The external behavior of such systems is described by multiaffine functions and the state space is constructed via Nerode equivalence relations. We prove that the state space is an affine space. An algorithm which amounts to choosing a frame for the affine space is presented. Our algorithm reduces in the linear and bilinear case to a generalization of algorithms existing in the literature. Explicit existence criteria for span-canonical realizations as well as an affine isomorphism theorem are given.

  18. emcee: The MCMC Hammer

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, Daniel; Hogg, David W.; Lang, Dustin; Goodman, Jonathan

    2013-03-01

    We introduce a stable, well-tested Python implementation of the affine-invariant ensemble sampler for Markov chain Monte Carlo (MCMC) proposed by Goodman & Weare (2010). The code is open source and has already been used in several published projects in the astrophysics literature. The algorithm behind emcee has several advantages over traditional MCMC sampling methods, and it has excellent performance as measured by the autocorrelation time (or function calls per independent sample). One major advantage of the algorithm is that it requires hand-tuning of only one or two parameters, compared to ~N^2 for a traditional algorithm in an N-dimensional parameter space. In this document, we describe the algorithm and the details of our implementation. Exploiting the parallelism of the ensemble method, emcee permits any user to take advantage of multiple CPU cores without extra effort. The code is available online at http://dan.iel.fm/emcee under the GNU General Public License v2.
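The Goodman & Weare stretch move that underlies emcee is compact enough to sketch directly. The following is a toy re-implementation for illustration only, not the emcee code itself; in practice one would use `emcee.EnsembleSampler`:

```python
import numpy as np

def stretch_move_sampler(log_prob, p0, n_steps, a=2.0, rng=None):
    """Minimal affine-invariant ensemble sampler (Goodman & Weare stretch move)
    for an initial ensemble p0 of shape (n_walkers, ndim)."""
    rng = np.random.default_rng(rng)
    walkers = p0.copy()
    n_walkers, ndim = walkers.shape
    lp = np.array([log_prob(w) for w in walkers])
    chain = []
    for _ in range(n_steps):
        for k in range(n_walkers):
            # Pick a complementary walker j != k and a stretch z ~ g(z) ∝ 1/sqrt(z).
            j = rng.integers(n_walkers - 1)
            j = j if j < k else j + 1
            z = (1 + (a - 1) * rng.random()) ** 2 / a  # inverse-CDF sample on [1/a, a]
            prop = walkers[j] + z * (walkers[k] - walkers[j])
            lp_prop = log_prob(prop)
            # Accept with probability min(1, z^(ndim-1) * p(prop) / p(cur)).
            if np.log(rng.random()) < (ndim - 1) * np.log(z) + lp_prop - lp[k]:
                walkers[k], lp[k] = prop, lp_prop
        chain.append(walkers.copy())
    return np.array(chain)
```

Because proposals are built purely from differences of walker positions, the sampler's behavior is invariant under affine transformations of the parameter space, which is why only the stretch scale `a` needs tuning.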

  19. Recursive algorithms for bias and gain nonuniformity correction in infrared videos.

    PubMed

    Pipa, Daniel R; da Silva, Eduardo A B; Pagliari, Carla L; Diniz, Paulo S R

    2012-12-01

    Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN) that degrades image quality, which is also known as spatial nonuniformity. FPN is still a serious problem, despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-square and affine projection techniques that jointly compensate for both the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. The synthetic and real IRFPA videos experimentally show that the proposed solutions are competitive with the state-of-the-art in FPN reduction, by presenting recovered images with higher fidelity.

  20. Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Jones, Brandon M.

    2005-01-01

    Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.

  1. Motion planning in velocity affine mechanical systems

    NASA Astrophysics Data System (ADS)

    Jakubiak, Janusz; Tchoń, Krzysztof; Magiera, Władysław

    2010-09-01

    We address the motion planning problem in specific mechanical systems whose linear and angular velocities depend affinely on control. The configuration space of these systems encompasses the rotation group, and the motion planning involves the system orientation. Derivation of the motion planning algorithm for velocity affine systems has been inspired by the continuation method. Performance of this algorithm is illustrated with examples of the kinematics of a serial nonholonomic manipulator, the plate-ball kinematics and the attitude control of a rigid body.

  2. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    PubMed

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The process is as follows. First, rectangular feature templates centered at the extracted Harris corners are constructed in the mask, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. The optimal parameters of the affine transformation are then calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the estimated affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; motion artifacts were removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.

  3. Using color histogram normalization for recovering chromatic illumination-changed images.

    PubMed

    Pei, S C; Tseng, C L; Wu, C C

    2001-11-01

    We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.

  4. Improving the convergence rate in affine registration of PET and SPECT brain images using histogram equalization.

    PubMed

    Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A

    2013-01-01

    A procedure to improve the convergence rate for affine registration methods of medical brain images when the images differ greatly from the template is presented. The methodology is based on a histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. A sum of squared differences between the source images and the template is considered as objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. Using histogram equalization as a preprocessing step improves the convergence rate in the affine registration algorithm of brain images as we show in this work using SPECT and PET brain images.
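The histogram-matching preprocessing step above rests on ordinary histogram equalization, which for an 8-bit image can be sketched as follows (a generic sketch, not the paper's SPECT/PET pipeline):

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit grayscale image: remap gray
    levels so the cumulative histogram becomes approximately linear."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0].min()  # CDF value at the first occupied level
    lut = (cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[img]  # apply the lookup table pixelwise
```

Stretching both images to a common intensity distribution in this way reduces the initial value of the sum-of-squared-differences cost, which is what speeds up the Gauss-Newton affine registration.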

  5. Data depth based clustering analysis

    DOE PAGES

    Jeong, Myeong -Hun; Cai, Yaping; Sullivan, Clair J.; ...

    2016-01-01

    Here, this paper proposes a new algorithm for identifying patterns within data, based on data depth. Such a clustering analysis has an enormous potential to discover previously unknown insights from existing data sets. Many clustering algorithms already exist for this purpose. However, most algorithms are not affine invariant; they must therefore operate with different parameters after the data sets are rotated, scaled, or translated. Further, most clustering algorithms based on Euclidean distance can be sensitive to noise because they have no global perspective. Parameter selection also significantly affects the clustering results of each algorithm. Unlike many existing clustering algorithms, the proposed algorithm, called data depth based clustering analysis (DBCA), is able to detect coherent clusters after the data sets are affine transformed, without changing a parameter. It is also robust to noise because data depth measures the centrality and outlyingness of the underlying data. Further, it can generate relatively stable clusters by varying the parameter. An experimental comparison with the leading state-of-the-art alternatives demonstrates that the proposed algorithm outperforms DBSCAN and HDBSCAN in terms of affine invariance, and exceeds or matches their robustness to noise. The robustness to parameter selection is also demonstrated through a case study of clustering Twitter data.

  6. ECG-gated interventional cardiac reconstruction for non-periodic motion.

    PubMed

    Rohkohl, Christopher; Lauritsch, Günter; Biller, Lisa; Hornegger, Joachim

    2010-01-01

    The 3-D reconstruction of cardiac vasculature using C-arm CT is an active and challenging field of research. In interventional environments, patients often have arrhythmic heart signals or cannot hold their breath during the complete data acquisition. This important group of patients cannot be reconstructed with current approaches, which depend strongly on a high degree of cardiac motion periodicity. In last year's MICCAI contribution, a first algorithm was presented that is able to estimate non-periodic 4-D motion patterns. However, that algorithm still depends on periodicity to some degree, as it requires a prior image obtained with a simple ECG-gated reconstruction. In this work we provide a solution to this problem by developing a motion-compensated ECG-gating algorithm. It is built upon a 4-D time-continuous affine motion model capable of compactly describing highly non-periodic motion patterns. A stochastic optimization scheme is derived which minimizes the error between the measured projection data and the forward projection of the motion-compensated reconstruction. For evaluation, the algorithm is applied to 5 datasets of the left coronary arteries of patients who ignored the breath-hold command and/or had arrhythmic heart signals during the data acquisition. Applying the developed algorithm increased the average visibility of the vessel segments by 27%. The results show that the proposed algorithm provides excellent reconstruction quality in cases where classical approaches fail. The algorithm is highly parallelizable, and a clinically feasible runtime of under 4 minutes is achieved using modern graphics card hardware.

  7. Recognition of genetically modified product based on affinity propagation clustering and terahertz spectroscopy

    NASA Astrophysics Data System (ADS)

    Liu, Jianjun; Kan, Jianquan

    2018-04-01

    In this paper, a new method for identifying genetically modified material from its terahertz spectrum is proposed: a support vector machine (SVM) combined with affinity propagation clustering. The affinity propagation clustering algorithm performs cluster analysis on, and assigns labels to, the unlabeled training samples, and the SVM training data are continuously updated during the iterative process. Because the identification model is established without manually labeling the training samples, the error caused by human-labeled samples is reduced and the identification accuracy of the model is greatly improved.

  8. Thermal-depth matching in dynamic scene based on affine projection and feature registration

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Jia, Tong; Wu, Chengdong; Li, Yongqiang

    2018-03-01

    This paper studies the construction of a 3D temperature distribution reconstruction system based on depth and thermal infrared information. A traditional calibration method cannot be used directly, because depth and thermal infrared cameras are not sensitive to a color calibration board; we therefore design a calibration board suited to both cameras to complete their calibration. A local feature descriptor for thermal and depth images is proposed, and a belief propagation matching algorithm is investigated based on spatial affine transformation matching and local feature matching. The 3D temperature distribution model is built by matching the 3D point cloud with the 2D thermal infrared information. Experimental results show that the method can accurately construct the 3D temperature distribution model and has strong robustness.

  9. Projective invariant biplanar registration of a compact modular orthopaedic robot.

    PubMed

    Luan, Sheng; Sun, Lei; Hu, Lei; Hao, Aimin; Li, Changsheng; Tang, Peifu; Zhang, Lihai; Du, Hailong

    2014-01-01

    This paper presents a compact orthopedic robot designed with a modular concept. The layout of the modular configuration adapts to conditions such as the surgical workspace and the targeting path. A biplanar algorithm is adopted for the mapping from the fluoroscopic image to the robot, whereas the previous affine-based method is satisfactory only when the projection rays are roughly perpendicular to the reference coordinate planes. This paper introduces the area cross-ratio as a projective invariant to improve registration accuracy for non-orthogonal orientations, so that the robotic system can be applied to more orthopedic procedures under various C-arm orientations. System configurations for femoral neck screw and sacroiliac screw fixation are presented. The accuracy of the robotic system and its efficacy for the two typical applications are validated by experiments.

  10. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations.

    PubMed

    Meyer, C R; Boes, J L; Kim, B; Bland, P H; Zasadny, K R; Kison, P V; Koral, K; Frey, K A; Wahl, R L

    1997-04-01

    This paper applies and evaluates an automatic mutual information-based registration algorithm across a broad spectrum of multimodal volume data sets. The algorithm requires little or no pre-processing, minimal user input and easily implements either affine, i.e. linear or thin-plate spline (TPS) warped registrations. We have evaluated the algorithm in phantom studies as well as in selected cases where few other algorithms could perform as well, if at all, to demonstrate the value of this new method. Pairs of multimodal gray-scale volume data sets were registered by iteratively changing registration parameters to maximize mutual information. Quantitative registration errors were assessed in registrations of a thorax phantom using PET/CT and in the National Library of Medicine's Visible Male using MRI T2-/T1-weighted acquisitions. Registrations of diverse clinical data sets were demonstrated including rotate-translate mapping of PET/MRI brain scans with significant missing data, full affine mapping of thoracic PET/CT and rotate-translate mapping of abdominal SPECT/CT. A five-point thin-plate spline (TPS) warped registration of thoracic PET/CT is also demonstrated. The registration algorithm converged in times ranging between 3.5 and 31 min for affine clinical registrations and 57 min for TPS warping. Mean error vector lengths for rotate-translate registrations were measured to be subvoxel in phantoms. More importantly the rotate-translate algorithm performs well even with missing data. The demonstrated clinical fusions are qualitatively excellent at all levels. We conclude that such automatic, rapid, robust algorithms significantly increase the likelihood that multimodality registrations will be routinely used to aid clinical diagnoses and post-therapeutic assessment in the near future.
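Mutual information, the registration criterion maximized above, can be estimated from the joint gray-level histogram of the two volumes. A minimal sketch (bin count illustrative; real pipelines add interpolation and optimization over the transform parameters):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A registration loop would evaluate this quantity for candidate affine (or TPS-warped) transforms of one volume and iterate toward the maximum, as the paper describes.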

  11. Enhancing Community Detection By Affinity-based Edge Weighting Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Andy; Sanders, Geoffrey; Henson, Van

    Community detection refers to an important graph analytics problem of finding a set of densely-connected subgraphs in a graph and has gained a great deal of interest recently. The performance of current community detection algorithms is limited by an inherent constraint of unweighted graphs that offer very little information on their internal community structures. In this paper, we propose a new scheme to address this issue that weights the edges in a given graph based on recently proposed vertex affinity. The vertex affinity quantifies the proximity between two vertices in terms of their clustering strength, and therefore, it is ideal for graph analytics applications such as community detection. We also demonstrate that the affinity-based edge weighting scheme can improve the performance of community detection algorithms significantly.

  12. A Novel Center Star Multiple Sequence Alignment Algorithm Based on Affine Gap Penalty and K-Band

    NASA Astrophysics Data System (ADS)

    Zou, Quan; Shan, Xiao; Jiang, Yi

    Multiple sequence alignment is one of the most important topics in computational biology, but existing methods cannot yet handle very large data sets. With the development of copy-number variant (CNV) and single nucleotide polymorphism (SNP) research, many researchers need to align large numbers of similar sequences to detect CNVs and SNPs. In this paper, we propose a novel center star multiple sequence alignment algorithm based on affine gap penalty and k-band. It aligns more quickly and accurately, which will be helpful for mining CNVs and SNPs. Experiments demonstrate the performance of our algorithm.
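Affine gap penalties of the kind used above are classically handled with Gotoh's three-matrix recurrence for pairwise alignment. The following scoring sketch is a textbook variant for illustration, not the authors' center star k-band implementation; the scoring values are arbitrary:

```python
import numpy as np

def affine_gap_score(a, b, match=1, mismatch=-1, gap_open=2, gap_extend=1):
    """Gotoh global alignment score with an affine gap penalty:
    a gap of length L costs gap_open + gap_extend * (L - 1)."""
    n, m = len(a), len(b)
    NEG = float("-inf")
    M = np.full((n + 1, m + 1), NEG)   # a[i-1] aligned to b[j-1]
    Ix = np.full((n + 1, m + 1), NEG)  # gap in b (a[i-1] against '-')
    Iy = np.full((n + 1, m + 1), NEG)  # gap in a (b[j-1] against '-')
    M[0, 0] = 0.0
    for i in range(1, n + 1):
        Ix[i, 0] = -(gap_open + gap_extend * (i - 1))
    for j in range(1, m + 1):
        Iy[0, j] = -(gap_open + gap_extend * (j - 1))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            M[i, j] = max(M[i-1, j-1], Ix[i-1, j-1], Iy[i-1, j-1]) + s
            Ix[i, j] = max(M[i-1, j] - gap_open, Ix[i-1, j] - gap_extend)
            Iy[i, j] = max(M[i, j-1] - gap_open, Iy[i, j-1] - gap_extend)
    return max(M[n, m], Ix[n, m], Iy[n, m])
```

The k-band idea restricts the (i, j) loop to a diagonal band of width k, which is what makes alignment of many near-identical sequences tractable.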

  13. Linearized motion estimation for articulated planes.

    PubMed

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.

  14. A Novel User Classification Method for Femtocell Network by Using Affinity Propagation Algorithm and Artificial Neural Network

    PubMed Central

    Ahmed, Afaz Uddin; Tariqul Islam, Mohammad; Ismail, Mahamod; Kibria, Salehin; Arshad, Haslina

    2014-01-01

    An artificial neural network (ANN) and affinity propagation (AP) algorithm based user categorization technique is presented. The proposed algorithm is designed for closed-access femtocell networks. The ANN performs the user classification, and the AP algorithm optimizes the ANN training process by selecting the best possible training samples for a faster training cycle. Users are distinguished by the differences in received signal strength across a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without the AP algorithm requires 5 indoor users and 10 outdoor users to attain error-free operation. When the AP algorithm is integrated with the ANN, the system requires 60% fewer training samples, reducing the training time by up to 50%. This procedure makes the femtocell more effective for closed-access operation. PMID:25133214

  15. A novel user classification method for femtocell network by using affinity propagation algorithm and artificial neural network.

    PubMed

    Ahmed, Afaz Uddin; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Arshad, Haslina

    2014-01-01

    An artificial neural network (ANN) and affinity propagation (AP) algorithm based user categorization technique is presented. The proposed algorithm is designed for closed-access femtocell networks. The ANN performs the user classification, and the AP algorithm optimizes the ANN training process by selecting the best possible training samples for a faster training cycle. Users are distinguished by the differences in received signal strength across a multielement femtocell device. A previously developed directive microstrip antenna is used to configure the femtocell device. Simulation results show that, for a particular house pattern, the categorization technique without the AP algorithm requires 5 indoor users and 10 outdoor users to attain error-free operation. When the AP algorithm is integrated with the ANN, the system requires 60% fewer training samples, reducing the training time by up to 50%. This procedure makes the femtocell more effective for closed-access operation.

  16. Wireless rake-receiver using adaptive filter with a family of partial update algorithms in noise cancellation applications

    NASA Astrophysics Data System (ADS)

    Fayadh, Rashid A.; Malek, F.; Fadhil, Hilal A.; Aldhaibani, Jaafar A.; Salman, M. K.; Abdullah, Farah Salwani

    2015-05-01

    For high data rate propagation in wireless ultra-wideband (UWB) communication systems, inter-symbol interference (ISI), multiple-access interference (MAI), and multiple-user interference (MUI) degrade system performance. In this paper, a rake receiver is presented for signals spread by the direct-sequence spread spectrum (DS-SS) technique. The adaptive rake-receiver structure adjusts its tap weights using the least mean squares (LMS), normalized least mean squares (NLMS), and affine projection (APA) algorithms to support weak signals through noise cancellation and to mitigate the interferences. To reduce the computational complexity of these algorithms, the well-known family of partial-update (PU) adaptive filters, comprising sequential-partial, periodic-partial, M-max-partial, and selective-partial updates (SPU), is employed in the proposed system. Simulation results of bit error rate (BER) versus signal-to-noise ratio (SNR) show that the partial-update algorithms perform nearly as well as the full-update adaptive filters. Furthermore, the SPU algorithm performs close to the full-update NLMS and APA, while the M-max algorithm performs close to the full-update LMS.
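As a sketch of the partial-update idea, the following hypothetical M-max NLMS loop updates only the M filter taps paired with the largest-magnitude regressor entries at each sample. The toy channel, step size, and other parameters are invented for illustration:

```python
import numpy as np

def mmax_nlms(x, d, L, mu=0.5, M=4, eps=1e-8):
    """NLMS adaptive filter that updates only the M taps whose regressor
    entries have the largest magnitude (M-max partial update)."""
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        u = x[n - L + 1 : n + 1][::-1]        # regressor, newest sample first
        e = d[n] - w @ u                      # a priori error
        sel = np.argsort(np.abs(u))[-M:]      # indices of the M largest |u|
        w[sel] += mu * e * u[sel] / (u @ u + eps)
    return w

rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])  # toy channel
x = rng.standard_normal(5000)
d = np.convolve(x, h)[: len(x)]               # noiseless desired signal
w = mmax_nlms(x, d, L=len(h))
```

Updating M of L taps cuts the coefficient-update cost roughly by the factor L/M while, as the abstract reports, the converged solution stays close to that of the full-update filter.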

  17. Inferring monopartite projections of bipartite networks: an entropy-based approach

    NASA Astrophysics Data System (ADS)

    Saracco, Fabio; Straka, Mika J.; Di Clemente, Riccardo; Gabrielli, Andrea; Caldarelli, Guido; Squartini, Tiziano

    2017-05-01

    Bipartite networks are currently regarded as providing a major insight into the organization of many real-world systems, unveiling the mechanisms driving the interactions occurring between distinct groups of nodes. One of the most important issues encountered when modeling bipartite networks is devising a way to obtain a (monopartite) projection on the layer of interest, which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks, according to which any two nodes sharing a statistically-significant number of neighbors are linked. Since assessing the statistical significance of nodes similarity requires a proper statistical benchmark, here we consider a set of four null models, defined within the exponential random graph framework. Our algorithm outputs a matrix of link-specific p-values, from which a validated projection is straightforwardly obtainable, upon running a multiple hypothesis testing procedure. Finally, we test our method on an economic network (i.e. the countries-products World Trade Web representation) and a social network (i.e. MovieLens, collecting the users’ ratings of a list of movies). In both cases non-trivial communities are detected: while projecting the World Trade Web on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be individuated.
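The validation step, linking two nodes only when the number of neighbours they share is statistically significant, can be sketched with a simple hypergeometric null in place of the paper's entropy-based exponential random graph benchmarks. Everything below (node names, the threshold, the absence of a multiple-hypothesis correction) is illustrative:

```python
from itertools import combinations
from math import comb

def hyper_sf(k, N, K, n):
    """P(X >= k) for a hypergeometric draw of n items from N with K successes."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

def validated_projection(B, alpha=0.05):
    """B maps each node of the layer of interest to its neighbour set on the
    other layer; link two nodes when their co-occurrence p-value is < alpha."""
    N = len(set().union(*B.values()))     # size of the opposite layer
    edges = []
    for u, v in combinations(B, 2):
        k = len(B[u] & B[v])              # shared neighbours
        p = hyper_sf(k, N, len(B[u]), len(B[v]))
        if p < alpha:
            edges.append((u, v, p))
    return edges
```

The paper's method replaces this null with four exponential-random-graph models and applies a proper multiple hypothesis testing procedure to the resulting p-value matrix.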

  18. Robust feature matching via support-line voting and affine-invariant ratios

    NASA Astrophysics Data System (ADS)

    Li, Jiayuan; Hu, Qingwu; Ai, Mingyao; Zhong, Ruofei

    2017-10-01

    Robust image matching is crucial for many applications of remote sensing and photogrammetry, such as image fusion, image registration, and change detection. In this paper, we propose a robust feature matching method based on support-line voting and affine-invariant ratios. We first use popular feature matching algorithms, such as SIFT, to obtain a set of initial matches. A support-line descriptor based on multiple adaptive binning gradient histograms is subsequently applied in the support-line voting stage to filter outliers. In addition, we use affine-invariant ratios computed by a two-line structure to refine the matching results and estimate the local affine transformation. The local affine model is more robust to distortions caused by elevation differences than the global affine transformation, especially for high-resolution remote sensing images and UAV images. Thus, the proposed method is suitable for both rigid and non-rigid image matching problems. Finally, we extract as many high-precision correspondences as possible based on the local affine extension and build a grid-wise affine model for remote sensing image registration. We compare the proposed method with six state-of-the-art algorithms on several data sets and show that our method significantly outperforms the other methods. The proposed method achieves 94.46% average precision on 15 challenging remote sensing image pairs, while the second-best method, RANSAC, only achieves 70.3%. In addition, the number of detected correct matches of the proposed method is approximately four times the number of initial SIFT matches.

  19. A Space Affine Matching Approach to fMRI Time Series Analysis.

    PubMed

    Chen, Liang; Zhang, Weishi; Liu, Hongbo; Feng, Shigang; Chen, C L Philip; Wang, Huili

    2016-07-01

    For fMRI time series analysis, an important challenge is to overcome the potential delay between the hemodynamic response signal and the cognitive stimuli signal, namely the same frequency but different phase (SFDP) problem. In this paper, a novel space affine matching feature is presented by combining time domain and frequency domain features. The time domain feature is used to discern different stimuli, while the frequency domain feature eliminates the delay. We then propose a space affine matching (SAM) algorithm that matches fMRI time series using this affine feature, in which a normal vector is estimated by gradient descent to find the optimal time series matching. The experimental results illustrate that the SAM algorithm is insensitive to the delay between the hemodynamic response signal and the cognitive stimuli signal, and that our approach significantly outperforms the GLM method when such a delay exists. The approach can help solve the SFDP problem in fMRI time series matching and is thus of great promise for revealing brain dynamics.

  20. MutaBind estimates and interprets the effects of sequence variants on protein-protein interactions.

    PubMed

    Li, Minghui; Simonetti, Franco L; Goncearenco, Alexander; Panchenko, Anna R

    2016-07-08

    Proteins engage in highly selective interactions with their macromolecular partners. Sequence variants that alter protein binding affinity may cause significant perturbations or complete abolishment of function, potentially leading to diseases. There exists a persistent need to develop a mechanistic understanding of impacts of variants on proteins. To address this need we introduce a new computational method MutaBind to evaluate the effects of sequence variants and disease mutations on protein interactions and calculate the quantitative changes in binding affinity. The MutaBind method uses molecular mechanics force fields, statistical potentials and fast side-chain optimization algorithms. The MutaBind server maps mutations on a structural protein complex, calculates the associated changes in binding affinity, determines the deleterious effect of a mutation, estimates the confidence of this prediction and produces a mutant structural model for download. MutaBind can be applied to a large number of problems, including determination of potential driver mutations in cancer and other diseases, elucidation of the effects of sequence variants on protein fitness in evolution and protein design. MutaBind is available at http://www.ncbi.nlm.nih.gov/projects/mutabind/. Published by Oxford University Press on behalf of Nucleic Acids Research 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  1. Application of affinity propagation algorithm based on manifold distance for transformer PD pattern recognition

    NASA Astrophysics Data System (ADS)

    Wei, B. G.; Huo, K. X.; Yao, Z. F.; Lou, J.; Li, X. Y.

    2018-03-01

    Recognizing partial discharge (PD) patterns is one of the difficult problems in research on condition-based maintenance of transformers. According to the main physical characteristics of PD, three models of oil-paper insulation defects were set up in the laboratory to study transformer PD, and phase-resolved partial discharge (PRPD) patterns were constructed. Grey-scale images of the PRPD patterns were built using the least squares method, and 28 box dimensions and 28 information dimensions were extracted as features of each grey-scale image. An affinity propagation algorithm based on manifold distance (AP-MD) was established for transformer PD pattern recognition, and the box dimension and information dimension data were clustered with AP-MD. The study shows that the clustering result of AP-MD is better than those of affinity propagation (AP), k-means, and the fuzzy c-means algorithm (FCM). By testing different k values for the k-nearest neighbor graph, we find that the clustering accuracy of AP-MD falls when k is too large or too small, and that the optimal k value depends on the sample size.

  2. PANDA: Protein function prediction using domain architecture and affinity propagation.

    PubMed

    Wang, Zheng; Zhao, Chenguang; Wang, Yiheng; Sun, Zheng; Wang, Nan

    2018-02-22

    We developed PANDA (Propagation of Affinity and Domain Architecture) to predict protein functions in the form of Gene Ontology (GO) terms. PANDA first executes a profile-profile alignment algorithm to search against the PfamA, KOG, COG, and SwissProt databases, and then launches PSI-BLAST against UniProt to search for homologues. PANDA integrates a domain architecture inference algorithm, based on Bayesian statistics, that calculates the probability of having a GO term. All the candidate GO terms are pooled and filtered based on Z-score. The remaining GO terms are then clustered using an affinity propagation algorithm based on the GO directed acyclic graph, followed by a second round of filtering on the clusters of GO terms. We benchmarked the performance of all the baseline predictors PANDA integrates and of every pooling and filtering step of PANDA. PANDA achieves better performance than the baseline predictors in terms of area under the precision-recall curve. PANDA can be accessed at http://dna.cs.miami.edu/PANDA/ .

  3. Multi-exemplar affinity propagation.

    PubMed

    Wang, Chang-Dong; Lai, Jian-Huang; Suen, Ching Y; Zhu, Jun-Yong

    2013-09-01

    The affinity propagation (AP) clustering algorithm has received much attention in the past few years. AP is appealing because it is efficient, insensitive to initialization, and it produces clusters at a lower error rate than other exemplar-based methods. However, its single-exemplar model becomes inadequate when applied to model multisubclasses in some situations such as scene analysis and character recognition. To remedy this deficiency, we have extended the single-exemplar model to a multi-exemplar one to create a new multi-exemplar affinity propagation (MEAP) algorithm. This new model automatically determines the number of exemplars in each cluster associated with a super exemplar to approximate the subclasses in the category. Solving the model is NP-hard and we tackle it with the max-sum belief propagation to produce neighborhood maximum clusters, with no need to specify beforehand the number of clusters, multi-exemplars, and superexemplars. Also, utilizing the sparsity in the data, we are able to reduce substantially the computational time and storage. Experimental studies have shown MEAP's significant improvements over other algorithms on unsupervised image categorization and the clustering of handwritten digits.
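For reference, the classic single-exemplar AP message-passing updates that MEAP generalizes can be sketched as follows. This is a minimal dense implementation with illustrative damping and iteration settings, not the MEAP algorithm itself:

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Classic affinity propagation on a similarity matrix S whose diagonal
    holds the preferences; returns one exemplar index per point."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    I = np.arange(n)
    for _ in range(iters):
        # responsibilities: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        top = np.argmax(AS, axis=1)
        first = AS[I, top]
        AS[I, top] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[I, top] = S[I, top] - second
        R = damping * R + (1 - damping) * Rnew
        # availabilities: a(i,k) = min(0, r(k,k) + sum_{i' != i,k} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        Rp[I, I] = R[I, I]
        col = Rp.sum(axis=0)
        Anew = np.minimum(0.0, col[None, :] - Rp)
        Anew[I, I] = col - R[I, I]    # a(k,k) = sum_{i' != k} max(0, r(i',k))
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)
```

Running it on a similarity matrix S[i,j] = -||x_i - x_j||^2 with the median similarity on the diagonal (the usual preference choice) recovers the exemplar-per-point assignment without specifying the number of clusters in advance.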

  4. Affinity+: Semi-Structured Brainstorming on Large Displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burtner, Edwin R.; May, Richard A.; Scarberry, Randall E.

    2013-04-27

    Affinity diagramming is a powerful method for encouraging and capturing lateral thinking in a group environment. The Affinity+ concept was designed to improve the collaborative brainstorming process through the use of large display surfaces in conjunction with mobile devices such as smart phones and tablets. The system works by capturing ideas digitally and allowing users to sort and group them manually on a large touch screen. Additionally, Affinity+ incorporates theme detection, topic clustering, and other processing algorithms that help bring structured analytic techniques to the process without requiring the explicit leadership roles and other overhead typically involved in these activities.

  5. An analytical fuzzy-based approach to L2-gain optimal control of input-affine nonlinear systems using Newton-type algorithm

    NASA Astrophysics Data System (ADS)

    Milic, Vladimir; Kasac, Josip; Novakovic, Branko

    2015-10-01

    This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution of this paper is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm combines a recursive chain rule for first- and second-order derivatives, Newton's method, the multi-step Adams method, and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.
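Two of the ingredients listed here, automatic differentiation and Newton's method, combine naturally. The sketch below uses forward-mode dual numbers on a scalar root-finding toy; it is not the paper's second-order minimax machinery:

```python
class Dual:
    """Forward-mode automatic differentiation with dual numbers:
    carries a value and its derivative through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def newton(f, x0, iters=20):
    """Newton's method with the derivative supplied by the dual numbers."""
    x = x0
    for _ in range(iters):
        y = f(Dual(x, 1.0))       # seed dx/dx = 1
        x -= y.val / y.dot
    return x
```

For example, `newton(lambda t: t * t - 2.0, 1.0)` converges to the square root of 2; the derivative 2t is never written down by hand.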

  6. Object matching using a locally affine invariant and linear programming techniques.

    PubMed

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear-programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be computed easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
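The key idea, representing each template point exactly as an affine combination of its neighbours with least-squares weights, can be sketched directly. The coordinates below are hypothetical, and the linear-programming matching stage is not shown:

```python
import numpy as np

def affine_weights(point, neighbors):
    """Solve neighbors.T @ w = point with sum(w) = 1 (an affine combination).
    lstsq returns the minimum-norm solution; the residual is ~0 whenever the
    neighbors affinely span the point."""
    A = np.vstack([neighbors.T, np.ones(len(neighbors))])
    b = np.append(point, 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

nb = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # neighboring points
p = np.array([0.25, 0.25])                            # template point
w = affine_weights(p, nb)
# reconstruction error of a matched point under the same weights is what
# the paper uses to penalize geometric disagreement
err = np.linalg.norm(nb.T @ w - p)
```

Because the weights are affine (they sum to one), the representation is invariant to any affine transformation applied to the point and its neighbours, which is what makes the constraint locally affine-invariant.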

  7. One improved LSB steganography algorithm

    NASA Astrophysics Data System (ADS)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images using the LSB algorithm is easily detected, with high accuracy, by chi-square (χ2) and RS steganalysis. Starting from the selection of the embedding locations and a modification of the embedding method, we combined a sub-affine transformation with the matrix coding method to improve the LSB algorithm, and a new LSB algorithm is proposed. Experimental results show that the improved algorithm can effectively resist χ2 and RS steganalysis.
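For context, the plain LSB baseline that the paper improves on amounts to overwriting pixel least-significant bits; the sub-affine location transformation and the matrix coding of the improved algorithm are not reproduced here:

```python
def lsb_embed(pixels, bits):
    """Replace the least-significant bit of the first len(bits) pixels."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def lsb_extract(pixels, n_bits):
    """Read back the n_bits embedded message bits."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [100, 101, 102, 103, 104]       # toy grayscale pixel values
stego = lsb_embed(cover, [1, 0, 1, 1])  # each pixel changes by at most 1
```

It is exactly this deterministic, sequential overwriting of LSBs that χ2 and RS steganalysis exploit, which motivates the paper's scrambled embedding locations and matrix coding.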

  8. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning.

    PubMed

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-10-30

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is, cluster formation, cluster selection, and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms.
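The flat WkNN estimator described above can be sketched as follows. The fingerprints, positions, and inverse-distance weighting are illustrative; the two-step variant would insert an AP clustering stage before this computation:

```python
import numpy as np

def wknn_position(rss, fingerprints, positions, k=3, eps=1e-6):
    """Estimate a position as the weighted sum of the positions of the k
    reference points whose stored RSS fingerprints are closest (Euclidean)
    to the measured RSS vector."""
    dists = np.linalg.norm(fingerprints - rss, axis=1)   # similarity metric
    idx = np.argsort(dists)[:k]                          # k nearest RPs
    w = 1.0 / (dists[idx] + eps)                         # closer RP -> larger weight
    w /= w.sum()
    return w @ positions[idx]

fps = np.array([[-50.0, -60.0], [-70.0, -80.0], [-40.0, -90.0]])  # RP fingerprints (dBm)
pos = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])              # RP coordinates (m)
est = wknn_position(np.array([-50.0, -60.0]), fps, pos)
```

The Euclidean distance on line one is precisely the pluggable similarity metric whose choice, per step, is the subject of the paper's mixed approach.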

  9. A Mixed Approach to Similarity Metric Selection in Affinity Propagation-Based WiFi Fingerprinting Indoor Positioning

    PubMed Central

    Caso, Giuseppe; de Nardis, Luca; di Benedetto, Maria-Gabriella

    2015-01-01

    The weighted k-nearest neighbors (WkNN) algorithm is by far the most popular choice in the design of fingerprinting indoor positioning systems based on WiFi received signal strength (RSS). WkNN estimates the position of a target device by selecting k reference points (RPs) based on the similarity of their fingerprints with the measured RSS values. The position of the target device is then obtained as a weighted sum of the positions of the k RPs. Two-step WkNN positioning algorithms were recently proposed, in which RPs are divided into clusters using the affinity propagation clustering algorithm, and one representative for each cluster is selected. Only cluster representatives are then considered during the position estimation, leading to a significant computational complexity reduction compared to traditional, flat WkNN. Flat and two-step WkNN share the issue of properly selecting the similarity metric so as to guarantee good positioning accuracy: in two-step WkNN, in particular, the metric impacts three different steps in the position estimation, that is, cluster formation, cluster selection, and RP selection and weighting. So far, however, the only similarity metric considered in the literature was the one proposed in the original formulation of the affinity propagation algorithm. This paper fills this gap by comparing different metrics and, based on this comparison, proposes a novel mixed approach in which different metrics are adopted in the different steps of the position estimation procedure. The analysis is supported by an extensive experimental campaign carried out in a multi-floor 3D indoor positioning testbed. The impact of similarity metrics and their combinations on the structure and size of the resulting clusters, 3D positioning accuracy and computational complexity are investigated. Results show that the adoption of metrics different from the one proposed in the original affinity propagation algorithm and, in particular, the combination of different metrics can significantly improve the positioning accuracy while preserving the efficiency in computational complexity typical of two-step algorithms. PMID:26528984

  10. Conic Sampling: An Efficient Method for Solving Linear and Quadratic Programming by Randomly Linking Constraints within the Interior

    PubMed Central

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741

  11. Identification of piecewise affine systems based on fuzzy PCA-guided robust clustering technique

    NASA Astrophysics Data System (ADS)

    Khanmirza, Esmaeel; Nazarahari, Milad; Mousavi, Alireza

    2016-12-01

    Hybrid systems are a class of dynamical systems whose behaviors are based on the interaction between discrete and continuous dynamical behaviors. Since a general method for the analysis of hybrid systems is not available, some researchers have focused on specific types of hybrid systems. Piecewise affine (PWA) systems are one of the subsets of hybrid systems. The identification of PWA systems includes the estimation of the parameters of the affine subsystems and the coefficients of the hyperplanes defining the partition of the state-input domain. In this paper, we have proposed a PWA identification approach based on a modified clustering technique. By using a fuzzy PCA-guided robust k-means clustering algorithm along with neighborhood outlier detection, the two main drawbacks of the well-known clustering algorithms, i.e., the poor initialization and the presence of outliers, are eliminated. Furthermore, this modified clustering technique enables us to determine the number of subsystems without any prior knowledge about the system. In addition, exploiting the structure of the state-input domain, that is, considering the time sequence of input-output pairs, provides a more efficient clustering algorithm, which is the other novelty of this work. Finally, the proposed algorithm has been evaluated by parameter identification of an IGV servo actuator. Simulations together with experimental analysis have proved the effectiveness of the proposed method.

  12. A polynomial primal-dual Dikin-type algorithm for linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jansen, B.; Roos, R.; Terlaky, T.

    1994-12-31

    We present a new primal-dual affine scaling method for linear programming. The search direction is obtained by using Dikin's original idea: minimize the objective function (which is the duality gap in a primal-dual algorithm) over a suitable ellipsoid. The search direction has no obvious relationship with the directions proposed in the literature so far. It guarantees a significant decrease in the duality gap in each iteration, and at the same time drives the iterates to the central path. The method admits a polynomial complexity bound that is better than the one for Monteiro et al.'s original primal-dual affine scaling method.
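For comparison, the classical primal (not primal-dual) affine scaling iteration built on the same Dikin ellipsoid idea can be sketched as follows; the toy LP and the step fraction gamma are illustrative:

```python
import numpy as np

def affine_scaling(c, A, x, gamma=0.5, iters=100):
    """Primal affine scaling for min c@x s.t. A@x = b, x >= 0, started
    from a strictly interior feasible point x (A @ x = b, x > 0)."""
    for _ in range(iters):
        D2 = np.diag(x ** 2)                            # Dikin ellipsoid scaling
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)   # dual estimate
        r = c - A.T @ w                                 # reduced costs
        dx = -D2 @ r                                    # descent direction; A @ dx = 0
        neg = dx < 0
        if not neg.any():
            break                                       # no blocking variable
        step = gamma * np.min(-x[neg] / dx[neg])        # stay inside x > 0
        x = x + step * dx
    return x

# toy LP: maximize x0 + x1 subject to x0 + x1 <= 1 (slack form)
c = np.array([-1.0, -1.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
x = affine_scaling(c, A, np.array([0.2, 0.3, 0.5]))
```

The paper's method instead minimizes the duality gap over such an ellipsoid in the primal-dual space, which is what yields the polynomial complexity bound.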

  13. Novel near-infrared spectrum analysis tool: Synergy adaptive moving window model based on immune clone algorithm.

    PubMed

    Wang, Shenghao; Zhang, Yuyan; Cao, Fuyi; Pei, Zhenying; Gao, Xuewei; Zhang, Xu; Zhao, Yong

    2018-02-13

    This paper presents a novel spectrum analysis tool named synergy adaptive moving window modeling based on the immune clone algorithm (SA-MWM-ICA), motivated by the tedious and inconvenient labor involved in selecting pre-processing methods and spectral variables by prior experience. In this work, the immune clone algorithm is introduced into the spectrum analysis field for the first time as a new optimization strategy, covering the shortcomings of the traditional methods. Following the working principle of the human immune system, the performance of the quantitative model is regarded as the antigen, and a special vector corresponding to this antigen is regarded as an antibody. The antibody contains a pre-processing method optimization region encoded by 11 decimal digits, and a spectrum variable optimization region formed by moving windows with changeable width and position. A set of original antibodies is created by modeling with this algorithm. After the affinity of these antibodies is calculated, those with high affinity are selected for cloning; the higher the affinity, the more copies are made. In the next step, another important operation, hyper-mutation, is applied to the cloned antibodies; the lower the affinity, the higher the mutation probability. Several antibodies with high affinity are created through these steps. Groups of simulated data, a gasoline near-infrared spectra data set, and a soil near-infrared spectra data set are employed to verify and illustrate the performance of SA-MWM-ICA. Analysis results show that the quantitative models produced by SA-MWM-ICA perform better, especially for relatively complex spectral structures, than traditional models such as partial least squares (PLS), moving window PLS (MWPLS), genetic algorithm PLS (GAPLS), and pretreatment method classification and adjustable parameter changeable size moving window PLS (CA-CSMWPLS). The selected pre-processing methods and spectrum variables are easily explained. The proposed method converges in a few generations and can be used not only for near-infrared spectroscopy analysis but also for other similar spectral analyses, such as infrared spectroscopy. Copyright © 2017 Elsevier B.V. All rights reserved.
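The clonal-selection loop described in this abstract (clone in proportion to affinity, hyper-mutate in inverse proportion to it) can be sketched on a toy continuous objective. All parameters, and the objective itself, are stand-ins for the actual spectral-model search:

```python
import random

def clonal_selection(affinity, population, generations=60, n_clones=5, seed=0):
    """Minimal clonal-selection loop: high-affinity antibodies get more
    clones; clones of lower-ranked antibodies mutate more strongly."""
    rng = random.Random(seed)
    pop = [list(p) for p in population]
    size = len(pop)
    for _ in range(generations):
        pop.sort(key=affinity, reverse=True)
        parents = pop[: size // 2]                       # select high-affinity antibodies
        clones = []
        for rank, ab in enumerate(parents):
            for _ in range(max(1, n_clones - rank)):     # more clones for higher affinity
                sigma = 0.1 * (rank + 1)                 # lower affinity -> stronger mutation
                clones.append([g + rng.gauss(0.0, sigma) for g in ab])
        pop = sorted(parents + clones, key=affinity, reverse=True)[:size]
    return pop[0]

# toy affinity landscape with its peak at (3, -1)
best = clonal_selection(lambda v: -((v[0] - 3) ** 2 + (v[1] + 1) ** 2),
                        [[0.0, 0.0], [1.0, 1.0], [-2.0, 4.0],
                         [5.0, -3.0], [2.0, 2.0], [4.0, 0.0]])
```

In SA-MWM-ICA the antibody vector additionally encodes the pre-processing choice and the moving-window positions, and the affinity is the quantitative model's performance rather than a closed-form function.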

  14. An extended affinity propagation clustering method based on different data density types.

    PubMed

    Zhao, XiuLi; Xu, WeiXiang

    2015-01-01

    The affinity propagation (AP) algorithm, as a novel clustering method, does not require the user to specify initial cluster centers in advance; it regards all data points equally as potential exemplars (cluster centers) and groups them into clusters entirely by the degree of similarity among the data points. However, in many cases regions of different density exist within the same data set, meaning that the data are not distributed homogeneously. In such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with this problem. There are two steps in our method: first, the data set is partitioned into several data density types according to the nearest distances of each data point; then, the AP clustering method is used to group the data points into clusters within each density type. Two experiments are carried out to evaluate the performance of our algorithm: one uses an artificial data set and the other a real seismic data set. The experimental results show that our algorithm obtains groups more accurately than OPTICS and the AP clustering algorithm itself.

  15. Host-Guest Complexes with Protein-Ligand-Like Affinities: Computational Analysis and Design

    PubMed Central

    Moghaddam, Sarvin; Inoue, Yoshihisa

    2009-01-01

    It has recently been discovered that guests combining a nonpolar core with cationic substituents bind cucurbit[7]uril (CB[7]) in water with ultra-high affinities. The present study uses the Mining Minima algorithm to study the physics of these extraordinary associations and to computationally test a new series of CB[7] ligands designed to bind with similarly high affinity. The calculations reproduce key experimental observations regarding the affinities of ferrocene-based guests with CB[7] and β-cyclodextrin and provide a coherent view of the roles of electrostatics and configurational entropy as determinants of affinity in these systems. The newly designed series of compounds is based on a bicyclo[2.2.2]octane core, which is similar in size and polarity to the ferrocene core of the existing series. Mining Minima predicts that these new compounds will, like the ferrocenes, bind CB[7] with extremely high affinities. PMID:19133781

  16. Using physics-based pose predictions and free energy perturbation calculations to predict binding poses and relative binding affinities for FXR ligands in the D3R Grand Challenge 2

    NASA Astrophysics Data System (ADS)

    Athanasiou, Christina; Vasilakaki, Sofia; Dellis, Dimitris; Cournia, Zoe

    2018-01-01

    Computer-aided drug design has become an integral part of drug discovery and development in the pharmaceutical and biotechnology industry, and is nowadays extensively used in the lead identification and lead optimization phases. The drug design data resource (D3R) organizes challenges against blinded experimental data to prospectively test computational methodologies as an opportunity for improved methods and algorithms to emerge. We participated in Grand Challenge 2 to predict the crystallographic poses of 36 Farnesoid X Receptor (FXR)-bound ligands and the relative binding affinities for two designated subsets of 18 and 15 FXR-bound ligands. Here, we present our methodology for pose and affinity predictions and its evaluation after the release of the experimental data. For predicting the crystallographic poses, we used docking and physics-based pose prediction methods guided by the binding poses of native ligands. For FXR ligands with known chemotypes in the PDB, we accurately predicted their binding modes, while for those with unknown chemotypes the predictions were more challenging. Our group ranked 1st (based on the median RMSD) among the 46 groups that submitted complete entries for the binding pose prediction challenge. For the relative binding affinity prediction challenge, we performed free energy perturbation (FEP) calculations coupled with molecular dynamics (MD) simulations. FEP/MD calculations displayed a high success rate in identifying compounds with better or worse binding affinity than the reference (parent) compound. Our studies suggest that when ligands with chemical precedent are available in the literature, binding pose predictions using docking and physics-based methods are reliable; however, predictions are challenging for ligands with completely unknown chemotypes.
We also show that FEP/MD calculations hold predictive value and can nowadays be used in a high throughput mode in a lead optimization project provided that crystal structures of sufficiently high quality are available.
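    The FEP approach referenced above estimates a free energy difference from the Zwanzig relation, dA = -kT ln<exp(-dU/kT)>, averaged over samples of the reference state. A toy estimator is sketched below; the Gaussian perturbation energies and kT = 1 are illustrative assumptions, not data from the challenge.

```python
import math
import random

def fep_delta_A(delta_U_samples, kT=1.0):
    """Zwanzig free energy perturbation estimator:
    dA = -kT * ln( <exp(-dU/kT)> ), averaged over reference-state samples."""
    boltz = [math.exp(-du / kT) for du in delta_U_samples]
    return -kT * math.log(sum(boltz) / len(boltz))

# Synthetic perturbation energies (illustrative, not from any simulation).
random.seed(0)
samples = [random.gauss(1.0, 0.2) for _ in range(10000)]
dA = fep_delta_A(samples)
```

    For Gaussian dU with mean 1.0 and standard deviation 0.2 at kT = 1, the analytic answer is 1.0 - 0.2**2 / 2 = 0.98, which the estimate approaches as the sample count grows.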

  17. CLUSTERING OF INTERICTAL SPIKES BY DYNAMIC TIME WARPING AND AFFINITY PROPAGATION

    PubMed Central

    Thomas, John; Jin, Jing; Dauwels, Justin; Cash, Sydney S.; Westover, M. Brandon

    2018-01-01

    Epilepsy is often associated with the presence of spikes in electroencephalograms (EEGs). Spike waveforms vary widely among epilepsy patients, and also over time for the same patient. In order to develop semi-automated and automated methods for detecting spikes, it is crucial to obtain a better understanding of the various spike shapes. In this paper, we develop several approaches to extract exemplars of spikes. We generate spike exemplars by applying clustering algorithms to a database of spikes from 12 patients. As similarity measures for clustering, we consider the Euclidean distance and Dynamic Time Warping (DTW). We assess two clustering algorithms, namely, K-means clustering and affinity propagation. The clustering methods are compared based on the mean squared error, and the similarity measures are assessed based on the number of generated spike clusters. Affinity propagation with DTW is shown to be the best combination for clustering epileptic spikes, since it generates fewer spike templates and does not require the number of spike templates to be specified in advance. PMID:29527130
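    The DTW similarity measure used for clustering above can be sketched with the standard dynamic program (a minimal 1-D version on toy "spike" sequences; real EEG spikes would be longer sampled waveforms):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences,
    using the standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A time-shifted copy of a spike-like bump: DTW can warp the time axis,
# so the distance is far smaller than a lock-step Euclidean comparison.
s1 = [0, 0, 1, 3, 1, 0, 0]
s2 = [0, 1, 3, 1, 0, 0, 0]
d = dtw_distance(s1, s2)
```

    This warping invariance is what makes DTW attractive for spikes whose shapes match but whose peaks are not temporally aligned.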

  18. A Clustering Algorithm for Ecological Stream Segment Identification from Spatially Extensive Digital Databases

    NASA Astrophysics Data System (ADS)

    Brenden, T. O.; Clark, R. D.; Wiley, M. J.; Seelbach, P. W.; Wang, L.

    2005-05-01

    Remote sensing and geographic information systems have made it possible to attribute variables for streams at increasingly detailed resolutions (e.g., individual river reaches). Nevertheless, management decisions still must be made at large scales because land and stream managers typically lack sufficient resources to manage on an individual reach basis. Managers thus require a method for identifying stream management units that are ecologically similar and that can be expected to respond similarly to management decisions. We have developed a spatially-constrained clustering algorithm that can merge neighboring river reaches with similar ecological characteristics into larger management units. The clustering algorithm is based on the Cluster Affinity Search Technique (CAST), which was developed for clustering gene expression data. Inputs to the clustering algorithm are the neighbor relationships of the reaches that comprise the digital river network, the ecological attributes of the reaches, and an affinity value, which identifies the minimum similarity for merging river reaches. In this presentation, we describe the clustering algorithm in greater detail and contrast its use with other methods (expert opinion, classification approach, regular clustering) for identifying management units using several Michigan watersheds as a backdrop.
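    The core merging idea can be sketched as a greedy, spatially constrained union of neighboring reaches whose similarity meets the affinity threshold. The union-find formulation and the toy similarity measure below are our simplifications for illustration, not the CAST algorithm itself.

```python
def merge_reaches(neighbors, attributes, affinity):
    """Greedy spatially-constrained merge: neighboring reaches are placed
    in the same management unit when their similarity (here a toy measure,
    1 / (1 + |attribute difference|)) reaches the `affinity` threshold."""
    parent = list(range(len(attributes)))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in neighbors:  # only adjacent reaches may merge
        sim = 1.0 / (1.0 + abs(attributes[i] - attributes[j]))
        if sim >= affinity:
            parent[find(i)] = find(j)

    return [find(i) for i in range(len(attributes))]

# Four reaches in a chain; reach 3 is ecologically dissimilar.
labels = merge_reaches(neighbors=[(0, 1), (1, 2), (2, 3)],
                       attributes=[1.0, 1.1, 1.2, 9.0],
                       affinity=0.8)
```

    The spatial constraint enters through the `neighbors` list: non-adjacent reaches can only end up in the same unit through a chain of pairwise-similar neighbors.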

  19. The Black Teacher Project: How Racial Affinity Professional Development Sustains Black Teachers

    ERIC Educational Resources Information Center

    Mosely, Micia

    2018-01-01

    The Black Teacher Project (BTP) is an organization that supports, develops and sustains Black teachers for schools in the United States. The organization is building a Black teaching force that reflects the diversity and excellence of Black people in the United States. In our pilot year, BTP offered racial affinity-based professional development…

  20. Affine.m—Mathematica package for computations in representation theory of finite-dimensional and affine Lie algebras

    NASA Astrophysics Data System (ADS)

    Nazarov, Anton

    2012-11-01

    In this paper we present Affine.m, a program for computations in representation theory of finite-dimensional and affine Lie algebras, and describe the implemented algorithms. The algorithms are based on the properties of weights and Weyl symmetry. Computation of weight multiplicities in irreducible and Verma modules, branching of representations, and tensor product decomposition are the most important problems for us. These problems have numerous applications in physics and we provide some examples of these applications. The program is implemented in the popular computer algebra system Mathematica and works with finite-dimensional and affine Lie algebras. Catalogue identifier: AENA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENB_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, UK Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 24 844 No. of bytes in distributed program, including test data, etc.: 1 045 908 Distribution format: tar.gz Programming language: Mathematica. Computer: i386-i686, x86_64. Operating system: Linux, Windows, Mac OS, Solaris. RAM: 5-500 Mb Classification: 4.2, 5. Nature of problem: Representation theory of finite-dimensional Lie algebras has many applications in different branches of physics, including elementary particle physics, molecular physics, and nuclear physics. Representations of affine Lie algebras appear in string theories and in two-dimensional conformal field theory, used for the description of critical phenomena in two-dimensional systems. Lie symmetries also play a major role in the study of quantum integrable systems. Solution method: We work with weights and roots of finite-dimensional and affine Lie algebras and use Weyl symmetry extensively.
The central problems, namely the computation of weight multiplicities and of branching and fusion coefficients, are solved using one general recurrent algorithm based on a generalization of the Weyl character formula. We also offer an alternative implementation based on the Freudenthal multiplicity formula, which can be faster in some cases. Restrictions: Computational complexity grows quickly with the rank of an algebra, so computations for algebras of rank greater than 8 are not practical. Unusual features: We offer the possibility of using traditional mathematical notation for the objects of Lie algebra representation theory when Affine.m is used in the Mathematica notebook interface. Running time: From seconds to days depending on the rank of the algebra and the complexity of the representation.

  1. SU-E-J-112: Intensity-Based Pulmonary Image Registration: An Evaluation Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, F; Meyer, J; Sandison, G

    2015-06-15

    Purpose: Accurate alignment of thoracic CT images is essential for dose tracking and to safely implement adaptive radiotherapy in lung cancers. At the same time it is challenging given the highly elastic nature of lung tissue deformations. The objective of this study was to assess the performance of three state-of-the-art intensity-based algorithms in terms of their ability to register thoracic CT images subject to affine, barrel, and sinusoid transformations. Methods: The intensity similarity measures of the evaluated algorithms were sum-of-squared difference (SSD), local mutual information (LMI), and residual complexity (RC). Five thoracic CT scans obtained from the EMPIRE10 challenge database were included and served as reference images. Each CT dataset was distorted by realistic affine, barrel, and sinusoid transformations. Registration performance of the three algorithms was evaluated for each distortion type in terms of intensity root mean square error (IRMSE) between the reference and registered images in the lung regions. Results: For affine distortions, the three algorithms differed significantly in registration of thoracic images both visually and nominally in terms of IRMSE, with a mean of 0.011 for SSD, 0.039 for RC, and 0.026 for LMI (p<0.01; Kruskal-Wallis test). For barrel distortion, the three algorithms showed nominally no significant difference in terms of IRMSE, with a mean of 0.026 for SSD, 0.086 for RC, and 0.054 for LMI (p=0.16). A significant difference was seen for sinusoid-distorted thoracic CT data, with mean lung IRMSE of 0.039 for SSD, 0.092 for RC, and 0.035 for LMI (p=0.02). Conclusion: In a daily clinical setting, pulmonary deformations can vary widely in nature due to factors ranging from anatomical variation to respiratory motion to image quality.
The results of the present study indicate that the suitability of a particular algorithm for pulmonary image registration is therefore deformation-dependent.

  2. Exploiting structure: Introduction and motivation

    NASA Technical Reports Server (NTRS)

    Xu, Zhong Ling

    1993-01-01

    Research activities performed during the period 29 June 1993 through 31 August 1993 are summarized. The robust stability of systems whose transfer function or characteristic polynomial is a multilinear affine function of the parameters of interest was developed in two directions, algorithmic and theoretical. In the algorithmic direction, a new approach, called 'stability by linear process,' was found that reduces the computational burden of checking the robust stability of a system with multilinear uncertainty; the procedure directly yields an algorithm. On the theoretical side, we obtained a robustness criterion for families of polynomials whose coefficients are multilinear affine functions in the coefficient space, as well as a result on the robust stability of diamond families of polynomials with complex coefficients. We also obtained limited results for SPR design and provide a framework for solving ACS. Finally, an outline of our results is provided in the appendix, together with administrative materials.

  3. Observation of positive and small electron affinity of Si-doped AlN films grown by metalorganic chemical vapor deposition on n-type 6H-SiC

    NASA Astrophysics Data System (ADS)

    Feng, Liang; Ping, Chen; De-Gang, Zhao; De-Sheng, Jiang; Zhi-Juan, Zhao; Zong-Shun, Liu; Jian-Jun, Zhu; Jing, Yang; Wei, Liu; Xiao-Guang, He; Xiao-Jing, Li; Xiang, Li; Shuang-Tao, Liu; Hui, Yang; Li-Qun, Zhang; Jian-Ping, Liu; Yuan-Tao, Zhang; Guo-Tong, Du

    2016-05-01

    We have investigated the electron affinity of Si-doped AlN films (N_Si = 1.0 × 10¹⁸-1.0 × 10¹⁹ cm⁻³) with thicknesses of 50, 200, and 400 nm, synthesized by metalorganic chemical vapor deposition (MOCVD) under low pressure on n-type (001) 6H-SiC substrates. A positive and small electron affinity of the AlN films was observed through ultraviolet photoelectron spectroscopy (UPS) analysis, with the electron affinity increasing with AlN film thickness: 0.36 eV for the 50-nm-thick film, 0.58 eV for the 200-nm-thick film, and 0.97 eV for the 400-nm-thick film. Combined with x-ray photoelectron spectroscopy (XPS) analysis of the surface contaminations, this suggests that the difference in electron affinity between the three samples may result from discrepancies in surface impurity contamination. Project supported by the National Natural Science Foundation of China (Grant Nos. 61574135, 61574134, 61474142, 61474110, 61377020, 61376089, 61223005, and 61321063), the One Hundred Person Project of the Chinese Academy of Sciences, and the Basic Research Project of Jiangsu Province, China (Grant No. BK20130362).

  4. Revisiting negative selection algorithms.

    PubMed

    Ji, Zhou; Dasgupta, Dipankar

    2007-01-01

    This paper reviews the progress of negative selection algorithms, an anomaly/change detection approach in Artificial Immune Systems (AIS). Starting from the initial model, we identify the fundamental characteristics of this family of algorithms and summarize their diversity. The method involves several elements, including data representation, coverage estimation, affinity measures, and matching rules, which are discussed across the different variations. The various negative selection algorithms are also categorized by different criteria. The relationships with, and possible combinations of, other AIS techniques or other machine learning methods are discussed. Finally, based on this discussion, we speculate on the prospective development and applicability of negative selection algorithms and their influence on related areas.
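    The censoring step shared by this family of algorithms can be sketched in a minimal real-valued form. The 1-D data representation, the interval matching rule, and all parameters below are illustrative assumptions, not a specific algorithm from the review.

```python
import random

def train_detectors(self_set, n_detectors, radius, seed=0):
    """Negative selection: generate random 1-D detectors and discard
    any that match (lie within `radius` of) a self sample."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.uniform(0.0, 10.0)
        if all(abs(d - s) > radius for s in self_set):
            detectors.append(d)
    return detectors

def is_anomalous(x, detectors, radius):
    """A sample is flagged non-self if any surviving detector matches it."""
    return any(abs(x - d) <= radius for d in detectors)

self_set = [4.0, 4.5, 5.0, 5.5, 6.0]  # normal region roughly [4, 6]
dets = train_detectors(self_set, n_detectors=50, radius=0.6)
```

    By construction, every surviving detector is farther than `radius` from every self sample, so self samples are never flagged; coverage of the non-self region depends on the number of detectors and the matching radius.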

  5. A Build-Up Interior Method for Linear Programming: Affine Scaling Form

    DTIC Science & Technology

    1990-02-01

    initiating a major iteration imply convergence in a finite number of iterations. Each iteration t of the Dikin algorithm starts with an interior dual...this variant with the affine scaling method of Dikin [5] (in dual form). We have also looked into the analogous variant for the related Karmarkar’s...
    [4] G. B. Dantzig, Linear Programming and Extensions (Princeton University Press, Princeton, NJ, 1963).
    [5] I. I. Dikin, "Iterative solution of

  6. Architecture of a minimal signaling pathway explains the T-cell response to a 1 million-fold variation in antigen affinity and dose

    PubMed Central

    Lever, Melissa; Lim, Hong-Sheng; Kruger, Philipp; Nguyen, John; Trendel, Nicola; Abu-Shah, Enas; Maini, Philip Kumar; van der Merwe, Philip Anton

    2016-01-01

    T cells must respond differently to antigens of varying affinity presented at different doses. Previous attempts to map peptide MHC (pMHC) affinity onto T-cell responses have produced inconsistent patterns of responses, preventing formulations of canonical models of T-cell signaling. Here, a systematic analysis of T-cell responses to 1 million-fold variations in both pMHC affinity and dose produced bell-shaped dose–response curves and different optimal pMHC affinities at different pMHC doses. Using sequential model rejection/identification algorithms, we identified a unique, minimal model of cellular signaling incorporating kinetic proofreading with limited signaling coupled to an incoherent feed-forward loop (KPL-IFF) that reproduces these observations. We show that the KPL-IFF model correctly predicts the T-cell response to antigen copresentation. Our work offers a general approach for studying cellular signaling that does not require full details of biochemical pathways. PMID:27702900

  7. Architecture of a minimal signaling pathway explains the T-cell response to a 1 million-fold variation in antigen affinity and dose.

    PubMed

    Lever, Melissa; Lim, Hong-Sheng; Kruger, Philipp; Nguyen, John; Trendel, Nicola; Abu-Shah, Enas; Maini, Philip Kumar; van der Merwe, Philip Anton; Dushek, Omer

    2016-10-25

    T cells must respond differently to antigens of varying affinity presented at different doses. Previous attempts to map peptide MHC (pMHC) affinity onto T-cell responses have produced inconsistent patterns of responses, preventing formulations of canonical models of T-cell signaling. Here, a systematic analysis of T-cell responses to 1 million-fold variations in both pMHC affinity and dose produced bell-shaped dose-response curves and different optimal pMHC affinities at different pMHC doses. Using sequential model rejection/identification algorithms, we identified a unique, minimal model of cellular signaling incorporating kinetic proofreading with limited signaling coupled to an incoherent feed-forward loop (KPL-IFF) that reproduces these observations. We show that the KPL-IFF model correctly predicts the T-cell response to antigen copresentation. Our work offers a general approach for studying cellular signaling that does not require full details of biochemical pathways.

  8. Placental fetal stem segmentation in a sequence of histology images

    NASA Astrophysics Data System (ADS)

    Athavale, Prashant; Vese, Luminita A.

    2012-02-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental fetal stems. Analysis of the fetal stems in a placenta could be useful in the study and diagnosis of some diseases like autism. To study the fetal stem structure effectively, we need to automatically and accurately track fetal stems through a sequence of digitized hematoxylin and eosin (H&E) stained histology slides. Several obstacles stand in the way of this goal: the large size of the images, misalignment of consecutive H&E slides, unpredictable inaccuracies of manual tracing, and the complicated texture patterns of the various tissue types, which lack clear distinguishing characteristics. In this paper we propose a novel algorithm to achieve automatic tracing of the fetal stem in a sequence of H&E images, based on an inaccurate manual segmentation of a fetal stem in one of the images. This algorithm combines global affine registration, local non-affine registration, and a novel 'dynamic' version of the active contours model without edges. We first use global affine image registration of all the images based on displacement, scaling, and rotation. This gives us the approximate location of the corresponding fetal stem in the image that needs to be traced. We then use the affine registration algorithm "locally" near this location. At this point, we use a fast non-affine registration based on an L2-similarity measure and diffusion regularization to get a better location of the fetal stem. Finally, we have to take into account inaccuracies in the initial tracing. This is achieved through a novel dynamic version of the active contours model without edges, where the coefficients of the fitting terms are computed iteratively to ensure that we obtain a unique stem in the segmentation.
The segmentation thus obtained can then be used as an initial guess to obtain segmentation in the rest of the images in the sequence. This constitutes an important step in the extraction and understanding of the fetal stem vasculature.
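    The global affine step of this pipeline, restricted as described above to displacement, scaling, and rotation, amounts to p' = s·R(θ)·p + t applied to image coordinates. A minimal sketch on 2-D points follows; the point set and parameter values are illustrative only.

```python
import math

def affine_transform(points, scale=1.0, angle=0.0, shift=(0.0, 0.0)):
    """Apply a similarity-type affine map (scaling, rotation, displacement)
    of the kind used for global slide alignment: p' = s * R(angle) * p + t."""
    c, s = math.cos(angle), math.sin(angle)
    tx, ty = shift
    out = []
    for x, y in points:
        out.append((scale * (c * x - s * y) + tx,
                    scale * (s * x + c * y) + ty))
    return out

# Unit square, doubled in size, rotated 90 degrees, shifted right by 3.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
moved = affine_transform(square, scale=2.0, angle=math.pi / 2,
                         shift=(3.0, 0.0))
```

    In the pipeline this transform is estimated globally to bring consecutive slides into rough correspondence before the local non-affine refinement.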

  9. XDATA

    DTIC Science & Technology

    2017-05-01

    Parallelizing PINT The main focus of our research into the parallelization of the PINT algorithm has been to find appropriately scalable matrix math algorithms...leading eigenvector of the adjacency matrix of the pairwise affinity graph. We reviewed the matrix math implementation currently being used in PINT and...the new versions support a feature called matrix.distributed, which is some level of support for distributed matrix math; however, our code is not

  10. Kernel Extended Real-Valued Negative Selection Algorithm (KERNSA)

    DTIC Science & Technology

    2013-06-01

    are discarded, which is similar to how T-cells function in the BIS. An unlabeled, future sample is considered non-self if any detectors match it. This... [remaining record text is table-of-contents residue: "Affinity Performs Best With Each Type of Dataset"; "More Kernel Functions"; "Automate the..."; "The Negative Selection Algorithm (NSA)"; "Illustration of self and non-self"]

  11. From scores to face templates: a model-based approach.

    PubMed

    Mohanty, Pranab; Sarkar, Sudeep; Kasturi, Rangachar

    2007-12-01

    Regeneration of templates from match scores has security and privacy implications related to any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent image set (break-in) are matched only once with the enrolled template of the targeted subject and match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three fundamentally different face recognition algorithms: Principal Component Analysis (PCA) with Mahalanobis cosine distance measure, Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set from the gallery set, we select face templates from two different databases: the Face Recognition Grand Challenge (FRGC) and the Facial Recognition Technology (FERET) database. With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With a similar operational setup, we achieve a 72 percent and 100 percent chance of breaking in for the Bayesian and PCA-based face recognition systems, respectively.
With three different levels of score quantization, we achieve 69 percent, 68 percent and 49 percent probability of break-in, indicating the robustness of our proposed scheme to score quantization. We also show that the proposed reconstruction scheme has 47 percent more probability of breaking in as a randomly chosen target subject for the commercial system as compared to a hill climbing approach with the same number of attempts. Given that the proposed template reconstruction method uses distinct face templates to reconstruct faces, this work exposes a more severe form of vulnerability than a hill climbing kind of attack where incrementally different versions of the same face are used. Also, the ability of the proposed approach to reconstruct actual face templates of the users increases privacy concerns in biometric systems.

  12. Brain tumor detection and segmentation in a CRF (conditional random fields) framework with pixel-pairwise affinity and superpixel-level features.

    PubMed

    Wu, Wei; Chen, Albert Y C; Zhao, Liang; Corso, Jason J

    2014-03-01

    Detection and segmentation of a brain tumor such as glioblastoma multiforme (GBM) in magnetic resonance (MR) images are often challenging due to its intrinsically heterogeneous signal characteristics. A robust segmentation method for brain tumor MRI scans was developed and tested. Simple thresholds and statistical methods are unable to adequately segment the various elements of the GBM, such as local contrast enhancement, necrosis, and edema. Most voxel-based methods cannot achieve satisfactory results on larger data sets, and methods based on generative or discriminative models have intrinsic limitations in application, such as small-sample learning and transfer. A new method was developed to overcome these challenges. Multimodal MR images were segmented into superpixels to alleviate the sampling issue and to improve sample representativeness. Next, features were extracted from the superpixels using multi-level Gabor wavelet filters. Based on these features, a support vector machine (SVM) model and an affinity metric model for tumors were trained to overcome the limitations of previous generative models. Based on the output of the SVM and spatial affinity models, conditional random fields theory was applied to segment the tumor in a maximum a posteriori fashion given the smoothness prior defined by our affinity model. Finally, labeling noise was removed using "structural knowledge" such as the symmetrical and continuous characteristics of the tumor in the spatial domain. The system was evaluated with 20 GBM cases and the BraTS challenge data set. Dice coefficients were computed, and the results were highly consistent with those reported by Zikic et al. (MICCAI 2012, Lecture Notes in Computer Science, vol. 7512, pp. 369-376, 2012). A brain tumor segmentation method using model-aware affinity demonstrates performance comparable with other state-of-the-art algorithms.

  13. An affinity-structure database of helix-turn-helix: DNA complexes with a universal coordinate system.

    PubMed

    AlQuraishi, Mohammed; Tang, Shengdong; Xia, Xide

    2015-11-19

    Molecular interactions between proteins and DNA molecules underlie many cellular processes, including transcriptional regulation, chromosome replication, and nucleosome positioning. Computational analyses of protein-DNA interactions rely on experimental data characterizing known protein-DNA interactions structurally and biochemically. While many databases exist that contain either structural or biochemical data, few integrate these two data sources in a unified fashion. Such integration is becoming increasingly critical with the rapid growth of structural and biochemical data, and the emergence of algorithms that rely on the synthesis of multiple data types to derive computational models of molecular interactions. We have developed an integrated affinity-structure database in which the experimental and quantitative DNA binding affinities of helix-turn-helix proteins are mapped onto the crystal structures of the corresponding protein-DNA complexes. This database provides access to: (i) protein-DNA structures, (ii) quantitative summaries of protein-DNA binding affinities using position weight matrices, and (iii) raw experimental data of protein-DNA binding instances. Critically, this database establishes a correspondence between experimental structural data and quantitative binding affinity data at the single basepair level. Furthermore, we present a novel alignment algorithm that structurally aligns the protein-DNA complexes in the database and creates a unified residue-level coordinate system for comparing the physico-chemical environments at the interface between complexes. Using this unified coordinate system, we compute the statistics of atomic interactions at the protein-DNA interface of helix-turn-helix proteins. We provide an interactive website for visualization, querying, and analyzing this database, and a downloadable version to facilitate programmatic analysis. 
This database will facilitate the analysis of protein-DNA interactions and the development of programmatic computational methods that capitalize on integration of structural and biochemical datasets. The database can be accessed at http://ProteinDNA.hms.harvard.edu.

  14. Adaptive nearly optimal control for a class of continuous-time nonaffine nonlinear systems with inequality constraints.

    PubMed

    Fan, Quan-Yong; Yang, Guang-Hong

    2017-01-01

    State inequality constraints have rarely been considered in the literature on solving the nonlinear optimal control problem based on the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without the requirement for a probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging.

    PubMed

    Hardie, Russell C; Barnard, Kenneth J; Ordonez, Raul

    2011-12-19

    Fast nonuniform interpolation based super-resolution (SR) has traditionally been limited to applications with translational interframe motion. This is in part because such methods are based on an underlying assumption that the warping and blurring components in the observation model commute. For translational motion this is the case, but it is not true in general. This presents a problem for applications such as airborne imaging where translation may be insufficient. Here we present a new Fourier domain analysis to show that, for many image systems, an affine warping model with limited zoom and shear approximately commutes with the point spread function when diffraction effects are modeled. Based on this important result, we present a new fast adaptive Wiener filter (AWF) SR algorithm for non-translational motion and study its performance with affine motion. The fast AWF SR method employs a new smart observation window that allows us to precompute all the needed filter weights for any type of motion without sacrificing much of the full performance of the AWF. We evaluate the proposed algorithm using simulated data and real infrared airborne imagery that contains a thermal resolution target allowing for objective resolution analysis.
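    The commutation property at the heart of this argument can be illustrated in 1-D: blurring commutes with pure translation but not with a zoom-type warp. The 3-tap moving-average blur and nearest-neighbor 2x zoom below are our illustrative stand-ins for the point spread function and the affine warp, not the paper's actual observation model.

```python
def blur(x):
    """3-tap moving-average blur (zero-padded at the ends); a toy PSF."""
    p = [0.0] + list(x) + [0.0]
    return [(p[i - 1] + p[i] + p[i + 1]) / 3.0 for i in range(1, len(p) - 1)]

def shift(x, k):
    """Integer translation with zero fill: a 'translational warp'."""
    return [0.0] * k + list(x[: len(x) - k])

def stretch2(x):
    """Nearest-neighbor 2x zoom: a simple non-translational warp."""
    return [v for v in x for _ in (0, 1)]

sig = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
a = blur(shift(sig, 1))       # warp, then blur
b = shift(blur(sig), 1)       # blur, then warp: identical for translation
c = blur(stretch2(sig))       # zoom, then blur
d = stretch2(blur(sig))       # blur, then zoom: no longer identical
```

    The translation case commutes exactly, which is what makes fast nonuniform-interpolation SR valid there; for the zoom the two orderings differ, illustrating why extending such methods to affine motion requires the approximate-commutation analysis described above.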

  16. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    NASA Astrophysics Data System (ADS)

    Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir

    2018-03-01

    The specific nature of high-rise investment projects, which entail long-term construction, high risks, etc., calls for improvements to the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. To develop the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors assembled the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, which improves the quality and reliability of forecasting reports in investment projects; the main stages of cash-flow adjustment based on risk mapping, for better cost-benefit project analysis given the broad range of risks in high-rise construction; and the analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, which improves flexibility in the implementation of high-rise projects.
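The core quantitative steps named above (weighted average cost of capital as the discount rate, dynamic NPV, and a sensitivity scan over a critical variable) can be sketched as follows; all figures are hypothetical.

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    # Weighted average cost of capital, including the debt tax shield.
    total = equity + debt
    return (equity / total) * cost_equity + (debt / total) * cost_debt * (1 - tax_rate)

def npv(rate, cash_flows):
    # cash_flows[0] is the (negative) initial investment at t = 0.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical capital structure and project cash flows (in currency units).
rate = wacc(equity=60e6, debt=40e6, cost_equity=0.15, cost_debt=0.08, tax_rate=0.20)
base = [-100e6, 20e6, 30e6, 40e6, 50e6, 40e6]
print(f"WACC = {rate:.2%}, base NPV = {npv(rate, base) / 1e6:.1f} M")

# Sensitivity of NPV to one critical variable: here, a revenue shortfall.
for shortfall in (0.0, 0.1, 0.2):
    scenario = [base[0]] + [cf * (1 - shortfall) for cf in base[1:]]
    print(f"shortfall {shortfall:.0%}: NPV = {npv(rate, scenario) / 1e6:.1f} M")
```

Scenario analysis then amounts to rerunning the same NPV under risk-mapped cash-flow adjustments.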

  17. Affinity learning with diffusion on tensor product graph.

    PubMed

    Yang, Xingwei; Prasad, Lakshman; Latecki, Longin Jan

    2013-01-01

    In many applications, we are given a finite set of data points sampled from a data manifold and represented as a graph with edge weights determined by pairwise similarities of the samples. Often the pairwise similarities (which are also called affinities) are unreliable due to noise or due to intrinsic difficulties in estimating similarity values of the samples. As observed in several recent approaches, more reliable similarities can be obtained if the original similarities are diffused in the context of other data points, where the context of each point is a set of points most similar to it. Compared to the existing methods, our approach differs in two main aspects. First, instead of diffusing the similarity information on the original graph, we propose to utilize the tensor product graph (TPG) obtained by the tensor product of the original graph with itself. Since TPG takes into account higher order information, it is not a surprise that we obtain more reliable similarities. However, it comes at the price of higher order computational complexity and storage requirement. The key contribution of the proposed approach is that the information propagation on TPG can be computed with the same computational complexity and the same amount of storage as the propagation on the original graph. We prove that a graph diffusion process on TPG is equivalent to a novel iterative algorithm on the original graph, which is guaranteed to converge. After its convergence we obtain new edge weights that can be interpreted as new, learned affinities. We stress that the affinities are learned in an unsupervised setting. We illustrate the benefits of the proposed approach for data manifolds composed of shapes, images, and image patches on two very different tasks of image retrieval and image segmentation. With learned affinities, we achieve the bull's eye retrieval score of 99.99 percent on the MPEG-7 shape dataset, which is much higher than the state-of-the-art algorithms. When the data points are image patches, the NCut with the learned affinities not only significantly outperforms the NCut with the original affinities, but it also outperforms state-of-the-art image segmentation methods.
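The equivalence claimed above, that diffusion on the tensor product graph can be computed by an iteration on the original graph, can be checked numerically on a small random example. The sketch below assumes the iteration Q <- S Q S^T + I (the form used in this line of work) and compares its fixed point with the closed-form TPG solution vec(Q*) = (I - kron(S, S))^(-1) vec(I); the graph itself is random and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.random((n, n))
# Scale so the spectral radius is 0.5 and the diffusion converges.
S = 0.5 * W / np.abs(np.linalg.eigvals(W)).max()

# Iteration on the original graph: Q <- S Q S^T + I.
Q = np.eye(n)
for _ in range(200):
    Q = S @ Q @ S.T + np.eye(n)

# The same diffusion on the tensor product graph S (x) S, in closed form:
# vec(Q*) = (I - S (x) S)^(-1) vec(I).
vecQ = np.linalg.solve(np.eye(n * n) - np.kron(S, S), np.eye(n).flatten())
Q_tpg = vecQ.reshape(n, n)

print(np.allclose(Q, Q_tpg))   # the n x n iteration matches the n^2 x n^2 diffusion
```

The iteration touches only n x n matrices, which is the storage and complexity saving the paper emphasizes.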

  18. How Structure Defines Affinity in Protein-Protein Interactions

    PubMed Central

    Erijman, Ariel; Rosenthal, Eran; Shifman, Julia M.

    2014-01-01

    Protein-protein interactions (PPI) in nature are conveyed by a multitude of binding modes involving various surfaces, secondary structure elements and intermolecular interactions. This diversity results in PPI binding affinities that span more than nine orders of magnitude. Several early studies attempted to correlate PPI binding affinities with various structure-derived features, with limited success. The growing number of high-resolution structures, the appearance of more precise methods for measuring binding affinities and the development of new computational algorithms enable more thorough investigations in this direction. Here, we use a large dataset of PPI structures with documented binding affinities to calculate a number of structure-based features that could potentially define binding energetics. We explore how well each calculated biophysical feature alone correlates with binding affinity and determine the features that could be used to distinguish between high-, medium- and low-affinity PPIs. Furthermore, we test how various combinations of features could be applied to predict binding affinity and observe a slow improvement in correlation as more features are incorporated into the equation. In addition, we observe a considerable improvement in predictions if we exclude from our analysis low-resolution and NMR structures, revealing the importance of capturing exact intermolecular interactions in our calculations. Our analysis should facilitate prediction of new interactions on the genome scale, better characterization of signaling networks and design of novel binding partners for various target proteins. PMID:25329579

  19. Classification of neocortical interneurons using affinity propagation.

    PubMed

    Santana, Roberto; McGarry, Laura M; Bielza, Concha; Larrañaga, Pedro; Yuste, Rafael

    2013-01-01

    In spite of over a century of research on cortical circuits, it is still unknown how many classes of cortical neurons exist. Neuronal classification remains difficult because it is unclear how to designate a neuronal cell class and which characteristics best define it. Recently, unsupervised classifications using cluster analysis based on morphological, physiological, or molecular characteristics have provided quantitative and unbiased identification of distinct neuronal subtypes when applied to selected datasets. However, better and more robust classification methods are needed for increasingly complex and larger datasets. Here, we explored the use of affinity propagation, a recently developed unsupervised classification algorithm imported from machine learning, which gives a representative example, or exemplar, for each cluster. As a case study, we applied affinity propagation to a test dataset of 337 interneurons belonging to four subtypes, previously identified based on morphological and physiological characteristics. We found that affinity propagation correctly classified most of the neurons in a blind, non-supervised manner. Affinity propagation outperformed Ward's method, a current standard clustering approach, in classifying the neurons into the four subtypes. Affinity propagation could therefore be used in future studies to validly classify neurons, as a first step to help reverse engineer neural circuits.
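A minimal, self-contained version of the affinity propagation message passing (responsibility and availability updates, as introduced by Frey and Dueck) can be sketched as follows; the toy one-dimensional data and the median-preference choice are illustrative, not the interneuron dataset.

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=200):
    """Minimal affinity propagation; S is a similarity matrix with
    exemplar preferences on the diagonal.  Returns an exemplar per point."""
    n = S.shape[0]
    R = np.zeros((n, n))   # responsibilities
    A = np.zeros((n, n))   # availabilities
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        M = A + S
        idx = np.argmax(M, axis=1)
        first = M[np.arange(n), idx]
        M[np.arange(n), idx] = -np.inf
        second = M.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        Rp[np.arange(n), np.arange(n)] = R[np.arange(n), np.arange(n)]
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = np.diag(Anew).copy()
        Anew = np.minimum(Anew, 0)
        Anew[np.arange(n), np.arange(n)] = dA
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)

# Two well-separated 1-D clusters; negative squared distance as similarity.
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -np.square(x[:, None] - x[None, :])
S[np.arange(6), np.arange(6)] = np.median(S)   # preference: median similarity
labels = affinity_propagation(S)
print(labels)   # points 0-2 share one exemplar, points 3-5 another
```

Unlike k-means, the number of clusters is not fixed in advance; it emerges from the preference values on the diagonal.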

  20. Bounding the errors for convex dynamics on one or more polytopes.

    PubMed

    Tresser, Charles

    2007-09-01

    We discuss the greedy algorithm for approximating a sequence of inputs in a family of polytopes lying in affine spaces by an output sequence made of vertices of the respective polytopes. More precisely, we consider here the case when the greed of the algorithm is dictated by the Euclidean norms of the successive cumulative errors. This algorithm can be interpreted as a time-dependent dynamical system in the vector space, where the errors live, or as a time-dependent dynamical system in an affine space containing copies of all the original polytopes. This affine space contains the inputs, as well as the inputs modified by adding the respective former errors; it is the evolution of these modified inputs that the dynamical system in affine space describes. Scheduling problems with many polytopes arise naturally, for instance, when the inputs are from a single polytope P, but one imposes the constraint that whenever the input belongs to a codimension n face, the output has to be in the same codimension n face (as when scheduling drivers among participants of a carpool). It has been previously shown that the error is bounded in the case of a single polytope by proving the existence of an arbitrarily large convex invariant region for the dynamics in affine space: A region that is simultaneously invariant for several polytopes, each considered separately, was also constructed. It was then shown that there cannot be an invariant region in affine space in the general case of a family of polytopes. Here we prove the existence of an arbitrarily large convex invariant set for the dynamics in the vector space in the case when the sizes of the polytopes in the family are bounded and the set of all the outgoing normals to all the faces of all the polytopes is finite. It was also previously known that starting from zero as the initial error set, the error set could not be saturated in finitely many steps in some cases with several polytopes: Contradicting a former conjecture, we show that the same happens for some single quadrilaterals and for a single pentagon with an axial symmetry. The disproof of that conjecture is the new piece of information that leads us to expect, and then to verify, as we recount here, that the proof that the errors are bounded in the general case could be a small step beyond the proof of the same statement for the single polytope case.
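For the carpool interpretation, the greedy rule can be sketched directly: the inputs are points of the simplex (each participant's share of the day's ride), the output vertex is the driver chosen, and at each step the algorithm picks the vertex minimizing the Euclidean norm of the cumulative error. The daily shares below are hypothetical.

```python
import math

def greedy_schedule(inputs):
    """Greedy vertex rounding: at each step pick the vertex (unit coordinate
    vector, i.e. one driver) minimizing the Euclidean norm of the cumulative
    error.  Returns the chosen indices and the final error vector."""
    n = len(inputs[0])
    err = [0.0] * n
    picks = []
    for p in inputs:
        best, best_norm, best_err = None, math.inf, None
        for k in range(n):                      # candidate vertex e_k
            trial = [e + pi - (1.0 if i == k else 0.0)
                     for i, (e, pi) in enumerate(zip(err, p))]
            norm = math.hypot(*trial)
            if norm < best_norm:
                best, best_norm, best_err = k, norm, trial
        err = best_err
        picks.append(best)
    return picks, err

# Three carpool participants with uneven daily shares (points of the simplex).
days = [[0.5, 0.3, 0.2], [0.2, 0.2, 0.6], [0.4, 0.4, 0.2]] * 30
picks, err = greedy_schedule(days)
print(max(abs(e) for e in err) < 2.0)   # cumulative unfairness stays bounded
```

The boundedness observed here for a single simplex is exactly the single-polytope case whose invariant-region proof the abstract refers to.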

  2. Deformable structure registration of bladder through surface mapping.

    PubMed

    Xiong, Li; Viswanathan, Akila; Stewart, Alexandra J; Haker, Steven; Tempany, Clare M; Chin, Lee M; Cormack, Robert A

    2006-06-01

    Cumulative dose distributions in fractionated radiation therapy depict the dose to normal tissues and therefore may permit an estimation of the risk of normal tissue complications. However, calculation of these distributions is highly challenging because of interfractional changes in the geometry of patient anatomy. This work presents an algorithm for deformable structure registration of the bladder and the verification of the accuracy of the algorithm using phantom and patient data. In this algorithm, the registration process involves conformal mapping of genus zero surfaces using finite element analysis, and guided by three control landmarks. The registration produces a correspondence between fractions of the triangular meshes used to describe the bladder surface. For validation of the algorithm, two types of balloons were inflated gradually to three times their original size, and several computerized tomography (CT) scans were taken during the process. The registration algorithm yielded a local accuracy of 4 mm along the balloon surface. The algorithm was then applied to CT data of patients receiving fractionated high-dose-rate brachytherapy to the vaginal cuff, with the vaginal cylinder in situ. The patients' bladder filling status was intentionally different for each fraction. The three required control landmark points were identified for the bladder based on anatomy. Out of an Institutional Review Board (IRB) approved study of 20 patients, 3 had radiographically identifiable points near the bladder surface that were used for verification of the accuracy of the registration. The verification point as seen in each fraction was compared with its predicted location based on affine as well as deformable registration. Despite the variation in bladder shape and volume, the deformable registration was accurate to 5 mm, consistently outperforming the affine registration. We conclude that the structure registration algorithm presented works with reasonable accuracy and provides a means of calculating cumulative dose distributions.

  4. Clustering by soft-constraint affinity propagation: applications to gene-expression data.

    PubMed

    Leone, Michele; Sumedha; Weigt, Martin

    2007-10-15

    Similarity-measure-based clustering is a crucial problem appearing throughout scientific data analysis. Recently, a powerful new algorithm called Affinity Propagation (AP), based on message-passing techniques, was proposed by Frey and Dueck (2007a). In AP, each cluster is identified by a common exemplar to which all other data points of the same cluster refer, and exemplars have to refer to themselves. Despite its proven power, AP in its present form suffers from a number of drawbacks. The hard constraint of having exactly one exemplar per cluster restricts AP to classes of regularly shaped clusters and leads to suboptimal performance, e.g. in analyzing gene expression data. This limitation can be overcome by relaxing the AP hard constraints. A new parameter controls the importance of the constraints compared to the aim of maximizing the overall similarity, and allows one to interpolate between the simple case where each data point selects its closest neighbor as an exemplar and the original AP. The resulting soft-constraint affinity propagation (SCAP) becomes more informative and accurate and leads to more stable clustering. Even though a new free parameter is introduced, the overall dependence of the algorithm on external tuning is reduced, as robustness is increased and an optimal strategy for parameter selection emerges more naturally. SCAP is tested on biological benchmark data, including in particular microarray data related to various cancer types. We show that the algorithm efficiently unveils the hierarchical cluster structure present in the data sets. Furthermore, it allows the extraction of sparse gene expression signatures for each cluster.

  5. Reduced projection angles for binary tomography with particle aggregation.

    PubMed

    Al-Rifaie, Mohammad Majid; Blackwell, Tim

    This paper extends the particle aggregate reconstruction technique (PART), a reconstruction algorithm for binary tomography based on the movement of particles. PART supposes that pixel values are particles, and that particles diffuse through the image, staying together in regions of uniform pixel value known as aggregates. In this work, a variation of this algorithm is proposed, with a focus on reducing the number of projections and assessing how this affects the reconstructed images. The algorithm is tested on three phantoms of varying sizes and numbers of forward projections and compared to filtered back projection, a random search algorithm, and SART, a standard algebraic reconstruction method. It is shown that the proposed algorithm outperforms the aforementioned algorithms for small numbers of projections. This potentially makes the algorithm attractive in scenarios where collecting less projection data is unavoidable.

  6. Concepts in receptor optimization: targeting the RGD peptide.

    PubMed

    Chen, Wei; Chang, Chia-en; Gilson, Michael K

    2006-04-12

    Synthetic receptors have a wide range of potential applications, but it has been difficult to design low molecular weight receptors that bind ligands with high, "proteinlike" affinities. This study uses novel computational methods to understand why it is hard to design a high-affinity receptor and to explore the limits of affinity, with the bioactive peptide RGD as a model ligand. The M2 modeling method is found to yield excellent agreement with experiment for a known RGD receptor and then is used to analyze a series of receptors generated in silico with a de novo design algorithm. Forces driving binding are found to be systematically opposed by proportionate repulsions due to desolvation and entropy. In particular, strong correlations are found between Coulombic attractions and the electrostatic desolvation penalty and between the mean energy change on binding and the cost in configurational entropy. These correlations help explain why it is hard to achieve high affinity. The change in surface area upon binding is found to correlate poorly with affinity within this series. Measures of receptor efficiency are formulated that summarize how effectively a receptor uses surface area, total energy, and Coulombic energy to achieve affinity. Analysis of the computed efficiencies suggests that a low molecular weight receptor can achieve proteinlike affinity. It is also found that macrocyclization of a receptor can, unexpectedly, increase the entropy cost of binding because the macrocyclic structure further restricts ligand motion.

  7. Scale-based fuzzy connectivity: a novel image segmentation methodology and its validation

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Udupa, Jayaram K.

    1999-05-01

    This paper extends a previously reported theory and algorithms for fuzzy connected object definition. It introduces 'object scale' for determining the neighborhood size used in defining affinity, the degree of local hanging-togetherness between image elements. Object scale allows us to use a varying neighborhood size in different parts of the image. This paper argues that scale-based fuzzy connectivity is natural in object definition and demonstrates that it leads to more effective object segmentation than object definition without scale. Affinity is described as consisting of a homogeneity-based and an object-feature-based component. Families of non-scale-based and scale-based affinity relations are constructed. An effective method for giving a rough estimate of scale at different locations in the image is presented. The original theoretical and algorithmic framework remains more or less the same, but considerably improved segmentations result. A quantitative statistical comparison between the non-scale-based and the scale-based methods was made on phantom images generated from patient MR brain studies by first segmenting the objects and then adding noise, blurring, and a background component. Both the statistical and the subjective tests clearly indicate the superiority of the scale-based method in capturing details and in robustness to noise.
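The basic fuzzy connectedness computation underlying this line of work (the strength of a path is its weakest affinity, and each pixel receives the strongest such path from a seed) can be sketched with a Dijkstra-style propagation. The sketch uses a homogeneity-based affinity only, omitting the paper's object-feature-based component and its scale estimation.

```python
import heapq
import math

def fuzzy_connectedness(img, seed, sigma=30.0):
    """Fuzzy connectedness map kappa: for each pixel, the strength of the
    best path from the seed, where a path is as strong as its weakest
    affinity (here a Gaussian of the intensity difference)."""
    rows, cols = len(img), len(img[0])
    kappa = [[0.0] * cols for _ in range(rows)]
    sr, sc = seed
    kappa[sr][sc] = 1.0
    heap = [(-1.0, sr, sc)]
    while heap:
        strength, r, c = heapq.heappop(heap)
        strength = -strength
        if strength < kappa[r][c]:
            continue                            # stale heap entry
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                diff = img[r][c] - img[nr][nc]
                aff = math.exp(-(diff * diff) / (2 * sigma * sigma))
                new = min(strength, aff)        # path strength = weakest link
                if new > kappa[nr][nc]:
                    kappa[nr][nc] = new
                    heapq.heappush(heap, (-new, nr, nc))
    return kappa

# A bright object (intensity ~200) on a dark background (~50).
img = [[50, 50, 50, 50],
       [50, 200, 205, 50],
       [50, 198, 202, 50],
       [50, 50, 50, 50]]
kappa = fuzzy_connectedness(img, seed=(1, 1))
obj = [[1 if kappa[r][c] > 0.5 else 0 for c in range(4)] for r in range(4)]
print(obj)   # the four object pixels hang together; the background falls away
```

The scale-based extension would replace the fixed 4-neighborhood affinity with one computed over a locally estimated neighborhood size.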

  8. Image registration for a UV-Visible dual-band imaging system

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Yuan, Shuang; Li, Jianping; Xing, Sheng; Zhang, Honglong; Dong, Yuming; Chen, Liangpei; Liu, Peng; Jiao, Guohua

    2018-06-01

    The detection of corona discharge is an effective way for early fault diagnosis of power equipment. UV-Visible dual-band imaging can detect and locate corona discharge spots under all-weather conditions. In this study, we introduce an image registration protocol for this dual-band imaging system. The protocol consists of UV image denoising and affine transformation model establishment. We report the algorithmic details of the UV image preprocessing and of the affine transformation model establishment, together with experiments verifying their feasibility. The denoising algorithm is based on a correlation operation between raw UV images and a continuous mask; the transformation model was established using corner features and a statistical method. Finally, an image fusion test was carried out to verify the accuracy of the affine transformation model. The average position displacement errors between the corona discharge spot and the equipment fault, at distances in the 2.5 m to 20 m range, are 1.34 mm and 1.92 mm in the horizontal and vertical directions, respectively, which is precise enough for most industrial applications. The resulting protocol is not only expected to improve the efficiency and accuracy of such imaging systems for locating corona discharge spots, but is also intended to provide a more generalized reference for the calibration of various dual-band imaging systems in practice.
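Establishing an affine transformation model from corner correspondences reduces to a linear least-squares fit. A sketch follows, with hypothetical correspondences standing in for the corner features matched between the UV and visible frames.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst correspondences.
    Solves [x y 1] @ M = [x' y'] for the 3x2 parameter matrix M."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

# Hypothetical corner correspondences between the UV and visible frames:
# a known scale/shear plus translation, applied to five corner points.
true_M = np.array([[1.02, 0.01], [-0.01, 0.98], [3.5, -2.0]])
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 80]], float)
dst = np.hstack([src, np.ones((5, 1))]) @ true_M
M = fit_affine(src, dst)
print(np.allclose(M, true_M))   # the fitted transform recovers the true one
```

With noisy real corner detections, the same least-squares fit gives the best affine model in the mean-square sense, and outliers would typically be handled by a robust wrapper such as RANSAC.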

  9. Enhanced Prediction of Src Homology 2 (SH2) Domain Binding Potentials Using a Fluorescence Polarization-derived c-Met, c-Kit, ErbB, and Androgen Receptor Interactome*

    PubMed Central

    Leung, Kin K.; Hause, Ronald J.; Barkinge, John L.; Ciaccio, Mark F.; Chuu, Chih-Pin; Jones, Richard B.

    2014-01-01

    Many human diseases are associated with aberrant regulation of phosphoprotein signaling networks. Src homology 2 (SH2) domains represent the major class of protein domains in metazoans that interact with proteins phosphorylated on the amino acid residue tyrosine. Although current SH2 domain prediction algorithms perform well at predicting the sequences of phosphorylated peptides that are likely to result in the highest possible interaction affinity in the context of random peptide library screens, these algorithms do poorly at predicting the interaction potential of SH2 domains with physiologically derived protein sequences. We employed a high throughput interaction assay system to empirically determine the affinity between 93 human SH2 domains and phosphopeptides abstracted from several receptor tyrosine kinases and signaling proteins. The resulting interaction experiments revealed over 1000 novel peptide-protein interactions and provided a glimpse into the common and specific interaction potentials of c-Met, c-Kit, GAB1, and the human androgen receptor. We used these data to build a permutation-based logistic regression classifier that performed considerably better than existing algorithms for predicting the interaction potential of several SH2 domains. PMID:24728074

  10. Variance-reduced simulation of lattice discrete-time Markov chains with applications in reaction networks

    NASA Astrophysics Data System (ADS)

    Maginnis, P. A.; West, M.; Dullerud, G. E.

    2016-10-01

    We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
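The negatively correlated pair construction can be sketched for a single-channel example: two tau-leaping-style paths are driven by uniforms u and 1 - u, so their Poisson increments (obtained by inverting the Poisson CDF) are negatively correlated, and averaging the pair reduces estimator variance. The rate and step counts are illustrative, not taken from the paper's examples.

```python
import random
from math import exp

def poisson_inv(u, lam):
    """Poisson quantile by CDF inversion (adequate for moderate lam)."""
    u = min(max(u, 1e-12), 1 - 1e-12)   # guard against u = 0 or 1 exactly
    p, cdf, k = exp(-lam), exp(-lam), 0
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

def paired_estimate(lam, steps, rng):
    """One antithetic pair of paths X' = X + Poisson(lam), driven by u and
    1 - u so their increments are negatively correlated."""
    x1 = x2 = 0
    for _ in range(steps):
        u = rng.random()
        x1 += poisson_inv(u, lam)
        x2 += poisson_inv(1 - u, lam)
    return 0.5 * (x1 + x2)

rng = random.Random(42)
n, lam, steps = 400, 3.0, 10
anti = [paired_estimate(lam, steps, rng) for _ in range(n)]
indep = [0.5 * (sum(poisson_inv(rng.random(), lam) for _ in range(steps)) +
                sum(poisson_inv(rng.random(), lam) for _ in range(steps)))
         for _ in range(n)]
var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)
print(var(anti) < var(indep))   # antithetic pairing reduces estimator variance
```

Because only the pseudorandom inputs change, an existing tau-leaping code can be retrofitted this way without touching the propensity calculations, which is the "black box" point the abstract makes.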

  11. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    2000-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most have linear complexity, with the exception of some graph algorithms having complexity O(n(sup 4)) in the worst case.

  12. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    1999-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most have linear complexity, with the exception of some graph algorithms having complexity O(n(sup 4)) in the worst case.

  13. Algorithms for optimizing the treatment of depression: making the right decision at the right time.

    PubMed

    Adli, M; Rush, A J; Möller, H-J; Bauer, M

    2003-11-01

    Medication algorithms for the treatment of depression are designed to optimize both treatment implementation and the appropriateness of treatment strategies. Thus, they are essential tools for treating and avoiding refractory depression. Treatment algorithms are explicit treatment protocols that provide specific therapeutic pathways and decision-making tools at critical decision points throughout the treatment process. The present article provides an overview of major projects of algorithm research in the field of antidepressant therapy. The Berlin Algorithm Project and the Texas Medication Algorithm Project (TMAP) compare algorithm-guided treatments with treatment as usual. The Sequenced Treatment Alternatives to Relieve Depression Project (STAR*D) compares different treatment strategies in treatment-resistant patients.

  14. Static Analysis of Numerical Algorithms

    DTIC Science & Technology

    2016-04-01

    represented by a collection of intervals (one for each variable) or a convex polyhedron (each dimension of the affine space representing a program variable... Another common abstract domain uses a set of linear constraints (i.e. an enclosing polyhedron) to over-approximate the joint values of several
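The interval domain mentioned in this fragment can be sketched in a few lines: each program variable is tracked as a [lo, hi] range, with a join at control-flow merges and a widening operator to guarantee that loop analyses terminate. The loop being analyzed here is illustrative.

```python
class Interval:
    """Toy interval abstract domain: one [lo, hi] range per variable."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        # Abstract addition: add the bounds componentwise.
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def join(self, other):
        # Least upper bound, used at control-flow merge points.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))
    def widen(self, other):
        # Widening: jump unstable bounds to infinity so loops terminate.
        lo = self.lo if other.lo >= self.lo else float("-inf")
        hi = self.hi if other.hi <= self.hi else float("inf")
        return Interval(lo, hi)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Abstractly execute:  x = 0; while ...: x = x + 1
x = Interval(0, 0)
for _ in range(3):                  # fixpoint iteration with widening
    body = x + Interval(1, 1)       # abstract effect of the loop body on x
    x = x.widen(x.join(body))
print(x)   # x is soundly over-approximated as [0, inf]
```

A polyhedral domain refines this by tracking linear constraints among several variables at once, at higher analysis cost.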

  15. An affinity-structure database of helix-turn-helix: DNA complexes with a universal coordinate system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    AlQuraishi, Mohammed; Tang, Shengdong; Xia, Xide

    Molecular interactions between proteins and DNA molecules underlie many cellular processes, including transcriptional regulation, chromosome replication, and nucleosome positioning. Computational analyses of protein-DNA interactions rely on experimental data characterizing known protein-DNA interactions structurally and biochemically. While many databases exist that contain either structural or biochemical data, few integrate these two data sources in a unified fashion. Such integration is becoming increasingly critical with the rapid growth of structural and biochemical data, and the emergence of algorithms that rely on the synthesis of multiple data types to derive computational models of molecular interactions. We have developed an integrated affinity-structure database in which the experimental and quantitative DNA binding affinities of helix-turn-helix proteins are mapped onto the crystal structures of the corresponding protein-DNA complexes. This database provides access to: (i) protein-DNA structures, (ii) quantitative summaries of protein-DNA binding affinities using position weight matrices, and (iii) raw experimental data of protein-DNA binding instances. Critically, this database establishes a correspondence between experimental structural data and quantitative binding affinity data at the single basepair level. Furthermore, we present a novel alignment algorithm that structurally aligns the protein-DNA complexes in the database and creates a unified residue-level coordinate system for comparing the physico-chemical environments at the interface between complexes. Using this unified coordinate system, we compute the statistics of atomic interactions at the protein-DNA interface of helix-turn-helix proteins. We provide an interactive website for visualization, querying, and analyzing this database, and a downloadable version to facilitate programmatic analysis. 
Lastly, this database will facilitate the analysis of protein-DNA interactions and the development of programmatic computational methods that capitalize on integration of structural and biochemical datasets. The database can be accessed at http://ProteinDNA.hms.harvard.edu.

  16. An affinity-structure database of helix-turn-helix: DNA complexes with a universal coordinate system

    DOE PAGES

    AlQuraishi, Mohammed; Tang, Shengdong; Xia, Xide

    2015-11-19

    Molecular interactions between proteins and DNA molecules underlie many cellular processes, including transcriptional regulation, chromosome replication, and nucleosome positioning. Computational analyses of protein-DNA interactions rely on experimental data characterizing known protein-DNA interactions structurally and biochemically. While many databases exist that contain either structural or biochemical data, few integrate these two data sources in a unified fashion. Such integration is becoming increasingly critical with the rapid growth of structural and biochemical data, and the emergence of algorithms that rely on the synthesis of multiple data types to derive computational models of molecular interactions. We have developed an integrated affinity-structure database in which the experimental and quantitative DNA binding affinities of helix-turn-helix proteins are mapped onto the crystal structures of the corresponding protein-DNA complexes. This database provides access to: (i) protein-DNA structures, (ii) quantitative summaries of protein-DNA binding affinities using position weight matrices, and (iii) raw experimental data of protein-DNA binding instances. Critically, this database establishes a correspondence between experimental structural data and quantitative binding affinity data at the single basepair level. Furthermore, we present a novel alignment algorithm that structurally aligns the protein-DNA complexes in the database and creates a unified residue-level coordinate system for comparing the physico-chemical environments at the interface between complexes. Using this unified coordinate system, we compute the statistics of atomic interactions at the protein-DNA interface of helix-turn-helix proteins. We provide an interactive website for visualization, querying, and analyzing this database, and a downloadable version to facilitate programmatic analysis. Lastly, this database will facilitate the analysis of protein-DNA interactions and the development of programmatic computational methods that capitalize on integration of structural and biochemical datasets. The database can be accessed at http://ProteinDNA.hms.harvard.edu.

  17. Possible world based consistency learning model for clustering and classifying uncertain data.

    PubMed

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    The possible world model has been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, which means that the effectiveness relies heavily on the post-processing method and the efficiency is also not very good. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and for classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which can make full use of the information across different possible worlds and thus improve clustering and classification performance. Meanwhile, the model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, thereby ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be obtained directly, without any post-processing procedure. Furthermore, for the clustering and classification tasks, we respectively derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real-world uncertain datasets show that the proposed model outperforms state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.
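    The rank constraint on the Laplacian that this record relies on has a simple spectral interpretation: the multiplicity of the zero eigenvalue of a graph Laplacian equals the number of connected components of the affinity graph. A minimal sketch of that property (illustrative only, not the paper's optimization method; `n_components` is a hypothetical name):

```python
import numpy as np

def n_components(affinity, tol=1e-8):
    """Count connected components as the multiplicity of the Laplacian's zero eigenvalue."""
    degree = np.diag(affinity.sum(axis=1))
    laplacian = degree - affinity
    return int(np.sum(np.linalg.eigvalsh(laplacian) < tol))

# Two disconnected pairs of nodes -> two components (i.e. two "classes").
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(n_components(A))  # → 2
```

    Enforcing rank(L) = n - k on a consensus affinity matrix thus pins the number of clusters to k without any post-processing.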

  18. A segmentation algorithm based on image projection for complex text layout

    NASA Astrophysics Data System (ADS)

    Zhu, Wangsheng; Chen, Qin; Wei, Chuanyi; Li, Ziyang

    2017-10-01

    Segmentation is an important part of layout analysis. Considering the efficiency advantage of the top-down approach and the particularity of the object, we propose a projection-based layout segmentation algorithm. The algorithm first partitions the text image into several columns; then, for each column, it performs scanning projection, dividing the text image into several sub-regions through multiple projections. The experimental results show that this method inherits the rapid calculation speed of projection itself, avoids the effect of arc image information on page segmentation, and can accurately segment text images with complex layouts.
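    The projection step this record builds on can be sketched in a few lines: sum the binarized pixels along one axis and split at the zero-valleys of the resulting profile (a generic projection-profile cut, not the paper's exact column-splitting rules; `split_on_gaps` is an illustrative name):

```python
import numpy as np

def split_on_gaps(binary_img, axis=0):
    """Return (start, end) index pairs of runs with nonzero projection."""
    profile = binary_img.sum(axis=axis)
    mask = profile > 0
    padded = np.concatenate(([False], mask, [False]))
    edges = np.flatnonzero(padded[1:] != padded[:-1])   # rising/falling edges
    return [(int(a), int(b)) for a, b in zip(edges[::2], edges[1::2])]

# Toy "page": two text columns separated by a blank gutter.
img = np.zeros((4, 10), dtype=int)
img[:, 1:4] = 1   # first column of text
img[:, 6:9] = 1   # second column, after the gutter
print(split_on_gaps(img, axis=0))  # → [(1, 4), (6, 9)]
```

    Applying the same cut recursively, alternating axes, yields the column-then-region decomposition the abstract describes.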

  19. Computational design of environmental sensors for the potent opioid fentanyl

    DOE PAGES

    Bick, Matthew J.; Greisen, Per J.; Morey, Kevin J.; ...

    2017-09-19

    Here, we describe the computational design of proteins that bind the potent analgesic fentanyl. Our approach employs a fast docking algorithm to find shape complementary ligand placement in protein scaffolds, followed by design of the surrounding residues to optimize binding affinity. Co-crystal structures of the highest affinity binder reveal a highly preorganized binding site, and an overall architecture and ligand placement in close agreement with the design model. We also use the designs to generate plant sensors for fentanyl by coupling ligand binding to design stability. The method should be generally useful for detecting toxic hydrophobic compounds in the environment.

  20. Computational design of environmental sensors for the potent opioid fentanyl

    PubMed Central

    Morey, Kevin J; Antunes, Mauricio S; La, David; Sankaran, Banumathi; Reymond, Luc; Johnsson, Kai; Medford, June I

    2017-01-01

    We describe the computational design of proteins that bind the potent analgesic fentanyl. Our approach employs a fast docking algorithm to find shape complementary ligand placement in protein scaffolds, followed by design of the surrounding residues to optimize binding affinity. Co-crystal structures of the highest affinity binder reveal a highly preorganized binding site, and an overall architecture and ligand placement in close agreement with the design model. We use the designs to generate plant sensors for fentanyl by coupling ligand binding to design stability. The method should be generally useful for detecting toxic hydrophobic compounds in the environment. PMID:28925919

  1. Computational design of environmental sensors for the potent opioid fentanyl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bick, Matthew J.; Greisen, Per J.; Morey, Kevin J.

    Here, we describe the computational design of proteins that bind the potent analgesic fentanyl. Our approach employs a fast docking algorithm to find shape complementary ligand placement in protein scaffolds, followed by design of the surrounding residues to optimize binding affinity. Co-crystal structures of the highest affinity binder reveal a highly preorganized binding site, and an overall architecture and ligand placement in close agreement with the design model. We also use the designs to generate plant sensors for fentanyl by coupling ligand binding to design stability. The method should be generally useful for detecting toxic hydrophobic compounds in the environment.

  2. Solving a class of generalized fractional programming problems using the feasibility of linear programs.

    PubMed

    Shen, Peiping; Zhang, Tongli; Wang, Chunfeng

    2017-01-01

    This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. Based on a computational complexity analysis, it is proved that the proposed algorithm is a fully polynomial time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require assumptions of quasi-concavity or low rank on the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.

  3. Restoration algorithms for imaging through atmospheric turbulence

    DTIC Science & Technology

    2017-02-18

    …the Fourier spectrum of each frame. The reconstructed image is then obtained by taking the inverse Fourier transform of the average of all processed spectra, with weights w_i(ξ) = G_σ(|F(v_i)(ξ)|^p) / Σ_{j=1}^{M} G_σ(|F(v_j)(ξ)|^p), where F denotes the Fourier transform (ξ are the frequencies) and G_σ is a Gaussian filter. … frames are registered using a combination of the SIFT [26] and ORSA [14] algorithms in order to remove affine transformations (translations, rotations and homothety). The authors…
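    The spectrum-weighted averaging in this excerpt can be sketched as follows. This is a hedged reading of the fragment: the Gaussian smoothing G_σ of the weight magnitudes is omitted (treated as the identity) for brevity, and `lucky_fourier_average` is an illustrative name, not from the report:

```python
import numpy as np

def lucky_fourier_average(frames, p=2):
    """Weight each frame's Fourier coefficients by a power of their magnitude."""
    spectra = [np.fft.fft2(f) for f in frames]
    weights = [np.abs(s) ** p for s in spectra]   # G_sigma smoothing omitted here
    total = sum(weights)
    total[total == 0] = 1.0                       # guard frequencies with no energy
    avg = sum((w / total) * s for w, s in zip(weights, spectra))
    return np.real(np.fft.ifft2(avg))

# Sanity check: averaging identical frames returns the frame itself.
frames = [np.ones((4, 4)), np.ones((4, 4))]
restored = lucky_fourier_average(frames)
print(np.allclose(restored, 1.0))  # → True
```

    The weighting favors, at each frequency, whichever frames happen to be sharp there, which is the "lucky imaging" intuition behind this family of turbulence-restoration methods.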

  4. Proposal of an Algorithm to Synthesize Music Suitable for Dance

    NASA Astrophysics Data System (ADS)

    Morioka, Hirofumi; Nakatani, Mie; Nishida, Shogo

    This paper proposes an algorithm for synthesizing music suitable for the emotions in moving pictures. Our goal is to support multimedia content creation: web page design, animation films, and so on. Here we adopt a human dance as the moving picture to examine the availability of our method, because we think a dance image has a high affinity with music. The algorithm is composed of three modules: the first computes emotions from an input dance image, the second computes emotions from music in the database, and the last selects music suitable for the input dance via an interface of emotion.

  5. Shaking video stabilization with content completion

    NASA Astrophysics Data System (ADS)

    Peng, Yi; Ye, Qixiang; Liu, Yanmei; Jiao, Jianbin

    2009-01-01

    A new stabilization algorithm to counterbalance the shaking motion in a video, based on the classical Kanade-Lucas-Tomasi (KLT) method, is presented in this paper. Feature points are evaluated with the law of large numbers and a clustering algorithm to reduce the side effect of the moving foreground. Analysis of the change of motion direction is also carried out to detect the existence of shaking. For video clips with detected shaking, an affine transformation is performed to warp the current frame to the reference one. In addition, the content of a frame that goes missing during stabilization is completed with optical flow analysis and a mosaicking operation. Experiments on video clips demonstrate the effectiveness of the proposed algorithm.
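    The warp-estimation core of such a stabilizer can be sketched directly: given matched feature points from a tracker such as KLT, the 2x3 affine matrix mapping the current frame onto the reference is a linear least-squares problem (an illustrative sketch, not the paper's full pipeline; `estimate_affine` is a hypothetical name):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine matrix mapping src points onto dst (N >= 3)."""
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous source points
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M ≈ dst
    return M.T                                     # [[a, b, tx], [c, d, ty]]

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
dst = src @ np.array([[2.0, 0.0], [0.0, 2.0]]) + np.array([3.0, -1.0])
M = estimate_affine(src, dst)
print(np.round(M, 6))  # recovers scale 2 and translation (3, -1)
```

    With more than three correspondences the system is overdetermined, which is what lets outlier-robust variants down-weight points on the moving foreground.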

  6. Conservation laws and symmetries in stochastic thermodynamics.

    PubMed

    Polettini, Matteo; Bulnes-Cuetara, Gregory; Esposito, Massimiliano

    2016-11-01

    Phenomenological nonequilibrium thermodynamics describes how fluxes of conserved quantities, such as matter, energy, and charge, flow from outer reservoirs across a system and how they irreversibly degrade from one form to another. Stochastic thermodynamics is formulated in terms of probability fluxes circulating in the system's configuration space. The consistency of the two frameworks is granted by the condition of local detailed balance, which specifies the amount of physical quantities exchanged with the reservoirs during single transitions between configurations. We demonstrate that the topology of the configuration space crucially determines the number of independent thermodynamic affinities (forces) that the reservoirs generate across the system, and we provide a general algorithm that produces the fundamental affinities and their conjugate currents contributing to the total dissipation, based on the interplay between macroscopic conservation laws for the currents and microscopic symmetries of the affinities.

  7. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James W.

    This project addresses both communication-avoiding algorithms, and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (e.g. A(i), B(i, j+k, k+3*m-7, …), etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a subset of the IEEE Floating Point Standard 754-2008, uses just 6 words to represent a “reproducible accumulator,” and requires just one read-only pass over the data, or one reduction in parallel. New instructions based on this work are being considered for inclusion in the future IEEE 754-2018 floating-point standard, and new reproducible BLAS are being considered for the next version of the BLAS standard.
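    The non-associativity problem this record tackles is easy to demonstrate. As a minimal stand-in for the 6-word reproducible accumulator (which is considerably more involved), Python's `math.fsum` computes the correctly rounded exact sum and is therefore independent of summation order, while naive left-to-right addition is not:

```python
import math

# Naive floating-point summation depends on operand order; an exactly
# rounded sum like math.fsum gives bitwise-identical results in any order.
xs = [1.0, 1e100, 1.0, -1e100]
ys = list(reversed(xs))

print(sum(xs), sum(ys))              # → 0.0 1.0  (order-dependent!)
print(math.fsum(xs), math.fsum(ys))  # → 2.0 2.0  (reproducible)
```

    Exactness is a sufficient but expensive route to reproducibility; the described accumulator instead pre-rounds operands so that a fixed-size summary reproduces the same result in one pass.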

  8. Characterizing informative sequence descriptors and predicting binding affinities of heterodimeric protein complexes.

    PubMed

    Srinivasulu, Yerukala Sathipati; Wang, Jyun-Rong; Hsu, Kai-Ti; Tsai, Ming-Ju; Charoenkwan, Phasit; Huang, Wen-Lin; Huang, Hui-Ling; Ho, Shinn-Ying

    2015-01-01

    Protein-protein interactions (PPIs) are involved in various biological processes, and the underlying mechanism of the interactions plays a crucial role in therapeutics and protein engineering. Most machine learning approaches have been developed for predicting the binding affinity of protein-protein complexes based on structural and functional information. This work aims to predict the binding affinity of heterodimeric protein complexes from sequences only. It proposes a support vector machine (SVM) based binding affinity classifier, called SVM-BAC, to classify heterodimeric protein complexes based on the prediction of their binding affinity. SVM-BAC identified 14 of 580 sequence descriptors (physicochemical, energetic and conformational properties of the 20 amino acids) to classify 216 heterodimeric protein complexes into low and high binding affinity. SVM-BAC yielded training accuracy, sensitivity, specificity, AUC and test accuracy of 85.80%, 0.89, 0.83, 0.86 and 83.33%, respectively, better than existing machine learning algorithms. The 14 features and support vector regression were further used to estimate the binding affinities (pKd) of 200 heterodimeric protein complexes. Prediction performance in a jackknife test was a correlation coefficient of 0.34 and a mean absolute error of 1.4. We further analyze three informative physicochemical properties according to their contribution to prediction performance. Results reveal that the following properties are effective in predicting the binding affinity of heterodimeric protein complexes: apparent partition energy based on buried molar fractions, relations between chemical structure and biological activity in principal component analysis IV, and normalized frequency of beta turn. The proposed sequence-based prediction method SVM-BAC uses an optimal feature selection method to identify 14 informative features to classify and predict the binding affinity of heterodimeric protein complexes. The characterization analysis revealed that the average numbers of beta turns and hydrogen bonds at protein-protein interfaces are higher in high binding affinity complexes than in low binding affinity complexes.

  9. Characterizing informative sequence descriptors and predicting binding affinities of heterodimeric protein complexes

    PubMed Central

    2015-01-01

    Background Protein-protein interactions (PPIs) are involved in various biological processes, and the underlying mechanism of the interactions plays a crucial role in therapeutics and protein engineering. Most machine learning approaches have been developed for predicting the binding affinity of protein-protein complexes based on structural and functional information. This work aims to predict the binding affinity of heterodimeric protein complexes from sequences only. Results This work proposes a support vector machine (SVM) based binding affinity classifier, called SVM-BAC, to classify heterodimeric protein complexes based on the prediction of their binding affinity. SVM-BAC identified 14 of 580 sequence descriptors (physicochemical, energetic and conformational properties of the 20 amino acids) to classify 216 heterodimeric protein complexes into low and high binding affinity. SVM-BAC yielded training accuracy, sensitivity, specificity, AUC and test accuracy of 85.80%, 0.89, 0.83, 0.86 and 83.33%, respectively, better than existing machine learning algorithms. The 14 features and support vector regression were further used to estimate the binding affinities (pKd) of 200 heterodimeric protein complexes. Prediction performance in a jackknife test was a correlation coefficient of 0.34 and a mean absolute error of 1.4. We further analyze three informative physicochemical properties according to their contribution to prediction performance. Results reveal that the following properties are effective in predicting the binding affinity of heterodimeric protein complexes: apparent partition energy based on buried molar fractions, relations between chemical structure and biological activity in principal component analysis IV, and normalized frequency of beta turn. Conclusions The proposed sequence-based prediction method SVM-BAC uses an optimal feature selection method to identify 14 informative features to classify and predict the binding affinity of heterodimeric protein complexes. The characterization analysis revealed that the average numbers of beta turns and hydrogen bonds at protein-protein interfaces are higher in high binding affinity complexes than in low binding affinity complexes. PMID:26681483

  10. Forming The Guild: Star Power and Rethinking Projective Identity in Affinity Spaces

    ERIC Educational Resources Information Center

    Ellcessor, Elizabeth; Duncan, Sean C.

    2011-01-01

    This paper expands on Gee's (2004) notion of "affinity spaces" by placing them in the context of games, media stars, and their fans and combining cultural studies and new literacies approaches. The Guild, a web series about the misadventures of MMOplayers, written by and starring actor, writer, producer, and gamer Felicia Day, is…

  11. An efficient multi-resolution GA approach to dental image alignment

    NASA Astrophysics Data System (ADS)

    Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany

    2006-02-01

    Automating the process of postmortem identification of individuals using dental records is receiving an increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth, we efficiently search the 6D space of affine parameters using GA progressively across multi-resolution image versions, and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
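    The similarity measure at the heart of this record, the Hausdorff distance between edge-point sets, is compact enough to sketch (the MR-GA search over affine parameters is omitted; `hausdorff` is an illustrative name):

```python
import numpy as np

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between point sets P (N,2) and Q (M,2)."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

P = np.array([[0.0, 0.0], [1.0, 0.0]])
Q = np.array([[0.0, 0.0], [1.0, 1.0]])
print(hausdorff(P, Q))  # → 1.0
```

    In the alignment setting, the GA proposes a 6-parameter affine transform, applies it to the query tooth's edge points, and scores the candidate by this distance against the reference tooth.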

  12. The research on the mean shift algorithm for target tracking

    NASA Astrophysics Data System (ADS)

    CAO, Honghong

    2017-06-01

    The traditional mean shift algorithm for target tracking is effective and highly real-time, but it still has some shortcomings: it easily falls into local optima during tracking, its effectiveness is weak when the object moves fast, and the size of its tracking window never changes, so the method fails when the size of the moving object changes. As a result, we propose a new method: we use a particle swarm optimization algorithm to optimize the mean shift algorithm for target tracking, while SIFT (scale-invariant feature transform) and an affine transformation make the size of the tracking window adaptive. Finally, we evaluate the method with comparative experiments. Experimental results indicate that the proposed method can effectively track the object while adapting the size of the tracking window.
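    The density-seeking core of mean shift, without the color histograms or the PSO/SIFT extensions this record adds, is just an iterated kernel-weighted mean (a bare sketch; `mean_shift` here is an illustrative simplification):

```python
import numpy as np

def mean_shift(points, start, bandwidth=1.0, iters=100):
    """Shift a window center to the kernel-weighted mean until it settles."""
    center = np.asarray(start, dtype=float)
    for _ in range(iters):
        sq = np.sum((points - center) ** 2, axis=1)
        w = np.exp(-sq / (2.0 * bandwidth ** 2))       # Gaussian kernel weights
        new_center = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(new_center - center) < 1e-6:
            break
        center = new_center
    return center

# Three points cluster near the origin; one outlier sits far away.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
center = mean_shift(pts, start=[0.5, 0.5])
print(np.round(center, 2))   # settles on the dense cluster near the origin
```

    The fixed bandwidth here illustrates exactly the weakness the abstract cites: the window size never adapts, which is what the SIFT-plus-affine extension addresses.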

  13. Simultaneous and semi-alternating projection algorithms for solving split equality problems.

    PubMed

    Dong, Qiao-Li; Jiang, Dan

    2018-01-01

    In this article, we first introduce two simultaneous projection algorithms for solving the split equality problem by using a new choice of the stepsize, and then propose two semi-alternating projection algorithms. The weak convergence of the proposed algorithms is analyzed under standard conditions. As applications, we extend the results to solve the split feasibility problem. Finally, a numerical example is presented to illustrate the efficiency and advantage of the proposed algorithms.
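    The classical scheme these algorithms refine can be shown in miniature: alternately project onto two convex sets and the iterates approach their intersection. This toy uses two lines in the plane (not the paper's split equality formulation or stepsize choice):

```python
import numpy as np

def project_onto_line(x, a, b):
    """Orthogonal projection of point x onto the line through a and b."""
    d = b - a
    return a + (np.dot(x - a, d) / np.dot(d, d)) * d

# Alternately project onto the x-axis and the line y = x; the iterates
# converge to a point in the intersection (here, the origin).
x = np.array([3.0, 4.0])
axis_x = (np.array([0.0, 0.0]), np.array([1.0, 0.0]))
diag = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))
for _ in range(100):
    x = project_onto_line(x, *axis_x)
    x = project_onto_line(x, *diag)
print(np.round(x, 6))  # → [0. 0.]
```

    Simultaneous variants average the two projections instead of alternating them, which is the design axis the article's four algorithms explore.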

  14. Classifier ensemble based on feature selection and diversity measures for predicting the affinity of A(2B) adenosine receptor antagonists.

    PubMed

    Bonet, Isis; Franco-Montero, Pedro; Rivero, Virginia; Teijeira, Marta; Borges, Fernanda; Uriarte, Eugenio; Morales Helguera, Aliuska

    2013-12-23

    A(2B) adenosine receptor antagonists may be beneficial in treating diseases like asthma, diabetes, diabetic retinopathy, and certain cancers. This has stimulated research for the development of potent ligands for this subtype, based on quantitative structure-affinity relationships. In this work, a new ensemble machine learning algorithm is proposed for classification and prediction of the ligand-binding affinity of A(2B) adenosine receptor antagonists. This algorithm is based on the training of different classifier models with multiple training sets (composed of the same compounds but represented by diverse features). The k-nearest neighbor, decision trees, neural networks, and support vector machines were used as single classifiers. To select the base classifiers for combining into the ensemble, several diversity measures were employed. The final multiclassifier prediction results were computed from the output obtained by using a combination of selected base classifiers output, by utilizing different mathematical functions including the following: majority vote, maximum and average probability. In this work, 10-fold cross- and external validation were used. The strategy led to the following results: i) the single classifiers, together with previous features selections, resulted in good overall accuracy, ii) a comparison between single classifiers, and their combinations in the multiclassifier model, showed that using our ensemble gave a better performance than the single classifier model, and iii) our multiclassifier model performed better than the most widely used multiclassifier models in the literature. The results and statistical analysis demonstrated the supremacy of our multiclassifier approach for predicting the affinity of A(2B) adenosine receptor antagonists, and it can be used to develop other QSAR models.

  15. Sequence2Vec: a novel embedding approach for modeling transcription factor binding affinity landscape.

    PubMed

    Dai, Hanjun; Umarov, Ramzan; Kuwahara, Hiroyuki; Li, Yu; Song, Le; Gao, Xin

    2017-11-15

    An accurate characterization of transcription factor (TF)-DNA affinity landscape is crucial to a quantitative understanding of the molecular mechanisms underpinning endogenous gene regulation. While recent advances in biotechnology have brought the opportunity for building binding affinity prediction methods, the accurate characterization of TF-DNA binding affinity landscape still remains a challenging problem. Here we propose a novel sequence embedding approach for modeling the transcription factor binding affinity landscape. Our method represents DNA binding sequences as a hidden Markov model which captures both position specific information and long-range dependency in the sequence. A cornerstone of our method is a novel message passing-like embedding algorithm, called Sequence2Vec, which maps these hidden Markov models into a common nonlinear feature space and uses these embedded features to build a predictive model. Our method is a novel combination of the strength of probabilistic graphical models, feature space embedding and deep learning. We conducted comprehensive experiments on over 90 large-scale TF-DNA datasets which were measured by different high-throughput experimental technologies. Sequence2Vec outperforms alternative machine learning methods as well as the state-of-the-art binding affinity prediction methods. Our program is freely available at https://github.com/ramzan1990/sequence2vec. xin.gao@kaust.edu.sa or lsong@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  16. Geometry-aware multiscale image registration via OBBTree-based polyaffine log-demons.

    PubMed

    Seiler, Christof; Pennec, Xavier; Reyes, Mauricio

    2011-01-01

    Non-linear image registration is an important tool in many areas of image analysis. For instance, in morphometric studies of a population of brains, free-form deformations between images are analyzed to describe the structural anatomical variability. Such a simple deformation model is justified by the absence of an easy expressible prior about the shape changes. Applying the same algorithms used in brain imaging to orthopedic images might not be optimal due to the difference in the underlying prior on the inter-subject deformations. In particular, using an un-informed deformation prior often leads to local minima far from the expected solution. To improve robustness and promote anatomically meaningful deformations, we propose a locally affine and geometry-aware registration algorithm that automatically adapts to the data. We build upon the log-domain demons algorithm and introduce a new type of OBBTree-based regularization in the registration with a natural multiscale structure. The regularization model is composed of a hierarchy of locally affine transformations via their logarithms. Experiments on mandibles show improved accuracy and robustness when used to initialize the demons, and even similar performance by direct comparison to the demons, with a significantly lower degree of freedom. This closes the gap between polyaffine and non-rigid registration and opens new ways to statistically analyze the registration results.

  17. Improved full analytical polygon-based method using Fourier analysis of the three-dimensional affine transformation.

    PubMed

    Pan, Yijie; Wang, Yongtian; Liu, Juan; Li, Xin; Jia, Jia

    2014-03-01

    Previous research [Appl. Opt.52, A290 (2013)] has revealed that Fourier analysis of three-dimensional affine transformation theory can be used to improve the computation speed of the traditional polygon-based method. In this paper, we continue our research and propose an improved full analytical polygon-based method developed upon this theory. Vertex vectors of primitive and arbitrary triangles and the pseudo-inverse matrix were used to obtain an affine transformation matrix representing the spatial relationship between the two triangles. With this relationship and the primitive spectrum, we analytically obtained the spectrum of the arbitrary triangle. This algorithm discards low-level angular dependent computations. In order to add diffusive reflection to each arbitrary surface, we also propose a whole matrix computation approach that takes advantage of the affine transformation matrix and uses matrix multiplication to calculate shifting parameters of similar sub-polygons. The proposed method improves hologram computation speed for the conventional full analytical approach. Optical experimental results are demonstrated which prove that the proposed method can effectively reconstruct three-dimensional scenes.
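    The pseudo-inverse step this record describes, recovering the affine matrix relating two triangles from their vertex vectors, can be sketched in 2D (the paper works with 3D triangles and propagates the result through the spectrum computation, which is omitted here):

```python
import numpy as np

prim = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # primitive triangle
arb = np.array([[2.0, 1.0], [4.0, 1.0], [2.0, 5.0]])    # arbitrary triangle

P = np.hstack([prim, np.ones((3, 1))])   # homogeneous vertex matrix (3x3)
M = np.linalg.pinv(P) @ arb              # 3x2 affine parameters
print(np.allclose(P @ M, arb))           # → True: vertices map exactly
```

    Because three non-collinear vertices determine the affine map uniquely, the primitive triangle's precomputed spectrum can then be carried to every arbitrary triangle through M alone, which is what removes the angular-dependent computations.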

  18. A Log-Euclidean polyaffine registration for articulated structures in medical images.

    PubMed

    Martín-Fernández, Miguel Angel; Martín-Fernández, Marcos; Alberola-López, Carlos

    2009-01-01

    In this paper we generalize the Log-Euclidean polyaffine registration framework of Arsigny et al. to deal with articulated structures. This framework has very useful properties, as it guarantees the invertibility of smooth geometric transformations. In articulated registration, a skeleton model is defined for rigid structures such as bones. The final transformation is affine for the bones and elastic for other tissues in the image. We extend Arsigny et al.'s method to deal with locally affine registration of pairs of wires. This opens the possibility of using this registration framework for articulated structures. In this context, the design of the weighting functions, which merge the affine transformations defined for each pair of wires, has a great impact not only on the final result of the registration algorithm, but also on the invertibility of the global elastic transformation. Several experiments, using both synthetic images and hand radiographs, are also presented.

  19. Elastic models: a comparative study applied to retinal images.

    PubMed

    Karali, E; Lambropoulou, S; Koutsouris, D

    2011-01-01

    In this work various parametric elastic-model methods are compared, namely the classical snake, the gradient vector field snake (GVF snake) and the topology-adaptive snake (t-snake), as well as the self-affine mapping system as an alternative to elastic models. We also give a brief overview of the methods used. The self-affine mapping system is implemented using an adaptive scheme with minimum distance as the optimization criterion, which is more suitable for weak-edge detection. All methods are applied to glaucomatous retinal images with the purpose of segmenting the optic disc. The methods are compared in terms of segmentation accuracy and speed, as derived from cross-correlation coefficients between real and algorithm-extracted contours and from segmentation time, respectively. As a result, the self-affine mapping system offers adequate segmentation time and accuracy, together with significant independence from initialization.

  20. A Lyapunov method for stability analysis of piecewise-affine systems over non-invariant domains

    NASA Astrophysics Data System (ADS)

    Rubagotti, Matteo; Zaccarian, Luca; Bemporad, Alberto

    2016-05-01

    This paper analyses stability of discrete-time piecewise-affine systems defined on possibly non-invariant domains, taking into account the possible presence of multiple dynamics in each of the polytopic regions of the system. An algorithm based on linear programming is proposed to prove exponential stability of the origin and to find a positively invariant estimate of its region of attraction. The results are based on the definition of a piecewise-affine Lyapunov function, which is in general discontinuous on the boundaries of the regions. The proposed method is proven to lead to feasible solutions in a broader range of cases than a previously proposed approach. Two numerical examples are shown, including a case where the proposed method is applied to a closed-loop system to which model predictive control was applied without an a priori guarantee of stability.

  1. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding it. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835

  2. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    PubMed

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or matrix determinant computation. These are difficult to program and especially hard to realize in hardware, and the computational cost of the algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion: it first computes the final orthogonal vector for each endmember spectrum via the Gram-Schmidt process, and these orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared with the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement in hardware; it completes the orthogonalization process through repeated vector operations, making it easy to apply in both parallel computation and hardware. The reasonableness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity is the lowest of the three. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
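    The projection-ratio idea can be sketched in a few lines of pure Python (a hedged illustration of the general principle, not the authors' implementation; two endmembers, noiseless pixel):

```python
def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def orthogonal_residual(m, others):
    """Component of endmember m orthogonal to the span of the other
    endmembers, via the Gram-Schmidt process."""
    basis = []
    for o in others:
        v = list(o)
        for b in basis:
            c = dot(v, b)/dot(b, b)
            v = [vi - c*bi for vi, bi in zip(v, b)]
        if dot(v, v) > 1e-12:
            basis.append(v)
    r = list(m)
    for b in basis:
        c = dot(r, b)/dot(b, b)
        r = [ri - c*bi for ri, bi in zip(r, b)]
    return r

def unmix(pixel, endmembers):
    """Unconstrained abundances as ratios of projections onto each
    endmember's orthogonal vector."""
    abundances = []
    for i, m in enumerate(endmembers):
        u = orthogonal_residual(m, endmembers[:i] + endmembers[i+1:])
        abundances.append(dot(pixel, u)/dot(m, u))
    return abundances

e1, e2 = [1.0, 0.0, 0.0], [1.0, 1.0, 0.0]      # non-orthogonal endmembers
pixel = [0.3*a + 0.7*b for a, b in zip(e1, e2)]
abundances = unmix(pixel, [e1, e2])            # recovers [0.3, 0.7]
```

    Because u is orthogonal to every other endmember, the projection of the pixel onto u isolates the contribution of a single endmember, with only dot products and vector subtractions involved.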

  3. Project resource reallocation algorithm

    NASA Technical Reports Server (NTRS)

    Myers, J. E.

    1981-01-01

    A methodology for adjusting baseline cost estimates according to project schedule changes is described. An algorithm which performs a linear expansion or contraction of the baseline project resource distribution in proportion to the project schedule expansion or contraction is presented. Input to the algorithm consists of the deck of cards (PACE input data) prepared for the baseline project schedule as well as a specification of the nature of the baseline schedule change. Output of the algorithm is a new deck of cards with all work breakdown structure block and element of cost estimates redistributed for the new project schedule. This new deck can be processed through PACE to produce a detailed cost estimate for the new schedule.
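    The linear expansion or contraction described above amounts to resampling the cumulative resource curve onto the new schedule length while preserving the total (a simplified illustration; the PACE card handling is omitted and the function name is ours):

```python
def rescale_distribution(costs, new_len):
    """Stretch or compress a per-period cost profile to new_len periods,
    preserving the total, by resampling the cumulative cost curve."""
    n = len(costs)
    cum = [0.0]
    for c in costs:
        cum.append(cum[-1] + c)

    def cum_at(t):                       # piecewise-linear cumulative cost
        if t >= n:
            return cum[n]
        i = int(t)
        return cum[i] + (t - i)*(cum[i + 1] - cum[i])

    scale = n / new_len
    return [cum_at((k + 1)*scale) - cum_at(k*scale) for k in range(new_len)]

# A 3-period profile expanded to 6 periods: each period splits evenly,
# and the 60-unit total is preserved.
expanded = rescale_distribution([10, 20, 30], 6)   # [5, 5, 10, 10, 15, 15]
```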

  4. Enhanced Image-Aided Navigation Algorithm with Automatic Calibration and Affine Distortion Prediction

    DTIC Science & Technology

    2012-03-01


  5. 01-ERD-111 - The Development of Synthetic High Affinity Ligands

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, J; Balhorn, R; Cosman, M

    2004-02-05

    The aim of this project was to develop Synthetic High-Affinity Ligands (SHALs), which bind with high affinity and specificity to proteins of interest for national security and cancer therapy applications. The aim of producing synthetic ligands for sensory devices, as an alternative to antibody-based detection assays and therapeutic agents, is to overcome the drawbacks associated with antibody-based detection in next-generation sensors and systems. The focus area of the project was the chemical synthesis of the SHALs. The project concentrated on two different protein targets: (a) the C fragment of tetanus and botulinum toxin, potential biowarfare agents; a SHAL for tetanus or botulinum toxin would be incorporated into a sensory device for the toxins; and (b) HLA-DR10, a protein found in high abundance on the surface of non-Hodgkin's lymphoma cells; a SHAL specific to this tumor marker, labeled with a radionuclide, would enable the targeted delivery of radiation therapy to metastatic disease. The technical approach used to develop a SHAL for each protein target is described in more detail below. In general, the development of a SHAL requires a combination of computational modeling techniques, modern nuclear magnetic resonance (NMR) spectroscopy and synthetic chemistry.

  6. Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.

    PubMed

    Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai

    2017-07-15

    Parsimony, including sparsity and low rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex ℓ1-norm or nuclear-norm constraints. However, the results obtained by convex optimization are usually suboptimal with respect to the original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm is proposed by integrating ℓp-norm and Schatten p-norm constraints. The affinity graph so obtained can better capture the local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is shown to be more effective and robust compared with five existing algorithms.

  7. Feature selection and classification of protein-protein complexes based on their binding affinities using machine learning approaches.

    PubMed

    Yugandhar, K; Gromiha, M Michael

    2014-09-01

    Protein-protein interactions are intrinsic to virtually every cellular process. Predicting the binding affinity of protein-protein complexes is one of the challenging problems in computational and molecular biology. In this work, we related sequence features of protein-protein complexes to their binding affinities using machine learning approaches. We set up a database of 185 protein-protein complexes for which the interacting pairs are heterodimers and experimental binding affinities are available. In addition, we developed a set of 610 features from the sequences of the protein complexes and utilized the Ranker search method, a combination of an attribute evaluator and the Ranker method, for selecting specific features. We analyzed several machine learning algorithms to discriminate protein-protein complexes into high- and low-affinity groups based on their Kd values. Our results showed a 10-fold cross-validation accuracy of 76.1% with a combination of nine features using support vector machines. Further, we observed an accuracy of 83.3% on an independent test set of 30 complexes. We suggest that our method would serve as an effective tool for identifying the interacting partners in protein-protein interaction networks and human-pathogen interactions based on the strength of interactions. © 2014 Wiley Periodicals, Inc.

  8. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projection onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancing denoising, with each segment performing nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient-study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
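    The POCS principle behind the derivation, alternately projecting onto convex sets until the iterate lies in their intersection, can be illustrated with two simple sets (a generic sketch, not the paper's CT pipeline):

```python
def project_hyperplane(x, a, b):
    """Euclidean projection of x onto the hyperplane {z : a.z = b}."""
    aa = sum(ai*ai for ai in a)
    lam = (sum(ai*xi for ai, xi in zip(a, x)) - b)/aa
    return [xi - lam*ai for xi, ai in zip(x, a)]

def project_box(x, lo, hi):
    """Euclidean projection of x onto the box [lo, hi]^n."""
    return [min(max(xi, lo), hi) for xi in x]

# Alternate projections onto the box [0,1]^2 and the line x + y = 1;
# the iterate converges to a point lying in both sets.
x = [5.0, -3.0]
for _ in range(100):
    x = project_box(project_hyperplane(x, [1.0, 1.0], 1.0), 0.0, 1.0)
```

    In the paper's setting the "projections" are a windowed FBP step (data fidelity) and a nonlinear denoising filter, but the alternation pattern is the same.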

  9. Unobtrusive Multi-Static Serial LiDAR Imager (UMSLI) First Generation Shape-Matching Based Classifier for 2D Contours

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Zheng; Ouyang, Bing; Principe, Jose

    A multi-static serial LiDAR system prototype was developed under DE-EE0006787 to detect, classify, and record interactions of marine life with marine hydrokinetic generation equipment. This software implements a shape-matching-based classifier algorithm for the underwater automated detection of marine life for that system. In addition to applying shape descriptors, the algorithm adopts information-theoretic-learning-based affine shape registration, improving the point correspondences found by the shape descriptors as well as the final similarity measure.

  10. A dual method for optimal control problems with initial and final boundary constraints.

    NASA Technical Reports Server (NTRS)

    Pironneau, O.; Polak, E.

    1973-01-01

    This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.

  11. Increasing Prediction the Original Final Year Project of Student Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Saragih, Rijois Iboy Erwin; Turnip, Mardi; Sitanggang, Delima; Aritonang, Mendarissan; Harianja, Eva

    2018-04-01

    The final-year project is very important for a student's graduation. Unfortunately, many students do not take their final projects seriously, and many ask someone else to do the work for them. In this paper, an application of genetic algorithms to predict whether a student's final-year project is original is proposed. In the simulation, final-project data from the last 5 years were collected. The genetic algorithm has several operators, namely population, selection, crossover, and mutation. The results suggest that the genetic algorithm predicts better than other comparable models; experimental results showed a prediction accuracy of 70%, higher than that of previous research.
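    The four operators named above fit into a short sketch (a toy bit-string fitness problem; all parameters are illustrative, not the paper's configuration):

```python
import random

random.seed(0)
TARGET = [1]*16                      # toy optimum: the all-ones string

def fitness(ind):
    return sum(a == b for a, b in zip(ind, TARGET))

def evolve(pop_size=30, gens=60, pmut=0.05):
    # population: random bit-strings
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size//2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))   # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < pmut else g
                     for g in child]                 # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    Keeping the parents unchanged each generation (elitism) guarantees the best fitness never decreases.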

  12. The scattering variety

    NASA Astrophysics Data System (ADS)

    He, Yang-Hui; Matti, Cyril; Sun, Chuang

    2014-10-01

    The so-called Scattering Equations which govern the kinematics of the scattering of massless particles in arbitrary dimensions have recently been cast into a system of homogeneous polynomials. We study these as affine and projective geometries which we call Scattering Varieties by analyzing such properties as Hilbert series, Euler characteristic and singularities. Interestingly, we find structures such as affine Calabi-Yau threefolds as well as singular K3 and Fano varieties.

  13. Learning a peptide-protein binding affinity predictor with kernel ridge regression

    PubMed Central

    2013-01-01

    Background The cellular function of a vast majority of proteins is performed through physical interactions with other biomolecules, which, most of the time, are other proteins. Peptides represent templates of choice for mimicking a secondary structure in order to modulate protein-protein interactions. They are thus an interesting class of therapeutics since they also display strong activity, high selectivity, low toxicity and few drug-drug interactions. Furthermore, predicting peptides that would bind to specific MHC alleles would be of tremendous benefit for improving vaccine-based therapy and possibly generating antibodies with greater affinity. Modern computational methods have the potential to accelerate and lower the cost of drug and vaccine discovery by selecting potential compounds for testing in silico prior to biological validation. Results We propose a specialized string kernel for small biomolecules, peptides and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and elegantly generalizes eight kernels, including the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function. We provide a low-complexity dynamic programming algorithm for the exact computation of the kernel and a linear-time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of predicting the binding affinity of any peptide to any protein with reasonable accuracy. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and three Quantitative Structure Affinity Model benchmark datasets.
Conclusion On all benchmarks, our method significantly (p-value ≤ 0.057) outperforms the current state-of-the-art methods at predicting peptide-protein binding affinities. The proposed approach is flexible and can be applied to predict any quantitative biological activity. Moreover, generating reliable peptide-protein binding affinities will also improve system biology modelling of interaction pathways. Lastly, the method should be of value to a large segment of the research community with the potential to accelerate the discovery of peptide-based drugs and facilitate vaccine development. The proposed kernel is freely available at http://graal.ift.ulaval.ca/downloads/gs-kernel/. PMID:23497081
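    The kernel ridge regression component can be illustrated generically (a standard RBF kernel stands in for the paper's specialized string kernel; pure Python with a small dense solve, all names ours):

```python
import math

def rbf(x, y, gamma=1.0):
    """Radial basis function kernel between two feature tuples."""
    return math.exp(-gamma*sum((a - b)**2 for a, b in zip(x, y)))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col]/M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f*M[col][c]
    x = [0.0]*n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j]*x[j] for j in range(i + 1, n)))/M[i][i]
    return x

def krr_fit(X, y, lam=1e-3):
    """Dual coefficients alpha = (K + lam*I)^-1 y."""
    K = [[rbf(xi, xj) + (lam if i == j else 0.0)
          for j, xj in enumerate(X)] for i, xi in enumerate(X)]
    return solve(K, y)

def krr_predict(X, alpha, x):
    return sum(a*rbf(xi, x) for a, xi in zip(alpha, X))

X = [(0.0,), (1.0,), (2.0,)]      # toy "sequences" as feature tuples
y = [0.0, 1.0, 4.0]               # toy binding affinities
alpha = krr_fit(X, y)
```

    Swapping `rbf` for any positive-definite kernel, such as a string kernel over peptides, leaves the ridge-regression machinery unchanged, which is the modularity the paper exploits.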

  14. MCL-CAw: a refinement of MCL for detecting yeast complexes from weighted PPI networks by incorporating core-attachment structure

    PubMed Central

    2010-01-01

    Background The reconstruction of protein complexes from the physical interactome of organisms serves as a building block towards understanding the higher-level organization of the cell. Over the past few years, several independent high-throughput experiments have helped to catalogue an enormous amount of physical protein interaction data from organisms such as yeast. However, these individual datasets show a lack of correlation with each other and also contain a substantial number of false positives (noise). Several affinity scoring schemes have been devised to improve the quality of these datasets. The challenge, therefore, is to detect meaningful as well as novel complexes from protein interaction (PPI) networks derived by combining datasets from multiple sources and by making use of these affinity scoring schemes. In the attempt to tackle this challenge, the Markov Clustering algorithm (MCL) has proved to be a popular and reasonably successful method, mainly due to its scalability, robustness, and ability to work on scored (weighted) networks. However, MCL produces many noisy clusters, which either do not match known complexes or have additional proteins that reduce the accuracies of correctly predicted complexes. Results Inspired by recent experimental observations by Gavin and colleagues on the modularity structure in yeast complexes and the distinctive properties of "core" and "attachment" proteins, we develop a core-attachment-based refinement method coupled to MCL for the reconstruction of yeast complexes from scored (weighted) PPI networks. We combine physical interactions from two recent "pull-down" experiments to generate an unscored PPI network, and then score this network using available affinity scoring schemes to generate multiple scored PPI networks.
    The evaluation of our method (called MCL-CAw) on these networks shows that: (i) MCL-CAw derives a larger number of yeast complexes with better accuracy than MCL, particularly in the presence of natural noise; (ii) affinity scoring can effectively reduce the impact of noise on MCL-CAw and thereby improve the quality (precision and recall) of its predicted complexes; (iii) MCL-CAw responds well to most available scoring schemes. We discuss several instances where MCL-CAw was successful in deriving meaningful complexes, and where it missed a few proteins or whole complexes due to affinity scoring of the networks. We compare MCL-CAw with several recent complex detection algorithms on unscored and scored networks, and assess the relative performance of the algorithms on these networks. Further, we study the impact of augmenting physical datasets with computationally inferred interactions for complex detection. Finally, we analyse the essentiality of proteins within predicted complexes to understand a possible correlation between protein essentiality and the ability of proteins to form complexes. Conclusions We demonstrate that core-attachment-based refinement in MCL-CAw improves the predictions of MCL on yeast PPI networks. We show that affinity scoring improves the performance of MCL-CAw. PMID:20939868

  15. Estimating the size of the solution space of metabolic networks

    PubMed Central

    Braunstein, Alfredo; Mulet, Roberto; Pagnani, Andrea

    2008-01-01

    Background Cellular metabolism is one of the most investigated systems of biological interactions. While the topological nature of individual reactions and pathways in the network is quite well understood, there is still a lack of comprehension regarding the global functional behavior of the system. In the last few years flux-balance analysis (FBA) has been the most successful and widely used technique for studying metabolism at the system level. This method strongly relies on the hypothesis that the organism maximizes an objective function. However, only under very specific biological conditions (e.g. maximization of biomass for E. coli in a rich nutrient medium) does the cell seem to obey such an optimization law. A more refined analysis not assuming extremization remains an elusive task for large metabolic systems due to algorithmic limitations. Results In this work we propose a novel algorithmic strategy that provides an efficient characterization of the whole set of stable fluxes compatible with the metabolic constraints. Using a technique derived from the fields of statistical physics and information theory, we designed a message-passing algorithm to estimate the size of the affine space containing all possible steady-state flux distributions of metabolic networks. The algorithm, based on the well-known Bethe approximation, can be used to approximately compute the volume of a non-full-dimensional convex polytope in high dimensions. We first compare the accuracy of its predictions with an exact algorithm on small random metabolic networks. We also verify that the predictions of the algorithm match closely those of Monte Carlo based methods in the case of the Red Blood Cell metabolic network. Then we test the effect of gene knock-outs on the size of the solution space in the case of E. coli central metabolism. Finally we analyze the statistical properties of the average fluxes of the reactions in the E. coli metabolic network.
    Conclusion We propose a novel, efficient, distributed algorithmic strategy to estimate the size and shape of the affine space of a non-full-dimensional convex polytope in high dimensions. The method is shown to obtain results quantitatively and qualitatively compatible with those of standard algorithms (where this comparison is possible) while remaining efficient in the analysis of large biological systems, where exact deterministic methods experience an explosion in algorithmic time. The algorithm we propose can be considered as an alternative to Monte Carlo sampling methods. PMID:18489757
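    For intuition, the quantity being estimated is a polytope volume. In low dimension a brute-force hit-or-miss Monte Carlo estimate is easy; unlike the Bethe-approximation message-passing scheme above, this baseline does not scale to high dimensions:

```python
import random

random.seed(1)

def simplex_volume_fraction(dim=3, n=200_000):
    """Fraction of the unit cube lying inside the polytope
    {x_i >= 0, sum x_i <= 1} (a simplex), by hit-or-miss sampling."""
    hits = sum(1 for _ in range(n)
               if sum(random.random() for _ in range(dim)) <= 1.0)
    return hits/n

est = simplex_volume_fraction()   # exact value is 1/3! = 1/6
```

    The standard error of this estimator shrinks like n^(-1/2), but the hit rate itself decays exponentially with dimension, which is precisely why message-passing approximations are attractive for metabolic networks with hundreds of fluxes.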

  16. Biomolecular Interaction Analysis Using an Optical Surface Plasmon Resonance Biosensor: The Marquardt Algorithm vs Newton Iteration Algorithm

    PubMed Central

    Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan

    2015-01-01

    Kinetic analysis of biomolecular interactions is widely used to quantify binding kinetic constants for the determination of a complex formed or dissociated within a given time span. Surface plasmon resonance biosensors provide an essential approach to the analysis of biomolecular interactions, including antigen-antibody and receptor-ligand interaction processes. The binding affinity of the antibody to the antigen (or the receptor to the ligand) reflects the biological activities of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and the dissociation rate of the receptor to the ligand are substantial parameters for the study of signal transmission between cells. Experimental data may lead to complicated real-time curves that do not fit the kinetic model well. This paper presents an analysis approach to biomolecular interactions established by utilizing the Marquardt algorithm, which was implemented in a homemade bioanalyzer to perform nonlinear curve-fitting of the association and dissociation process of the receptor to the ligand. Compared with the results from the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on initial values, avoiding divergence, but also greatly reduces the number of iterative regressions. The association and dissociation rate constants, ka and kd, and the affinity parameters for the biomolecular interaction, KA and KD, were experimentally obtained as 6.969×10⁵ mL·g⁻¹·s⁻¹, 0.00073 s⁻¹, 9.5466×10⁸ mL·g⁻¹ and 1.0475×10⁻⁹ g·mL⁻¹, respectively, from the injection of HBsAg solution at a concentration of 16 ng·mL⁻¹. The kinetic constants were evaluated distinctly by using the data obtained from the curve-fitting results. PMID:26147997
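    The Marquardt (Levenberg-Marquardt) damping idea can be sketched for a 1:1 binding association model R(t) = Rmax·(1 − exp(−k_obs·t)). This is an illustrative fit on synthetic noiseless data with a fixed damping factor, not the bioanalyzer's actual implementation:

```python
import math

def model(t, rmax, kobs):
    """Association-phase response of a 1:1 binding model."""
    return rmax*(1.0 - math.exp(-kobs*t))

def lm_fit(ts, ys, rmax=2.0, kobs=1.0, iters=100, lam=1e-3):
    """Damped Gauss-Newton (Levenberg-Marquardt style) for two parameters."""
    eps = 1e-6
    for _ in range(iters):
        r = [model(t, rmax, kobs) - y for t, y in zip(ts, ys)]  # residuals
        # numeric Jacobian columns w.r.t. rmax and kobs
        j1 = [(model(t, rmax + eps, kobs) - model(t, rmax, kobs))/eps for t in ts]
        j2 = [(model(t, rmax, kobs + eps) - model(t, rmax, kobs))/eps for t in ts]
        # damped normal equations: (J^T J + lam*I) d = -J^T r
        a11 = sum(x*x for x in j1) + lam
        a22 = sum(x*x for x in j2) + lam
        a12 = sum(x*y for x, y in zip(j1, j2))
        g1 = -sum(x*y for x, y in zip(j1, r))
        g2 = -sum(x*y for x, y in zip(j2, r))
        det = a11*a22 - a12*a12
        rmax += (g1*a22 - g2*a12)/det     # Cramer's rule for the 2x2 step
        kobs += (g2*a11 - g1*a12)/det
    return rmax, kobs

ts = [0.1*i for i in range(1, 40)]
ys = [model(t, 2.5, 0.8) for t in ts]     # synthetic data, Rmax=2.5, k_obs=0.8
rmax_hat, kobs_hat = lm_fit(ts, ys)
```

    The damping term lam*I is what distinguishes Marquardt's method from plain Newton or Gauss-Newton iteration: it keeps the step well-defined when J^T J is near-singular, which is the source of the reduced sensitivity to initial values noted above.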

  17. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibility of using these algorithms to solve practically important problems is investigated in detail. A disadvantage of the mesh algorithms, related to the need to evaluate the kernels of the integral equations at fixed points, is identified. In practice, these kernels have integrable singularities, and calculation of their values at such points is impossible. Thus, for applied problems involving Fredholm integral equations of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.

  18. A novel computer algorithm improves antibody epitope prediction using affinity-selected mimotopes: a case study using monoclonal antibodies against the West Nile virus E protein.

    PubMed

    Denisova, Galina F; Denisov, Dimitri A; Yeung, Jeffrey; Loeb, Mark B; Diamond, Michael S; Bramson, Jonathan L

    2008-11-01

    Understanding antibody function is often enhanced by knowledge of the specific binding epitope. Here, we describe a computer algorithm that permits epitope prediction based on a collection of random peptide epitopes (mimotopes) isolated by antibody affinity purification. We applied this methodology to the prediction of epitopes for five monoclonal antibodies against the West Nile virus (WNV) E protein, two of which exhibit therapeutic activity in vivo. This strategy was validated by comparison of our results with existing F(ab)-E protein crystal structures and mutational analysis by yeast surface display. We demonstrate that by combining the results of the mimotope method with our data from mutational analysis, epitopes could be predicted with greater certainty. The two methods displayed great complementarity as the mutational analysis facilitated epitope prediction when the results with the mimotope method were equivocal and the mimotope method revealed a broader number of residues within the epitope than the mutational analysis. Our results demonstrate that the combination of these two prediction strategies provides a robust platform for epitope characterization.

  19. A Smartphone Indoor Localization Algorithm Based on WLAN Location Fingerprinting with Feature Extraction and Clustering.

    PubMed

    Luo, Junhai; Fu, Liang

    2017-06-09

    With the development of communication technology, the demand for location-based services is growing rapidly. This paper presents an algorithm for indoor localization based on Received Signal Strength (RSS) collected from Access Points (APs). The proposed localization algorithm comprises an offline information-acquisition phase and an online positioning phase. Firstly, the AP selection algorithm is reviewed and improved based on the stability of signals, to remove unreliable APs; secondly, Kernel Principal Component Analysis (KPCA) is analyzed and used to remove data redundancy and maintain useful characteristics for nonlinear feature extraction; thirdly, the Affinity Propagation Clustering (APC) algorithm utilizes RSS values to classify data samples and narrow the positioning range. In the online positioning phase, the classified data are matched with the testing data to determine the position area, and Maximum Likelihood (ML) estimation is employed for precise positioning. Finally, the proposed algorithm is implemented in a real-world environment for performance evaluation. Experimental results demonstrate that the proposed algorithm improves accuracy and reduces computational complexity.
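    At its core, the online step reduces to matching a measured RSS vector against stored fingerprints; a minimal sketch with made-up fingerprints (the AP selection, KPCA and APC stages of the paper are omitted):

```python
# Offline phase result: position -> RSS vector over 3 APs (dBm).
# Values are invented for illustration.
FINGERPRINTS = {
    (0, 0): [-40, -70, -80],
    (0, 5): [-55, -50, -75],
    (5, 0): [-60, -72, -45],
    (5, 5): [-70, -55, -50],
}

def locate(rss):
    """Online phase: return the fingerprint position nearest in RSS space."""
    def dist2(a, b):
        return sum((x - y)**2 for x, y in zip(a, b))
    return min(FINGERPRINTS, key=lambda pos: dist2(FINGERPRINTS[pos], rss))

pos = locate([-58, -52, -74])   # nearest stored fingerprint is (0, 5)
```

    Clustering the fingerprints (as APC does in the paper) simply restricts this search to one cluster, trading a small accuracy risk for a large reduction in comparisons.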

  20. Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis

    PubMed Central

    Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2015-01-01

    We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest. PMID:26702408
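    The Hotelling observer itself is a linear template w = S⁻¹Δs with detectability d² = Δsᵀ S⁻¹ Δs, where Δs is the mean signal difference and S the noise covariance. A two-pixel numeric sketch with illustrative values:

```python
s0 = [1.0, 1.0]            # mean signal-absent "image" (2 pixels)
s1 = [2.0, 1.5]            # mean signal-present "image"
S = [[0.5, 0.1],           # pixel noise covariance
     [0.1, 0.4]]

ds = [a - b for a, b in zip(s1, s0)]          # mean signal difference
det = S[0][0]*S[1][1] - S[0][1]*S[1][0]
Sinv = [[ S[1][1]/det, -S[0][1]/det],
        [-S[1][0]/det,  S[0][0]/det]]

# Hotelling template and detectability (SNR^2)
w = [Sinv[0][0]*ds[0] + Sinv[0][1]*ds[1],
     Sinv[1][0]*ds[0] + Sinv[1][1]*ds[1]]
d2 = ds[0]*w[0] + ds[1]*w[1]
```

    Optimizing a reconstruction algorithm for a detection task then amounts to choosing parameters that maximize d² within the region of interest.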

  1. Research on the Diesel Engine with Sliding Mode Variable Structure Theory

    NASA Astrophysics Data System (ADS)

    Ma, Zhexuan; Mao, Xiaobing; Cai, Le

    2018-05-01

    This study constructed a nonlinear mathematical model of the diesel engine high-pressure common rail (HPCR) system through two polynomial fits, treating it as an affine nonlinear system. Based on sliding-mode variable structure control (SMVSC) theory, a sliding-mode controller for affine nonlinear systems was designed to control the common rail pressure and the diesel engine’s rotational speed. Finally, the designed nonlinear HPCR system was simulated on the MATLAB platform. The simulation results demonstrated that the sliding-mode variable structure control algorithm achieves favourable control performance, overcoming the shortcomings of traditional PID control in overshoot, parameter adjustment, system precision, settling time and rise time.

  2. Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Hemler, Paul F.; Lavery, John E.

    2000-04-01

    This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced using the five algorithms. These algorithms were, respectively, (1) conventional back projection, (2) iteratively deconvoluted back projection, (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value, (4) a similar algorithm wherein the maximum value was computed instead of the minimum value, and (5) the same type of algorithm except that the median value was computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks then were aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. Resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences by five human observers of a subset (breast data only) also were performed to determine whether their subjective observations correlated with homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P < 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.
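
    The per-pixel combination rules behind algorithms (1) and (3)-(5) are easy to sketch on a stack of registered component projections; the array below is random illustrative data, not sensor output.

```python
import numpy as np

# Hypothetical stack of registered component projections: 3 views, 4x4 pixels.
rng = np.random.default_rng(0)
views = rng.random((3, 4, 4))

back_projection = views.mean(axis=0)     # (1) conventional back projection: per-pixel average
min_recon = views.min(axis=0)            # (3) nonlinear minimum-value rule
max_recon = views.max(axis=0)            # (4) maximum-value rule
median_recon = np.median(views, axis=0)  # (5) median-value rule
```

    The minimum rule suppresses high-intensity out-of-plane blur contributed by any single view, consistent with the study's finding that the minimization method usually performed best.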

  3. ITC-derived binding affinity may be biased due to titrant (nano)-aggregation. Binding of halogenated benzotriazoles to the catalytic domain of human protein kinase CK2

    PubMed Central

    Winiewska, Maria; Bugajska, Ewa

    2017-01-01

    The binding of four bromobenzotriazoles to the catalytic subunit of human protein kinase CK2 was assessed by two complementary methods: Microscale Thermophoresis (MST) and Isothermal Titration Calorimetry (ITC). A new algorithm proposed for the global analysis of MST pseudo-titration data enabled reliable determination of binding affinities for two distinct sites: a relatively strong one with a Kd on the order of 100 nM and a substantially weaker one (Kd > 1 μM). The affinities for the strong binding site determined for the same protein-ligand systems using ITC were in most cases approximately 10-fold underestimated. The discrepancy was attributed directly to the kinetics of decay of ligand nano-aggregates upon injection of the concentrated ligand solution into the protein sample. The binding affinities determined in the reverse ITC experiment, in which ligands were titrated with a concentrated protein solution, agreed with the MST-derived data. Our analysis suggests that some ITC-derived Kd values, routinely reported together with PDB structures of protein-ligand complexes, may be biased due to uncontrolled ligand (nano)-aggregation, which may occur even substantially below the solubility limit. PMID:28273138

  4. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies targets through one-dimensional correlation. Because the profiles are normalized, correct matching is still achieved when the image brightness or signal amplitude increases in proportion. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while preserving matching accuracy.
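
    A minimal sketch of the described idea: collapse a grayscale patch into its row and column sums, then compare profiles with zero-mean normalized correlation, which is what makes proportional brightness changes harmless. The patch data are random placeholders.

```python
import numpy as np

def projection_profiles(img):
    """Collapse a 2-D grayscale patch into 1-D row and column sums."""
    return img.sum(axis=1), img.sum(axis=0)

def normalized_correlation(a, b):
    """Zero-mean, unit-norm correlation; invariant to proportional scaling."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_score(template, candidate):
    """Average of row- and column-profile correlations (1 = perfect match)."""
    tr, tc = projection_profiles(template)
    cr, cc = projection_profiles(candidate)
    return 0.5 * (normalized_correlation(tr, cr)
                  + normalized_correlation(tc, cc))

rng = np.random.default_rng(0)
patch = rng.random((32, 32))
score = match_score(patch, 2.0 * patch)  # proportional brightness change
```

    Doubling the brightness leaves the score at essentially 1.0, since the zero-mean, unit-norm step removes any proportional scaling; the speedup comes from correlating two 1-D profiles instead of a full 2-D patch.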

  5. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators, which define two projection operators, naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the PAPA theoretically. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.
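
    The skeleton of an alternating projection method can be shown on a toy convex feasibility problem: alternate between the projection onto the nonnegativity constraint and the projection onto a single hyperplane constraint. This is a generic projections-onto-convex-sets sketch with invented data, not the authors' preconditioned PAPA with its EM-preconditioner and TV proximity operator.

```python
import numpy as np

def project_nonnegative(x):
    """Projection onto the convex set {x : x >= 0}."""
    return np.maximum(x, 0.0)

def project_hyperplane(x, a, b):
    """Orthogonal projection onto the hyperplane {x : a . x = b}."""
    return x - ((a @ x - b) / (a @ a)) * a

# Alternate the two projections; for convex sets with a nonempty
# intersection, the iterates converge to a point satisfying both.
a = np.array([1.0, 2.0, 2.0])
b = 3.0
x = np.array([-1.0, 0.5, -0.2])
for _ in range(200):
    x = project_nonnegative(project_hyperplane(x, a, b))
```

    Each projection here is the proximity operator of an indicator function, which is the sense in which PAPA's fixed-point characterization turns proximity operators into projection steps.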

  6. Projection pursuit water quality evaluation model based on chicken swarm algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Zhe

    2018-03-01

    In view of the uncertainty and ambiguity of each index in water quality evaluation, and in order to resolve the incompatibility among evaluation results of individual water quality indexes, a projection pursuit model based on the chicken swarm algorithm is proposed. A projection index function that reflects the water quality condition is constructed; the chicken swarm algorithm (CSA) is introduced to optimize this function and find its best projection direction, and the resulting best projection value is used to carry out the water quality evaluation. Comparison between this method and other methods shows that it is reasonable and feasible as a decision-making basis for water pollution control in the basin.

  7. A hybrid algorithm for clustering of time series data based on affinity search technique.

    PubMed

    Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A; Shaygan, Mohammad Amin; Jalali, Alireza

    2014-01-01

    Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped as subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with a low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using synthetic and real-world time series datasets.

  8. A Hybrid Algorithm for Clustering of Time Series Data Based on Affinity Search Technique

    PubMed Central

    Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A.; Shaygan, Mohammad Amin; Jalali, Alireza

    2014-01-01

    Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped as subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with a low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using synthetic and real-world time series datasets. PMID:24982966

  9. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    PubMed

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  10. Projections of climate-driven changes in tuna vertical habitat based on species-specific differences in blood oxygen affinity.

    PubMed

    Mislan, K A S; Deutsch, Curtis A; Brill, Richard W; Dunne, John P; Sarmiento, Jorge L

    2017-10-01

    Oxygen concentrations are hypothesized to decrease in many areas of the ocean as a result of anthropogenically driven climate change, resulting in habitat compression for pelagic animals. The oxygen partial pressure, pO2, at which blood is 50% saturated (P50) is a measure of blood oxygen affinity and a gauge of the tolerance of animals for low ambient oxygen. Tuna species display a wide range of blood oxygen affinities (i.e., P50 values) and therefore may be differentially impacted by habitat compression as they make extensive vertical movements to forage on subdaily time scales. To project the effects of end-of-the-century climate change on tuna habitat, we calculate tuna P50 depths (i.e., the vertical position in the water column at which ambient pO2 is equal to species-specific blood P50 values) from 21st century Earth System Model (ESM) projections included in the fifth phase of the Climate Model Intercomparison Project (CMIP5). Overall, we project P50 depths to shoal, indicating likely habitat compression for tuna species due to climate change. The tunas that will be most impacted by shoaling are Pacific and southern bluefin tunas: habitat compression is projected for the entire geographic range of Pacific bluefin tuna and for the spawning region of southern bluefin tuna. Vertical shifts in P50 depths will potentially influence resource partitioning among Pacific bluefin, bigeye, yellowfin, and skipjack tunas in the northern subtropical and eastern tropical Pacific Ocean, the Arabian Sea, and the Bay of Bengal. By establishing linkages between tuna physiology and environmental conditions, we provide a mechanistic basis to project the effects of anthropogenic climate change on tuna habitats. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
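
    The P50-depth calculation reduces to inverting a monotone pO2-depth profile. The sketch below uses an invented profile (kPa over metres), not Earth System Model output; evaluating the same P50 on a uniformly reduced future profile illustrates the projected shoaling.

```python
import numpy as np

# Hypothetical ambient pO2 profile (kPa) on a depth grid (m); illustrative
# values only, not ESM output.
depth = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])
po2 = np.array([21.0, 18.0, 12.0, 7.0, 4.0, 2.5])

def p50_depth(p50, depth, po2):
    """Depth at which ambient pO2 falls to the species' blood P50.

    Assumes pO2 decreases monotonically with depth, so depth can be
    linearly interpolated as a function of pO2 (np.interp requires an
    increasing abscissa, hence the reversal).
    """
    return float(np.interp(p50, po2[::-1], depth[::-1]))

d_now = p50_depth(5.0, depth, po2)            # present-day P50 depth
d_future = p50_depth(5.0, depth, 0.9 * po2)   # 10% deoxygenated profile
```

    The deoxygenated profile yields a shallower P50 depth (d_future < d_now), which is the habitat compression the study projects for the real CMIP5 fields.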

  11. GDPC: Gravitation-based Density Peaks Clustering algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Jianhua; Hao, Dehao; Chen, Yujun; Parmar, Milan; Li, Keqin

    2018-07-01

    The Density Peaks Clustering algorithm, which we refer to as DPC, is a novel and efficient density-based clustering approach published in Science in 2014. DPC has the advantage of discovering clusters with varying sizes and varying densities, but has limitations in detecting the number of clusters and identifying anomalies. We develop an enhanced algorithm with an alternative decision graph, based on gravitation theory and nearby distance, to identify centroids and anomalies accurately. We apply our method to several UCI and synthetic data sets. We report comparative clustering performance using F-Measure and 2-dimensional visualization. We also compare our method to other clustering algorithms, such as K-Means, Affinity Propagation (AP) and DPC. We present F-Measure scores and clustering accuracies of our GDPC algorithm compared to K-Means, AP and DPC on different data sets. We show that GDPC has superior performance in its capability of: (1) reliably detecting the number of clusters; (2) efficiently aggregating clusters with varying sizes and densities; (3) accurately identifying anomalies.

  12. Thioaptamer Diagnostic System (TDS)

    NASA Technical Reports Server (NTRS)

    Yang, Xianbin

    2015-01-01

    AM Biotechnologies, LLC, in partnership with Sandia National Laboratories, has developed a diagnostic device that quickly detects sampled biomarkers. The TDS quickly quantifies clinically relevant biomarkers using only microliters of a single sample. The system combines ambient-stable, long shelf-life affinity assays with handheld, microfluidic gel electrophoresis affinity assay quantification technology. The TDS is easy to use, operates in microgravity, and permits simultaneous quantification of 32 biomarkers. In Phase I of the project, the partners demonstrated that a thioaptamer assay used in the microfluidic instrument could quantify a specific biomarker in serum in the low nanomolar range. The team also identified novel affinity agents to bone-specific alkaline phosphatase (BAP) and demonstrated their ability to detect BAP with the microfluidic instrument. In Phase II, AM Biotech expanded the number of ambient affinity agents and demonstrated a TDS prototype. In the long term, the clinical version of the TDS will provide a robust, flight-tested diagnostic capability for space exploration missions.

  13. Software for project-based learning of robot motion planning

    NASA Astrophysics Data System (ADS)

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-12-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can be explained in a simplified two-dimensional setting, but this masks many of the subtleties and complexities of the underlying problem. We have developed software for project-based learning of motion planning that enables deep learning. The projects that we have developed allow advanced undergraduate students and graduate students to reflect on the performance of existing textbook algorithms and their own variations on such algorithms. Formative assessment has been conducted at three institutions. The core of the software used for this teaching module is also used within the Robot Operating System, a widely adopted platform in the robotics research community. This allows for transfer of knowledge and skills to robotics research projects involving a large variety of robot hardware platforms.

  14. Novel particle tracking algorithm based on the Random Sample Consensus Model for the Active Target Time Projection Chamber (AT-TPC)

    NASA Astrophysics Data System (ADS)

    Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco

    2018-02-01

    The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on the Random Sample Consensus Model (RANSAC). The RANSAC is used to classify tracks including pile-up, to remove uncorrelated noise hits, as well as to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the 4He+4He reaction. Results, performance and quality of the proposed algorithm are presented and discussed in detail.
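
    A minimal RANSAC sketch in the spirit of the track classifier: sample two hits, form the line through them, and keep the model with the most hits within a distance tolerance. The synthetic "track" plus uniform noise below is illustrative only; the AT-TPC implementation works on 3-D hits, handles pile-up, and also reconstructs reaction vertices.

```python
import numpy as np

def ransac_line(points, n_iter=500, tol=0.1, seed=0):
    """RANSAC line fit in 2-D: returns a boolean inlier mask.

    Repeatedly samples two hits, forms the line through them, and keeps
    the model with the most hits within perpendicular distance `tol`.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm == 0.0:
            continue  # degenerate sample
        # Perpendicular distance of every hit to the candidate line.
        dist = np.abs((points[:, 0] - p[0]) * d[1]
                      - (points[:, 1] - p[1]) * d[0]) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
track = np.c_[t, 2.0 * t + 0.5] + rng.normal(0.0, 0.01, (50, 2))  # true track
noise = rng.uniform(-1.0, 3.0, (20, 2))                           # uncorrelated hits
points = np.vstack([track, noise])
inliers = ransac_line(points)
```

    The inlier mask recovers essentially all track hits while rejecting the uncorrelated noise, which is exactly the separation the detector framework needs before fitting track parameters.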

  15. Multiobject relative fuzzy connectedness and its implications in image segmentation

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Saha, Punam K.

    2001-07-01

    The notion of fuzzy connectedness captures the idea of hanging-togetherness of image elements in an object by assigning a strength of connectedness to every possible path between every possible pair of image elements. This concept leads to powerful image segmentation algorithms based on dynamic programming whose effectiveness has been demonstrated on thousands of images in a variety of applications. In a previous framework, we introduced the notion of relative fuzzy connectedness for separating a foreground object from a background object. In this framework, an image element c is considered to belong to that among these two objects with respect to whose reference image element c has the higher strength of connectedness. In fuzzy connectedness, a local fuzzy relation called affinity is used on the image domain. This relation was required for theoretical reasons to be of fixed form in the previous framework. In the present paper, we generalize relative connectedness to multiple objects, allowing all objects (of importance) to compete among themselves to grab membership of image elements based on their relative strength of connectedness to reference elements. We also allow affinity to be tailored to the individual objects. We present a theoretical and algorithmic framework and demonstrate that the objects defined are independent of the reference elements chosen as long as they are not in the fuzzy boundary between objects. Examples from medical imaging are presented to illustrate visually the effectiveness of multiple object relative fuzzy connectedness. A quantitative evaluation based on 160 mathematical phantom images demonstrates objectively the effectiveness of relative fuzzy connectedness with object-tailored affinity relation.

  16. Prescribed performance distributed consensus control for nonlinear multi-agent systems with unknown dead-zone input

    NASA Astrophysics Data System (ADS)

    Cui, Guozeng; Xu, Shengyuan; Ma, Qian; Li, Yongmin; Zhang, Zhengqiang

    2018-05-01

    In this paper, the problem of prescribed performance distributed output consensus for higher-order non-affine nonlinear multi-agent systems with unknown dead-zone input is investigated. Fuzzy logic systems are utilised to identify the unknown nonlinearities. By introducing prescribed performance, the transient and steady-state performance of the synchronisation errors is guaranteed. Based on Lyapunov stability theory and the dynamic surface control technique, a new distributed consensus algorithm for non-affine nonlinear multi-agent systems is proposed, which ensures cooperative uniform ultimate boundedness of all signals in the closed-loop systems and enables the output of each follower to synchronise with the leader within a predefined bounded error. Finally, simulation examples are provided to demonstrate the effectiveness of the proposed control scheme.

  17. Motion Estimation and Compensation Strategies in Dynamic Computerized Tomography

    NASA Astrophysics Data System (ADS)

    Hahn, Bernadette N.

    2017-12-01

    A main challenge in computerized tomography is imaging moving objects. Temporal changes during the measuring process lead to inconsistent data sets, and applying standard reconstruction techniques causes motion artefacts which can severely impair reliable diagnosis. Therefore, novel reconstruction techniques are required which compensate for the dynamic behavior. This article builds on recent results from a microlocal analysis of the dynamic setting, which enable us to formulate efficient analytic motion compensation algorithms for contour extraction. Since these methods require information about the dynamic behavior, we further introduce a motion estimation approach which determines parameters of affine and certain non-affine deformations directly from measured motion-corrupted Radon data. Our methods are illustrated with numerical examples for both types of motion.

  18. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography is a modern nuclear medicine method used to examine metabolism and the function of internal organs, allowing diseases to be diagnosed at an early stage. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, the implementation of random-coincidence and scatter correction algorithms is considered, as well as an algorithm for modeling PET projection data acquisition in order to verify the corrections.

  19. SU-E-T-33: A Feasibility-Seeking Algorithm Applied to Planning of Intensity Modulated Proton Therapy: A Proof of Principle Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penfold, S; Casiraghi, M; Dou, T

    2015-06-15

    Purpose: To investigate the applicability of feasibility-seeking cyclic orthogonal projections to the field of intensity modulated proton therapy (IMPT) inverse planning. Seeking feasibility of constraints only, as opposed to optimizing a merit function, is less demanding algorithmically and holds the promise of parallel computation with non-cyclic orthogonal projection algorithms such as string-averaging or block-iterative strategies. Methods: A virtual 2D geometry was designed containing a C-shaped planning target volume (PTV) surrounding an organ at risk (OAR). The geometry was pixelized into 1 mm pixels. Four beams containing a subset of proton pencil beams were simulated in Geant4 to provide the system matrix A whose elements a_ij correspond to the dose delivered to pixel i by a unit-intensity pencil beam j. A cyclic orthogonal projections algorithm was applied with the goal of finding a pencil beam intensity distribution that would meet the following dose requirements: D-OAR < 54 Gy and 57 Gy < D-PTV < 64.2 Gy. The cyclic algorithm was based on the concept of orthogonal projections onto half-spaces according to the Agmon-Motzkin-Schoenberg algorithm, also known as 'ART for inequalities'. Results: The cyclic orthogonal projections algorithm resulted in less than 5% of the PTV pixels and less than 1% of OAR pixels violating their dose constraints, respectively. Because of the abutting OAR-PTV geometry and the realistic modelling of the pencil beam penumbra, complete satisfaction of the dose objectives was not achieved, although this would be a clinically acceptable plan for a meningioma abutting the brainstem, for example. Conclusion: The cyclic orthogonal projections algorithm was demonstrated to be an effective tool for inverse IMPT planning in the 2D test geometry described. We plan to further develop this linear algorithm to be capable of incorporating dose-volume constraints into the feasibility-seeking algorithm.
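
    The Agmon-Motzkin-Schoenberg scheme ("ART for inequalities") cycles through the constraints of A x <= b and orthogonally projects onto each violated half-space, leaving satisfied constraints untouched. The sketch below applies it to a two-variable toy system whose bounds merely mimic the dose window; it is not the paper's Geant4-derived pencil-beam system.

```python
import numpy as np

def art_inequalities(A, b, x0, sweeps=100):
    """Agmon-Motzkin-Schoenberg cyclic projections for A x <= b.

    Each violated half-space {x : a_i . x <= b_i} triggers an orthogonal
    projection onto its boundary; satisfied constraints leave x unchanged.
    """
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            r = a_i @ x - b_i
            if r > 0:  # constraint violated: project onto its boundary
                x -= (r / (a_i @ a_i)) * a_i
    return x

# Toy dose-style constraints on two "pixels", written as half-spaces
# (lower bounds negated into <= form): 57 <= x1 <= 64.2, 0 <= x2 <= 54.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([64.2, 54.0, -57.0, 0.0])
x = art_inequalities(A, b, np.zeros(2))
```

    Starting from the origin, a single sweep already lands on a feasible point; in the paper's setting the same sweeps run over thousands of pencil-beam half-spaces, and the intersection may be empty, which is why a small fraction of pixels still violate their bounds.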

  20. A Methodology for Projecting U.S.-Flag Commercial Tanker Capacity

    DTIC Science & Technology

    1986-03-01

    total crude supply for the total US is less than the sum of the total crude supplies of the PADDs. The algorithm generating the output shown in tables...other PADDs. Accordingly, projected receipts for PADD V are zero, and in conjunction with the values for the variables that previously were...SHIPMENTS ALGORITHM This section presents the mathematics of the algorithm that generates the shipments projections for each PADD. The notation

  1. Accurate Binding Free Energy Predictions in Fragment Optimization.

    PubMed

    Steinbrecher, Thomas B; Dahlgren, Markus; Cappel, Daniel; Lin, Teng; Wang, Lingle; Krilov, Goran; Abel, Robert; Friesner, Richard; Sherman, Woody

    2015-11-23

    Predicting protein-ligand binding free energies is a central aim of computational structure-based drug design (SBDD): improved accuracy in binding free energy predictions could significantly reduce costs and accelerate project timelines in lead discovery and optimization. The recent development and validation of advanced free energy calculation methods represents a major step toward this goal. Accurately predicting the relative binding free energy changes of modifications to ligands is especially valuable in the field of fragment-based drug design, since fragment screens tend to deliver initial hits of low binding affinity that require multiple rounds of synthesis to gain the requisite potency for a project. In this study, we show that a free energy perturbation protocol, FEP+, which was previously validated on drug-like lead compounds, is suitable for the calculation of relative binding strengths of fragment-sized compounds as well. We study several pharmaceutically relevant targets with a total of more than 90 fragments and find that the FEP+ methodology, which uses explicit solvent molecular dynamics and physics-based scoring with no parameters adjusted, can accurately predict relative fragment binding affinities. The calculations afford R² values on average greater than 0.5 compared to experimental data and RMS errors of ca. 1.1 kcal/mol overall, demonstrating significant improvements over the docking and MM-GBSA methods tested in this work and indicating that FEP+ has the requisite predictive power to impact fragment-based affinity optimization projects.

  2. Evaluation of an Inverse Molecular Design Algorithm in a Model Binding Site

    PubMed Central

    Huggins, David J.; Altman, Michael D.; Tidor, Bruce

    2008-01-01

    Computational molecular design is a useful tool in modern drug discovery. Virtual screening is an approach that docks and then scores individual members of compound libraries. In contrast to this forward approach, inverse approaches construct compounds from fragments, such that the computed affinity, or a combination of relevant properties, is optimized. We have recently developed a new inverse approach to drug design based on the dead-end elimination and A* algorithms employing a physical potential function. This approach has been applied to combinatorially constructed libraries of small-molecule ligands to design high-affinity HIV-1 protease inhibitors [M. D. Altman et al. J. Am. Chem. Soc. 130: 6099–6113, 2008]. Here we have evaluated the new method using the well studied W191G mutant of cytochrome c peroxidase. This mutant possesses a charged binding pocket and has been used to evaluate other design approaches. The results show that overall the new inverse approach does an excellent job of separating binders from non-binders. For a few individual cases, scoring inaccuracies led to false positives. The majority of these involve erroneous solvation energy estimation for charged amines, anilinium ions and phenols, which has been observed previously for a variety of scoring algorithms. Interestingly, although inverse approaches are generally expected to identify some but not all binders in a library, due to limited conformational searching, these results show excellent coverage of the known binders while still showing strong discrimination of the non-binders. PMID:18831031

  3. Evaluation of an inverse molecular design algorithm in a model binding site.

    PubMed

    Huggins, David J; Altman, Michael D; Tidor, Bruce

    2009-04-01

    Computational molecular design is a useful tool in modern drug discovery. Virtual screening is an approach that docks and then scores individual members of compound libraries. In contrast to this forward approach, inverse approaches construct compounds from fragments, such that the computed affinity, or a combination of relevant properties, is optimized. We have recently developed a new inverse approach to drug design based on the dead-end elimination and A* algorithms employing a physical potential function. This approach has been applied to combinatorially constructed libraries of small-molecule ligands to design high-affinity HIV-1 protease inhibitors (Altman et al., J Am Chem Soc 2008;130:6099-6113). Here we have evaluated the new method using the well-studied W191G mutant of cytochrome c peroxidase. This mutant possesses a charged binding pocket and has been used to evaluate other design approaches. The results show that overall the new inverse approach does an excellent job of separating binders from nonbinders. For a few individual cases, scoring inaccuracies led to false positives. The majority of these involve erroneous solvation energy estimation for charged amines, anilinium ions, and phenols, which has been observed previously for a variety of scoring algorithms. Interestingly, although inverse approaches are generally expected to identify some but not all binders in a library, due to limited conformational searching, these results show excellent coverage of the known binders while still showing strong discrimination of the nonbinders. (c) 2008 Wiley-Liss, Inc.

  4. Off-Angle Iris Correction Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Thompson, Joseph T; Karakaya, Mahmut

    In many real-world iris recognition systems, obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result, many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a ray-traced model of the human eye, measured displacements, and a genetic-algorithm-optimized correction. The affine correction represents a simple way to create an iris image that appears frontal, but it does not account for refractive distortions of the cornea. The other methods account for refraction. The ray-traced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming distance scores between off-angle and frontal images. In this paper we hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy and therefore the two data-driven approaches should yield better performance. Results are presented using the commercial VeriEye matcher and show that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.

  5. Density-Aware Clustering Based on Aggregated Heat Kernel and Its Transformation

    DOE PAGES

    Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...

    2015-06-01

    Current spectral clustering algorithms are sensitive to noise and parameter scaling, and may not be aware of different density distributions across clusters. If these problems are left untreated, the consequent clustering results cannot accurately represent true data patterns, in particular, for complex real world datasets with heterogeneous densities. This paper aims to solve these problems by proposing a diffusion-based Aggregated Heat Kernel (AHK) to improve the clustering stability, and a Local Density Affinity Transformation (LDAT) to correct the bias originating from different cluster densities. AHK statistically models the heat diffusion traces along the entire time scale, so it ensures robustness during the clustering process, while LDAT probabilistically reveals the local density of each instance and suppresses the local density bias in the affinity matrix. Our proposed framework integrates these two techniques systematically. As a result, not only does it provide an advanced noise-resisting and density-aware spectral mapping of the original dataset, but it also demonstrates stability while tuning the scaling parameter (which usually controls the range of neighborhood). Furthermore, our framework works well with the majority of similarity kernels, which ensures its applicability to many types of data and problem domains. Systematic experiments on different applications show that our proposed algorithms outperform state-of-the-art clustering algorithms for data with heterogeneous density distributions, and achieve robust clustering performance with respect to tuning the scaling parameter and handling various levels and types of noise.
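
    The aggregation idea can be illustrated with a small sketch (ours, not the authors' exact estimator): average the heat kernels exp(-tL) of a graph over several diffusion times via the eigendecomposition of the Laplacian, so that no single scale parameter dominates the resulting affinity.

```python
import numpy as np

def aggregated_heat_kernel(W, times):
    """Average the graph heat kernels exp(-t L) over a set of diffusion
    times t (illustrative reading of the AHK idea; unnormalised Laplacian,
    time grid chosen by the caller)."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                           # graph Laplacian
    lam, U = np.linalg.eigh(L)                   # L is symmetric
    g = np.mean([np.exp(-t * lam) for t in times], axis=0)
    return (U * g) @ U.T                         # U diag(g) U^T
```

The aggregated kernel can then feed any spectral clustering routine in place of a single-scale Gaussian affinity.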

  6. RCLUS, a new program for clustering associated species: A demonstration using a Mojave Desert plant community dataset

    Treesearch

    Stewart C. Sanderson; Jeffrey E. Ott; E. Durant McArthur; Kimball T. Harper

    2006-01-01

    This paper presents a new clustering program named RCLUS that was developed for species (R-mode) analysis of plant community data. RCLUS identifies clusters of co-occurring species that meet a user-specified cutoff level of positive association with each other. The "strict affinity" clustering algorithm in RCLUS builds clusters of species whose pairwise...

  7. Derivative Free Gradient Projection Algorithms for Rotation

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2004-01-01

    A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…

  8. Access Restoration Project Task 1.2 Report 2 (of 2) Algorithms for Debris Volume and Water Depth Computation : Appendix A

    DOT National Transportation Integrated Search

    0000-01-01

    In the Access Restoration Project Task 1.2 Report 1, the algorithms for detecting roadway debris piles and flooded areas were described in detail. Those algorithms take CRS data as input and automatically detect the roadway obstructions. Although the ...

  9. A Virtual Screening Approach for the Identification of High Affinity Small Molecules Targeting BCR-ABL1 Inhibitors for the Treatment of Chronic Myeloid Leukemia.

    PubMed

    Sharda, Saphy; Sarmandal, Palash; Cherukommu, Shirisha; Dindhoria, Kiran; Yadav, Manisha; Bandaru, Srinivas; Sharma, Anudeep; Sakhi, Aditi; Vyas, Tanmay; Hussain, Tajamul; Nayarisseri, Anuraj; Singh, Sanjeev Kumar

    2017-01-01

    CML originates from a reciprocal translocation in the Philadelphia chromosome leading to the formation of the fusion product BCR-ABL, which constitutively activates tyrosine kinase signaling pathways, eventually leading to abnormal proliferation of granulocytic cells. As a therapeutic strategy, BCR-ABL inhibitors have been clinically approved which terminate its phosphorylation activity and retard cancer progression. However, a number of patients develop resistance to inhibitors, which demands the discovery of new inhibitors. Given the drawbacks of present inhibitors, the present study uses high throughput virtual screening approaches to identify high affinity compounds targeting BCR-ABL1 that are anticipated to have safer pharmacological profiles. Five established BCR-ABL inhibitors formed the query compounds for identification of structurally similar compounds by a Tanimoto coefficient based linear fingerprint search with a threshold of 95% against the PubChem database. Assisted by the MolDock algorithm, all compounds were docked against the BCR-ABL protein in order to retrieve high affinity compounds. The parents and similars were further tested for their ADMET properties and bioactivity. Rebastinib proved a higher affinity inhibitor than the rest of the four established compounds investigated in the study. Interestingly, the Rebastinib-similar compound with PubChem ID 67254402 was also shown to have higher affinity than the other similars, including the similars of the respective five parents. PubChem ID 67254402 also had an appreciable ADMET profile and bioactivity. However, Rebastinib still stood as the best inhibitor in terms of binding affinity and ADMET properties. Nevertheless, owing to its pharmacological similarity to Rebastinib, PubChem ID 67254402 can be expected to form a potential BCR-ABL inhibitor. Copyright© Bentham Science Publishers.
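
    The 95% similarity cutoff above is a threshold on the Tanimoto coefficient, which for binary fingerprints is simply the Jaccard ratio of shared to total set bits. A minimal version, operating on sets of fingerprint bit positions:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) coefficient between two binary fingerprints,
    given as sets of on-bit positions."""
    a, b = set(fp_a), set(fp_b)
    union = len(a | b)
    return len(a & b) / union if union else 1.0
```

Two fingerprints sharing 3 of 5 distinct bits score 0.6; the search above keeps only candidates scoring at least 0.95 against a query compound.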

  10. Dual signal subspace projection (DSSP): a novel algorithm for removing large interference in biomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Kawabata, Yuya; Ushio, Shuta; Sumiya, Satoshi; Kawabata, Shigenori; Adachi, Yoshiaki; Nagarajan, Srikantan S.

    2016-06-01

    Objective. In functional electrophysiological imaging, signals are often contaminated by interference that can be of considerable magnitude compared to the signals of interest. This paper proposes a novel algorithm for removing such interferences that does not require separate noise measurements. Approach. The algorithm is based on a dual definition of the signal subspace in the spatial- and time-domains. Since the algorithm makes use of this duality, it is named the dual signal subspace projection (DSSP). The DSSP algorithm first projects the columns of the measured data matrix onto the inside and outside of the spatial-domain signal subspace, creating a set of two preprocessed data matrices. The intersection of the row spans of these two matrices is estimated as the time-domain interference subspace. The original data matrix is projected onto the subspace that is orthogonal to this interference subspace. Main results. The DSSP algorithm is validated by using the computer simulation, and using two sets of real biomagnetic data: spinal cord evoked field data measured from a healthy volunteer and magnetoencephalography data from a patient with a vagus nerve stimulator. Significance. The proposed DSSP algorithm is effective for removing overlapped interference in a wide variety of biomagnetic measurements.
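
    Read literally, the DSSP recipe above reduces to three linear-algebra steps: project the data inside and outside the spatial signal subspace, estimate the intersection of the two row spans, and project that intersection out. A sketch of this reading (illustrative, not the authors' implementation; the spatial projector `Ps` and the interference dimension `dim_int` are assumed known):

```python
import numpy as np

def _row_basis(M, tol=1e-10):
    # orthonormal basis of the row space of M, truncated at numerical rank
    _, s, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[s > tol * s[0]]

def dssp(B, Ps, dim_int):
    """DSSP-style interference removal on a sensors-x-time data matrix B,
    assuming the spatial signal-subspace projector Ps is known."""
    Q1 = Ps @ B                              # data inside the spatial subspace
    Q2 = (np.eye(B.shape[0]) - Ps) @ B       # data outside it
    V1, V2 = _row_basis(Q1), _row_basis(Q2)
    # directions common to both row spans ~ time-domain interference subspace
    _, _, Vt = np.linalg.svd(V1 @ V2.T)
    C = Vt[:dim_int] @ V2                    # orthonormal interference time courses
    return B - (B @ C.T) @ C                 # project the interference out
```
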

  11. Compressed sensing with gradient total variation for low-dose CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung

    2015-06-01

    This paper describes the improvement of convergence speed with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this task we combine the GTV with a TV-norm regularization term to promote an accelerated sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from a TV and is more computationally efficient and faster to converge to a desired solution. The numerical algorithm is simple and achieves relatively fast convergence. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the found solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm showed convergence in ≤18 iterations, whereas the original TV algorithm needs at least 34 iterations when reducing the projections by 50% relative to the FDK algorithm in order to reconstruct the chest phantom images. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter, and motion artifacts of CBCT reconstruction.
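
    The gradient projection step described above, in its simplest form, is a gradient descent on the regularized objective followed by projection onto the non-negative orthant. A toy 1-D stand-in (smoothed TV instead of the paper's GTV; the penalty weight, smoothing constant, and step rule are illustrative choices, not the authors'):

```python
import numpy as np

def grad_proj(A, b, beta=0.1, eps=1e-3, iters=300):
    """Projected gradient descent for 0.5*||Ax - b||^2 + beta * smoothed-TV(x)
    with x constrained to be non-negative (toy stand-in for the paper's GTV)."""
    x = np.zeros(A.shape[1])
    # Lipschitz-safe step: data-term curvature plus smoothed-TV curvature bound
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 4.0 * beta / np.sqrt(eps))
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)        # derivative of sqrt(d^2 + eps)
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w
        tv_grad[1:] += w
        g = A.T @ (A @ x - b) + beta * tv_grad
        x = np.maximum(x - step * g, 0.0)   # projection onto x >= 0
    return x
```
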

  12. Dyadic Instruction for Middle School Students: Liking Promotes Learning

    PubMed Central

    Hartl, Amy C.; DeLay, Dawn; Laursen, Brett; Denner, Jill; Werner, Linda; Campe, Shannon; Ortiz, Eloy

    2015-01-01

    This study examines whether friendship facilitates or hinders learning in a dyadic instructional setting. Working in 80 same-sex pairs, 160 (60 girls, 100 boys) middle school students (M = 12.13 years old) were taught a new computer programming language and programmed a game. Students spent 14 to 30 (M = 22.7) hours in a programming class. At the beginning and the end of the project, each participant separately completed (a) computer programming knowledge assessments and (b) questionnaires rating their affinity for their partner. Results support the proposition that liking promotes learning: Greater partner affinity predicted greater subsequent increases in computer programming knowledge for both partners. One partner’s initial programming knowledge also positively predicted the other partner’s subsequent partner affinity. PMID:26688658

  13. High Resolution Image Reconstruction from Projection of Low Resolution Images DIffering in Subpixel Shifts

    NASA Technical Reports Server (NTRS)

    Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome

    2016-01-01

    In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing in subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. Reconstructing the SR image by a factor of two with best accuracy requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shifts of each low resolution image, using different interpolation techniques. Experiments are conducted by simulating low resolution images through subpixel shifts and subsampling of an original high resolution image, and then reconstructing the high resolution image from the simulated low resolution images. The accuracy of reconstruction is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
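
    Step (ii) above, with the shifts taken as known from registration, reduces to scattering each LR pixel onto its shifted HR location and averaging the contributions. A nearest-neighbour sketch (function name and loop structure are illustrative):

```python
import numpy as np

def project_lr_to_hr(lr_images, shifts, factor=2):
    """Place each LR pixel at its shifted position on the HR grid and
    average overlapping contributions (nearest-neighbour projection;
    shifts are in LR pixel units and assumed known)."""
    h, w = lr_images[0].shape
    H, W = h * factor, w * factor
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for img, (dy, dx) in zip(lr_images, shifts):
        for i in range(h):
            for j in range(w):
                Y = int(round((i + dy) * factor))
                X = int(round((j + dx) * factor))
                if 0 <= Y < H and 0 <= X < W:
                    acc[Y, X] += img[i, j]
                    cnt[Y, X] += 1
    cnt[cnt == 0] = 1
    return acc / cnt
```

With four LR images at the four half-pixel shifts, every HR pixel is covered exactly once and a factor-of-two grid is filled completely, matching the four-image requirement stated above.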

  14. Simultaneous intrinsic and extrinsic calibration of a laser deflecting tilting mirror in the projective voltage space.

    PubMed

    Schneider, Adrian; Pezold, Simon; Baek, Kyung-Won; Marinov, Dilyan; Cattin, Philippe C

    2016-09-01

    PURPOSE: During the past five decades, laser technology has emerged and is nowadays part of a great number of scientific and industrial applications. In the medical field, the integration of laser technology is on the rise and has already been widely adopted in contemporary medical applications. However, using a laser to cut bone and perform general osteotomy surgical tasks is new. In this paper, we describe a method to calibrate a laser-deflecting tilting mirror and integrate it into a sophisticated laser osteotome, involving next generation robots and optical tracking. METHODS: A mathematical model was derived which describes a controllable deflection mirror by the general projective transformation. This makes the application of well-known camera calibration methods possible. In particular, the direct linear transformation algorithm is applied to calibrate and integrate a laser-deflecting tilting mirror into the affine transformation chain of a surgical system. RESULTS: Experiments were performed on synthetically generated calibration input, and the calibration was tested with real data. The determined target registration errors at a working distance of 150 mm for both simulated input and real data agree at the declared noise level of the applied optical 3D tracking system: the evaluation of the synthetic input showed an error of 0.4 mm, and the error with the real data was 0.3 mm.
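
    The direct linear transformation named above estimates a projective matrix from point correspondences by solving a homogeneous linear system with an SVD. The textbook version (without the coordinate normalization used in practice for noisy data):

```python
import numpy as np

def dlt_homography(src, dst):
    """Recover the 3x3 projective matrix H (up to scale) such that
    dst ~ H src, by stacking two linear equations per correspondence
    and taking the null vector of the system via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)       # smallest-singular-value direction
```

At least four non-degenerate correspondences are needed; with exact data the recovered H matches the true transformation up to scale.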

  15. Metasecretome-selective phage display approach for mining the functional potential of a rumen microbial community.

    PubMed

    Ciric, Milica; Moon, Christina D; Leahy, Sinead C; Creevey, Christopher J; Altermann, Eric; Attwood, Graeme T; Rakonjac, Jasna; Gagic, Dragana

    2014-05-12

    In silico, secretome proteins can be predicted from completely sequenced genomes using various available algorithms that identify membrane-targeting sequences. For metasecretome (collection of surface, secreted and transmembrane proteins from environmental microbial communities) this approach is impractical, considering that the metasecretome open reading frames (ORFs) comprise only 10% to 30% of total metagenome, and are poorly represented in the dataset due to overall low coverage of metagenomic gene pool, even in large-scale projects. By combining secretome-selective phage display and next-generation sequencing, we focused the sequence analysis of complex rumen microbial community on the metasecretome component of the metagenome. This approach achieved high enrichment (29 fold) of secreted fibrolytic enzymes from the plant-adherent microbial community of the bovine rumen. In particular, we identified hundreds of heretofore rare modules belonging to cellulosomes, cell-surface complexes specialised for recognition and degradation of the plant fibre. As a method, metasecretome phage display combined with next-generation sequencing has a power to sample the diversity of low-abundance surface and secreted proteins that would otherwise require exceptionally large metagenomic sequencing projects. As a resource, metasecretome display library backed by the dataset obtained by next-generation sequencing is ready for i) affinity selection by standard phage display methodology and ii) easy purification of displayed proteins as part of the virion for individual functional analysis.

  16. Volumetric visualization algorithm development for an FPGA-based custom computing machine

    NASA Astrophysics Data System (ADS)

    Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim

    1998-05-01

    Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.

  17. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and its algorithm's performance determines the quality and resolution of the reconstructed image. Although several algorithms have been used, the filtered back-projection (FBP) algorithm is still the classical and commonly-used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step to overcome artifacts in the reconstructed image. Simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noises. So, improved wavelet denoising combined with a parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image in this paper. In the experiments, the reconstruction effects were compared between the improved wavelet denoising and others (direct FBP, mean filter combined with FBP, and median filter combined with FBP). To determine the optimum reconstruction effect, different algorithms, and different wavelet bases combined with three filters, were respectively tested. Experimental results show the reconstruction effect of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms based on two evaluation standards, i.e. mean-square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the reconstruction effect of the improved FBP based on db2 and the Hanning filter at decomposition scale 2 was best; its MSE value was lower and its PSNR value higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
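
    The FBP pipeline the paper builds on condenses to two steps: ramp-filter each parallel-beam projection in the Fourier domain, then back-project. A bare numpy sketch (Ram-Lak filter only; the Shepp-Logan or Hanning windows discussed above would multiply `ramp`, and the wavelet denoising would precede the filtering):

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Minimal parallel-beam filtered back-projection.
    sinogram: (n_angles, n_detectors) array of projections."""
    n_ang, n_det = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n_det))                 # Ram-Lak filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    mid = n_det // 2
    xs = np.arange(n_det) - mid
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n_det, n_det))
    for proj, th in zip(filtered, np.radians(angles_deg)):
        # detector coordinate of each pixel for this view, then smear back
        t = np.round(X * np.cos(th) + Y * np.sin(th)).astype(int) + mid
        recon += proj[np.clip(t, 0, n_det - 1)]
    return recon * np.pi / n_ang
```
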

  18. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention to networked control, system decomposition and distributed models show significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by the affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively block-wise renewing the samples. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
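
    The affinity propagation step above (after Frey and Dueck) can be illustrated with a bare-bones message-passing loop on a similarity matrix; the damping factor and the preference placed on the diagonal are the usual knobs, and the values below are illustrative rather than the paper's:

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=200):
    """Minimal affinity propagation: iterate responsibility/availability
    messages on similarity matrix S (diagonal = exemplar preferences)
    and return, for each point, the index of its chosen exemplar."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # responsibilities: r(i,k) = s(i,k) - max_{k'!=k} (a(i,k') + s(i,k'))
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0.0)
        np.fill_diagonal(Rp, np.diag(R))
        col = Rp.sum(axis=0)
        Anew = np.minimum(0.0, col[None, :] - Rp)
        np.fill_diagonal(Anew, col - np.diag(Rp))
        A = damping * A + (1 - damping) * Anew
    return np.argmax(A + R, axis=1)
```

In the paper's setting the "points" would be controlled variables and S a similarity between their trajectories; here a 1-D toy shows the exemplar mechanics.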

  19. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu invariant moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, redundant invalid information, and inefficiency. First, the neighborhood of pixels is used to extract contour information, employing the Hu invariant moments as the similarity measure to extract SIFT feature points in the similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and obtain fewer mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the method proposed in this paper.
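
    The affine-matrix estimation step above, once matched point pairs survive filtering, is a small linear least-squares problem; a sketch (RANSAC-style outlier rejection, which such pipelines typically add, is omitted for brevity):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares estimate of the 2x3 affine matrix M mapping matched
    points src -> dst: each pair contributes one x-equation and one
    y-equation to an overdetermined linear system."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src   # x' = a*x + b*y + c
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src   # y' = d*x + e*y + f
    M[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    return params.reshape(2, 3)
```
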

  20. A low-complexity 2-point step size gradient projection method with selective function evaluations for smoothed total variation based CBCT reconstructions

    NASA Astrophysics Data System (ADS)

    Song, Bongyong; Park, Justin C.; Song, William Y.

    2014-11-01

    The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3D CBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied it to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on an NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using the Shepp-Logan numerical phantom with 40 projections. GPBB-SFE shows either faster convergence speed/time or superior convergence properties compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs an image of standard 364-projection FDK reconstruction quality using only 60 projections. For the clinically-treated head-and-neck patient, the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces a visibly equivalent quality CBCT image for the head-and-neck patient with only 180 projections, in 131.7 s, further supporting its clinical applicability.
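
    The Barzilai-Borwein idea at the heart of GPBB can be shown in isolation: a projected-gradient loop whose step size is computed from successive iterate and gradient differences rather than a fixed schedule. This sketch omits the paper's selective-function-evaluation safeguard (which is what supplies the convergence guarantee); the initial step and iteration count are illustrative:

```python
import numpy as np

def gpbb(grad, project, x0, iters=50):
    """Projected gradient descent with the BB1 step size
    step = (s.s)/(s.y), where s and y are successive iterate and
    gradient differences."""
    x = project(np.asarray(x0, float))
    g = grad(x)
    step = 1e-2                        # conservative first step
    for _ in range(iters):
        x_new = project(x - step * g)  # gradient step, then projection
        g_new = grad(x_new)
        s = x_new - x
        y = g_new - g
        sy = float(s @ y)
        if sy > 1e-12:                 # guard against division blow-up
            step = float(s @ s) / sy
        x, g = x_new, g_new
    return x
```
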

  2. Accurate approximation method for prediction of class I MHC affinities for peptides of length 8, 10 and 11 using prediction tools trained on 9mers.

    PubMed

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2008-06-01

    Several accurate prediction systems have been developed for prediction of class I major histocompatibility complex (MHC):peptide binding. Most of these are trained on binding affinity data of primarily 9mer peptides. Here, we show how prediction methods trained on 9mer data can be used for accurate binding affinity prediction of peptides of length 8, 10 and 11. The method gives the opportunity to predict peptides with a different length than nine for MHC alleles where no such peptides have been measured. As validation, the performance of this approach is compared to predictors trained on peptides of the peptide length in question. In this validation, the approximation method has an accuracy that is comparable to or better than methods trained on a peptide length identical to the predicted peptides. The algorithm has been implemented in the web-accessible servers NetMHC-3.0: http://www.cbs.dtu.dk/services/NetMHC-3.0, and NetMHCpan-1.1: http://www.cbs.dtu.dk/services/NetMHCpan-1.1

  3. Comparison of sampling techniques for Bayesian parameter estimation

    NASA Astrophysics Data System (ADS)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
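
    The affine-invariant ensemble flavour compared above can be made concrete with a minimal Goodman-Weare stretch-move sampler (the update rule popularized by emcee): each walker moves along the line towards a randomly chosen partner with a stretch factor drawn from g(z) proportional to 1/sqrt(z) on [1/a, a]. Walker count, step count, and the stretch parameter `a` below are illustrative defaults.

```python
import numpy as np

def stretch_move_sample(log_prob, p0, n_steps=500, a=2.0, rng=None):
    """Affine-invariant ensemble MCMC via the Goodman-Weare stretch move.
    p0: (n_walkers, n_dim) initial ensemble; returns all post-move states."""
    rng = np.random.default_rng(rng)
    walkers = np.array(p0, float)
    n_w, n_d = walkers.shape
    lp = np.array([log_prob(w) for w in walkers])
    chain = []
    for _ in range(n_steps):
        for k in range(n_w):
            j = rng.integers(n_w - 1)
            j = j if j < k else j + 1               # partner != k
            z = (1 + (a - 1) * rng.random()) ** 2 / a   # draw from g(z) ~ 1/sqrt(z)
            prop = walkers[j] + z * (walkers[k] - walkers[j])
            lp_prop = log_prob(prop)
            # accept with probability min(1, z^(d-1) * p(prop)/p(current))
            if np.log(rng.random()) < (n_d - 1) * np.log(z) + lp_prop - lp[k]:
                walkers[k], lp[k] = prop, lp_prop
        chain.append(walkers.copy())
    return np.concatenate(chain)
```

Because the proposal is built from the ensemble's own geometry, the sampler is invariant under affine rescalings of parameter space, which is what removes most of the step-size tuning that plain Metropolis-Hastings requires.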

  4. In Silico Synthesis of Synthetic Receptors: A Polymerization Algorithm.

    PubMed

    Cowen, Todd; Busato, Mirko; Karim, Kal; Piletsky, Sergey A

    2016-12-01

    Molecularly imprinted polymer (MIP) synthetic receptors have been proposed and applied in chemical extraction, sensors, assays, catalysis, targeted drug delivery, and direct inhibition of harmful chemicals and pathogens. However, they rely heavily on effective design for success. An algorithm has been written which mimics radical polymerization atomistically, accounting for chemical and spatial discrimination, hybridization, and geometric optimization. Synthetic ephedrine receptors were synthesized in silico to demonstrate the accuracy of the algorithm in reproducing polymer structures at the atomic level. Comparative analysis in the design of a synthetic ephedrine receptor demonstrates that the new method can effectively identify affinity trends and binding site selectivities where commonly used alternative methods cannot. This new method is believed to generate the most realistic models of MIPs thus far produced. This suggests that the algorithm could be a powerful new tool in the design and analysis of various polymers, including MIPs, with significant implications in areas of biotechnology, biomimetics, and the materials sciences more generally. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Collaborative filtering recommendation model based on fuzzy clustering algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Ye; Zhang, Yunhua

    2018-05-01

    As one of the most widely used algorithms in recommender systems, the collaborative filtering algorithm faces two serious problems: the sparsity of data and poor recommendation effect in big data environments. In traditional clustering analysis, each object is strictly divided into one of several classes and the boundary of this division is very clear. However, for most objects in real life there is no strict definition of the form and attributes of their class. Concerning the problems above, this paper proposes to improve the traditional collaborative filtering model through the hybrid optimization of a latent semantic algorithm and a fuzzy clustering algorithm, cooperating with the collaborative filtering algorithm. The fuzzy clustering algorithm is introduced to fuzzily cluster the item attribute information, which lets each item belong to different item categories with different membership degrees, increases the density of the data, effectively reduces the sparsity of the data, and solves the problem of low accuracy that results from inaccurate similarity calculation. Finally, this paper carries out an empirical analysis on the MovieLens dataset and compares the model with the traditional user-based collaborative filtering algorithm. The proposed algorithm greatly improves the recommendation accuracy.
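
    The fuzzy clustering step can be illustrated with compact fuzzy c-means, a standard choice for this role (the abstract does not state which fuzzy variant is used, so this is a representative sketch): every item receives a graded membership in every cluster instead of a hard label.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, rng=0):
    """Fuzzy c-means: alternate membership updates and weighted-center
    updates; m > 1 is the fuzzifier controlling how soft memberships are."""
    rng = np.random.default_rng(rng)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```

In the recommendation setting above, the rows of `U` are what let one item contribute, with different weights, to several item categories at once, densifying the rating data before similarity computation.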

  6. Expansion-based passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1993-01-01

    A new technique of passive ranging which is based on utilizing the image-plane expansion experienced by every object as its distance from the sensor decreases is described. This technique belongs in the feature/object-based family. The motion and shape of a small window, assumed to be fully contained inside the boundaries of some object, is approximated by an affine transformation. The parameters of the transformation matrix are derived by initially comparing successive images, and progressively increasing the image time separation so as to achieve much larger triangulation baseline than currently possible. Depth is directly derived from the expansion part of the transformation. To a first approximation, image-plane expansion is independent of image-plane location with respect to the focus of expansion (FOE) and of platform maneuvers. Thus, an expansion-based method has the potential of providing a reliable range in the difficult image area around the FOE. In areas far from the FOE the shift parameters of the affine transformation can provide more accurate depth information than the expansion alone, and can thus be used similarly to the way they were used in conjunction with the Inertial Navigation Unit (INU) and Kalman filtering. However, the performance of a shift-based algorithm, when the shifts are derived from the affine transformation, would be much improved compared to current algorithms because the shifts - as well as the other parameters - can be obtained between widely separated images. Thus, the main advantage of this new approach is that, allowing the tracked window to expand and rotate, in addition to moving laterally, enables one to correlate images over a very long time span which, in turn, translates into a large spatial baseline - resulting in a proportionately higher depth accuracy.
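
    The depth-from-expansion relation the abstract leans on follows from image scale varying as 1/range: if a tracked window grows by factor s over an interval dt while the sensor closes at speed v, then s = R/(R - v*dt), so R = s*v*dt/(s - 1). A one-line sketch of this triangulation arithmetic (motion assumed along the optical axis; names and numbers are illustrative):

```python
def range_from_expansion(scale, closing_speed, dt):
    """Range at the first frame, given the measured image expansion factor
    `scale` between two frames dt apart and the known closing speed.
    Derivation: image size ~ 1/R, so scale = R / (R - v*dt)."""
    return scale * closing_speed * dt / (scale - 1.0)
```

The larger the time separation dt the affine tracking permits, the larger the expansion and the effective baseline, which is exactly why the abstract's long-span window tracking translates into higher depth accuracy.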

  7. Expansion-based passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1993-01-01

    This paper describes a new technique of passive ranging based on utilizing the image-plane expansion experienced by every object as its distance from the sensor decreases. This technique belongs in the feature/object-based family. The motion and shape of a small window, assumed to be fully contained inside the boundaries of some object, are approximated by an affine transformation. The parameters of the transformation matrix are derived by initially comparing successive images, and progressively increasing the image time separation so as to achieve a much larger triangulation baseline than currently possible. Depth is directly derived from the expansion part of the transformation. To a first approximation, image-plane expansion is independent of image-plane location with respect to the focus of expansion (FOE) and of platform maneuvers. Thus, an expansion-based method has the potential of providing a reliable range in the difficult image area around the FOE. In areas far from the FOE, the shift parameters of the affine transformation can provide more accurate depth information than the expansion alone, and can thus be used similarly to the way they have been used in conjunction with the Inertial Navigation Unit (INU) and Kalman filtering. However, the performance of a shift-based algorithm, when the shifts are derived from the affine transformation, would be much improved compared to current algorithms because the shifts, as well as the other parameters, can be obtained between widely separated images. Thus, the main advantage of this new approach is that allowing the tracked window to expand and rotate, in addition to moving laterally, enables one to correlate images over a very long time span, which in turn translates into a large spatial baseline, resulting in a proportionately higher depth accuracy.

  8. Modular Bundle Adjustment for Photogrammetric Computations

    NASA Astrophysics Data System (ADS)

    Börlin, N.; Murtiyoso, A.; Grussenmeyer, P.; Menna, F.; Nocerino, E.

    2018-05-01

    In this paper we investigate how the residuals in bundle adjustment can be split into a composition of simple functions. According to the chain rule, the Jacobian (linearisation) of the residual can be formed as a product of the Jacobians of the individual steps. When implemented, this enables a modularisation of the computation of the bundle adjustment residuals and Jacobians in which each component has limited responsibility. This allows simple replacement of components, e.g. implementing different projection or rotation models by exchanging a module. The technique has previously been used to implement bundle adjustment in the open-source package DBAT (Börlin and Grussenmeyer, 2013), based on the photogrammetric and computer vision interpretations of the Brown (1971) lens distortion model. In this paper, we applied the technique to investigate how affine distortions can be used to model the projection of a tilt-shift lens. Two extended distortion models were implemented to test the hypothesis that the ordering of the affine and lens distortion steps can be changed to reduce the size of the residuals of a tilt-shift lens calibration. Results on synthetic data confirm that the ordering of the affine and lens distortion steps matters and is detectable by DBAT. However, when applied to a real camera calibration data set of a tilt-shift lens, no difference between the extended models was seen. This suggests that the tested hypothesis is false and that other effects need to be modelled to better explain the projection. The relatively low implementation effort that was needed to generate the models suggests that the technique can be used to investigate other novel projection models in photogrammetry, including modelling changes in the 3D geometry to better understand the tilt-shift lens.
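    The chain-rule modularisation described above can be sketched generically: each step returns its value and its Jacobian, and the residual Jacobian is the product of the step Jacobians. This is an illustrative two-module example (an affine step followed by a pinhole projection), not DBAT's actual module interface:

```python
import numpy as np

# Two residual "modules": an affine step and a pinhole projection step.
# Each returns its value and its Jacobian; the chain rule multiplies them.

def affine(x, A, b):
    return A @ x + b, A                      # value, Jacobian d(Ax+b)/dx

def pinhole(p):
    x, y, z = p
    u = np.array([x / z, y / z])
    J = np.array([[1 / z, 0, -x / z**2],
                  [0, 1 / z, -y / z**2]])
    return u, J

A = np.array([[0.9, 0.1, 0.0],
              [0.0, 1.0, 0.2],
              [0.1, 0.0, 1.1]])
b = np.array([0.1, -0.2, 5.0])
x0 = np.array([1.0, 2.0, 3.0])

p, Jp = affine(x0, A, b)
u, Ju = pinhole(p)
J_chain = Ju @ Jp                            # chain rule: product of Jacobians

# sanity check against a finite-difference Jacobian
h = 1e-6
J_fd = np.zeros((2, 3))
for j in range(3):
    e = np.zeros(3); e[j] = h
    J_fd[:, j] = (pinhole(affine(x0 + e, A, b)[0])[0] - u) / h

print(np.abs(J_chain - J_fd).max())          # small: the product is correct
```

    Swapping in a different projection or rotation model only requires replacing one module and its Jacobian; the composition logic is untouched.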

  9. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

    We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem that involves only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than in previous formulations. The Levenberg-Marquardt algorithm with a finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
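    The variable projection idea, eliminating the linear parameters via an inner least-squares solve so that the outer iteration sees only the nonlinear parameters, can be sketched as follows. This is a generic Gauss-Newton sketch with finite differences on a toy two-exponential model, not the paper's matrix-decomposition formulation or its Levenberg-Marquardt solver:

```python
import numpy as np

def phi(a, t):
    """Basis matrix of nonlinear functions: columns exp(-a_j * t)."""
    return np.exp(-np.outer(t, a))

def varpro_residual(a, t, y):
    """Project the linear parameters out: solve for c by least squares,
    then return the residual of the reduced (nonlinear-only) problem."""
    P = phi(a, t)
    c, *_ = np.linalg.lstsq(P, y, rcond=None)
    return P @ c - y, c

# synthetic separable model: y = 2 exp(-1.0 t) + 1 exp(-3.0 t)
t = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-1.0 * t) + 1.0 * np.exp(-3.0 * t)

a = np.array([0.8, 2.5])          # starting guess for nonlinear params
for _ in range(30):               # Gauss-Newton with finite differences
    r, c = varpro_residual(a, t, y)
    J = np.zeros((len(t), len(a)))
    h = 1e-7
    for j in range(len(a)):
        e = np.zeros(len(a)); e[j] = h
        J[:, j] = (varpro_residual(a + e, t, y)[0] - r) / h
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    a = np.maximum(a + step, 1e-3)           # keep decay rates positive

r, c = varpro_residual(a, t, y)
print(np.sort(a), np.linalg.norm(r))
```

    The outer iteration only ever updates the two decay rates; the two amplitudes reappear for free from the inner `lstsq`, which is the computational point of variable projection.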

  10. Relative Binding Free Energy Calculations in Drug Discovery: Recent Advances and Practical Considerations.

    PubMed

    Cournia, Zoe; Allen, Bryce; Sherman, Woody

    2017-12-26

    Accurate in silico prediction of protein-ligand binding affinities has been a primary objective of structure-based drug design for decades due to the putative value it would bring to the drug discovery process. However, computational methods have historically failed to deliver value in real-world drug discovery applications due to a variety of scientific, technical, and practical challenges. Recently, a family of approaches commonly referred to as relative binding free energy (RBFE) calculations, which rely on physics-based molecular simulations and statistical mechanics, have shown promise in reliably generating accurate predictions in the context of drug discovery projects. This advance arises from accumulating developments in the underlying scientific methods (decades of research on force fields and sampling algorithms) coupled with vast increases in computational resources (graphics processing units and cloud infrastructures). Mounting evidence from retrospective validation studies, blind challenge predictions, and prospective applications suggests that RBFE simulations can now predict the affinity differences for congeneric ligands with sufficient accuracy and throughput to deliver considerable value in hit-to-lead and lead optimization efforts. Here, we present an overview of current RBFE implementations, highlighting recent advances and remaining challenges, along with examples that emphasize practical considerations for obtaining reliable RBFE results. We focus specifically on relative binding free energies because the calculations are less computationally intensive than absolute binding free energy (ABFE) calculations and map directly onto the hit-to-lead and lead optimization processes, where the prediction of relative binding energies between a reference molecule and new ideas (virtual molecules) can be used to prioritize molecules for synthesis. We describe the critical aspects of running RBFE calculations, from both theoretical and applied perspectives, using a combination of retrospective literature examples and prospective studies from drug discovery projects. This work is intended to provide a contemporary overview of the scientific, technical, and practical issues associated with running relative binding free energy simulations, with a focus on real-world drug discovery applications. We offer guidelines for improving the accuracy of RBFE simulations, especially for challenging cases, and emphasize unresolved issues that could be improved by further research in the field.

  11. Iterative projection algorithms for ab initio phasing in virus crystallography.

    PubMed

    Lo, Victor L; Kingston, Richard L; Millane, Rick P

    2016-12-01

    Iterative projection algorithms are proposed as a tool for ab initio phasing in virus crystallography. The good global convergence properties of these algorithms, coupled with the spherical shape and high structural redundancy of icosahedral viruses, allows high resolution phases to be determined with no initial phase information. This approach is demonstrated by determining the electron density of a virus crystal with 5-fold non-crystallographic symmetry, starting with only a spherical shell envelope. The electron density obtained is sufficiently accurate for model building. The results indicate that iterative projection algorithms should be routinely applicable in virus crystallography, without the need for ancillary phase information. Copyright © 2016 Elsevier Inc. All rights reserved.
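    As a generic illustration of iterative projection algorithms of this kind, the sketch below alternates projections between the measured Fourier magnitudes and real-space constraints (support and non-negativity), i.e. classic error reduction. The crystallographic implementation additionally exploits non-crystallographic symmetry averaging and envelope information, which is not reproduced here; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# ground-truth object: non-negative, confined to a known support
n = 32
support = np.zeros((n, n), dtype=bool)
support[8:24, 8:24] = True
truth = np.where(support, rng.random((n, n)), 0.0)

mags = np.abs(np.fft.fft2(truth))        # "measured" Fourier magnitudes

def fourier_err(x):
    return np.linalg.norm(np.abs(np.fft.fft2(x)) - mags)

x = np.where(support, rng.random((n, n)), 0.0)   # random start, no phases
err0 = fourier_err(x)
for _ in range(200):
    X = np.fft.fft2(x)
    X = mags * np.exp(1j * np.angle(X))  # project onto measured magnitudes
    x = np.fft.ifft2(X).real
    x[~support] = 0.0                    # project onto support...
    x[x < 0] = 0.0                       # ...and non-negativity
print(err0, fourier_err(x))              # the magnitude error shrinks
```

    The global convergence behaviour cited in the abstract comes from more sophisticated variants of this alternation; this sketch only shows the basic projection structure.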

  12. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm*

    PubMed Central

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-01-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm that eliminates the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. The proposed BPF algorithm has improved noise properties, while retaining the advantages of the original BPF algorithm such as its minimum data requirement. PMID:20617122

  13. Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm.

    PubMed

    Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2010-02-01

    The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm that eliminates the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. The proposed BPF algorithm has improved noise properties, while retaining the advantages of the original BPF algorithm such as its minimum data requirement.

  14. GPU-based Branchless Distance-Driven Projection and Backprojection

    PubMed Central

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-01-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of this branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved a 137-fold speedup for forward projection and a 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated with iterative reconstruction algorithms on both simulated and real datasets, and produced visually identical images to the CPU reference algorithm. PMID:29333480
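    The three branchless steps are easiest to see in one dimension: integrate the signal into a cumulative sum, linearly interpolate that sum at the detector-cell edges, and differentiate. A 1D sketch of this factorization follows (not the paper's GPU kernel, which works on 3D cone-beam geometry with texture hardware; the signal and cell layout are invented):

```python
import numpy as np

def branchless_dd_1d(signal, src_edges, det_edges):
    """Branchless distance-driven resampling in 1D: (1) integrate the
    signal into a cumulative sum over source-cell edges, (2) linearly
    interpolate the cumulative sum at the detector-cell edges, and
    (3) differentiate to recover per-cell integrals. No per-overlap
    branching on cell boundaries is needed."""
    cum = np.concatenate(([0.0], np.cumsum(signal * np.diff(src_edges))))
    sampled = np.interp(det_edges, src_edges, cum)
    return np.diff(sampled)

src_edges = np.arange(9.0)                  # 8 unit source cells
signal = np.array([0, 1, 3, 2, 5, 4, 1, 0], dtype=float)
det_edges = np.linspace(0.0, 8.0, 6)        # 5 wider detector cells

proj = branchless_dd_1d(signal, src_edges, det_edges)
print(proj, proj.sum())                     # total mass is preserved
```

    Because each step is a uniform array operation (cumsum, interpolation, diff), all threads execute the same instructions, which is what makes the factorization GPU-friendly.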

  15. GPU-based Branchless Distance-Driven Projection and Backprojection.

    PubMed

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-12-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of this branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved a 137-fold speedup for forward projection and a 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated with iterative reconstruction algorithms on both simulated and real datasets, and produced visually identical images to the CPU reference algorithm.

  16. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    PubMed

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients, who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that our fast CBCT reconstruction algorithm achieves an overall 36-72 times dose reduction. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
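    A forward-backward splitting iteration of the kind used above alternates a gradient step on the data-fidelity term with a proximal step on the regularizer. In the sketch below the total variation term is swapped for an l1 term, whose proximal operator is simple soft-thresholding, purely to keep the example short; the measurement matrix and sparse "image" are synthetic stand-ins for the undersampled projection problem:

```python
import numpy as np

rng = np.random.default_rng(2)

# Undersampled linear measurements of a sparse signal. The paper's
# energy is data fidelity + total variation; here the regularizer is
# an l1 term so the backward (proximal) step is a one-line shrink.
m, n = 20, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[[5, 17, 33]] = [1.5, -2.0, 1.0]          # sparse ground truth
b = A @ x0                                   # noiseless projections

lam = 0.01
t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L

x = np.zeros(n)
for _ in range(3000):
    g = x - t * A.T @ (A @ x - b)            # forward (gradient) step
    x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # backward (prox)

print(np.max(np.abs(x - x0)))                # close to the sparse truth
```

    The CBCT algorithm has the same two-phase structure, with the shrink replaced by a (more expensive) total variation proximal step and the dense matrix replaced by the cone-beam forward projector.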

  17. A blind transform based approach for the detection of isolated astrophysical pulses

    NASA Astrophysics Data System (ADS)

    Alkhweldi, Marwan; Schmid, Natalia A.; Prestage, Richard M.

    2017-06-01

    This paper presents a blind algorithm for the automatic detection of isolated astrophysical pulses. The detection algorithm is applied to spectrograms (also known as "filter bank data" or "the (t,f) plane") and comprises a sequence of three steps: (1) a Radon transform is applied to the spectrogram, (2) a Fourier transform is applied to each projection parametrized by an angle, and the total power in each projection is calculated, and (3) the total power of all projections above 90° is compared to the total power of all projections below 90°, and a decision is made as to whether an astrophysical pulse is present or absent. Once a pulse is detected, its Dispersion Measure (DM) is estimated by fitting an analytically developed expression for a transformed spectrogram containing a pulse, with varying values of DM, to the actual data. The performance of the proposed algorithm is numerically analyzed.
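    Steps (1) and (3) can be sketched with a crude discrete Radon transform: bin pixel weights by t = x cos θ + y sin θ and measure how strongly each projection concentrates the power. The example below uses invented data and sums squared bin values instead of the authors' per-projection Fourier transform; it shows that projecting along a slanted track concentrates its power at one angle:

```python
import numpy as np

def projection_power(img, theta_deg, nbins=91):
    """Total power of the projection of img onto direction theta:
    pixel weights are binned by t = x cos(theta) + y sin(theta)."""
    th = np.deg2rad(theta_deg)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    t = xs * np.cos(th) + ys * np.sin(th)
    r = np.hypot(*img.shape)                 # fixed bin range for all angles
    hist, _ = np.histogram(t, bins=nbins, range=(-r, r), weights=img)
    return (hist ** 2).sum()

# a dispersed pulse traces a slanted track in the (t, f) plane;
# model it here, very crudely, as a straight line along y = x
img = np.zeros((64, 64))
idx = np.arange(64)
img[idx, idx] = 1.0

angles = np.arange(0.0, 180.0, 5.0)
powers = [projection_power(img, a) for a in angles]
best = angles[int(np.argmax(powers))]
print(best)          # projecting along the track concentrates the power
```

    For the line y = x, the projection at 135° maps every track pixel to t = 0, so that single angle dominates the power, which is the asymmetry the detector's above-90°/below-90° comparison exploits.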

  18. A Novel Antibody Humanization Method Based on Epitopes Scanning and Molecular Dynamics Simulation

    PubMed Central

    Zhao, Bin-Bin; Gong, Lu-Lu; Jin, Wen-Jing; Liu, Jing-Jun; Wang, Jing-Fei; Wang, Tian-Tian; Yuan, Xiao-Hui; He, You-Wen

    2013-01-01

    1-17-2 is a rat anti-human DEC-205 monoclonal antibody that induces internalization and delivers antigen to dendritic cells (DCs). The potential clinical application of this antibody is limited by its murine origin. Traditional humanization methods, such as complementarity-determining region (CDR) grafting, often lead to decreased or even lost affinity. Here we have developed a novel antibody humanization method based on computer modeling and bioinformatics analysis. First, we used homology modeling technology to build a precise model of the Fab. A novel epitope scanning algorithm was designed to identify antigenic residues in the framework regions (FRs) that need to be mutated to their human counterparts in the humanization process. Then virtual mutation and molecular dynamics (MD) simulation were used to assess the conformational impact imposed by all the mutations. By comparing the root-mean-square deviations (RMSDs) of the CDRs, we found five key residues whose mutations would destroy the original conformation of the CDRs. These residues need to be back-mutated to rescue the antibody binding affinity. Finally, we constructed the antibodies in vitro and compared their binding affinities by flow cytometry and surface plasmon resonance (SPR) assay. The binding affinity of the refined humanized antibody was similar to that of the original rat antibody. Our results establish a novel method based on epitope scanning and MD simulation for antibody humanization. PMID:24278299

  19. Algorithm of choosing type of mechanical assembly production of instrument making enterprises of Industry 4.0

    NASA Astrophysics Data System (ADS)

    Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.; Zharinov, O. O.

    2018-05-01

    This paper studies an algorithm for choosing the type of mechanical assembly production in instrument-making enterprises of Industry 4.0 and compares two project algorithms, one for Industry 3.0 and one for Industry 4.0. The algorithm is based on analysis of the technological route of the manufacturing process in a company equipped with cyber-physical systems, and it can yield project solutions selected from either the primary or the auxiliary part of production. The algorithm's decision rules are based on an optimality criterion.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, S.; Paschal, C.B.; Galloway, R.L.

    Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage of nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on the information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections in general will be different from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
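    The effect of the interpolation kernel on an MIP ray can be sketched in 2D: a ray that passes exactly through a bright pixel transfers its full intensity under either kernel, while an off-grid ray snaps to the bright pixel under nearest neighbor (exaggerating small vessels) but is attenuated under linear interpolation. A minimal illustration with invented data, not the paper's ray tracer:

```python
import numpy as np

def bilinear(img, x, y):
    """Linear (bilinear) interpolation of img at continuous (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def mip_along_ray(img, p0, d, n_steps, interp="nearest"):
    """Maximum intensity along the ray p0 + s*d, sampling with either a
    nearest-neighbour or a linear interpolation kernel."""
    best = -np.inf
    for s in range(n_steps):
        x, y = p0[0] + s * d[0], p0[1] + s * d[1]
        if interp == "nearest":
            v = img[int(np.round(y)), int(np.round(x))]
        else:
            v = bilinear(img, x, y)
        best = max(best, v)
    return best

img = np.zeros((11, 11))
img[5, 5] = 1.0                       # a one-pixel "vessel"

# a ray down row y = 5 passes exactly through the bright pixel,
# so both kernels transfer its full intensity to the projection
print(mip_along_ray(img, (0.0, 5.0), (1.0, 0.0), 11, "nearest"),
      mip_along_ray(img, (0.0, 5.0), (1.0, 0.0), 11, "linear"))

# an oblique ray between grid rows: nearest neighbour still snaps to
# the bright pixel, while linear interpolation attenuates it
print(mip_along_ray(img, (0.0, 5.4), (1.0, 0.0), 11, "nearest"),
      mip_along_ray(img, (0.0, 5.4), (1.0, 0.0), 11, "linear"))
```

    The second ray shows the width-transfer trade-off the abstract describes: nearest neighbor reports the full intensity for a ray that misses the vessel centre, widening small vessels, whereas linear interpolation reports a reduced value.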

  1. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections.

    PubMed

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2010-09-01

    To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. For the phantom study, the seed localization error is (0.58 +/- 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.
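    The IFPM objective, the sum of squared intensity differences between computed and measured seed projections over all views, can be sketched with a toy stand-in: a simple pinhole camera replaces the cone-beam geometry, and seeds are rendered as Gaussian blobs so the cost is smooth. All geometry and parameters here are invented; the point is only that the cost is zero at the true configuration and grows as the seeds are perturbed:

```python
import numpy as np

def project_seeds(seeds, f=1000.0, c=64.0):
    """Pinhole projection of 3D seed centres to image coordinates
    (a stand-in for the real cone-beam projection geometry)."""
    return np.column_stack((f * seeds[:, 0] / seeds[:, 2] + c,
                            f * seeds[:, 1] / seeds[:, 2] + c))

def render(uv, shape=(128, 128), sigma=1.5):
    """Render projected seeds as small Gaussian blobs (the real binary
    seed-only images are idealised as smooth blobs here)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    for u, v in uv:
        img += np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma ** 2))
    return img

def ifpm_cost(seeds, measured_imgs, cams):
    """Sum of squared intensity differences between computed and
    measured projections over all views."""
    return sum(np.sum((render(project_seeds(seeds @ R.T + t)) - m) ** 2)
               for (R, t), m in zip(cams, measured_imgs))

rng = np.random.default_rng(3)
true_seeds = rng.uniform([-5, -5, 95], [5, 5, 105], size=(12, 3))

# two views: identity pose and a 10-degree rotation about the y axis
# through the seed-cloud centre, so both views see all the seeds
a = np.deg2rad(10.0)
R2 = np.array([[np.cos(a), 0, np.sin(a)],
               [0, 1, 0],
               [-np.sin(a), 0, np.cos(a)]])
c0 = np.array([0.0, 0.0, 100.0])
cams = [(np.eye(3), np.zeros(3)), (R2, c0 - R2 @ c0)]
measured = [render(project_seeds(true_seeds @ R.T + t)) for R, t in cams]

perturbed = true_seeds + rng.normal(0.0, 0.5, true_seeds.shape)
print(ifpm_cost(true_seeds, measured, cams),
      ifpm_cost(perturbed, measured, cams))
```

    IFPM iteratively adjusts the estimated seed configuration to drive this cost down, which is why no explicit seed-to-seed correspondence between views is ever needed.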

  2. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.

    2010-09-15

    Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, seed localization error is (0.58±0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  3. Computational identification of post-translational modification-based nuclear import regulations by characterizing nuclear localization signal-import receptor interaction.

    PubMed

    Lin, Jhih-Rong; Liu, Zhonghao; Hu, Jianjun

    2014-10-01

    The binding affinity between a nuclear localization signal (NLS) and its import receptor is closely related to the corresponding nuclear import activity. PTM-based modulation of the NLS binding affinity to the import receptor is one of the best understood mechanisms for regulating nuclear import of proteins. However, identification of such regulation mechanisms is challenging due to the difficulty of assessing the impact of a PTM on the corresponding nuclear import activity. In this study we proposed NIpredict, an effective algorithm to predict nuclear import activity given an NLS, in which molecular interaction energy components (MIECs) were used to characterize the NLS-import receptor interaction, and support vector regression (SVR) was used to learn the relationship between the characterized NLS-import receptor interaction and the corresponding nuclear import activity. Our experiments showed that changes in nuclear import activity due to NLS changes could be accurately predicted by the NIpredict algorithm. Based on NIpredict, we developed a systematic framework to identify potential PTM-based nuclear import regulations for human and yeast nuclear proteins. Application of this approach has identified potential nuclear import regulation mechanisms by phosphorylation of two nuclear proteins, SF1 and ORC6. © 2014 Wiley Periodicals, Inc.

  4. Computationally efficient video restoration for Nyquist sampled imaging sensors combining an affine-motion-based temporal Kalman filter and adaptive Wiener filter.

    PubMed

    Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J

    2014-05-01

    In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
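    The temporal Kalman stage can be sketched per pixel for a static scene: each pixel runs a scalar Kalman filter with a random-walk state model, so temporal noise reduction accumulates over frames. This omits the paper's affine background motion model, multidelay extension, and AWF stage; the scene and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# static scene observed through additive noise; each pixel gets its own
# scalar Kalman filter with a random-walk state model
truth = rng.random((32, 32))
r = 0.05 ** 2            # measurement noise variance (assumed known)
q = 1e-6                 # process noise: scene assumed nearly static

est = np.zeros_like(truth)
var = np.full_like(truth, 1.0)       # large initial uncertainty
for _ in range(20):                  # 20 noisy frames
    frame = truth + rng.normal(0.0, 0.05, truth.shape)
    var = var + q                    # predict: uncertainty grows slightly
    K = var / (var + r)              # Kalman gain, per pixel
    est = est + K * (frame - est)    # update toward the new observation
    var = (1.0 - K) * var            # posterior uncertainty shrinks

print(np.abs(frame - truth).mean(), np.abs(est - truth).mean())
```

    The residual noise variance `var` left after this stage is exactly the spatially varying quantity the adaptive Wiener filter in the paper uses to decide how aggressive its deconvolution can be in each region.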

  5. Event-driven management algorithm of an Engineering documents circulation system

    NASA Astrophysics Data System (ADS)

    Kuzenkov, V.; Zebzeev, A.; Gromakov, E.

    2015-04-01

    A development methodology for an engineering document circulation system in a design company is reviewed. Discrete event-driven automata models for describing project management algorithms are proposed, and the use of Petri nets for dynamic project design is suggested.

  6. Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2013-03-01

    Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical-flow-based technique for colonoscopy tracking, in relation to current state-of-the-art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow-based colonoscopy tracking algorithm starts by computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from the optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine features [7] were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10] and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known. Dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the best balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow-based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
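    The FOE-constrained egomotion step rests on a simple geometric fact: under pure camera translation, every flow vector points radially away from the focus of expansion. The following is a minimal sketch of how an FOE could be recovered from a sparse flow field by least squares; the function name and the synthetic radial-flow setup are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion (FOE) estimate from sparse flow.

    Under pure camera translation each flow vector points radially away
    from the FOE, so (p_i - e) x v_i = 0 for every tracked point p_i with
    flow v_i; stacking these cross-product constraints is linear in e.
    """
    points = np.asarray(points, float)
    flows = np.asarray(flows, float)
    # v_y * e_x - v_x * e_y = v_y * p_x - v_x * p_y  for each point
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    e, *_ = np.linalg.lstsq(A, b, rcond=None)
    return e

# Synthetic check: purely radial flow expanding from a known FOE.
rng = np.random.default_rng(0)
true_foe = np.array([3.0, -2.0])
pts = rng.uniform(-10.0, 10.0, size=(50, 2))
flws = 0.1 * (pts - true_foe)
foe_hat = estimate_foe(pts, flws)
```

    In a real sequence the flow also contains a rotational component, which is why the full method estimates rotation and translation jointly rather than solving this purely translational system.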

  7. Infinitesimal deformations of Poisson bi-vectors using the Kontsevich graph calculus

    NASA Astrophysics Data System (ADS)

    Buring, Ricardo; Kiselev, Arthemy V.; Rutten, Nina

    2018-02-01

    Let \\mathscr{P} be a Poisson structure on a finite-dimensional affine real manifold. Can \\mathscr{P} be deformed in such a way that it stays Poisson? The language of Kontsevich graphs provides a universal approach - with respect to all affine Poisson manifolds - to finding a class of solutions to this deformation problem. To carry out this reasoning, several types of graphs are needed. In this paper we outline the algorithms to generate those graphs. The graphs that encode deformations are classified by the number of internal vertices k; for k ≤ 4 we present all solutions of the deformation problem. For k ≥ 5, first reproducing the pentagon-wheel picture suggested at k = 6 by Kontsevich and Willwacher, we construct the heptagon-wheel cocycle that yields a new unique solution without 2-loops and tadpoles at k = 8.

  8. On Geometric and Algebraic Aspects of 3D Affine and Projective Structures from Perspective 2D Views

    DTIC Science & Technology

    1993-07-01

    June 1992. ... an ideal line (has no image in the real plane) ... [5] O.D. Faugeras, Q.T. Luong, and S.J. Maybank ... which maps non-collinear points A, B, C ... projection for ... Italy, June 1992. ... any given affine transformation of the plane. [6] O.D. Faugeras and S. Maybank. Motion from point matches ...

  9. A Research Module for the Organic Chemistry Laboratory: Multistep Synthesis of a Fluorous Dye Molecule

    PubMed Central

    2014-01-01

    A multi-session research-like module has been developed for use in the undergraduate organic teaching laboratory curriculum. Students are tasked with planning and executing the synthesis of a novel fluorous dye molecule and using it to explore a fluorous affinity chromatography separation technique, which is the first implementation of this technique in a teaching laboratory. Key elements of the project include gradually introducing students to the use of the chemical literature to facilitate their searching, as well as deliberate constraints designed to force them to think critically about reaction design and optimization in organic chemistry. The project also introduces students to some advanced laboratory practices such as Schlenk techniques, degassing of reaction mixtures, affinity chromatography, and microwave-assisted chemistry. This provides students a teaching laboratory experience that closely mirrors authentic synthetic organic chemistry practice in laboratories throughout the world. PMID:24501431

  10. A Research Module for the Organic Chemistry Laboratory: Multistep Synthesis of a Fluorous Dye Molecule.

    PubMed

    Slade, Michael C; Raker, Jeffrey R; Kobilka, Brandon; Pohl, Nicola L B

    2014-01-14

    A multi-session research-like module has been developed for use in the undergraduate organic teaching laboratory curriculum. Students are tasked with planning and executing the synthesis of a novel fluorous dye molecule and using it to explore a fluorous affinity chromatography separation technique, which is the first implementation of this technique in a teaching laboratory. Key elements of the project include gradually introducing students to the use of the chemical literature to facilitate their searching, as well as deliberate constraints designed to force them to think critically about reaction design and optimization in organic chemistry. The project also introduces students to some advanced laboratory practices such as Schlenk techniques, degassing of reaction mixtures, affinity chromatography, and microwave-assisted chemistry. This provides students a teaching laboratory experience that closely mirrors authentic synthetic organic chemistry practice in laboratories throughout the world.

  11. Optimization-based image reconstruction from sparse-view data in offset-detector CBCT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan

    2013-01-01

    The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be a health concern to the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed for yielding images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.
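    The alternating idea behind ASD-POCS-type reconstruction can be sketched on a toy problem: enforce data consistency by projection, then take a few total-variation (TV) descent steps, and repeat. This is only a schematic 1-D sketch of the generic approach, not the authors' ASD-WPOCS; the exact least-squares projection and all parameters below are illustrative assumptions:

```python
import numpy as np

def tv(u):
    """Discrete 1-D total variation."""
    return np.abs(np.diff(u)).sum()

def tv_descent(x, n_steps, step, eps=1e-8):
    """A few normalized steepest-descent steps on the smoothed TV term."""
    for _ in range(n_steps):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)          # d/dx of sum sqrt(d^2 + eps)
        g = np.zeros_like(x)
        g[:-1] -= w
        g[1:] += w
        gn = np.linalg.norm(g)
        if gn > 0:
            x = x - step * g / gn
    return x

# Under-determined "sparse-view" toy system: 16 measurements, 32 unknowns,
# piecewise-constant ground truth -- the regime where TV regularization helps.
rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(10), np.ones(12), np.zeros(10)])
A = rng.standard_normal((16, 32))
b = A @ truth
pinvA = np.linalg.pinv(A)

def project(u):
    """Orthogonal projection onto the data-consistency set {u : A u = b}."""
    return u + pinvA @ (b - A @ u)

x_least_norm = project(np.zeros(32))           # data consistency alone
x = x_least_norm.copy()
for _ in range(200):                           # alternate TV descent and projection
    x = project(tv_descent(x, n_steps=5, step=0.02))
```

    The reconstruction stays exactly data-consistent while its total variation drops well below that of the plain least-norm solution, which is the qualitative behavior the sparse-view setting relies on.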

  12. Three-Dimensional Weighting in Cone Beam FBP Reconstruction and Its Transformation Over Geometries.

    PubMed

    Tang, Shaojie; Huang, Kuidong; Cheng, Yunyong; Niu, Tianye; Tang, Xiangyang

    2018-06-01

    With a substantially increased number of detector rows in multidetector CT (MDCT), axial scan with projection data acquired along a circular source trajectory has become the method of choice in an increasing number of clinical applications. Recognizing the practical relevance of image reconstruction directly from projection data acquired in the native cone-beam (CB) geometry, especially in scenarios wherein the highest achievable in-plane resolution is desirable, we present a three-dimensional (3-D) weighted CB-FBP algorithm in that geometry in this paper. We start the algorithm's derivation in the cone-parallel geometry. Via a change of variables, taking the Jacobian into account and making heuristic and empirical assumptions, we arrive at the formulas for 3-D weighted image reconstruction in the native CB geometry. Using projection data simulated by computer and acquired by an MDCT scanner, we evaluate and verify the performance of the proposed algorithm for image reconstruction directly from projection data acquired in the native CB geometry. The preliminary data show that the proposed algorithm performs as well as the 3-D weighted CB-FBP algorithm in the cone-parallel geometry. The proposed algorithm is anticipated to find its utility in extensive clinical and preclinical applications wherein the reconstruction of images in the native CB geometry, i.e., the geometry of data acquisition, is of relevance.

  13. Development of proteome-wide binding reagents for research and diagnostics.

    PubMed

    Taussig, Michael J; Schmidt, Ronny; Cook, Elizabeth A; Stoevesandt, Oda

    2013-12-01

    Alongside MS, antibodies and other specific protein-binding molecules have a special place in proteomics as affinity reagents in a toolbox of applications for determining protein location, quantitative distribution and function (affinity proteomics). The realisation that the range of research antibodies available, while apparently vast, is nevertheless still very incomplete and frequently of uncertain quality has stimulated projects with the objective of raising comprehensive, proteome-wide sets of protein binders. With progress in automation and throughput, a remarkable number of recent publications refer to the practical possibility of selecting binders to every protein encoded in the genome. Here we review the requirements of a pipeline for the production of protein binders for the human proteome, including target prioritisation, antigen design, 'next generation' methods, databases and the approaches taken by ongoing projects in Europe and the USA. While the task of generating affinity reagents for all human proteins is complex and demanding, the benefits of well-characterised and quality-controlled pan-proteome binder resources for biomedical research, industry and life sciences in general would be enormous and justify the effort. Given the technical, personnel and financial resources needed to fulfil this aim, expansion of current efforts may best be addressed through large-scale international collaboration. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Privacy Preservation in Distributed Subgradient Optimization Algorithms.

    PubMed

    Lou, Youcheng; Yu, Lean; Wang, Shouyang; Yi, Peng

    2017-07-31

    In this paper, some privacy-preserving features for distributed subgradient optimization algorithms are considered. Most of the existing distributed algorithms focus mainly on algorithm design and convergence analysis, but not on the protection of agents' privacy. Privacy is becoming an increasingly important issue in applications involving sensitive information. In this paper, we first show that the distributed subgradient synchronous homogeneous-stepsize algorithm is not privacy preserving, in the sense that a malicious agent can asymptotically discover other agents' subgradients by transmitting untrue estimates to its neighbors. Then a distributed subgradient asynchronous heterogeneous-stepsize projection algorithm is proposed, and its convergence and optimality are established accordingly. In contrast to the synchronous homogeneous-stepsize algorithm, in the new algorithm agents make their optimization updates asynchronously with heterogeneous stepsizes. The two introduced mechanisms, the projection operation and asynchronous heterogeneous-stepsize optimization, guarantee that agents' privacy can be effectively protected.

  15. 3-D shape estimation of DNA molecules from stereo cryo-electron micro-graphs using a projection-steerable snake.

    PubMed

    Jacob, Mathews; Blu, Thierry; Vaillant, Cedric; Maddocks, John H; Unser, Michael

    2006-01-01

    We introduce a three-dimensional (3-D) parametric active contour algorithm for the shape estimation of DNA molecules from stereo cryo-electron micrographs. We estimate the shape by matching the projections of a 3-D global shape model with the micrographs; we choose the global model as a 3-D filament with a B-spline skeleton and a specified radial profile. The active contour algorithm iteratively updates the B-spline coefficients, which requires us to evaluate the projections and match them with the micrographs at every iteration. Since the evaluation of the projections of the global model is computationally expensive, we propose a fast algorithm based on locally approximating it by elongated blob-like templates. We introduce the concept of projection-steerability and derive a projection-steerable elongated template. Since the two-dimensional projections of such a blob at any 3-D orientation can be expressed as a linear combination of a few basis functions, matching the projections of such a 3-D template involves evaluating a weighted sum of inner products between the basis functions and the micrographs. The weights are simple functions of the 3-D orientation and the inner-products are evaluated efficiently by separable filtering. We choose an internal energy term that penalizes the average curvature magnitude. Since the exact length of the DNA molecule is known a priori, we introduce a constraint energy term that forces the curve to have this specified length. The sum of these energies along with the image energy derived from the matching process is minimized using the conjugate gradients algorithm. We validate the algorithm using real, as well as simulated, data and show that it performs well.

  16. Detection with Enhanced Energy Windowing Phase I Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bass, David A.; Enders, Alexander L.

    2016-12-01

    This document reviews the progress of Phase I of the Detection with Enhanced Energy Windowing (DEEW) project. The DEEW project is the implementation of software incorporating an algorithm which reviews data generated by radiation portal monitors and utilizes advanced and novel techniques for detecting radiological and fissile material while not alarming on Naturally Occurring Radioactive Material. Independent testing indicated that the Enhanced Energy Windowing algorithm showed promise at reducing the probability of alarm in the stream of commerce compared to existing algorithms and other developmental algorithms, while still maintaining adequate sensitivity to threats. This document contains a brief description of the project, instructions for setting up and running the applications, and guidance to help make reviewing the output files and source code easier.

  17. Model-based sphere localization (MBSL) in x-ray projections

    NASA Astrophysics Data System (ADS)

    Sawall, Stefan; Maier, Joscha; Leinweber, Carsten; Funck, Carsten; Kuntz, Jan; Kachelrieß, Marc

    2017-08-01

    The detection of spherical markers in x-ray projections is an important task in a variety of applications, e.g. geometric calibration and detector distortion correction. Therein, the projection of the sphere center on the detector is of particular interest, as the spherical beads used are not ideal point-like objects. Only a few methods have been proposed to estimate this position on the detector with sufficient accuracy, and surrogate positions, e.g. the center of gravity, are often used, impairing the results of subsequent algorithms. We propose to estimate the projection of the sphere center on the detector using a simulation-based method matching an artificial projection to the actual measurement. The proposed algorithm intrinsically corrects for all polychromatic effects included in the measurement and absent in the simulation by a polynomial which is estimated simultaneously. Furthermore, neither the acquisition geometry nor any object properties, besides the fact that the object is of spherical shape, need to be known to find the center of the bead. It is shown by simulations that the algorithm estimates the center projection with an error of less than 1% of the detector pixel size in case of realistic noise levels and that the method is robust to the sphere material, sphere size, and acquisition parameters. A comparison to three reference methods using simulations and measurements indicates that the proposed method is an order of magnitude more accurate than these algorithms. The proposed method is an accurate algorithm to estimate the center of spherical markers in CT projections in the presence of polychromatic effects and noise.
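    The core of such model-based localization is simulate-and-match: generate an ideal projection of a sphere and find the shift that minimizes the least-squares mismatch with the measurement. A minimal 1-D sketch under simplifying assumptions (monochromatic profile, known radius, grid search instead of the paper's simultaneous polynomial estimation; all names illustrative):

```python
import numpy as np

def sphere_profile(u, c, r):
    """Ideal (monochromatic) line-integral profile of a homogeneous sphere:
    attenuation is proportional to the chord length through the sphere."""
    d2 = r * r - (u - c) ** 2
    return 2.0 * np.sqrt(np.maximum(d2, 0.0))

def locate_center(u, meas, r, grid):
    """Model-based center estimate: pick the shift whose simulated profile
    best matches the measurement in the least-squares sense."""
    errs = [np.sum((sphere_profile(u, c, r) - meas) ** 2) for c in grid]
    return grid[int(np.argmin(errs))]

# Noisy synthetic measurement of a sphere centered at u = 1.37.
u = np.linspace(-10.0, 10.0, 401)
true_c, r = 1.37, 3.0
rng = np.random.default_rng(4)
meas = sphere_profile(u, true_c, r) + 0.05 * rng.standard_normal(len(u))
grid = np.linspace(-2.0, 2.0, 801)      # candidate centers, 0.005 apart
c_hat = locate_center(u, meas, r, grid)
```

    Because the whole profile is matched rather than a single feature, the estimate is far less sensitive to noise than a center-of-gravity surrogate would be.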

  18. Inertial parameter identification using contact force information for an unknown object captured by a space manipulator

    NASA Astrophysics Data System (ADS)

    Chu, Zhongyi; Ma, Ye; Hou, Yueyang; Wang, Fengwen

    2017-02-01

    This paper presents a novel identification method for the intact inertial parameters of an unknown object in space captured by a manipulator in a space robotic system. With strong dynamic and kinematic coupling existing in the robotic system, the inertial parameter identification of the unknown object is essential for an ideal control strategy based on changes in the attitude and trajectory of the space robot during capturing operations. Conventional studies merely address the principle and theory of identification and lack an error analysis of identification for practical scenarios. To address this issue, an analysis of the effect of errors on identification is illustrated first, and it is demonstrated that the accumulation of measurement or estimation errors causes poor identification precision. Meanwhile, a modified identification equation incorporating the contact force, as well as the force/torque of the end-effector, is proposed to weaken the accumulation of errors and improve the identification accuracy. Furthermore, considering severe disturbance conditions caused by various measurement noises, the hybrid algorithm combining Recursive Least Squares and the Affine Projection Sign Algorithm (RLS-APSA) is employed to solve the modified identification equation and ensure a stable identification property. Finally, to verify the validity of the proposed identification method, a co-simulation in ADAMS-MATLAB is implemented with multi-degree-of-freedom models of a space robotic system, and the numerical results show precise and stable identification performance, which is able to guarantee the execution of aerospace operations and prevent failed control strategies.
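    The APSA half of the hybrid can be sketched compactly. The following toy system-identification run is a generic Affine Projection Sign Algorithm (reuse of the last P regressors, sign of the error vector, fixed-norm update), not the paper's RLS-APSA hybrid; filter length, stepsize and the signal model are illustrative assumptions:

```python
import numpy as np

def apsa_identify(u, d, L=8, P=4, mu=0.01, eps=1e-6):
    """Identify an L-tap FIR system with the Affine Projection Sign
    Algorithm (APSA): the update reuses the last P regressor vectors and
    replaces the error with its sign, which makes it robust to impulsive
    noise and cheap to compute (no matrix inversion)."""
    w = np.zeros(L)
    U = np.zeros((L, P))                     # last P regressors, newest in column 0
    dbuf = np.zeros(P)
    x = np.zeros(L)
    for n in range(len(u)):
        x = np.roll(x, 1); x[0] = u[n]       # current regressor
        U = np.roll(U, 1, axis=1); U[:, 0] = x
        dbuf = np.roll(dbuf, 1); dbuf[0] = d[n]
        e = dbuf - U.T @ w                   # a-priori errors over the block
        p = U @ np.sign(e)
        w = w + mu * p / (np.linalg.norm(p) + eps)   # fixed-norm (sign) update
    return w

rng = np.random.default_rng(2)
h = rng.standard_normal(8); h /= np.linalg.norm(h)   # unknown system
u = rng.standard_normal(20000)                       # white excitation
d = np.convolve(u, h)[:len(u)] + 1e-3 * rng.standard_normal(len(u))
w = apsa_identify(u, d)
misalign = np.linalg.norm(w - h) / np.linalg.norm(h)
```

    The sign operation bounds the update norm by mu regardless of outliers in the error, which is the property that makes APSA-type updates attractive under severe disturbance.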

  19. Expressing the human proteome for affinity proteomics: optimising expression of soluble protein domains and in vivo biotinylation.

    PubMed

    Keates, Tracy; Cooper, Christopher D O; Savitsky, Pavel; Allerston, Charles K; Phillips, Claire; Hammarström, Martin; Daga, Neha; Berridge, Georgina; Mahajan, Pravin; Burgess-Brown, Nicola A; Müller, Susanne; Gräslund, Susanne; Gileadi, Opher

    2012-06-15

    The generation of affinity reagents to large numbers of human proteins depends on the ability to express the target proteins as high-quality antigens. The Structural Genomics Consortium (SGC) focuses on the production and structure determination of human proteins. In a 7-year period, the SGC has deposited crystal structures of >800 human protein domains, and has additionally expressed and purified a similar number of protein domains that have not yet been crystallised. The targets include a diversity of protein domains, with an attempt to provide high coverage of protein families. The family approach provides an excellent basis for characterising the selectivity of affinity reagents. We present a summary of the approaches used to generate purified human proteins or protein domains, a test case demonstrating the ability to rapidly generate new proteins, and an optimisation study on the modification of >70 proteins by biotinylation in vivo. These results provide a unique synergy between large-scale structural projects and the recent efforts to produce a wide coverage of affinity reagents to the human proteome. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. SAMPL4 & DOCK3.7: lessons for automated docking procedures

    NASA Astrophysics Data System (ADS)

    Coleman, Ryan G.; Sterling, Teague; Weiss, Dahlia R.

    2014-03-01

    The SAMPL4 challenges were used to test current automated methods for solvation energy, virtual screening, pose and affinity prediction with the molecular docking pipeline DOCK 3.7. Additionally, first-order models of binding affinity were proposed as milestones for any method predicting binding affinity. Several important discoveries about the molecular docking software were made during the challenge: (1) solvation energies of ligands were five-fold worse than those of any other method used in SAMPL4, including methods that were similarly fast; (2) HIV Integrase is a challenging target, but automated docking on the correct allosteric site performed well in terms of virtual screening and pose prediction (compared to other methods), while affinity prediction, as expected, was very poor; (3) molecular docking grid sizes can be very important; serious errors were discovered with default settings that have been adjusted for all future work. Overall, lessons from SAMPL4 suggest many changes to molecular docking tools, not just DOCK 3.7, that could improve the state of the art. Future difficulties and projects will be discussed.

  1. Expressing the human proteome for affinity proteomics: optimising expression of soluble protein domains and in vivo biotinylation

    PubMed Central

    Keates, Tracy; Cooper, Christopher D.O.; Savitsky, Pavel; Allerston, Charles K.; Phillips, Claire; Hammarström, Martin; Daga, Neha; Berridge, Georgina; Mahajan, Pravin; Burgess-Brown, Nicola A.; Müller, Susanne; Gräslund, Susanne; Gileadi, Opher

    2012-01-01

    The generation of affinity reagents to large numbers of human proteins depends on the ability to express the target proteins as high-quality antigens. The Structural Genomics Consortium (SGC) focuses on the production and structure determination of human proteins. In a 7-year period, the SGC has deposited crystal structures of >800 human protein domains, and has additionally expressed and purified a similar number of protein domains that have not yet been crystallised. The targets include a diversity of protein domains, with an attempt to provide high coverage of protein families. The family approach provides an excellent basis for characterising the selectivity of affinity reagents. We present a summary of the approaches used to generate purified human proteins or protein domains, a test case demonstrating the ability to rapidly generate new proteins, and an optimisation study on the modification of >70 proteins by biotinylation in vivo. These results provide a unique synergy between large-scale structural projects and the recent efforts to produce a wide coverage of affinity reagents to the human proteome. PMID:22027370

  2. Interior tomography in microscopic CT with image reconstruction constrained by full field of view scan at low spatial resolution

    NASA Astrophysics Data System (ADS)

    Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang

    2018-04-01

    In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data and result in artifacts in reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can be employed to solve the data truncation problem if the projection data acquired at low resolution are utilized to fill up the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolutions, respectively, are carried out. Using the image reconstructed from sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm, and a specimen of mouse femur were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in the reduction of artifacts caused by data truncation, providing a practical solution for high quality and reliable interior tomography in microscopic CT applications.

  3. The SAMPL4 host-guest blind prediction challenge: an overview.

    PubMed

    Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K

    2014-04-01

    Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allow higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems, based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as a consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly. Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each others' studies, and to systematically explore parameter options.

  4. Fast Constrained Spectral Clustering and Cluster Ensemble with Random Projection

    PubMed Central

    Liu, Wenfen

    2017-01-01

    Constrained spectral clustering (CSC) methods can greatly improve clustering accuracy by incorporating constraint information into spectral clustering, and have therefore received wide academic attention. In this paper, we propose a fast CSC algorithm via encoding landmark-based graph construction into a new CSC model and applying random sampling to decrease the data size after spectral embedding. Compared with the original model, the new algorithm achieves similar results asymptotically as its model size increases; compared with the most efficient CSC algorithm known, the new algorithm runs faster and suits a wider range of data sets. Meanwhile, a scalable semisupervised cluster ensemble algorithm is also proposed via the combination of our fast CSC algorithm and dimensionality reduction with random projection in the process of spectral ensemble clustering. We demonstrate, by presenting theoretical analysis and empirical results, that the new cluster ensemble algorithm has advantages in terms of efficiency and effectiveness. Furthermore, the approximate preservation of clustering accuracy under random projection, proved in the stage of consensus clustering, also holds for weighted k-means clustering and thus gives a theoretical guarantee for this special kind of k-means clustering, where each point has its corresponding weight. PMID:29312447
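    The random-projection ingredient rests on the Johnson-Lindenstrauss property: a Gaussian matrix scaled by 1/sqrt(k) approximately preserves all pairwise Euclidean distances, which is what justifies clustering in the reduced space. A minimal numerical check with illustrative sizes:

```python
import numpy as np

# n points in d dimensions, projected down to k dimensions.
rng = np.random.default_rng(3)
n, d, k = 50, 2000, 400
X = rng.standard_normal((n, d))
R = rng.standard_normal((d, k)) / np.sqrt(k)    # Gaussian random projection
Y = X @ R                                       # projected points, n x k

# Pairwise distances before and after projection.
D_orig = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
D_proj = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
off_diag = ~np.eye(n, dtype=bool)
ratios = D_proj[off_diag] / D_orig[off_diag]    # per-pair distortion factors
```

    All distance ratios concentrate around 1 with deviation on the order of 1/sqrt(k), so distance-based clustering decisions survive the projection almost unchanged while the data size drops five-fold.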

  5. Multi-period project portfolio selection under risk considerations and stochastic income

    NASA Astrophysics Data System (ADS)

    Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood

    2018-02-01

    This paper deals with the multi-period project portfolio selection problem. In this problem, the available budget is invested in the best portfolio of projects in each period such that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than those reported in previous studies. A novel mathematical model is presented to solve the problem, considering risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic method hybridized with a local search procedure is presented to solve the problem. The algorithm is based on the genetic algorithm (GA), a prominent method for this type of problem. The GA is enhanced by a new solution representation and well-selected operators, and is hybridized with a local search mechanism to obtain better solutions in a shorter time. The performance of the proposed algorithm is then compared with well-known algorithms, such as the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of several prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness and computation time. Finally, the proposed algorithm is combined with PSO to improve the computing time considerably.
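    The GA core can be illustrated on the simplest single-period version of project selection, a 0/1 knapsack: maximize total profit subject to a budget. This is a generic elitist GA sketch, not the paper's enhanced representation or its local search; all numbers are illustrative:

```python
import random

random.seed(5)
profit = [10, 7, 4, 9, 6, 3, 8, 5]     # profit of each candidate project
cost = [5, 4, 2, 6, 3, 2, 5, 3]        # cost of each candidate project
budget = 15

def fitness(bits):
    """Total profit of the selected projects; infeasible portfolios score 0."""
    total_cost = sum(c for c, b in zip(cost, bits) if b)
    total_profit = sum(p for p, b in zip(profit, bits) if b)
    return total_profit if total_cost <= budget else 0

def evolve(pop_size=40, gens=60, pm=0.1):
    pop = [[random.randint(0, 1) for _ in profit] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(profit))          # one-point crossover
            child = [g ^ (random.random() < pm)             # bit-flip mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    Keeping the best half of the population each generation makes the best fitness monotone non-decreasing, while crossover and mutation keep exploring new portfolios.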

  6. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.

    PubMed

    Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T

    2010-09-01

    To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a tumor along the rib cage. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general-purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities for robust optimization.

  7. Inherent smoothness of intensity patterns for intensity modulated radiation therapy generated by simultaneous projection algorithms

    NASA Astrophysics Data System (ADS)

    Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.

    2004-07-01

    The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions which meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. In our experimental observation, including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property. The smoothness properties are compared with those from other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous nature of these algorithms is ideally suited to parallel computing technologies.
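    Cimmino's iteration for a system of linear inequalities A x ≤ b can be sketched in a few lines: project the current iterate onto every half-space simultaneously and move to the weighted average of those projections. This is a generic sketch of the classical method on a toy feasibility problem, not the authors' IMRT system with DVH terms:

```python
import numpy as np

def cimmino(A, b, x0, weights=None, relax=1.0, iters=500):
    """Cimmino's simultaneous projection method for A x <= b.

    At every sweep the iterate is projected onto each violated half-space
    independently, and the next iterate is the weighted average of those
    projections -- which is why the method parallelizes so naturally.
    """
    m = len(b)
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, float)
    x = np.asarray(x0, float)
    row_norm2 = np.sum(A * A, axis=1)
    for _ in range(iters):
        viol = np.maximum(A @ x - b, 0.0)      # violation of each inequality
        # each term w_i * viol_i / ||a_i||^2 * a_i steps toward the
        # projection onto the half-space {x : a_i . x <= b_i}
        x = x - relax * (A.T @ (w * viol / row_norm2))
    return x

# Toy feasibility problem: the unit box 0 <= x_j <= 1 as four half-spaces.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
x = cimmino(A, b, x0=[5.0, -3.0])
```

    Because the per-constraint projections are independent, each sweep can be distributed across processors, which is the "ideally suited to parallel computing" property the abstract refers to.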

  8. Automatic detection and measurement of viral replication compartments by ellipse adjustment

    PubMed Central

    Garcés, Yasel; Guerrero, Adán; Hidalgo, Paloma; López, Raul Eduardo; Wood, Christopher D.; Gonzalez, Ramón A.; Rendón-Mancha, Juan Manuel

    2016-01-01

    Viruses employ a variety of strategies to hijack cellular activities through the orchestrated recruitment of macromolecules to specific virus-induced cellular micro-environments. Adenoviruses (Ad) and other DNA viruses induce extensive reorganization of the cell nucleus and formation of nuclear Replication Compartments (RCs), where the viral genome is replicated and expressed. In this work, an automatic algorithm designed for the detection and segmentation of RCs using ellipses is presented. Unlike algorithms available in the literature, this approach is deterministic, automatic, and can adjust multiple RCs using ellipses. The proposed algorithm is non-iterative, computationally efficient, and invariant to affine transformations. The method was validated on both synthetic images and more than 400 real images of Ad-infected cells at various timepoints of the viral replication cycle, yielding relevant information about the biogenesis of adenoviral RCs. As proof of concept, the algorithm was then used to quantitatively compare RCs in cells infected with the wild-type adenovirus or an adenovirus mutant that is null for expression of a viral protein known to affect RC-associated activities, which results in deficient viral progeny production. PMID:27819325

  9. Automatic detection and measurement of viral replication compartments by ellipse adjustment

    NASA Astrophysics Data System (ADS)

    Garcés, Yasel; Guerrero, Adán; Hidalgo, Paloma; López, Raul Eduardo; Wood, Christopher D.; Gonzalez, Ramón A.; Rendón-Mancha, Juan Manuel

    2016-11-01

    Viruses employ a variety of strategies to hijack cellular activities through the orchestrated recruitment of macromolecules to specific virus-induced cellular micro-environments. Adenoviruses (Ad) and other DNA viruses induce extensive reorganization of the cell nucleus and formation of nuclear Replication Compartments (RCs), where the viral genome is replicated and expressed. In this work, an automatic algorithm designed for the detection and segmentation of RCs using ellipses is presented. Unlike algorithms available in the literature, this approach is deterministic, automatic, and can adjust multiple RCs using ellipses. The proposed algorithm is non-iterative, computationally efficient, and invariant to affine transformations. The method was validated on both synthetic images and more than 400 real images of Ad-infected cells at various timepoints of the viral replication cycle, yielding relevant information about the biogenesis of adenoviral RCs. As proof of concept, the algorithm was then used to quantitatively compare RCs in cells infected with the wild-type adenovirus or an adenovirus mutant that is null for expression of a viral protein known to affect RC-associated activities, which results in deficient viral progeny production.

  10. Algorithm for evaluating the effectiveness of a high-rise development project based on current yield

    NASA Astrophysics Data System (ADS)

    Soboleva, Elena

    2018-03-01

    The article addresses the operational evaluation of development project efficiency in high-rise construction under the current economic conditions in Russia. The author discusses the following issues: problems of implementing development projects, the influence of the quality of operational evaluation of high-rise construction projects on overall efficiency, the influence of a project's external environment on the effectiveness of project activities under crisis conditions, and the quality of project management. The article proposes an algorithm and a methodological approach to quality management of developer project efficiency based on an operational evaluation of current yield. The methodology for calculating the current efficiency of a development project for high-rise construction has been updated.

  11. A novel method for in silico identification of regulatory SNPs in human genome.

    PubMed

    Li, Rong; Zhong, Dexing; Liu, Ruiling; Lv, Hongqiang; Zhang, Xinman; Liu, Jun; Han, Jiuqiang

    2017-02-21

    Regulatory single nucleotide polymorphisms (rSNPs), a kind of functional noncoding genetic variant, can affect gene expression in a regulatory way, and they are thought to be associated with increased susceptibility to complex diseases. Here, a novel computational approach to identify potential rSNPs is presented. Unlike most other rSNP-finding methods, which are based on the hypothesis that SNPs causing large allele-specific changes in transcription factor binding affinities are more likely to play regulatory roles, we use a set of documented, experimentally verified rSNPs and nonfunctional background SNPs to train classifiers, so that discriminating features are found. To characterize variants, an extensive range of characteristics, such as sequence context, DNA structure, and evolutionary conservation, is analyzed. A support vector machine is adopted to build the classifier model, together with an ensemble method to deal with unbalanced data. A 10-fold cross-validation shows that our method achieves a sensitivity of ~78% and a specificity of ~82%. Furthermore, our method performs better in handling false positives than some other algorithms based on the aforementioned hypothesis. The original data and the MATLAB source code are available at https://sourceforge.net/projects/rsnppredict/. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Model and Algorithm for Substantiating Solutions for Organization of High-Rise Construction Project

    NASA Astrophysics Data System (ADS)

    Anisimov, Vladimir; Anisimov, Evgeniy; Chernysh, Anatoliy

    2018-03-01

    In this paper, models and an algorithm are developed for forming an optimal plan for organizing the material and logistical processes of a high-rise construction project and their financial support. The model represents the optimization procedure as a nonlinear discrete programming problem that minimizes the execution time of a set of interrelated works by a limited number of partially interchangeable performers, subject to a limit on the total cost of performing the work. The proposed model and algorithm are the basis for creating specific organization management methodologies for high-rise construction projects.

  13. An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Afshar Nadjafi, Behrouz; Shadrokh, Shahram

    This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch and bound algorithm for an extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch and bound tree. Finally, some test problems are solved and computational results are reported.
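
    The weighted earliness-tardiness objective described above is straightforward to state in code. A minimal sketch with illustrative names (our notation, not the paper's):

```python
# Weighted earliness-tardiness penalty: each activity has a due date, a finish
# time, and per-unit weights for finishing early or late.

def earliness_tardiness_cost(finish, due, early_w, tardy_w):
    cost = 0.0
    for f, d, e, t in zip(finish, due, early_w, tardy_w):
        cost += e * max(d - f, 0) + t * max(f - d, 0)
    return cost

# Three activities: one a day early, one on time, one two days late.
cost = earliness_tardiness_cost(finish=[3, 5, 9], due=[4, 5, 7],
                                early_w=[2.0, 2.0, 2.0],
                                tardy_w=[5.0, 5.0, 5.0])
```

    A branch and bound solver would evaluate this objective at each candidate schedule while pruning with lower bounds on the remaining activities.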

  14. Impact of automatic adjustment of stimulation outputs on pacemaker longevity in a new dual-chamber pacing system.

    PubMed

    Brockes, Christiane; Rahn-Schönbeck, Mariette; Duru, Firat; Candinas, Reto; Turina, Marko

    2003-02-01

    Automatic capture verification using the Autocapture (AC) feature, enabled by paced evoked response detection and the delivery of high-energy back-up pulses, is intended to increase patient safety. Furthermore, adjustment of stimulation outputs can save energy and potentially improve pacemaker (PM) longevity. The purpose of this study was to evaluate the theoretical longevity of a new dual chamber PM with the integrated AC feature (Affinity DR, St. Jude Medical) in comparison to the longevity of a previous model from the same manufacturer without AC (Trilogy DR). Affinity PMs were implanted in 16 patients and connected to a compatible lead with low polarization properties. AC was activated when the evoked response was significantly higher than the polarization voltage. Theoretical PM longevity was calculated with and without AC during follow-up. The measured and calculated values were compared to measurements in 19 patients who consecutively received Trilogy PMs during the same time period. In only one patient was the evoked response inadequate, and as a result, AC was not programmed. The calculated longevity of the Affinity PMs 79 +/- 28 days after implantation was significantly higher in comparison to the Trilogy PMs (Affinity PMs: 8.9 +/- 1.2 years without and 9.5 +/- 1.1 years with AC; Trilogy PMs: 6.5 +/- 0.8 years) (p < 0.005). The AC feature is an optional algorithm that can be activated in most patients, and it significantly prolongs predicted battery longevity due to automatic adjustment of stimulation outputs.

  15. Feature selection and back-projection algorithms for nonline-of-sight laser-gated viewing

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Velten, Andreas

    2014-11-01

    We discuss new approaches to analyzing laser-gated viewing data for non-line-of-sight vision, using frame-to-frame back-projection as well as feature selection algorithms. While earlier back-projection approaches use time transients for each pixel, our method can calculate the projection of imaging data onto the voxel space for each frame. Further, different data analysis algorithms and their sequential application were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of filter has an impact on the selectivity (i.e., multiple-target detection) as well as on the localization precision.

  16. Motion and positional error correction for cone beam 3D-reconstruction with mobile C-arms.

    PubMed

    Bodensteiner, C; Darolti, C; Schumacher, H; Matthäus, L; Schweikard, A

    2007-01-01

    CT-images acquired by mobile C-arm devices can contain artefacts caused by positioning errors. We propose a data driven method based on iterative 3D-reconstruction and 2D/3D-registration to correct projection data inconsistencies. With a 2D/3D-registration algorithm, transformations are computed to align the acquired projection images to a previously reconstructed volume. In an iterative procedure, the reconstruction algorithm uses the results of the registration step. This algorithm also reduces small motion artefacts within 3D-reconstructions. Experiments with simulated projections from real patient data show the feasibility of the proposed method. In addition, experiments with real projection data acquired with an experimental robotised C-arm device have been performed with promising results.

  17. Bayesian reconstruction of projection reconstruction NMR (PR-NMR).

    PubMed

    Yoon, Ji Won

    2014-11-01

    Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using a peak-by-peak reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm that replaces a simple linear model with a linear mixed model to reconstruct closely spaced NMR spectra into true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of the protein HasA. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. A Turn-Projected State-Based Conflict Resolution Algorithm

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Lewis, Timothy A.

    2013-01-01

    State-based conflict detection and resolution (CD&R) algorithms detect and resolve conflicts on the basis of current state information, without the use of additional intent information from aircraft flight plans. The prediction of an aircraft's trajectory is therefore based solely upon the position and velocity vectors of the traffic aircraft. Most CD&R algorithms project the traffic state using only the current state vectors. However, past state vectors can be used to make a better prediction of the future trajectory of the traffic aircraft. This paper explores the idea of using past state vectors to detect traffic turns and to resolve conflicts caused by these turns using a non-linear projection of the traffic state. A new algorithm based on this idea is presented and validated using a fast-time simulator developed for this study.
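
    The contrast between a purely linear state projection and a turn-aware one can be illustrated with a toy sketch: a constant turn rate is estimated from two past velocity headings, and the state is projected along the resulting arc. All names, parameters, and the simple Euler integration are our own illustrative choices, not the paper's algorithm.

```python
import math

def heading(v):
    return math.atan2(v[1], v[0])

def linear_project(p, v, dt):
    """Straight-line projection from the current state vector only."""
    return (p[0] + v[0] * dt, p[1] + v[1] * dt)

def turn_project(p, v, v_prev, step, dt):
    """v_prev was observed `step` seconds before v; assume the turn continues."""
    omega = (heading(v) - heading(v_prev)) / step   # estimated turn rate (rad/s)
    speed = math.hypot(v[0], v[1])
    h = heading(v)
    x, y = p
    n = 100                                         # small Euler integration steps
    for _ in range(n):
        x += speed * math.cos(h) * dt / n
        y += speed * math.sin(h) * dt / n
        h += omega * dt / n
    return (x, y)

# Aircraft heading east, having turned 0.1 rad over the preceding second:
pred_line = linear_project((0.0, 0.0), (1.0, 0.0), dt=1.0)
pred_turn = turn_project((0.0, 0.0), (1.0, 0.0),
                         (math.cos(-0.1), math.sin(-0.1)), step=1.0, dt=1.0)
```

    For a turning target, the arc projection stays close to the true circular path while the linear projection drifts off it, which is the core motivation above.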

  19. Three-dimensional volume containing multiple two-dimensional information patterns

    NASA Astrophysics Data System (ADS)

    Nakayama, Hirotaka; Shiraki, Atsushi; Hirayama, Ryuji; Masuda, Nobuyuki; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2013-06-01

    We have developed an algorithm for recording multiple gradated two-dimensional projection patterns in a single three-dimensional object. When a single pattern is observed, information from the other patterns can be treated as background noise. The proposed algorithm has two important features: the number of patterns that can be recorded is theoretically infinite and no meaningful information can be seen outside of the projection directions. We confirmed the effectiveness of the proposed algorithm by performing numerical simulations of two laser crystals: an octagonal prism that contained four patterns in four projection directions and a dodecahedron that contained six patterns in six directions. We also fabricated and demonstrated an actual prototype laser crystal from a glass cube engraved by a laser beam. This algorithm has applications in various fields, including media art, digital signage, and encryption technology.
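
    A stripped-down, two-pattern binary version of this idea can be sketched directly: voxel (x, y, z) is lit only where pattern A is lit at (x, z) and pattern B is lit at (y, z), so the max-projections along the two axes recover the two patterns, provided every z-column of the other pattern contains at least one lit pixel. This is our own simplification, far short of the gradated, many-direction algorithm in the paper.

```python
# Store binary pattern A as the view along y and pattern B as the view along x.

def build_volume(A, B):
    nx, nz = len(A), len(A[0])
    ny = len(B)
    return [[[A[x][z] & B[y][z] for z in range(nz)]
             for y in range(ny)]
            for x in range(nx)]

def view_along_y(v):
    # max-projection over y recovers pattern A
    return [[max(v[x][y][z] for y in range(len(v[0])))
             for z in range(len(v[0][0]))] for x in range(len(v))]

def view_along_x(v):
    # max-projection over x recovers pattern B
    return [[max(v[x][y][z] for x in range(len(v)))
             for z in range(len(v[0][0]))] for y in range(len(v[0]))]

A = [[1, 0], [1, 1]]      # every z-column of each pattern has a lit pixel
B = [[0, 1], [1, 1]]
v = build_volume(A, B)
```

    Seen from any other direction, the lit voxels mix contributions from both patterns, which matches the paper's observation that off-axis views carry no single meaningful image.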

  20. Ubiquitinated Proteome: Ready for Global?*

    PubMed Central

    Shi, Yi; Xu, Ping; Qin, Jun

    2011-01-01

    Ubiquitin (Ub) is a small and highly conserved protein that can covalently modify protein substrates. Ubiquitination is one of the major post-translational modifications that regulate a broad spectrum of cellular functions. The advancement of mass spectrometers as well as the development of new affinity purification tools has greatly expedited proteome-wide analysis of several post-translational modifications (e.g. phosphorylation, glycosylation, and acetylation). In contrast, large-scale profiling of lysine ubiquitination remains a challenge. Most recently, new Ub affinity reagents such as Ub remnant antibody and tandem Ub binding domains have been developed, allowing for relatively large-scale detection of several hundreds of lysine ubiquitination events in human cells. Here we review different strategies for the identification of ubiquitination sites and discuss several issues associated with data analysis. We suggest that careful interpretation and orthogonal confirmation of MS spectra are necessary to minimize false positive assignments by automatic searching algorithms. PMID:21339389

  1. Inherent limitations of probabilistic models for protein-DNA binding specificity

    PubMed Central

    Ruan, Shuxiang

    2017-01-01

    The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site, and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally, probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
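
    The non-linear relationship between affinity and binding probability is easy to demonstrate. In this toy illustration (our own numbers, not the paper's data), two sites differing 10-fold in affinity show a near-10-fold occupancy ratio at low protein concentration, but a ratio near 1 at saturating concentration, a compression a fixed site-probability model cannot represent.

```python
# Simple equilibrium occupancy p = c*K / (1 + c*K) for a site with
# association constant K at free protein concentration c.

def occupancy(K, conc):
    return conc * K / (1.0 + conc * K)

K_strong, K_weak = 10.0, 1.0          # sites differing 10-fold in affinity

low = occupancy(K_strong, 0.01) / occupancy(K_weak, 0.01)      # dilute regime
high = occupancy(K_strong, 100.0) / occupancy(K_weak, 100.0)   # saturating regime
# low stays near the 10-fold affinity ratio; high compresses toward 1.
```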

  2. Undergraduate computational physics projects on quantum computing

    NASA Astrophysics Data System (ADS)

    Candela, D.

    2015-08-01

    Computational projects on quantum computing suitable for students in a junior-level quantum mechanics course are described. In these projects students write their own programs to simulate quantum computers. Knowledge is assumed of introductory quantum mechanics through the properties of spin 1/2. Initial, more easily programmed projects treat the basics of quantum computation, quantum gates, and Grover's quantum search algorithm. These are followed by more advanced projects to increase the number of qubits and implement Shor's quantum factoring algorithm. The projects can be run on a typical laptop or desktop computer, using most programming languages. Supplementing resources available elsewhere, the projects are presented here in a self-contained format especially suitable for a short computational module for physics students.
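
    As a flavor of such a project, a state-vector simulation of Grover's search fits in a few lines of plain Python. This is our own minimal sketch, not the course materials described above.

```python
import math

def grover(n, marked):
    """Simulate Grover's search over N = 2**n items for the marked index."""
    N = 2 ** n
    amp = [1.0 / math.sqrt(N)] * N              # uniform superposition H^n |0>
    iterations = int(round(math.pi / 4 * math.sqrt(N)))
    for _ in range(iterations):
        amp[marked] = -amp[marked]              # oracle: phase-flip the marked item
        mean = sum(amp) / N
        amp = [2.0 * mean - a for a in amp]     # diffusion: inversion about the mean
    return amp

amp = grover(4, marked=11)                      # search 16 items for index 11
prob_marked = amp[11] ** 2                      # close to 1 after ~(pi/4)*sqrt(N) steps
```

    Both steps are reflections, so the state vector stays normalized; extending the simulation to more qubits only requires a longer amplitude list.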

  3. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
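
    The basic loop (selection, crossover, mutation) can be sketched on the classic OneMax toy problem, maximizing the number of 1-bits in a string. This is purely illustrative; the parameter values are arbitrary.

```python
import random

random.seed(0)  # deterministic for demonstration

def fitness(ind):
    return sum(ind)            # OneMax: count the 1-bits

def evolve(bits=20, pop_size=30, generations=60, mut_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():            # tournament selection of size 2
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ 1 if random.random() < mut_rate else g
                     for g in child]                   # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

    Each generation evaluates the population independently, which is why genetic algorithms are described above as highly parallel.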

  4. Fraction Reduction through Continued Fractions

    ERIC Educational Resources Information Center

    Carley, Holly

    2011-01-01

    This article presents a method of reducing fractions without factoring. The ideas presented may be useful as a project for motivated students in an undergraduate number theory course. The discussion is related to the Euclidean Algorithm, and its variations may lead to projects or early examples involving the efficiency of an algorithm.
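
    The connection can be sketched directly: the integer divisions that generate the continued fraction of a/b are exactly the steps of the Euclidean Algorithm, whose final nonzero remainder reduces the fraction without factoring. A small illustrative sketch (our presentation of the idea, not the article's):

```python
def continued_fraction(a, b):
    """Partial quotients of a/b; the divisions are exactly Euclid's algorithm,
    so the final value of a is gcd of the original a and b."""
    quotients = []
    while b:
        quotients.append(a // b)
        a, b = b, a % b
    return quotients, a

def reduce_fraction(a, b):
    _, g = continued_fraction(a, b)
    return a // g, b // g

# 1071/462: quotients [2, 3, 7], gcd 21, reduced fraction 51/22.
```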

  5. Advanced processing for high-bandwidth sensor systems

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.

    2000-11-01

    Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.

  6. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage. PMID:24694143
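
    The angular-spacing metrics analyzed above can be illustrated with a short sketch. Our reading, which is an assumption on our part, is that each gap's "error" is its deviation from perfectly even spacing over the scan arc; all names below are illustrative.

```python
import math

def gap_stats(angles_deg, arc=360.0):
    """Maximum, mean, and RMS-error angular gaps for one bin's view angles."""
    a = sorted(angles_deg)
    gaps = [hi - lo for hi, lo in zip(a[1:], a[:-1])]
    gaps.append(arc - a[-1] + a[0])                # wrap-around gap
    ideal = arc / len(a)                           # perfectly even spacing
    rmse = math.sqrt(sum((g - ideal) ** 2 for g in gaps) / len(gaps))
    return max(gaps), sum(gaps) / len(gaps), rmse

even = gap_stats([0, 90, 180, 270])      # evenly spread views: RMSE gap error 0
uneven = gap_stats([0, 10, 20, 180])     # clustered views: same mean, large RMSE
```

    Note that the mean gap is fixed by the view count alone, which is consistent with the paper singling out the RMSE of the gaps as the quantity that tracks SNR.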

  7. Extending Differential Fault Analysis to Dynamic S-Box Advanced Encryption Standard Implementations

    DTIC Science & Technology

    2014-09-18

    entropy. At the same time, researchers strive to enhance AES and mitigate these growing threats. This paper researches the extension of existing...the algorithm or use side channels to reduce entropy, such as Differential Fault Analysis (DFA). At the same time, continuing research strives to...the state matrix. The S-box is an 8-bit 16x16 table built from an affine transformation on multiplicative inverses which guarantees full permutation (S

  8. Automatic segmentation of mitochondria in EM data using pairwise affinity factorization and graph-based contour searching.

    PubMed

    Ghita, Ovidiu; Dietlmeier, Julia; Whelan, Paul F

    2014-10-01

    In this paper, we investigate the segmentation of closed contours in subcellular data using a framework that primarily combines the pairwise affinity grouping principles with a graph partitioning contour searching approach. One salient problem that precluded the application of these methods to large scale segmentation problems is the onerous computational complexity required to generate comprehensive representations that include all pairwise relationships between all pixels in the input data. To compensate for this problem, a practical solution is to reduce the complexity of the input data by applying an over-segmentation technique prior to the application of the computationally demanding strands of the segmentation process. This approach opens the opportunity to build specific shape and intensity models that can be successfully employed to extract the salient structures in the input image which are further processed to identify the cycles in an undirected graph. The proposed framework has been applied to the segmentation of mitochondria membranes in electron microscopy data which are characterized by low contrast and low signal-to-noise ratio. The algorithm has been quantitatively evaluated using two datasets where the segmentation results have been compared with the corresponding manual annotations. The performance of the proposed algorithm has been measured using standard metrics, such as precision and recall, and the experimental results indicate a high level of segmentation accuracy.

  9. Integral reinforcement learning for continuous-time input-affine nonlinear systems with simultaneous invariant explorations.

    PubMed

    Lee, Jae Young; Park, Jin Bae; Choi, Yoon Ho

    2015-05-01

    This paper focuses on a class of reinforcement learning (RL) algorithms, named integral RL (I-RL), that solve continuous-time (CT) nonlinear optimal control problems with input-affine system dynamics. First, we extend the concepts of exploration, integral temporal difference, and invariant admissibility to the target CT nonlinear system that is governed by a control policy plus a probing signal called an exploration. Then, we show input-to-state stability (ISS) and invariant admissibility of the closed-loop systems with the policies generated by integral policy iteration (I-PI) or invariantly admissible PI (IA-PI) method. Based on these, three online I-RL algorithms named explorized I-PI and integral Q-learning I and II are proposed, all of which generate the same convergent sequences as I-PI and IA-PI under the required excitation condition on the exploration. All the proposed methods are partially or completely model free, and can simultaneously explore the state space in a stable manner during the online learning processes. ISS, invariant admissibility, and convergence properties of the proposed methods are also investigated, and related with these, we show the design principles of the exploration for safe learning. Neural-network-based implementation methods for the proposed schemes are also presented in this paper. Finally, several numerical simulations are carried out to verify the effectiveness of the proposed methods.

  10. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm

    PubMed Central

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895

  11. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    PubMed

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.
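
    One way to picture a size-constrained K-means is to add a size-dependent penalty to the assignment cost, so crowded clusters become less attractive. The following 1-D toy is our simplification of that idea, not the paper's exact objective; every name and parameter is illustrative.

```python
import random

random.seed(1)  # deterministic toy run

def balanced_kmeans(points, k, iters=50, alpha=5.0):
    """K-means where assignment cost = squared distance + alpha * cluster share."""
    centers = random.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        counts = [labels.count(j) for j in range(k)]
        for i, p in enumerate(points):
            def cost(j):
                return (p - centers[j]) ** 2 + alpha * counts[j] / len(points)
            labels[i] = min(range(k), key=cost)
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

# Two well-separated 1-D groups of equal size.
pts = [0.0, 0.1, 0.2, 0.3, 10.0, 10.1, 10.2, 10.3]
centers, labels = balanced_kmeans(pts, 2)
```

    The penalty term discourages the empty-cluster and lopsided solutions that plain K-means can fall into with noisy, unevenly initialized data.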

  12. A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy

    PubMed Central

    Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.

    2017-01-01

    One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
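
    The iterative family surveyed here shares a common skeleton: forward-project the current estimate, form the residual against the measured projections, and back-project the residual to update the volume. A tiny dense-matrix sketch of that loop follows; it is purely illustrative (real EM systems are enormous and sparse), with a made-up 2x2 "volume" and ray sums.

```python
def sirt(A, b, iters=500, relax=0.1):
    """SIRT/Landweber-style iteration: x <- x + relax * A^T (b - A x)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # forward-project the current estimate and form the residual
        r = [bi - sum(aij * xj for aij, xj in zip(row, x))
             for row, bi in zip(A, b)]
        # back-project the residual onto the volume
        for j in range(n):
            x[j] += relax * sum(A[i][j] * r[i] for i in range(len(A)))
    return x

# 2x2 "volume" [x1 x2; x3 x4] observed through four ray sums.
A = [[1, 1, 0, 0],     # top row
     [0, 0, 1, 1],     # bottom row
     [1, 0, 1, 0],     # left column
     [1, 0, 0, 1]]     # one diagonal
true_x = [1.0, 2.0, 3.0, 4.0]
b = [sum(a * t for a, t in zip(row, true_x)) for row in A]
x = sirt(A, b)
```

    Direct Fourier inversion solves the same linear system in one shot; the iterative form trades that for flexibility in adding constraints and handling noise.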

  13. Exact and approximate Fourier rebinning algorithms for the solution of the data truncation problem in 3-D PET.

    PubMed

    Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis

    2007-07-01

    This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. To a first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but clinical studies with more realistic measured data will be required to decide on its pertinence.

  14. AFFINITY PURIFICATION OF PLASMID DNA BY TEMPERATURE-TRIGGERED PRECIPITATION. (R829606)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  15. Reconstruction of internal density distributions in porous bodies from laser ultrasonic data

    NASA Technical Reports Server (NTRS)

    Lu, Yichi; Goldman, Jeffrey A.; Wadley, Haydn N. G.

    1992-01-01

    It is presently shown that, for density-reconstruction problems in which information about the inhomogeneity is known a priori, the nonlinear least-squares algorithm yields satisfactory results on the basis of limited projection data. The back-projection algorithm, which obviates assumptions about the objective function to be reconstructed, does not recover the boundary of the inhomogeneity when the number of projections is limited and ray-bending is ignored.

  16. Construction project selection with the use of fuzzy preference relation

    NASA Astrophysics Data System (ADS)

    Ibadov, Nabi

    2016-06-01

    In the article, the author describes the problem of construction project variant selection during the pre-investment phase. As a solution, an algorithm based on the fuzzy preference relation is presented. The article provides an example of the algorithm used for selection of the best variant for a construction project. The choice is made based on criteria such as: net present value (NPV), level of technological difficulty, financing possibilities, and level of organizational difficulty.
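    The article's full procedure is not reproduced here, but the core idea of ranking variants from a fuzzy preference relation can be sketched as follows. The row-sum scoring rule and the example matrix are illustrative assumptions, not the author's exact method.

```python
import numpy as np

def rank_by_fuzzy_preference(P):
    """Rank alternatives from a fuzzy preference relation P, where
    P[i, j] in [0, 1] is the degree to which variant i is preferred
    to variant j. Score each variant by its normalized row sum
    (excluding the self-preference diagonal) and sort descending."""
    n = P.shape[0]
    scores = (P.sum(axis=1) - np.diag(P)) / (n - 1)
    return np.argsort(-scores), scores
```

In practice each entry of P would be aggregated from the criteria listed above (NPV, technological difficulty, financing, organizational difficulty).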

  17. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    PubMed

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, in order to win the initiative in fierce market competition, a key step is to improve their R&D ability to meet the various demands of customers in a more timely and less costly manner. This paper discusses the features of multiple R&D project environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given period. Furthermore, we make some improvements to an existing particle swarm algorithm and apply the improved algorithm to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are demonstrated in the experiment.
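    As a point of reference for the improvements discussed, a minimal global-best particle swarm optimizer can be sketched as below. The inertia and acceleration coefficients are textbook defaults, not the authors' modified settings, and the unconstrained continuous objective is an illustrative stand-in for the scheduling model.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Global-best PSO: each particle is pulled toward its personal best
    and the swarm's global best, with an inertia-damped velocity."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval                      # update personal bests
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()         # update global best
    return g, pval.min()
```

A scheduling application would additionally encode activity start times in the particles and penalize resource and budget violations in `f`.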

  18. Local respiratory motion correction for PET/CT imaging: Application to lung cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamare, F., E-mail: frederic.lamare@chu-bordeaux.fr; Fernandez, P.; Fayad, H.

    Purpose: Despite multiple methodologies already proposed to correct respiratory motion in the whole PET imaging field of view (FOV), such approaches have not found wide acceptance in clinical routine. An alternative can be the local respiratory motion correction (LRMC) of data corresponding to a given volume of interest (VOI: organ or tumor). Advantages of LRMC include the use of a simple motion model, faster execution times, and organ specific motion correction. The purpose of this study was to evaluate the performance of LRMC using various motion models for oncology (lung lesion) applications. Methods: Both simulated (NURBS based 4D cardiac-torso phantom) and clinical studies (six patients) were used in the evaluation of the proposed LRMC approach. PET data were acquired in list-mode and synchronized with respiration. The implemented approach consists first in defining a VOI on the reconstructed motion average image. Gated PET images of the VOI are subsequently reconstructed using only lines of response passing through the selected VOI and are used in combination with a center of gravity or an affine/elastic registration algorithm to derive the transformation maps corresponding to the respiration effects. Those are finally integrated in the reconstruction process to produce a motion free image over the lesion regions. Results: Although the center of gravity or affine algorithm achieved similar performance for individual lesion motion correction, the elastic model, applied either locally or to the whole FOV, led to an overall superior performance. The spatial tumor location was altered by 89% and 81% for the elastic model applied locally or to the whole FOV, respectively (compared to 44% and 39% for the center of gravity and affine models, respectively). This resulted in similar associated overall tumor volume changes of 84% and 80%, respectively (compared to 75% and 71% for the center of gravity and affine models, respectively). 
The application of the nonrigid deformation model in LRMC led to over an order of magnitude gain in computational efficiency of the correction relative to the application of the deformable model to the whole FOV. Conclusions: The results of this study support the use of LRMC as a flexible and efficient correction approach for respiratory motion effects for single lesions in the thoracic area.

  19. Investigation of frame-to-frame back projection and feature selection algorithms for non-line-of-sight laser gated viewing

    NASA Astrophysics Data System (ADS)

    Laurenzis, Martin; Velten, Andreas

    2014-10-01

    In the present paper, we discuss new approaches to analyzing laser gated viewing data for non-line-of-sight vision with a novel frame-to-frame back projection as well as feature selection algorithms. While earlier back projection approaches use time transients for each pixel, our new method has the ability to calculate the projection of imaging data onto the obscured voxel space for each frame. Further, four different data analysis algorithms were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to powerful selection of local maximum values. It is demonstrated that the choice of the filter has an impact on the selectivity, i.e., multiple target detection, as well as on the localization precision.
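    A simple version of the local-maximum selection mentioned above might look like the following 1-D sketch; the window radius and threshold are illustrative parameters, not the paper's filters.

```python
def local_maxima(signal, radius=2, threshold=0.0):
    """Select indices where the signal attains the maximum of a sliding
    window of half-width `radius` and exceeds `threshold` -- a toy
    stand-in for filter-based peak selection in a back-projection profile."""
    peaks = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        if signal[i] >= threshold and signal[i] == max(signal[lo:hi]):
            peaks.append(i)
    return peaks
```

The choice of `radius` plays the role the paper ascribes to the filter: too small and nearby targets merge into spurious peaks, too large and close targets are suppressed.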

  20. Refraction law and Fermat principle: a project using the ant colony optimization algorithm for undergraduate students in physics

    NASA Astrophysics Data System (ADS)

    Vuong, Q. L.; Rigaut, C.; Gossuin, Y.

    2018-07-01

    A programming project for undergraduate students in physics is proposed in this work. Its goal is to check the Snell–Descartes law of refraction using the Fermat principle and the ant colony optimization algorithm. The project involves basic mathematics and physics and is adapted to students with basic programming skills. More advanced tools, such as parallelization or object-oriented programming, can be used (but are not mandatory), which makes the project also suitable for more experienced students. We propose two tests to validate the program. Our algorithm is able to find solutions which are close to the theoretical predictions. Two quantities are defined to study its convergence and the quality of the solutions. It is also shown that the choice of the values of the simulation parameters is important for efficiently obtaining precise results.
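    To give a flavor of the project, the sketch below recovers Snell's law from the Fermat principle by directly minimizing the travel time over the interface crossing point. It substitutes a grid-refinement search for the ant colony algorithm, and the geometry and refractive indices are illustrative.

```python
import math

def fermat_crossing(src=(0.0, 1.0), dst=(1.0, -1.0), n1=1.0, n2=1.5):
    """Find the point x on the interface y = 0 that minimizes the optical
    travel time from src (in medium n1) to dst (in medium n2), then return
    (n1*sin(theta1), n2*sin(theta2)), which are equal when Snell's law holds."""
    (xa, ya), (xb, yb) = src, dst
    def travel_time(x):
        return n1 * math.hypot(x - xa, ya) + n2 * math.hypot(xb - x, yb)
    # Coarse grid search, repeatedly refined around the current minimum.
    lo, hi = min(xa, xb), max(xa, xb)
    for _ in range(60):
        xs = [lo + (hi - lo) * i / 100 for i in range(101)]
        x = min(xs, key=travel_time)
        lo, hi = x - (hi - lo) / 100, x + (hi - lo) / 100
    sin1 = (x - xa) / math.hypot(x - xa, ya)
    sin2 = (xb - x) / math.hypot(xb - x, yb)
    return n1 * sin1, n2 * sin2
```

The ant colony version replaces the grid refinement with pheromone-guided sampling of candidate paths, but minimizes the same travel-time functional.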

  1. Information mining in weighted complex networks with nonlinear rating projection

    NASA Astrophysics Data System (ADS)

    Liao, Hao; Zeng, An; Zhou, Mingyang; Mao, Rui; Wang, Bing-Hong

    2017-10-01

    Weighted rating networks are commonly used by e-commerce providers nowadays. In order to generate an objective ranking of online items' quality according to users' ratings, many sophisticated algorithms have been proposed in the complex networks domain. In this paper, instead of proposing new algorithms we focus on a more fundamental problem: the nonlinear rating projection. The basic idea is that even though the rating values given by users are linearly separated, the real preference of users to items between the different given values is nonlinear. We thus design an approach to project the original ratings of users to more representative values. This approach can be regarded as a data pretreatment method. Simulation in both artificial and real networks shows that the performance of the ranking algorithms can be improved when the projected ratings are used.
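    The paper's exact projection function is not reproduced here; a hedged sketch of the pretreatment idea, using a power-law mapping on a 1-5 rating scale as an illustrative assumption, is:

```python
def project_ratings(ratings, gamma=2.0):
    """Map linearly spaced ratings on a 1-5 scale onto nonlinearly spaced
    values, so that the gap between high ratings counts for more than the
    gap between low ones. `gamma` is an illustrative exponent."""
    return [((r - 1) / 4) ** gamma * 4 + 1 for r in ratings]
```

The endpoints are preserved while intermediate ratings are pushed downward, reflecting the claim that users' real preferences between adjacent rating values are not uniform.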

  2. CoMFA analyses of C-2 position salvinorin A analogs at the kappa-opioid receptor provides insights into epimer selectivity.

    PubMed

    McGovern, Donna L; Mosier, Philip D; Roth, Bryan L; Westkaemper, Richard B

    2010-04-01

    The highly potent and kappa-opioid (KOP) receptor-selective hallucinogen Salvinorin A and selected analogs have been analyzed using the 3D quantitative structure-affinity relationship technique Comparative Molecular Field Analysis (CoMFA) in an effort to derive a statistically significant and predictive model of salvinorin affinity at the KOP receptor and to provide additional statistical support for the validity of previously proposed structure-based interaction models. Two CoMFA models of Salvinorin A analogs substituted at the C-2 position are presented. Separate models were developed based on the radioligand used in the kappa-opioid binding assay, [(3)H]diprenorphine or [(125)I]6 beta-iodo-3,14-dihydroxy-17-cyclopropylmethyl-4,5 alpha-epoxymorphinan ([(125)I]IOXY). For each dataset, three methods of alignment were employed: a receptor-docked alignment derived from the structure-based docking algorithm GOLD, another from the ligand-based alignment algorithm FlexS, and a rigid realignment of the poses from the receptor-docked alignment. The receptor-docked alignment produced statistically superior results compared to either the FlexS alignment or the realignment in both datasets. The [(125)I]IOXY set (Model 1) and [(3)H]diprenorphine set (Model 2) gave q(2) values of 0.592 and 0.620, respectively, using the receptor-docked alignment, and both models produced similar CoMFA contour maps that reflected the stereoelectronic features of the receptor model from which they were derived. Each model gave significantly predictive CoMFA statistics (Model 1 PSET r(2)=0.833; Model 2 PSET r(2)=0.813). Based on the CoMFA contour maps, a binding mode was proposed for amine-containing Salvinorin A analogs that provides a rationale for the observation that the beta-epimers (R-configuration) of protonated amines at the C-2 position have a higher affinity than the corresponding alpha-epimers (S-configuration). (c) 2010. Published by Elsevier Inc.

  3. SU-E-J-109: Accurate Contour Transfer Between Different Image Modalities Using a Hybrid Deformable Image Registration and Fuzzy Connected Image Segmentation Method.

    PubMed

    Yang, C; Paulson, E; Li, X

    2012-06-01

    To develop and evaluate a tool that can improve the accuracy of contour transfer between different image modalities under the challenging conditions of low image contrast and large image deformation, compared with a few commonly used methods, for radiation treatment planning. The software tool includes the following steps and functionalities: (1) accepting input of images of different modalities, (2) converting existing contours on reference images (e.g., MRI) into delineated volumes and adjusting the intensity within the volumes to match the target images' (e.g., CT) intensity distribution for an enhanced similarity metric, (3) registering reference and target images using appropriate deformable registration algorithms (e.g., B-spline, demons) and generating deformed contours, (4) mapping the deformed volumes onto target images and calculating mean, variance, and center of mass as the initialization parameters for consecutive fuzzy connectedness (FC) image segmentation on target images, (5) generating an affinity map from the FC segmentation, (6) achieving final contours by modifying the deformed contours using the affinity map with a gradient distance weighting algorithm. The tool was tested with the CT and MR images of four pancreatic cancer patients acquired at the same respiration phase to minimize motion distortion. Dice's coefficient was calculated against direct delineation on the target image. Contours generated by various methods, including rigid transfer, auto-segmentation, deformable-only transfer, and the proposed method, were compared. Fuzzy connected image segmentation needs careful parameter initialization and user involvement. Automatic contour transfer by multi-modality deformable registration leads to up to 10% accuracy improvement over the rigid transfer. The two extra proposed steps of adjusting the intensity distribution and modifying the deformed contour with the affinity map improve the transfer accuracy further, by 14% on average. 
Deformable image registration aided by contrast adjustment and fuzzy connectedness segmentation improves the contour transfer accuracy between multi-modality images, particularly with large deformation and low image contrast. © 2012 American Association of Physicists in Medicine.

  4. DEGAS: Dynamic Exascale Global Address Space Programming Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demmel, James

    The Dynamic, Exascale Global Address Space programming environment (DEGAS) project will develop the next generation of programming models and runtime systems to meet the challenges of Exascale computing. The Berkeley part of the project concentrated on communication-optimal code generation to optimize speed and energy efficiency by reducing data movement. Our work developed communication lower bounds, and/or communication avoiding algorithms (that either meet the lower bound, or do much less communication than their conventional counterparts) for a variety of algorithms, including linear algebra, machine learning and genomics.

  5. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
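    The core reconstruction loop, a projected Landweber iteration with soft thresholding, can be sketched generically as follows; this omits the paper's wavelet transform, block measurement structure, and entropy-aware threshold selection.

```python
import numpy as np

def projected_landweber(A, y, tau, step=None, iters=200):
    """Projected Landweber iteration: a gradient step toward consistency
    with the measurements y = A x, followed by soft thresholding, which
    projects the iterate toward sparse solutions."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # ensures convergence
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)                   # Landweber step
        x = np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)  # soft threshold
    return x
```

In QBCS-EPL the threshold `tau` is not fixed but tied to the entropy-based bitrate of the quantizer; here it is a plain constant.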

  6. A Novel Algorithm for Detecting Protein Complexes with the Breadth First Search

    PubMed Central

    Tang, Xiwei; Wang, Jianxin; Li, Min; He, Yiming; Pan, Yi

    2014-01-01

    Most biological processes are carried out by protein complexes. A substantial number of false positives in the protein-protein interaction (PPI) data can compromise the utility of the datasets for complex reconstruction. In order to reduce the impact of such discrepancies, a number of data integration and affinity scoring schemes have been devised. These methods encode the reliabilities (confidence) of physical interactions between pairs of proteins. The challenge now is to identify novel and meaningful protein complexes from the weighted PPI network. To address this problem, a novel protein complex mining algorithm, ClusterBFS (Cluster with Breadth-First Search), is proposed. Based on the weighted density, ClusterBFS detects protein complexes in the weighted network by the breadth-first search algorithm, starting from a given seed protein. The experimental results show that ClusterBFS performs significantly better than the other computational approaches in terms of the identification of protein complexes. PMID:24818139
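    A much-simplified stand-in for the seed-growing idea, using a plain edge-weight threshold instead of ClusterBFS's weighted-density criterion, could read:

```python
from collections import deque

def bfs_cluster(graph, seed, min_weight=0.5):
    """Grow a cluster from a seed protein by breadth-first search,
    admitting a neighbor only via edges whose confidence weight meets
    the threshold. `graph` maps each node to (neighbor, weight) pairs."""
    cluster, queue = {seed}, deque([seed])
    while queue:
        u = queue.popleft()
        for v, w in graph.get(u, []):
            if w >= min_weight and v not in cluster:
                cluster.add(v)
                queue.append(v)
    return cluster
```

ClusterBFS replaces the per-edge test with a weighted-density criterion over the candidate cluster as a whole, which is what lets it reject loosely attached proteins.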

  7. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
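    The key computational point, that a cost function quadratic in the fitting parameters reduces to linear algebra, can be illustrated with a toy design matrix; the actual model's neighbor-correlation and sequence-dependent affinity terms are omitted.

```python
import numpy as np

def fit_linear_background(features, intensities):
    """Least-squares fit of a background model that is linear in its
    parameters: build a design matrix (here just an intercept and one
    feature column) and solve the normal equations via lstsq."""
    X = np.column_stack([np.ones(len(features)), features])
    beta, *_ = np.linalg.lstsq(X, intensities, rcond=None)
    return beta
```

Because the cost is quadratic in `beta`, the minimizer is obtained in one linear solve rather than by iterative optimization, which is what makes the background estimation fast.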

  8. A basic analysis toolkit for biological sequences

    PubMed Central

    Giancarlo, Raffaele; Siragusa, Alessandro; Siragusa, Enrico; Utro, Filippo

    2007-01-01

    This paper presents a software library, nicknamed BATS, for some basic sequence analysis tasks. Namely, local alignments, via approximate string matching, and global alignments, via longest common subsequence and alignments with affine and concave gap cost functions. Moreover, it also supports filtering operations to select strings from a set and establish their statistical significance, via z-score computation. None of the algorithms is new, but although they are generally regarded as fundamental for sequence analysis, they have not been implemented in a single and consistent software package, as we do here. Therefore, our main contribution is to fill this gap between algorithmic theory and practice by providing an extensible and easy to use software library that includes algorithms for the mentioned string matching and alignment problems. The library consists of C/C++ library functions as well as Perl library functions. It can be interfaced with Bioperl and can also be used as a stand-alone system with a GUI. The software is available at under the GNU GPL. PMID:17877802
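    As an illustration of alignment with affine gap costs, one of the tasks BATS covers, a minimal Gotoh-style scoring routine might look like this; the scoring parameters are arbitrary and BATS's own implementation may differ.

```python
def affine_align_score(a, b, match=2, mismatch=-1, gap_open=-3, gap_ext=-1):
    """Global alignment score with affine gaps: a gap of length L costs
    gap_open + (L - 1) * gap_ext. Three DP matrices track whether the
    last column aligned two residues (M) or left a gap in b (X) or a (Y)."""
    NEG = float("-inf")
    n, m = len(a), len(b)
    M = [[NEG] * (m + 1) for _ in range(n + 1)]
    X = [[NEG] * (m + 1) for _ in range(n + 1)]
    Y = [[NEG] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 0
    for i in range(1, n + 1):
        X[i][0] = gap_open + (i - 1) * gap_ext
    for j in range(1, m + 1):
        Y[0][j] = gap_open + (j - 1) * gap_ext
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            M[i][j] = max(M[i-1][j-1], X[i-1][j-1], Y[i-1][j-1]) + s
            X[i][j] = max(M[i-1][j] + gap_open, X[i-1][j] + gap_ext)
            Y[i][j] = max(M[i][j-1] + gap_open, Y[i][j-1] + gap_ext)
    return max(M[n][m], X[n][m], Y[n][m])
```

Concave gap costs, also supported by BATS, generalize this by letting the per-gap cost grow sublinearly with gap length.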

  9. The Texas Children's Medication Algorithm Project: Revision of the Algorithm for Pharmacotherapy of Attention-Deficit/Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Pliszka, Steven R.; Crismon, M. Lynn; Hughes, Carroll W.; Corners, C. Keith; Emslie, Graham J.; Jensen, Peter S.; McCracken, James T.; Swanson, James M.; Lopez, Molly

    2006-01-01

    Objective: In 1998, the Texas Department of Mental Health and Mental Retardation developed algorithms for medication treatment of attention-deficit/hyperactivity disorder (ADHD). Advances in the psychopharmacology of ADHD and results of a feasibility study of algorithm use in community mental health centers caused the algorithm to be modified and…

  10. Managing and learning with multiple models: Objectives and optimization algorithms

    USGS Publications Warehouse

    Probert, William J. M.; Hauser, C.E.; McDonald-Madden, E.; Runge, M.C.; Baxter, P.W.J.; Possingham, H.P.

    2011-01-01

    The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.

  11. A framework for comparing different image segmentation methods and its use in studying equivalences between level set and fuzzy connectedness frameworks

    PubMed Central

    Ciesielski, Krzysztof Chris; Udupa, Jayaram K.

    2011-01-01

    In the current vast image segmentation literature, there seems to be considerable redundancy among algorithms, while there is a serious lack of methods that would allow their theoretical comparison to establish their similarity, equivalence, or distinctness. In this paper, we make an attempt to fill this gap. To accomplish this goal, we argue that: (1) every digital segmentation algorithm A should have a well defined continuous counterpart MA, referred to as its model, which constitutes an asymptotic of A when image resolution goes to infinity; (2) the equality of two such models MA and MA′ establishes a theoretical (asymptotic) equivalence of their digital counterparts A and A′. Such a comparison is of full theoretical value only when, for each involved algorithm A, its model MA is proved to be an asymptotic of A. So far, such proofs do not appear anywhere in the literature, even in the case of algorithms introduced as digitizations of continuous models, like level set segmentation algorithms. The main goal of this article is to explore a line of investigation for formally pairing the digital segmentation algorithms with their asymptotic models, justifying such relations with mathematical proofs, and using the results to compare the segmentation algorithms in this general theoretical framework. As a first step towards this general goal, we prove here that the gradient based thresholding model M∇ is the asymptotic for the fuzzy connectedness Udupa and Samarasekera segmentation algorithm used with gradient based affinity A∇. We also argue that, in a sense, M∇ is the asymptotic for the original front propagation level set algorithm of Malladi, Sethian, and Vemuri, thus establishing a theoretical equivalence between these two specific algorithms. Experimental evidence of this last equivalence is also provided. PMID:21442014

  12. A reconstruction method for cone-beam differential x-ray phase-contrast computed tomography.

    PubMed

    Fu, Jian; Velroyen, Astrid; Tan, Renbo; Zhang, Junwei; Chen, Liyuan; Tapfer, Arne; Bech, Martin; Pfeiffer, Franz

    2012-09-10

    Most existing differential phase-contrast computed tomography (DPC-CT) approaches are based on three kinds of scanning geometries, described by parallel-beam, fan-beam and cone-beam. Due to the potential of compact imaging systems with magnified spatial resolution, cone-beam DPC-CT has attracted significant interest. In this paper, we report a reconstruction method based on a back-projection filtration (BPF) algorithm for cone-beam DPC-CT. Due to the differential nature of phase contrast projections, the algorithm refrains from differentiating the projection data prior to back-projection, unlike BPF algorithms commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured with a three-grating interferometer and a micro-focus x-ray tube source. Moreover, the numerical simulation and experimental results demonstrate that the proposed method can deal with several classes of truncated cone-beam datasets. We believe that this feature is of particular interest for future medical cone-beam phase-contrast CT imaging applications.

  13. Overview of fast algorithm in 3D dynamic holographic display

    NASA Astrophysics Data System (ADS)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, huge amounts of 3D information and data must be processed and computed in real time to generate the hologram in 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing the memory usage, such as: look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) approaches based on the point-based method, as well as the full analytical and one-step approaches based on the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based method and the polygon-based method, and focus on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method derived from the 2D Fourier analysis of the 3D affine transformation. The numerical simulations and the optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.

  14. Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation

    NASA Astrophysics Data System (ADS)

    Niklasson, Gunnar A.; Niklasson, Maria H.

    2015-11-01

    The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.
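    A toy version of such a heavy-tailed "music walk" can be sampled with Cauchy-distributed intervals; the Cauchy law is the alpha = 1 Lévy-stable case, and the scale and use of pitch units are illustrative assumptions, not the paper's fitted parameters.

```python
import math
import random

def music_walk(steps=1000, scale=2.0, seed=1):
    """Random walk in pitch where each melodic interval is drawn from a
    Cauchy distribution (heavy-tailed: mostly small steps, rare leaps),
    sampled by the inverse-CDF method."""
    rng = random.Random(seed)
    pitch, path = 0.0, [0.0]
    for _ in range(steps):
        pitch += scale * math.tan(math.pi * (rng.random() - 0.5))
        path.append(pitch)
    return path
```

Compared with a Gaussian walk, the occasional large `tan` values reproduce the rare wide melodic leaps that make the interval distribution non-Gaussian.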

  15. Genetic algorithm applied to the selection of factors in principal component-artificial neural networks: application to QSAR study of calcium channel antagonist activity of 1,4-dihydropyridines (nifedipine analogous).

    PubMed

    Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba

    2003-01-01

    A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac diseases. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position with known Ca(2+) channel binding affinities was employed in this study. Ten different sets of descriptors (837 descriptors) were calculated for each molecule. The principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used for the selection of the best set of extracted principal components. A feed forward artificial neural network with a back-propagation of error algorithm was used to process the nonlinear relationship between the selected principal components and biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the first model yields better prediction ability.

  16. UltraTrack: Software for semi-automated tracking of muscle fascicles in sequences of B-mode ultrasound images.

    PubMed

    Farris, Dominic James; Lichtwark, Glen A

    2016-05-01

    Dynamic measurements of human muscle fascicle length from sequences of B-mode ultrasound images have become increasingly prevalent in biomedical research. Manual digitisation of these images is time consuming and algorithms for automating the process have been developed. Here we present a freely available software implementation of a previously validated algorithm for semi-automated tracking of muscle fascicle length in dynamic ultrasound image recordings, "UltraTrack". UltraTrack implements an affine extension to an optic flow algorithm to track movement of the muscle fascicle end-points throughout dynamically recorded sequences of images. The underlying algorithm has been previously described and its reliability tested, but here we present the software implementation with features for: tracking multiple fascicles in multiple muscles simultaneously; correcting temporal drift in measurements; manually adjusting tracking results; saving and re-loading of tracking results and loading a range of file formats. Two example runs of the software are presented detailing the tracking of fascicles from several lower limb muscles during a squatting and walking activity. We have presented a software implementation of a validated fascicle-tracking algorithm and made the source code and standalone versions freely available for download. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. A New Method of Synthetic Aperture Radar Image Reconstruction Using Modified Convolution Back-Projection Algorithm.

    DTIC Science & Technology

    1986-08-01

    [Scanned report; abstract garbled by OCR.] The recoverable text indicates that an image is reconstructed from a set of projections, and that the Convolution Back-Projection (CBP) algorithm, a widely used technique in Computer Aided Tomography (CAT), is modified in this work for synthetic aperture radar image reconstruction. (University of Illinois at Urbana-Champaign, 1985.)


  18. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, E; Lasio, G; Yi, B

    2014-06-01

    Purpose: The Iterative Subtraction Algorithm (ISA) method retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract the motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into the various phases, and a full reconstruction from all phases generates an image, CTM. 2) Forward projections of CTM are generated at the projection angles of the desired phase; reconstructing the difference between the measured projections and these forward projections yields CTSub1, in which the desired phase component is diminished. 3) By adding CTSub1 back to CTM, the no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4DCBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used; the reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.

  19. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    NASA Astrophysics Data System (ADS)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
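    The gradient-based Douglas-Rachford splitting at the core of the method can be sketched on a 1-D surrogate problem. Plain soft-thresholding stands in for the wavelet-packet shrinkage, and the "noisy slice" b is synthetic, so this only illustrates the iteration structure, not the authors' reconstruction code.

```python
import numpy as np

# Douglas-Rachford splitting on the surrogate problem
#   minimise  0.5 * ||x - b||^2  +  lam * ||x||_1
# where the l1 prox (soft-thresholding) stands in for wavelet-packet shrinkage.
rng = np.random.default_rng(3)
b = rng.normal(size=256)          # hypothetical noisy back-projected slice
lam = 0.3

def prox_data(x, t):              # prox of the data-fidelity term 0.5||x - b||^2
    return (x + t * b) / (1.0 + t)

def prox_l1(x, t):                # soft-thresholding, prox of lam * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

z = np.zeros_like(b)
for _ in range(200):              # standard DR iteration: reflect, prox, average
    y = prox_data(z, 1.0)
    w = prox_l1(2 * y - z, 1.0)
    z = z + w - y
# y now approximates the minimiser (here, the soft-thresholding of b).
print(int(np.sum(np.abs(y) < 1e-6)))   # number of coefficients shrunk to zero
```

    For this strongly convex surrogate the iteration converges geometrically; the same splitting structure is what allows the data-fit and sparsity terms to be handled by separate proximal steps in the full reconstruction.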

  20. Extended volume coverage in helical cone-beam CT by using PI-line based BPF algorithm

    NASA Astrophysics Data System (ADS)

    Cho, Seungryong; Pan, Xiaochuan

    2007-03-01

    We compared the data requirements of filtered-backprojection (FBP) and backprojection-filtration (BPF) algorithms based on PI-lines in helical cone-beam CT. Since the filtration process in the FBP algorithm needs all the projection data of the PI-lines for each view, the required detector size must be bigger than the size that covers the Tam-Danielsson (T-D) window in order to avoid data truncation. The BPF algorithm, however, requires projection data only within the T-D window, which means a smaller detector can be used to reconstruct the same image than in FBP. In other words, a longer helical pitch can be obtained by using the BPF algorithm without any truncation artifacts when a fixed detector size is given. The purpose of this work is to demonstrate numerically that extended volume coverage in helical cone-beam CT can be achieved by using the PI-line-based BPF algorithm.

  1. The importance of ray pathlengths when measuring objects in maximum intensity projection images.

    PubMed

    Schreiner, S; Dawant, B M; Paschal, C B; Galloway, R L

    1996-01-01

    It is important to understand any process that affects medical data. Once the data have changed from the original form, one must consider the possibility that the information contained in the data has also changed. In general, false negative and false positive diagnoses caused by this post-processing must be minimized. Medical imaging is one area in which post-processing is commonly performed, but there is often little or no discussion of how these algorithms affect the data. This study uncovers some interesting properties of maximum intensity projection (MIP) algorithms, which are commonly used in the post-processing of magnetic resonance (MR) and computed tomography (CT) angiographic data. The appearance of the width of vessels and the extent of malformations such as aneurysms is of interest to clinicians. This study will show how MIP algorithms interact with the shape of the object being projected. MIPs can make objects appear thinner in the projection than in the original data set and can also alter the shape of the profile of the object seen in the original data. These effects have consequences for width-measuring algorithms, which will be discussed. Each projected intensity is dependent upon the pathlength of the ray from which the projected pixel arises. The morphology (shape and intensity profile) of an object will change the pathlength that each ray experiences. This is termed the pathlength effect. In order to demonstrate the pathlength effect, simple computer models of an imaged vessel were created. Additionally, a static MR phantom verified that the derived equation for the projection-plane probability density function (pdf) predicts the projection-plane intensities well (R(2)=0.96). Finally, examples of projections through in vivo MR angiography and CT angiography data are presented.
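    A minimal sketch of the MIP operation itself, on a hypothetical volume containing a bright vessel-like structure:

```python
import numpy as np

# A maximum intensity projection collapses a volume along one axis by keeping,
# for each ray, only the single brightest voxel. The volume below is synthetic:
# a 2-voxel-wide "vessel" running along z, over low-intensity background.
vol = np.zeros((8, 8, 8))
vol[3:5, 3:5, :] = 100.0
vol += np.random.default_rng(4).uniform(0, 10, vol.shape)  # background noise

mip = vol.max(axis=2)            # project along the vessel axis
print(mip.shape)                 # (8, 8)
```

    Because only the maximum along each ray survives, the projected profile of the vessel depends on how much of the ray's pathlength lies inside it, which is exactly the pathlength effect the abstract describes.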

  2. US-VISIT Identity Matching Algorithm Evaluation Program: ADIS Algorithm Evaluation Project Plan Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, C W; Lenderman, J S; Gansemer, J D

    This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect revised deliverables resulting from delays in obtaining a database refresh, and describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes, and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).

  3. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
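    The two iterative schemes compared in the paper can be sketched on a small diagonally dominant linear system, a toy stand-in for the scoring equations rather than tomographic data: Jacobi updates every unknown from the previous iterate, while Gauss-Seidel reuses values already updated within the current sweep.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

def jacobi(A, b, iters=50):
    x = np.zeros_like(b)
    D = np.diag(A)
    R = A - np.diagflat(D)            # off-diagonal part
    for _ in range(iters):
        x = (b - R @ x) / D           # all components from the OLD iterate
    return x

def gauss_seidel(A, b, iters=50):
    x = np.zeros_like(b)
    n = len(b)
    for _ in range(iters):
        for i in range(n):            # reuse components updated THIS sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

x_ref = np.linalg.solve(A, b)
print(np.allclose(jacobi(A, b), x_ref), np.allclose(gauss_seidel(A, b), x_ref))
```

    Both schemes converge here because the matrix is diagonally dominant; Gauss-Seidel typically needs fewer sweeps, which mirrors the convergence comparison the paper makes for Fisher scoring.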

  4. A Novel Latin Hypercube Algorithm via Translational Propagation

    PubMed Central

    Pan, Guang; Ye, Pengcheng

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the experimental designs used. Optimal Latin hypercube designs are frequently used and have been shown to have good space-filling and projective properties. However, the high cost of constructing them limits their use. In this paper, a methodology for creating novel Latin hypercube designs via a translational propagation and successive local enumeration (TPSLE) algorithm is developed without using formal optimization. The TPSLE algorithm is based on the insight that a near-optimal Latin hypercube design can be constructed by using, as a building block, a simple initial block with a few points generated by the SLE algorithm. The TPSLE algorithm offers a balanced trade-off between efficiency and sampling performance. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time while retaining acceptable space-filling and projective properties. PMID:25276844
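    The basic Latin hypercube construction underlying such designs can be sketched as follows. This is a plain random LHS, not the TPSLE algorithm itself: each dimension is split into n strata, each stratum is sampled exactly once, and the column pairings are shuffled independently.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    # One jittered sample per stratum, per dimension; row i starts in stratum i.
    samples = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(samples[:, j])    # decouple the columns
    return samples

pts = latin_hypercube(10, 2, np.random.default_rng(5))
# One-per-stratum property: the bin indices along each axis are a permutation.
print(sorted((pts[:, 0] * 10).astype(int)))
```

    Translational propagation improves on this baseline by tiling a small near-optimal block instead of shuffling at random, which is what buys the space-filling quality without formal optimization.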

  5. Improving image quality in laboratory x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.

    2017-03-01

    Grating-based X-ray phase-contrast (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes to the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: Firstly, a correction algorithm exploiting correlations of the artifacts and differential-phase data was developed and tested. Artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved the image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to an improved detection of very small sample features, thereby maximizing the setup's utility.
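    The Richardson-Lucy deconvolution step can be sketched in 1-D. The PSF and "projection" below are synthetic stand-ins for the focal-spot blur and a gbPC projection; a real projection is 2-D, but the multiplicative update is the same.

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=50):
    """Classic RL iteration: multiplicative correction by the re-blurred ratio."""
    est = np.full_like(blurred, blurred.mean())      # flat initial estimate
    psf_flip = psf[::-1]                             # adjoint of the blur
    for _ in range(iters):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)    # avoid division by zero
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

psf = np.array([0.25, 0.5, 0.25])                    # hypothetical focal-spot PSF
truth = np.zeros(41); truth[20] = 1.0                # point-like sample feature
blurred = np.convolve(truth, psf, mode="same")
sharp = richardson_lucy(blurred, psf, 50)
print(round(float(sharp[20]), 3), round(float(blurred[20]), 3))
```

    The iteration progressively re-concentrates the blurred peak; in the paper's setting the PSF would be measured from the source focal spot rather than assumed.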

  6. Incomplete projection reconstruction of computed tomography based on the modified discrete algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei

    2018-02-01

    Based on the discrete algebraic reconstruction technique (DART), this study aims to address and test a new improved algorithm applied to incomplete projection data to generate a high quality reconstruction image by reducing the artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian based on compressed sensing is first used in the initial reconstruction for segmentation of the DART to get higher contrast graphics for boundary and non-boundary pixels. Then, the block matching 3D filtering operator was used to suppress the noise and to improve the gray distribution of the reconstructed image. Finally, simulation studies on the polychromatic spectrum were performed to test the performance of the new algorithm. Study results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of the images reconstructed from incomplete data. The SNRs and AGs of the new images reconstructed by DART-ALBM were on average 30%-40% and 10% higher than the images reconstructed by DART algorithms. Since the improved DART-ALBM algorithm has a better robustness to limited-view reconstruction, which not only makes the edge of the image clear but also makes the gray distribution of non-boundary pixels better, it has the potential to improve image quality from incomplete projections or sparse projections.

  7. Model based LV-reconstruction in bi-plane x-ray angiography

    NASA Astrophysics Data System (ADS)

    Backfrieder, Werner; Carpella, Martin; Swoboda, Roland; Steinwender, Clemens; Gabriel, Christian; Leisch, Franz

    2005-04-01

    Interventional x-ray angiography is state of the art in diagnosis and therapy of severe diseases of the cardiovascular system. Diagnosis is based on contrast enhanced dynamic projection images of the left ventricle. A new model based algorithm for three dimensional reconstruction of the left ventricle from bi-planar angiograms was developed. Parametric super ellipses are deformed until their projection profiles optimally fit measured ventricular projections. Deformation is controlled by a simplex optimization procedure. A resulting optimized parameter set builds the initial guess for neighboring slices. A three dimensional surface model of the ventricle is built from stacked contours. The accuracy of the algorithm has been tested with mathematical phantom data and clinical data. Results show conformance with provided projection data and high convergence speed makes the algorithm useful for clinical application. Fully three dimensional reconstruction of the left ventricle has a high potential for improvements of clinical findings in interventional cardiology.

  8. Ultra-high resolution computed tomography imaging

    DOEpatents

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180.degree., and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.

  9. A new algorithm for stand table projection models.

    Treesearch

    Quang V. Cao; V. Clark Baldwin

    1999-01-01

    The constrained least squares method is proposed as an algorithm for projecting stand tables through time. This method consists of three steps: (1) predict survival in each diameter class, (2) predict diameter growth, and (3) use the least squares approach to adjust the stand table to satisfy the constraints of future survival, average diameter, and stand basal area....
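    Step (3), the constrained least-squares adjustment, has a closed form that can be sketched as follows. The diameter classes, predicted counts, and constraint targets below are hypothetical; the point is that the stand table is moved as little as possible while exactly matching the known totals.

```python
import numpy as np

# Hypothetical predicted stand table: trees per diameter class after the
# survival and diameter-growth steps.
diam = np.array([10.0, 15.0, 20.0, 25.0])        # class midpoints, cm
pred = np.array([120.0, 80.0, 45.0, 20.0])       # predicted trees per class

ba_per_tree = np.pi * (diam / 200.0) ** 2        # basal area of one tree, m^2
C = np.vstack([np.ones(4), ba_per_tree])         # constraint matrix (totals)
d = np.array([260.0, C[1] @ pred * 1.02])        # hypothetical target totals

# Minimise ||x - pred||^2 subject to C x = d (Lagrange-multiplier closed form).
x = pred + C.T @ np.linalg.solve(C @ C.T, d - C @ pred)
print(np.round(x, 2))
```

    The two constraints here are total survival and total basal area; an average-diameter constraint would simply add another row to C.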

  10. Development of a new metal artifact reduction algorithm by using an edge preserving method for CBCT imaging

    NASA Astrophysics Data System (ADS)

    Kim, Juhye; Nam, Haewon; Lee, Rena

    2015-07-01

    In CT (computed tomography) images, metal materials such as tooth supplements or surgical clips can cause metal artifacts and degrade image quality. In severe cases, this may lead to misdiagnosis. In this research, we developed a new MAR (metal artifact reduction) algorithm by using an edge-preserving filter and the MATLAB program (Mathworks, version R2012a). The proposed algorithm consists of 6 steps: image reconstruction from projection data, metal segmentation, forward projection, interpolation, application of an edge-preserving smoothing filter, and new image reconstruction. For an evaluation of the proposed algorithm, we obtained both numerical simulation data and data for a Rando phantom. In the numerical simulation data, four metal regions were added into the Shepp-Logan phantom to create metal artifacts. The projection data of the metal-inserted Rando phantom were obtained by using a prototype CBCT scanner manufactured by the medical engineering and medical physics (MEMP) laboratory research group in medical science at Ewha Womans University. After these data had been acquired, the proposed algorithm was applied, and the results were compared with the original image (with metal artifact, without correction) and with a corrected image based on linear interpolation. Both visual and quantitative evaluations were done. Compared with the original image with metal artifacts and with the image corrected by using linear interpolation, both the numerical and the experimental phantom data demonstrated that the proposed algorithm reduced the metal artifact. In conclusion, the evaluation in this research showed that the proposed algorithm outperformed the interpolation-based MAR algorithm. If an optimization and a stability evaluation of the proposed algorithm can be performed, the developed algorithm is expected to be an effective tool for eliminating metal artifacts even in commercial CT systems.
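    The linear-interpolation MAR baseline that the proposed method is compared against can be sketched on a single synthetic sinogram row: the metal trace is replaced by linear interpolation from the untouched detector bins on either side.

```python
import numpy as np

row = np.linspace(1.0, 2.0, 20)                  # one clean (hypothetical) sinogram row
corrupted = row.copy()
metal = np.zeros(20, dtype=bool)
metal[8:12] = True                               # detector bins behind the metal
corrupted[metal] = 50.0                          # metal saturates these bins

bins = np.arange(20)
corrected = corrupted.copy()
# Fill the metal trace from the neighbouring uncorrupted bins.
corrected[metal] = np.interp(bins[metal], bins[~metal], corrupted[~metal])
print(np.allclose(corrected, row, atol=1e-9))    # True
```

    Because the underlying row is linear here, interpolation recovers it exactly; on real data this step leaves residual errors, which is what motivates the additional edge-preserving filtering step in the proposed algorithm.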

  11. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy for improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signal was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts than phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps, with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors' results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage.

  12. Development of a Tool for an Efficient Calibration of CORSIM Models

    DOT National Transportation Integrated Search

    2014-08-01

    This project proposes a Memetic Algorithm (MA) for the calibration of microscopic traffic flow simulation models. The proposed MA includes a combination of genetic and simulated annealing algorithms. The genetic algorithm performs the exploration of ...

  13. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
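    The paper's key observation, that a non-singular affine transformation leaves Gaussian maximum likelihood classification unchanged, can be checked numerically on synthetic data: the means and covariances transform along with the data, so the discriminant ranking (and thus every label) is preserved.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical two-class training data and test "spectra" in 3 bands.
X0 = rng.normal(0.0, 1.0, (50, 3))
X1 = rng.normal(1.5, 1.0, (50, 3))
test = rng.normal(0.8, 1.0, (30, 3))

def ml_labels(a, b, t):
    """Gaussian ML labels for t, with class statistics estimated from a and b."""
    out = []
    for cls in (a, b):
        mu = cls.mean(0)
        S = np.cov(cls.T)
        Si = np.linalg.inv(S)
        d = t - mu
        # log-likelihood up to a constant shared by both classes
        out.append(-0.5 * np.einsum('ij,jk,ik->i', d, Si, d)
                   - 0.5 * np.log(np.linalg.det(S)))
    return np.argmax(out, axis=0)

# A non-singular affine transformation y = M x + c (the paper's "reflectance"
# transforms are of this form).
M = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)
c = np.array([5.0, -2.0, 0.5])
aff = lambda z: z @ M.T + c

same = (ml_labels(X0, X1, test) == ml_labels(aff(X0), aff(X1), aff(test))).all()
print(bool(same))   # True
```

    The Mahalanobis term is exactly invariant and the log-determinant shift is identical for both classes, which is why transformations like the empirical line approach have no effect on the classification accuracy.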

  14. A quantitative comparison of two methods to correct eddy current-induced distortions in DT-MRI.

    PubMed

    Muñoz Maniega, Susana; Bastin, Mark E; Armitage, Paul A

    2007-04-01

    Eddy current-induced geometric distortions of single-shot, diffusion-weighted, echo-planar (DW-EP) images are a major confounding factor to the accurate determination of water diffusion parameters in diffusion tensor MRI (DT-MRI). Previously, it has been suggested that these geometric distortions can be removed from brain DW-EP images using affine transformations determined from phantom calibration experiments using iterative cross-correlation (ICC). Since this approach was first described, a number of image-based registration methods have become available that can also correct eddy current-induced distortions in DW-EP images. However, as yet no study has investigated whether separate eddy current calibration or image-based registration provides the most accurate way of removing these artefacts from DT-MRI data. Here we compare how ICC phantom calibration and affine FLIRT (http://www.fmrib.ox.ac.uk), a popular image-based multi-modal registration method that can correct both eddy current-induced distortions and bulk subject motion, perform when registering DW-EP images acquired with different slice thicknesses (2.8 and 5 mm) and b-values (1000 and 3000 s/mm(2)). With the use of consistency testing, it was found that ICC was a more robust algorithm for correcting eddy current-induced distortions than affine FLIRT, especially at high b-value and small slice thickness. In addition, principal component analysis demonstrated that the combination of ICC phantom calibration (to remove eddy current-induced distortions) with rigid body FLIRT (to remove bulk subject motion) provided a more accurate registration of DT-MRI data than that achieved by affine FLIRT.

  15. Axial 3D region of interest reconstruction using weighted cone beam BPF/DBPF algorithm cascaded with adequately oriented orthogonal butterfly filtering

    NASA Astrophysics Data System (ADS)

    Tang, Shaojie; Tang, Xiangyang

    2016-03-01

    Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As potential candidates with analytic form for the task, the back projection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via the adoption of virtual PI-line segments. Unfortunately, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely, axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at the possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).

  16. Viewing-zone control of integral imaging display using a directional projection and elemental image resizing method.

    PubMed

    Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam

    2013-10-01

    Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs with the same size of microlens pitch causes an EI mismatch at the EI plane. In this method, EIs are generated computationally using a newly introduced algorithm: the directional elemental image generation and resizing algorithm considering the directional projection geometry of each pixel as well as an EI resizing method to prevent the EI mismatch. Generated EIs are projected as a collimated projection beam with a predefined directional angle, either horizontally or vertically. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.

  17. ONE-STEP METAL-AFFINITY PURIFICATION OF HISTIDINE-TAGGED PROTEINS BY TEMPERATURE-TRIGGERED PRECIPITATION. (R829606)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  18. AGE-RELATED EFFECTS OF CHLORPYRIFOS ON HIGH AFFINITY CHOLINE UPTAKE IN RAT BRAIN. (R825811)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  19. OSPREY: protein design with ensembles, flexibility, and provable algorithms.

    PubMed

    Gainza, Pablo; Roberts, Kyle E; Georgiev, Ivelin; Lilien, Ryan H; Keedy, Daniel A; Chen, Cheng-Yu; Reza, Faisal; Anderson, Amy C; Richardson, David C; Richardson, Jane S; Donald, Bruce R

    2013-01-01

    We have developed a suite of protein redesign algorithms that improves realistic in silico modeling of proteins. These algorithms are based on three characteristics that make them unique: (1) improved flexibility of the protein backbone, protein side-chains, and ligand to accurately capture the conformational changes that are induced by mutations to the protein sequence; (2) modeling of proteins and ligands as ensembles of low-energy structures to better approximate binding affinity; and (3) a globally optimal protein design search, guaranteeing that the computational predictions are optimal with respect to the input model. Here, we illustrate the importance of these three characteristics. We then describe OSPREY, a protein redesign suite that implements our protein design algorithms. OSPREY has been used prospectively, with experimental validation, in several biomedically relevant settings. We show in detail how OSPREY has been used to predict resistance mutations and explain why improved flexibility, ensembles, and provability are essential for this application. OSPREY is free and open source under a Lesser GPL license. The latest version is OSPREY 2.0. The program, user manual, and source code are available at www.cs.duke.edu/donaldlab/software.php. osprey@cs.duke.edu. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. A robust correspondence matching algorithm of ground images along the optic axis

    NASA Astrophysics Data System (ADS)

    Jia, Fengman; Kang, Zhizhong

    2013-10-01

    Facing the challenges of nontraditional geometry, multiple resolutions and the same features sensed from different angles, robust correspondence matching for ground images along the optic axis is particularly difficult. A method combining the SIFT algorithm with a geometric constraint on the ratio of distances between image points and the image principal point is proposed in this paper. As SIFT provides robust matching across a substantial range of affine distortion, change in 3D viewpoint, and addition of noise, we use it to tackle the problem of image distortion. By analyzing the nontraditional geometry of ground images along the optic axis, this paper derives that, for one correspondence pair, the ratio of the distances between the image point and the image principal point in the two images should be a value not far from 1. This forms a geometric constraint for gross error detection. The proposed approach is tested with real image data acquired by Kodak. The results show that with SIFT and the proposed geometric constraint, the robustness of correspondence matching on ground images along the optic axis can be effectively improved, thus proving the validity of the proposed algorithm.
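
    A hedged sketch of the distance-ratio constraint described above; the tolerance `tol` and the toy coordinates are illustrative assumptions, not values from the paper:

```python
import numpy as np

def filter_matches_by_ratio(pts1, pts2, pp1, pp2, tol=0.2):
    """Keep only matches whose distance-to-principal-point ratio is near 1.

    For ground images taken along the optic axis, a correct correspondence
    should lie at nearly the same radial distance from the image principal
    point in both images, so d1/d2 should not be far from 1; pairs that
    violate this are flagged as gross errors. `tol` is a hypothetical
    threshold chosen for illustration.
    """
    d1 = np.linalg.norm(pts1 - pp1, axis=1)
    d2 = np.linalg.norm(pts2 - pp2, axis=1)
    ratio = d1 / np.maximum(d2, 1e-9)
    return np.abs(ratio - 1.0) < tol

# toy data: three consistent matches and one gross outlier
pts1 = np.array([[120., 80.], [200., 150.], [90., 210.], [300., 40.]])
pts2 = 1.02 * pts1                 # nearly identical radial distances
pts2[3] = [30., 400.]              # gross matching error
pp = np.array([160., 120.])        # principal point (image centre)
keep = filter_matches_by_ratio(pts1, pts2, pp, pp)
```

    In practice such a mask would be applied to the SIFT matches after descriptor matching, discarding flagged pairs before further geometric estimation.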

  1. Optimizing 4DCBCT projection allocation to respiratory bins.

    PubMed

    O'Brien, Ricky T; Kipritidis, John; Shieh, Chun-Chien; Keall, Paul J

    2014-10-07

    4D cone beam computed tomography (4DCBCT) is an emerging image guidance strategy used in radiotherapy where projections acquired during a scan are sorted into respiratory bins based on the respiratory phase or displacement. 4DCBCT reduces the motion blur caused by respiratory motion but increases streaking artefacts due to projection under-sampling as a result of the irregular nature of patient breathing and the binning algorithms used. For displacement binning the streak artefacts are so severe that displacement binning is rarely used clinically. The purpose of this study is to investigate if sharing projections between respiratory bins and adjusting the location of respiratory bins in an optimal manner can reduce or eliminate streak artefacts in 4DCBCT images. We introduce a mathematical optimization framework and a heuristic solution method, which we will call the optimized projection allocation algorithm, to determine where to position the respiratory bins and which projections to source from neighbouring respiratory bins. Five 4DCBCT datasets from three patients were used to reconstruct 4DCBCT images. Projections were sorted into respiratory bins using equispaced, equal density and optimized projection allocation. The standard deviation of the angular separation between projections was used to assess streaking and the consistency of the segmented volume of a fiducial gold marker was used to assess motion blur. The standard deviation of the angular separation between projections using displacement binning and optimized projection allocation was 30%-50% smaller than conventional phase based binning and 59%-76% smaller than conventional displacement binning indicating more uniformly spaced projections and fewer streaking artefacts. The standard deviation in the marker volume was 20%-90% smaller when using optimized projection allocation than using conventional phase based binning suggesting more uniform marker segmentation and less motion blur. 
Images reconstructed using displacement binning and the optimized projection allocation algorithm were clearer, contained visibly fewer streak artefacts and produced more consistent marker segmentation than those reconstructed with either equispaced or equal-density binning. The optimized projection allocation algorithm significantly improves image quality in 4DCBCT images and provides, for the first time, a method to consistently generate high quality displacement binned 4DCBCT images in clinical applications.
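
    The streaking metric used in this study, the standard deviation of the angular separation between the projections in a respiratory bin, can be sketched as follows (the angle lists are made-up toy data):

```python
import numpy as np

def angular_separation_std(angles_deg):
    """Standard deviation of the gaps between consecutive projection angles.

    Smaller values mean the projections assigned to a respiratory bin are
    spread more uniformly over the gantry arc, which corresponds to fewer
    streak artefacts in the reconstruction.
    """
    gaps = np.diff(np.sort(np.asarray(angles_deg, dtype=float)))
    return float(np.std(gaps))

uniform = np.arange(0.0, 180.0, 10.0)                        # evenly spread bin
clustered = np.array([0., 1., 2., 3., 90., 91., 92., 170.])  # bunched-up bin
```

    Comparing the two toy bins, the clustered one scores a much larger spread, matching the paper's use of this statistic to compare binning strategies.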

  2. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.

    PubMed

    Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge

    2006-01-01

    We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithms. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results with image quality in the ROI comparable to that of the corresponding global CT reconstruction.
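
    Step (a) of the scheme, completing each truncated normal-dose projection with low-dose global data, might be sketched as below. This is an illustrative assumption about how the datasets are merged; the paper's actual blending may differ:

```python
import numpy as np

def extend_truncated(local_proj, global_proj, support):
    """Complete a truncated normal-dose projection with low-dose global data.

    Inside the detector support of the local scan (support True) the
    high-quality truncated samples are kept; outside it, the missing
    samples are filled from a co-registered low-dose global projection,
    producing an untruncated dataset that a standard FBP or BPF algorithm
    can then reconstruct.
    """
    return np.where(support, local_proj, global_proj)

local = np.array([0., 5., 6., 0.])            # truncated normal-dose samples
glob = np.array([1., 9., 9., 2.])             # low-dose global samples
support = np.array([False, True, True, False])
extended = extend_truncated(local, glob, support)
```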

  3. Software for Project-Based Learning of Robot Motion Planning

    ERIC Educational Resources Information Center

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-01-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can…

  4. A projected preconditioned conjugate gradient algorithm for computing many extreme eigenpairs of a Hermitian matrix [A projected preconditioned conjugate gradient algorithm for computing a large eigenspace of a Hermitian matrix]

    DOE PAGES

    Vecharynski, Eugene; Yang, Chao; Pask, John E.

    2015-02-25

    Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
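
    The block Rayleigh-Ritz iteration at the heart of such eigensolvers can be sketched in a much-simplified form. This sketch omits the preconditioner, the conjugate-gradient search directions, and the reduced number of Rayleigh-Ritz calculations that are the paper's actual contributions:

```python
import numpy as np

def block_eig_smallest(A, k, iters=300, seed=0):
    """Crude block iteration with Rayleigh-Ritz for the k smallest eigenpairs.

    Keep a block of k vectors, expand it with the residual block, and apply
    a Rayleigh-Ritz projection on the enlarged subspace; repeat until the
    block converges to the invariant subspace of the algebraically smallest
    eigenvalues. Illustrative only, not the paper's algorithm.
    """
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        AX = A @ X
        theta = np.diag(X.T @ AX)            # current Ritz values
        R = AX - X * theta                   # residual block A x - theta x
        S, _ = np.linalg.qr(np.hstack([X, R]))
        H = S.T @ A @ S                      # Rayleigh-Ritz on span([X, R])
        _, evecs = np.linalg.eigh(H)
        X = S @ evecs[:, :k]                 # new Ritz vectors
    return np.diag(X.T @ (A @ X)), X
```

    Because the Rayleigh-Ritz subspace always contains the current block, the Ritz values decrease monotonically toward the smallest eigenvalues.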

  5. Research on cross - Project software defect prediction based on transfer learning

    NASA Astrophysics Data System (ADS)

    Chen, Ya; Ding, Xiaoming

    2018-04-01

    To address two challenges in cross-project software defect prediction, the distribution differences between the source and target project datasets and the class imbalance in the data, we propose a cross-project software defect prediction method based on transfer learning, named NTrA. First, the class imbalance of the source project data is resolved with the Augmented Neighborhood Cleaning Algorithm. Second, the data gravity method is used to assign different weights to source instances on the basis of the attribute similarity between the source and target project data. Finally, a defect prediction model is constructed using the TrAdaBoost algorithm. Experiments were conducted using NASA and SOFTLAB data from the published PROMISE dataset. The results show that the method achieves good recall and F-measure values and good prediction results.
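
    The abstract does not give the details of NTrA's data gravity weighting, so the following is only a hedged sketch of the general idea, weighting each source instance by its attribute similarity to the target project, in a mass-over-distance-squared style:

```python
import numpy as np

def gravity_weights(source_X, target_X):
    """Weight source instances by attribute similarity to the target project.

    Generic 'data gravity' sketch (NOT the paper's exact formula): an
    instance's 'mass' is the number of its attributes that fall inside the
    target project's per-attribute [min, max] range, and its 'distance'
    grows with the number of mismatched attributes; the weight is then
    mass / distance**2.
    """
    lo, hi = target_X.min(axis=0), target_X.max(axis=0)
    inside = ((source_X >= lo) & (source_X <= hi)).sum(axis=1)  # "mass"
    dist = 1.0 + (source_X.shape[1] - inside)                   # mismatch count
    return inside / dist**2

tgt = np.array([[0., 0., 0.], [1., 1., 1.]])      # target attribute ranges
src = np.array([[0.5, 0.5, 0.5], [2., 0.5, 5.]])  # one similar, one dissimilar
w = gravity_weights(src, tgt)
```

    Instances resembling the target project receive larger weights and therefore more influence when the boosted prediction model is trained.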

  6. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Park, Justin C.; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G.; Liu, Chihray; Lu, Bo

    2015-12-01

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm ‘the common mask guided image reconstruction’ (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes were then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and ‘well’ solved static volume respectively, the algorithm was able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) the standard FDK algorithm, (2) the conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm, respectively. 
To improve the efficiency of the algorithm, the code was implemented on a graphics processing unit for parallel processing purposes. The root mean square errors (RMSE) between the ground truth and the reconstructed volumes of the numerical phantom decreased in the order FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE of FDK, CTV, PICCS, MCIR and c-MGIR for all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31% ± 0.09%, 0.86% ± 0.11% and 0.52% ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR compared to the other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms, while requiring less dose and potentially less scanning time.
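
    A minimal sketch of the c-MGIR volume model described above, a phase-independent static part outside the common mask plus a phase-specific moving part inside it, assuming simple boolean array masks; the alternating sub-minimizations that estimate these parts are omitted:

```python
import numpy as np

def compose_phase_volumes(static, moving_phases, mask):
    """Assemble per-phase volumes from the common-mask decomposition.

    Each phase volume takes the phase-specific moving values where the
    common mask is True and the shared static values elsewhere. This only
    illustrates the decomposition, not the reconstruction itself.
    """
    return [np.where(mask, moving, static) for moving in moving_phases]

static = np.zeros((2, 2))                          # shared static background
mask = np.array([[True, False], [False, False]])   # common mask: moving region
phases = [np.full((2, 2), p) for p in (1.0, 2.0)]  # phase-specific content
vols = compose_phase_volumes(static, phases, mask)
```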

  7. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography.

    PubMed

    Park, Justin C; Zhang, Hao; Chen, Yunmei; Fan, Qiyong; Li, Jonathan G; Liu, Chihray; Lu, Bo

    2015-12-07

    Compared to 3D cone beam computed tomography (3D CBCT), the image quality of commercially available four-dimensional (4D) CBCT is severely impaired due to the insufficient amount of projection data available for each phase. Since the traditional Feldkamp-Davis-Kress (FDK)-based algorithm is infeasible for reconstructing high quality 4D CBCT images with limited projections, investigators have developed several compressed-sensing (CS) based algorithms to improve image quality. The aim of this study is to develop a novel algorithm which can provide better image quality than the FDK and other CS based algorithms with limited projections. We named this algorithm 'the common mask guided image reconstruction' (c-MGIR). In c-MGIR, the unknown CBCT volume is mathematically modeled as a combination of phase-specific motion vectors and phase-independent static vectors. The common-mask matrix, which is the key concept behind the c-MGIR algorithm, separates the common static part across all phase images from the possible moving part in each phase image. The moving part and the static part of the volumes were then alternately updated by solving two sub-minimization problems iteratively. As the novel mathematical transformation allows the static volume and moving volumes to be updated (during each iteration) with global projections and 'well' solved static volume respectively, the algorithm was able to reduce the noise and under-sampling artifact (an issue faced by other algorithms) to the maximum extent. To evaluate the performance of our proposed c-MGIR, we utilized imaging data from both numerical phantoms and a lung cancer patient. The qualities of the images reconstructed with c-MGIR were compared with (1) the standard FDK algorithm, (2) the conventional total variation (CTV) based algorithm, (3) the prior image constrained compressed sensing (PICCS) algorithm, and (4) the motion-map constrained image reconstruction (MCIR) algorithm, respectively. 
To improve the efficiency of the algorithm, the code was implemented on a graphics processing unit for parallel processing purposes. The root mean square errors (RMSE) between the ground truth and the reconstructed volumes of the numerical phantom decreased in the order FDK, CTV, PICCS, MCIR, and c-MGIR for all phases. Specifically, the means and standard deviations of the RMSE of FDK, CTV, PICCS, MCIR and c-MGIR for all phases were 42.64 ± 6.5%, 3.63 ± 0.83%, 1.31% ± 0.09%, 0.86% ± 0.11% and 0.52% ± 0.02%, respectively. The image quality of the patient case also indicated the superiority of c-MGIR compared to the other algorithms. The results indicated that clinically viable 4D CBCT images can be reconstructed while requiring no more projection data than a typical clinical 3D CBCT scan. This makes c-MGIR a potential online reconstruction algorithm for 4D CBCT, which can provide much better image quality than other available algorithms, while requiring less dose and potentially less scanning time.

  8. SU-F-J-198: A Cross-Platform Adaptation of An a Priori Scatter Correction Algorithm for Cone-Beam Projections to Enable Image- and Dose-Guided Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, A; Casares-Magaz, O; Elstroem, U

    Purpose: Cone-beam CT (CBCT) imaging may enable image- and dose-guided proton therapy, but is challenged by image artefacts. The aim of this study was to demonstrate the general applicability of a previously developed a priori scatter correction algorithm to allow CBCT-based proton dose calculations. Methods: The a priori scatter correction algorithm used a plan CT (pCT) and raw cone-beam projections acquired with the Varian On-Board Imager. The projections were initially corrected for bow-tie filtering and beam hardening and subsequently reconstructed using the Feldkamp-Davis-Kress algorithm (rawCBCT). The rawCBCTs were intensity normalised before rigid and deformable registrations were applied to register the pCTs to the rawCBCTs. The resulting images were forward projected onto the same angles as the raw CB projections. The two projections were subtracted from each other, Gaussian and median filtered, and then subtracted from the raw projections and finally reconstructed to the scatter-corrected CBCTs. For evaluation, water equivalent path length (WEPL) maps (from anterior to posterior) were calculated on different reconstructions of three data sets (CB projections and pCT) of three parts of an Alderson phantom. Finally, single beam spot scanning proton plans (0–360 deg gantry angle in steps of 5 deg; using PyTRiP) treating a 5 cm central spherical target in the pCT were re-calculated on scatter-corrected CBCTs with identical targets. Results: The scatter-corrected CBCTs resulted in sub-mm mean WEPL differences relative to the rigid registration of the pCT for all three data sets. These differences were considerably smaller than what was achieved with the regular Varian CBCT reconstruction algorithm (1–9 mm mean WEPL differences). Target coverage in the re-calculated plans was generally improved using the scatter-corrected CBCTs compared to the Varian CBCT reconstruction. 
Conclusion: We have demonstrated the general applicability of a priori CBCT scatter correction, potentially opening the way for CBCT-based image- and dose-guided proton therapy, including adaptive strategies. Research agreement with Varian Medical Systems, not connected to the present project.

  9. Reconstruction of brachytherapy seed positions and orientations from cone-beam CT x-ray projections via a novel iterative forward projection matching method.

    PubMed

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2011-01-01

    To generalize and experimentally validate a novel algorithm for reconstructing the 3D pose (position and orientation) of implanted brachytherapy seeds from a set of a few measured 2D cone-beam CT (CBCT) x-ray projections. The iterative forward projection matching (IFPM) algorithm was generalized to reconstruct the 3D pose, as well as the centroid, of brachytherapy seeds from three to ten measured 2D projections. The gIFPM algorithm finds the set of seed poses that minimizes the sum-of-squared-difference of the pixel-by-pixel intensities between computed and measured autosegmented radiographic projections of the implant. Numerical simulations of clinically realistic brachytherapy seed configurations were performed to demonstrate the proof of principle. An in-house machined brachytherapy phantom, which supports precise specification of seed position and orientation at known values for simulated implant geometries, was used to experimentally validate this algorithm. The phantom was scanned on an ACUITY CBCT digital simulator over a full 660 sinogram projections. Three to ten x-ray images were selected from the full set of CBCT sinogram projections and postprocessed to create binary seed-only images. In the numerical simulations, seed reconstruction position and orientation errors were approximately 0.6 mm and 5 degrees, respectively. The physical phantom measurements demonstrated an absolute positional accuracy of (0.78 +/- 0.57) mm or less. The theta and phi angle errors were found to be (5.7 +/- 4.9) degrees and (6.0 +/- 4.1) degrees, respectively, or less when using three projections; with six projections, results were slightly better. The mean registration error was better than 1 mm/6 degrees compared to the measured seed projections. Each test trial converged in 10-20 iterations with computation time of 12-18 min/iteration on a 1 GHz processor. 
This work describes a novel, accurate, and completely automatic method for reconstructing seed orientations, as well as centroids, from a small number of radiographic projections, in support of intraoperative planning and adaptive replanning. Unlike standard back-projection methods, gIFPM avoids the need to match corresponding seed images on the projections. This algorithm also successfully reconstructs overlapping clustered and highly migrated seeds in the implant. The accuracy of better than 1 mm and 6 degrees demonstrates that gIFPM has the potential to support 2D Task Group 43 calculations in clinical practice.

  10. Reconstruction of brachytherapy seed positions and orientations from cone-beam CT x-ray projections via a novel iterative forward projection matching method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.

    2011-01-15

    Purpose: To generalize and experimentally validate a novel algorithm for reconstructing the 3D pose (position and orientation) of implanted brachytherapy seeds from a set of a few measured 2D cone-beam CT (CBCT) x-ray projections. Methods: The iterative forward projection matching (IFPM) algorithm was generalized to reconstruct the 3D pose, as well as the centroid, of brachytherapy seeds from three to ten measured 2D projections. The gIFPM algorithm finds the set of seed poses that minimizes the sum-of-squared-difference of the pixel-by-pixel intensities between computed and measured autosegmented radiographic projections of the implant. Numerical simulations of clinically realistic brachytherapy seed configurations were performed to demonstrate the proof of principle. An in-house machined brachytherapy phantom, which supports precise specification of seed position and orientation at known values for simulated implant geometries, was used to experimentally validate this algorithm. The phantom was scanned on an ACUITY CBCT digital simulator over a full 660 sinogram projections. Three to ten x-ray images were selected from the full set of CBCT sinogram projections and postprocessed to create binary seed-only images. Results: In the numerical simulations, seed reconstruction position and orientation errors were approximately 0.6 mm and 5 deg., respectively. The physical phantom measurements demonstrated an absolute positional accuracy of (0.78 ± 0.57) mm or less. The θ and φ angle errors were found to be (5.7 ± 4.9) deg. and (6.0 ± 4.1) deg., respectively, or less when using three projections; with six projections, results were slightly better. The mean registration error was better than 1 mm/6 deg. compared to the measured seed projections. Each test trial converged in 10-20 iterations with computation time of 12-18 min/iteration on a 1 GHz processor. 
Conclusions: This work describes a novel, accurate, and completely automatic method for reconstructing seed orientations, as well as centroids, from a small number of radiographic projections, in support of intraoperative planning and adaptive replanning. Unlike standard back-projection methods, gIFPM avoids the need to match corresponding seed images on the projections. This algorithm also successfully reconstructs overlapping clustered and highly migrated seeds in the implant. The accuracy of better than 1 mm and 6 deg. demonstrates that gIFPM has the potential to support 2D Task Group 43 calculations in clinical practice.
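
    The pixel-wise sum-of-squared-difference matching that gIFPM performs can be illustrated with a one-parameter toy search, where a horizontal shift stands in for the full 3D seed pose and `np.roll` stands in for forward projection:

```python
import numpy as np

def ssd(a, b):
    """Sum-of-squared-difference between two projection images."""
    d = a.astype(float) - b.astype(float)
    return float(np.sum(d * d))

def match_shift(measured, template, max_shift=5):
    """Toy forward-projection matching over a single pose parameter.

    Slides a computed seed projection over the measured one and keeps the
    shift with the smallest SSD, mimicking gIFPM's objective; the real
    algorithm searches positions and orientations of every seed in 3D.
    """
    return min(
        (ssd(measured, np.roll(template, s, axis=1)), s)
        for s in range(-max_shift, max_shift + 1)
    )[1]

img = np.zeros((16, 16))
img[7:9, 8:10] = 1.0                 # synthetic 'computed' seed projection
meas = np.roll(img, 3, axis=1)       # 'measured' projection, shifted 3 px
shift = match_shift(meas, img)
```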

  11. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements.

    PubMed

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K; Cai, Chang; Nagarajan, Srikantan S

    2018-06-01

    Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.

  12. Beamspace dual signal space projection (bDSSP): a method for selective detection of deep sources in MEG measurements

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Adachi, Yoshiaki; Kubota, Hiroshi K.; Cai, Chang; Nagarajan, Srikantan S.

    2018-06-01

    Objective. Magnetoencephalography (MEG) has a well-recognized weakness at detecting deeper brain activities. This paper proposes a novel algorithm for selective detection of deep sources by suppressing interference signals from superficial sources in MEG measurements. Approach. The proposed algorithm combines the beamspace preprocessing method with the dual signal space projection (DSSP) interference suppression method. A prerequisite of the proposed algorithm is prior knowledge of the location of the deep sources. The proposed algorithm first derives the basis vectors that span a local region just covering the locations of the deep sources. It then estimates the time-domain signal subspace of the superficial sources by using the projector composed of these basis vectors. Signals from the deep sources are extracted by projecting the row space of the data matrix onto the direction orthogonal to the signal subspace of the superficial sources. Main results. Compared with the previously proposed beamspace signal space separation (SSS) method, the proposed algorithm is capable of suppressing much stronger interference from superficial sources. This capability is demonstrated in our computer simulation as well as experiments using phantom data. Significance. The proposed bDSSP algorithm can be a powerful tool in studies of physiological functions of midbrain and deep brain structures.
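
    The final projection step of such DSSP-style methods, removing the superficial-source signal subspace from the row space of the data, can be sketched with a plain orthogonal projector; the estimation of the interference subspace itself is omitted here:

```python
import numpy as np

def remove_interference(B, V_interf):
    """Project the rows of a data matrix away from an interference subspace.

    B is the (n_channels, n_samples) sensor data; V_interf is an
    (n_samples, r) matrix whose columns span the estimated time-domain
    signal subspace of the superficial sources. Projecting the row space of
    B onto the orthogonal complement of that subspace leaves the
    deep-source contribution.
    """
    Q, _ = np.linalg.qr(V_interf)            # orthonormal interference basis
    P_perp = np.eye(B.shape[1]) - Q @ Q.T    # projector onto the complement
    return B @ P_perp
```

    On synthetic data built from orthogonal interference and deep-source time courses, the projection removes the interference component exactly.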

  13. IN VITRO EFFECTS OF ORGANOPHOSPHORUS ANTICHOLINESTERASES AND MUSCARINIC AGONISTS ON RAT BRAIN SYNAPTOSOMAL HIGH AFFINITY CHOLINE UPTAKE. (R825811)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  14. Existing School Buildings: Incremental Seismic Retrofit Opportunities.

    ERIC Educational Resources Information Center

    Federal Emergency Management Agency, Washington, DC.

    The intent of this document is to provide technical guidance to school district facility managers for linking specific incremental seismic retrofit opportunities to specific maintenance and capital improvement projects. The linkages are based on logical affinities, such as technical fit, location of the work within the building, cost saving…

  15. mSciences: An Affinity Space for Science Teachers

    ERIC Educational Resources Information Center

    Mota, Jorge; Morais, Carla; Moreira, Luciano; Paiva, João C.

    2017-01-01

    The project "Multimedia in science teaching: five years of research and teaching in Portugal" was successful in featuring the national research on multimedia in science education and in providing the community with a simple reference tool--a repository of open access scientific texts. The current work aims to describe the theoretical…

  16. Student-Initiated Use of Facebook for Learning Online Journalism

    ERIC Educational Resources Information Center

    Song, Yang

    2017-01-01

    This article presents a case study of student-initiated use of Facebook Groups in doing a team project for an online journalism course. Drawing upon the concept of affinity space and a theoretical taxonomy of asynchronous online discussion, the present study triangulates classroom observation, semi-structured student interviews, and microanalysis…

  17. Short-Scan Fan-Beam Algorithms for CT

    NASA Astrophysics Data System (ADS)

    Naparstek, Abraham

    1980-06-01

    Several short-scan reconstruction algorithms of the convolution type for fan-beam projections are presented and discussed. Their derivation from new, exact integral representation formulas is outlined, and the performance of some of these algorithms is demonstrated with the aid of simulation results.

  18. Atlas-based segmentation of brainstem regions in neuromelanin-sensitive magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Puigvert, Marc; Castellanos, Gabriel; Uranga, Javier; Abad, Ricardo; Fernández-Seara, María. A.; Pastor, Pau; Pastor, María. A.; Muñoz-Barrutia, Arrate; Ortiz de Solórzano, Carlos

    2015-03-01

    We present a method for the automatic delineation of two neuromelanin-rich brainstem structures, the substantia nigra pars compacta (SN) and the locus coeruleus (LC), in neuromelanin-sensitive magnetic resonance images of the brain. The segmentation method uses a dynamic multi-image reference atlas and a pre-registration atlas selection strategy. To create the atlas, a pool of 35 images of healthy subjects was pair-wise pre-registered and clustered into groups using an affinity propagation approach. Each group of the atlas is represented by a single exemplar image. Each new target image to be segmented is registered to the exemplars of each cluster. All the images of the highest-performing clusters are then enrolled into the final atlas, and the results of the registration with the target image are propagated using a majority voting approach. All registration processes combined a two-stage affine algorithm and an elastic B-spline algorithm, to account for global positioning, region selection and local anatomic differences. In this paper, we present the algorithm, with emphasis on the atlas selection method and the registration scheme. We evaluate the performance of the atlas selection strategy using 35 healthy subjects and 5 Parkinson's disease patients. We then quantified the volume and contrast ratio of the neuromelanin signal of these structures in 47 normal subjects and 40 Parkinson's disease patients to confirm that this method can detect the loss of neuromelanin-containing neurons in Parkinson's disease patients and could eventually be used for the early detection of SN and LC damage.
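
    The majority-voting label fusion applied after atlas registration can be sketched as follows; the label values 0/1/2 are hypothetical placeholders for background/SN/LC, and registration itself is not shown:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse propagated atlas labels by per-voxel majority voting.

    label_maps is a sequence of equally shaped integer label arrays, one per
    registered atlas image; each voxel receives the label that the largest
    number of atlases agree on.
    """
    stack = np.stack(label_maps)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return np.argmax(votes, axis=0)
```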

  19. Sampling probability distributions of lesions in mammograms

    NASA Astrophysics Data System (ADS)

    Looney, P.; Warren, L. M.; Dance, D. R.; Young, K. C.

    2015-03-01

    One approach to image perception studies in mammography using virtual clinical trials involves the insertion of simulated lesions into normal mammograms. To facilitate this, a method has been developed that allows for sampling of lesion positions across the cranio-caudal and medio-lateral radiographic projections in accordance with measured distributions of real lesion locations. 6825 mammograms from our mammography image database were segmented to find the breast outline. The outlines were averaged and smoothed to produce an average outline for each laterality and radiographic projection. Lesions in 3304 mammograms with malignant findings were mapped on to a standardised breast image corresponding to the average breast outline using piecewise affine transforms. A four-dimensional probability distribution function was found from the lesion locations in the cranio-caudal and medio-lateral radiographic projections for calcification and noncalcification lesions. Lesion locations sampled from this probability distribution function were mapped on to individual mammograms using a piecewise affine transform which transforms the average outline to the outline of the breast in the mammogram. The four-dimensional probability distribution function was validated by comparing it to the two-dimensional distributions found by considering each radiographic projection and laterality independently. The correlation of the location of the lesions sampled from the four-dimensional probability distribution function across radiographic projections was shown to match the correlation of the locations of the original mapped lesion locations. The current system has been implemented as a web service on a server using the Python Django framework. The server performs the sampling, performs the mapping and returns the results in JavaScript Object Notation (JSON) format.
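
    The sampling step, drawing lesion locations from a measured discrete probability distribution, can be sketched as below; the piecewise affine mapping onto an individual mammogram is omitted, and the toy PDF shape is an assumption:

```python
import numpy as np

def sample_lesion_locations(pdf, n, rng=None):
    """Draw n grid locations from a discrete joint probability distribution.

    pdf is a non-negative array of any dimensionality (e.g. the paper's 4D
    distribution over cranio-caudal and medio-lateral positions); flat
    indices are drawn with probability proportional to pdf and unravelled
    back to grid coordinates.
    """
    rng = np.random.default_rng(rng)
    p = pdf.ravel() / pdf.sum()
    idx = rng.choice(p.size, size=n, p=p)
    return np.unravel_index(idx, pdf.shape)

# toy 4D PDF with all mass in one cell, so every sample must land there
pdf = np.zeros((3, 4, 2, 2))
pdf[1, 2, 0, 1] = 7.0
cc, ml, a, b = sample_lesion_locations(pdf, 5, rng=0)
```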

  20. Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces

    DTIC Science & Technology

    2011-02-28

    Final Report for AFOSR #FA9550-08-1-0422, Landscape Analysis and Algorithm Development for Plateau Plagued Search Spaces, August 1, 2008 to November 30...focused on developing high-level general-purpose algorithms, such as Tabu Search and Genetic Algorithms. However, understanding of when and why these... algorithms perform well still lags. Our project extended the theory of certain combinatorial optimization problems to develop analytical

  1. An improved non-uniformity correction algorithm and its hardware implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Rong, Shenghui; Zhou, Huixin; Wen, Zhigang; Qin, Hanlin; Qian, Kun; Cheng, Kuanhong

    2017-09-01

    The non-uniformity of Infrared Focal Plane Arrays (IRFPA) severely degrades infrared image quality, so an effective non-uniformity correction (NUC) algorithm is necessary for an IRFPA imaging and application system. However, traditional scene-based NUC algorithms suffer from image blurring and artificial ghosting, and few effective hardware platforms have been proposed to implement the corresponding NUC algorithms. This paper therefore proposes an improved neural-network-based NUC algorithm that combines a guided image filter with a projection-based motion detection algorithm. First, the guided image filter is used to obtain an accurate desired image, which reduces artificial ghosting. Then, a projection-based motion detection algorithm determines whether the correction coefficients should be updated, which overcomes the problem of image blurring. Finally, an FPGA-based hardware design is introduced to realize the proposed NUC algorithm. Real and simulated infrared image sequences are used to verify the performance of the proposed algorithm. Experimental results indicate that it effectively eliminates fixed-pattern noise with less image blurring and artificial ghosting, while the hardware design uses fewer FPGA logic elements and fewer clock cycles to process one frame.
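    The projection-based gating of the coefficient update can be sketched as follows. This is a generic illustration of the idea, not the paper's exact rule; the threshold, test scene, and decision logic are assumptions made for the example:

```python
import numpy as np

def motion_detected(prev, curr, thresh=0.05):
    # Compare row and column projections (mean intensity profiles) of
    # consecutive frames; a large change suggests scene motion, in which
    # case the NUC correction coefficients should not be updated.
    # (Threshold and decision rule are illustrative assumptions.)
    row_diff = np.abs(prev.mean(axis=1) - curr.mean(axis=1)).max()
    col_diff = np.abs(prev.mean(axis=0) - curr.mean(axis=0)).max()
    return bool(max(row_diff, col_diff) > thresh)

frame = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # simple static scene
static = frame + 0.001                               # small drift only
moved = np.roll(frame, 5, axis=1)                    # simulated panning

print(motion_detected(frame, static))  # False -> safe to update coefficients
print(motion_detected(frame, moved))   # True  -> freeze the update
```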

  2. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.
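    As a reference point for the classical algorithms named above, a textbook MUSIC pseudospectrum for a uniform linear array can be sketched as follows; the array geometry, source, and noise level are illustrative, not taken from the review:

```python
import numpy as np

def music_spectrum(snapshots, n_sources, angles_deg, n_sensors, spacing=0.5):
    # Classical MUSIC: eigendecompose the sample covariance, take the
    # noise subspace, and scan steering vectors over candidate angles.
    # `spacing` is the element spacing in wavelengths.
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    w, v = np.linalg.eigh(R)                       # ascending eigenvalues
    En = v[:, : n_sensors - n_sources]             # noise subspace
    spec = []
    for a in np.deg2rad(angles_deg):
        steer = np.exp(2j * np.pi * spacing * np.arange(n_sensors) * np.sin(a))
        spec.append(1.0 / np.linalg.norm(En.conj().T @ steer) ** 2)
    return np.array(spec)

# Simulate one narrowband source at +20 degrees on an 8-element array.
rng = np.random.default_rng(2)
n_sensors, n_snap, true_doa = 8, 200, 20.0
steer = np.exp(2j * np.pi * 0.5 * np.arange(n_sensors)
               * np.sin(np.deg2rad(true_doa)))
sig = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_sensors, n_snap))
               + 1j * rng.standard_normal((n_sensors, n_snap)))
X = np.outer(steer, sig) + noise

angles = np.arange(-90, 91)
spec = music_spectrum(X, 1, angles, n_sensors)
print(angles[spec.argmax()])  # peak near the true 20-degree direction
```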

  3. A Unified Satellite-Observation Polar Stratospheric Cloud (PSC) Database for Long-Term Climate-Change Studies

    NASA Technical Reports Server (NTRS)

    Fromm, Michael; Pitts, Michael; Alfred, Jerome

    2000-01-01

    This report summarizes the project team's activity and accomplishments during the period 12 February 1999 - 12 February 2000. The primary objective of this project was to create and test a generic algorithm for detecting polar stratospheric clouds (PSC), an algorithm that would permit creation of a unified, long-term PSC database from a variety of solar occultation instruments that measure aerosol extinction near 1000 nm. The second objective was to make a database of PSC observations and certain relevant related datasets. In this report we describe the algorithm, the data we are making available, and user access options. The remainder of this document provides the details of the algorithm and the database offering.

  4. Parallel fuzzy connected image segmentation on GPU

    PubMed Central

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. Methods: In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU, yielding a dramatic improvement in speed for both tasks. Results: Experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the CPU implementation, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. Conclusions: The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set. PMID:21859037

  5. Parallel fuzzy connected image segmentation on GPU.

    PubMed

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on the GPU, yielding a dramatic improvement in speed for both tasks. Experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 24.4x, 18.1x, and 10.3x, respectively, for the three data sets on the NVIDIA Tesla C1060 over the CPU implementation, and takes 0.25, 0.72, and 15.04 s, respectively, for the three data sets. The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set.
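    Task (i), the fuzzy affinity computation, can be illustrated with one common intensity-based affinity between adjacent pixels. This is a generic sketch of the kind of per-pixel-pair kernel that parallelizes well on a GPU, not necessarily the exact affinity used by the authors; the Gaussian parameters and test image are assumptions:

```python
import numpy as np

def fuzzy_affinity(img, mean, sigma):
    # A common affinity choice between 4-adjacent pixels: a Gaussian of
    # the average intensity of each pixel pair. Each pair is independent,
    # which is why this maps naturally onto a per-thread GPU kernel.
    right = (img[:, :-1] + img[:, 1:]) / 2.0   # horizontal neighbour pairs
    down = (img[:-1, :] + img[1:, :]) / 2.0    # vertical neighbour pairs
    aff_r = np.exp(-((right - mean) ** 2) / (2 * sigma ** 2))
    aff_d = np.exp(-((down - mean) ** 2) / (2 * sigma ** 2))
    return aff_r, aff_d

img = np.zeros((5, 5))
img[1:4, 1:4] = 1.0                            # bright square "object"
aff_r, aff_d = fuzzy_affinity(img, mean=1.0, sigma=0.2)
print(aff_r.max(), aff_r.min())  # near 1 inside the object, near 0 elsewhere
```

Task (ii), propagating the connectedness (a max-min shortest-path computation over these affinities), is the part that requires the more involved parallel scheme described in the paper.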

  6. Free Energy Perturbation Calculations of the Thermodynamics of Protein Side-Chain Mutations.

    PubMed

    Steinbrecher, Thomas; Abel, Robert; Clark, Anthony; Friesner, Richard

    2017-04-07

    Protein side-chain mutation is fundamental both to natural evolutionary processes and to the engineering of protein therapeutics, which constitute an increasing fraction of important medications. Molecular simulation enables the prediction of the effects of mutation on properties such as binding affinity, secondary and tertiary structure, conformational dynamics, and thermal stability. A number of widely differing approaches have been applied to these predictions, including sequence-based algorithms, knowledge-based potential functions, and all-atom molecular mechanics calculations. Free energy perturbation theory, employing all-atom and explicit-solvent molecular dynamics simulations, is a rigorous physics-based approach for calculating thermodynamic effects of, for example, protein side-chain mutations. Over the past several years, we have initiated an investigation of the ability of our most recent free energy perturbation methodology to model the thermodynamics of protein mutation for two specific problems: protein-protein binding affinities and protein thermal stability. We highlight recent advances in the field and outline current and future challenges. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. FPGA Implementation of the Coupled Filtering Method and the Affine Warping Method.

    PubMed

    Zhang, Chen; Liang, Tianzhu; Mok, Philip K T; Yu, Weichuan

    2017-07-01

    In ultrasound image analysis, speckle tracking methods are widely applied to study the elasticity of body tissue. However, "feature-motion decorrelation" remains a challenge for these methods. Recently, a coupled filtering method and an affine warping method were proposed to accurately estimate strain values when the tissue deformation is large. The major drawback of these methods is their high computational complexity; even a graphics processing unit (GPU)-based program requires a long time to finish the analysis. In this paper, we propose field-programmable gate array (FPGA)-based implementations of both methods for further acceleration. The capability of FPGAs to handle the different image processing components in these methods is discussed. A fast and memory-saving image warping approach is proposed, and the algorithms are reformulated to build a highly efficient pipeline on the FPGA. The final implementations on a Xilinx Virtex-7 FPGA are at least 13 times faster than the GPU implementation on an NVIDIA graphics card (GeForce GTX 580).

  8. ASR-9 processor augmentation card (9-PAC) phase II scan-scan correlator algorithms

    DOT National Transportation Integrated Search

    2001-04-26

    The report documents the scan-scan correlator (tracker) algorithm developed for Phase II of the ASR-9 Processor Augmentation Card (9-PAC) project. The improved correlation and tracking algorithms in 9-PAC Phase II decrease the incidence of false-alar...

  9. Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing

    PubMed Central

    Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang

    2018-01-01

    Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of a light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. The LED target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, feature extraction algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system was applied in a hot water pipeline installation project, with accuracy controlled to within 2 mm over a 48-m distance, providing accurate line and grade control and verifying the feasibility and reliability of the guidance system. PMID:29462855

  10. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Krauthammer, Prof. Michael

    2010-01-01

    There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of 0.60. We provide a C++ implementation of our algorithm freely available for academic use.
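    The projection-histogram idea can be sketched as follows: sum ink along rows of a binarised image and group rows with enough ink into candidate text lines. The run-grouping rule and threshold here are illustrative assumptions rather than the authors' exact algorithm, and the iterative scheme in the paper alternates this with the vertical projection inside each detected region:

```python
import numpy as np

def text_line_rows(binary, min_ink=1):
    # Horizontal projection histogram: ink pixels per row.
    hist = binary.sum(axis=1)
    rows = hist >= min_ink
    # Group consecutive "inked" rows into (start, end) runs; each run is
    # a candidate text line to be refined by a vertical projection pass.
    runs, start = [], None
    for i, on in enumerate(rows):
        if on and start is None:
            start = i
        elif not on and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(rows)))
    return runs

img = np.zeros((12, 20), dtype=int)
img[2:4, 3:15] = 1    # first "text line"
img[7:9, 2:18] = 1    # second "text line"
print(text_line_rows(img))  # [(2, 4), (7, 9)]
```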

  11. Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.

    PubMed

    Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang

    2018-02-15

    Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of a light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED light target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, direction location algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system was applied in a hot water pipeline installation project, with accuracy controlled to within 2 mm over a 48-m distance, providing accurate line and grade control and verifying the feasibility and reliability of the guidance system.

  12. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    NASA Astrophysics Data System (ADS)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long, so saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in table form. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements, as well as its better performance in terms of energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in wireless sensor networks (WSN). The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  13. Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves

    PubMed Central

    Ye, Yangbo; Zhao, Shiying; Wang, Ge

    2006-01-01

    We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results with image quality in the ROI comparable to that of the corresponding global CT reconstruction. PMID:23165018

  14. Software Management Environment (SME): Components and algorithms

    NASA Technical Reports Server (NTRS)

    Hendrick, Robert; Kistler, David; Valett, Jon

    1994-01-01

    This document presents the components and algorithms of the Software Management Environment (SME), a management tool developed for the Software Engineering Branch (Code 552) of the Flight Dynamics Division (FDD) of the Goddard Space Flight Center (GSFC). The SME provides an integrated set of visually oriented, experience-based tools that can assist software development managers in managing and planning software development projects. This document describes and illustrates the analysis functions that underlie the SME's project monitoring, estimation, and planning tools. 'SME Components and Algorithms' is a companion reference to 'SME Concepts and Architecture' and 'Software Engineering Laboratory (SEL) Relationships, Models, and Management Rules.'

  15. Anytime synthetic projection: Maximizing the probability of goal satisfaction

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Bresina, John L.

    1990-01-01

    A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.

  16. TR-BREATH: Time-Reversal Breathing Rate Estimation and Detection.

    PubMed

    Chen, Chen; Han, Yi; Chen, Yan; Lai, Hung-Quoc; Zhang, Feng; Wang, Beibei; Liu, K J Ray

    2018-03-01

    In this paper, we introduce TR-BREATH, a time-reversal (TR)-based contact-free breathing monitoring system. It is capable of breathing detection and multiperson breathing rate estimation within a short period of time using off-the-shelf WiFi devices. The proposed system exploits the channel state information (CSI) to capture the miniature variations in the environment caused by breathing. To magnify the CSI variations, TR-BREATH projects CSIs into the TR resonating strength (TRRS) feature space and analyzes the TRRS with the Root-MUSIC and affinity propagation algorithms. Extensive indoor experimental results demonstrate a perfect detection rate of breathing. With only 10 s of measurement, a mean accuracy of can be obtained for single-person breathing rate estimation under the non-line-of-sight (NLOS) scenario. Furthermore, it achieves a mean accuracy of in breathing rate estimation for a dozen people under the line-of-sight scenario and a mean accuracy of in breathing rate estimation of nine people under the NLOS scenario, both with 63 s of measurement. Moreover, TR-BREATH can estimate the number of people with an error around 1. We also demonstrate that TR-BREATH is robust against packet loss and motions. Given the prevalence of WiFi, TR-BREATH can be applied for in-home and real-time breathing monitoring.

  17. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    NASA Astrophysics Data System (ADS)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated by a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications, including Gaussian mixture models, color transfer, and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions.
It is proved that the preconditioning procedure minimizes the remaining transportation cost among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems. For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method for one-dimensional spectra measured along ship or aircraft tracks in Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026). While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.
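    For Gaussian endpoints the optimal L2 transport map is itself affine and available in closed form, which illustrates the kind of affine preconditioning described above. The sketch below uses this standard closed-form result as an example, not the thesis's full procedure; means, covariances, and sample size are arbitrary:

```python
import numpy as np

def psd_sqrt(M):
    # Symmetric PSD matrix square root via eigendecomposition.
    w, v = np.linalg.eigh(M)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.T

def gaussian_ot_map(m1, S1, m2, S2):
    # Optimal transport map between N(m1, S1) and N(m2, S2):
    # T(x) = m2 + A (x - m1), with A = S1^{-1/2} (S1^{1/2} S2 S1^{1/2})^{1/2} S1^{-1/2}.
    # Exact for Gaussian endpoints; for general measures an affine map of
    # this form is only a preconditioning step.
    r = psd_sqrt(S1)
    r_inv = np.linalg.inv(r)
    A = r_inv @ psd_sqrt(r @ S2 @ r) @ r_inv
    return lambda x: m2 + (x - m1) @ A.T

rng = np.random.default_rng(3)
m1, m2 = np.array([0.0, 0.0]), np.array([3.0, -1.0])
S1 = np.array([[2.0, 0.5], [0.5, 1.0]])
S2 = np.array([[1.0, -0.2], [-0.2, 0.5]])
x = rng.multivariate_normal(m1, S1, size=5000)
y = gaussian_ot_map(m1, S1, m2, S2)(x)
print(np.round(y.mean(axis=0), 1))  # close to m2
print(np.round(np.cov(y.T), 1))     # close to S2
```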

  19. Inter-slice bidirectional registration-based segmentation of the prostate gland in MR and CT image sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khalvati, Farzad, E-mail: farzad.khalvati@uwaterloo.ca; Tizhoosh, Hamid R.; Salmanpour, Aryan

    2013-12-15

    Purpose: Accurate segmentation and volume estimation of the prostate gland in magnetic resonance (MR) and computed tomography (CT) images are necessary steps in diagnosis, treatment, and monitoring of prostate cancer. This paper presents an algorithm for prostate gland volume estimation based on the semiautomated segmentation of individual slices in T2-weighted MR and CT image sequences. Methods: The proposed Inter-Slice Bidirectional Registration-based Segmentation (iBRS) algorithm relies on interslice image registration of volume data to segment the prostate gland without the use of an anatomical atlas. It requires the user to mark only three slices in a given volume dataset, i.e., the first, middle, and last slices. Next, the proposed algorithm uses a registration algorithm to autosegment the remaining slices. We conducted comprehensive experiments to measure the performance of the proposed algorithm using three registration methods (i.e., rigid, affine, and nonrigid techniques). Results: The results with the proposed technique were compared with manual marking using prostate MR and CT images from 117 patients. Manual marking was performed by an expert user for all 117 patients. The median accuracies for individual slices measured using the Dice similarity coefficient (DSC) were 92% and 91% for MR and CT images, respectively. The iBRS algorithm was also evaluated regarding user variability, which confirmed that the algorithm was robust to interuser variability when marking the prostate gland. Conclusions: The proposed algorithm exploits the interslice data redundancy of the images in a volume dataset of MR and CT images and eliminates the need for an atlas, minimizing the computational cost while producing highly accurate results which are robust to interuser variability.

  20. Information technologies for taking into account risks in business development programme

    NASA Astrophysics Data System (ADS)

    Kalach, A. V.; Khasianov, R. R.; Rossikhina, L. V.; Zybin, D. G.; Melnik, A. A.

    2018-05-01

    The paper describes information technologies for taking risks into account in a business development programme, which rely on an algorithm for assessing programme project risks and an algorithm for forming the programme with constrained financing of high-risk projects taken into account. A lower-bound estimation method is suggested for subsets of solutions. The corresponding theorem and lemma are given together with their proofs.

  1. Non-Algorithmic Issues in Automated Computational Mechanics

    DTIC Science & Technology

    1991-04-30

    Tworzydlo, Senior Research Engineer and Manager of Advanced Projects Group. Professor J. T. Oden, President and Senior Scientist of COMCO, was project...practical applications of the systems reported so far is due to the extremely arduous and complex development and management of a realistic knowledge base...software, designed to effectively implement deep, algorithmic knowledge, and "intelligent" software, designed to manage shallow, heuristic

  2. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiu, Dongbin

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations at extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately resolve, in high-dimensional spaces, stochastic problems with limited smoothness, even those containing discontinuities.

  3. Theory and Applications of Computational Time-Reversal Imaging

    DTIC Science & Technology

    2007-05-03

    experimental data collected by a research team from Carnegie Mellon University illustrating the use of the algorithms developed in the project. The final...2.1 Early Results from CMU experimental data; 2.1.1 Basic Time Reversal Imaging; 2.1.2 Time... experimental data collected by Carnegie Mellon University illustrating the use of the algorithms developed in the project.

  4. Lithiated Metal Clusters - Structures, Energy and Reactivity

    DTIC Science & Technology

    2016-04-01

    projection superposition approximation (PSA) algorithm through a more careful consideration of how to calculate cross sections for elongated molecules...superposition approximation (PSA) is now complete. We have made it available free of charge to the scientific community on a dedicated website at UCSB. We...by AFOSR. We continued to improve the projection superposition approximation (PSA) algorithm through a more careful consideration of how to calculate

  5. Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization

    DTIC Science & Technology

    2010-03-31

    optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop...optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the...integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested

  6. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2013-01-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
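    The general projected gradient scheme described above alternates a gradient step with a Euclidean projection back onto the feasible domain. A minimal sketch follows, using a nonnegativity constraint as a hypothetical stand-in for the paper's trace-norm/cardinality domain (the projection here is a simplification for illustration, not the authors' subproblem):

    ```python
    import numpy as np

    def project_nonneg(w):
        """Euclidean projection onto the nonnegative orthant (a stand-in for
        the paper's more involved trace-norm/cardinality projection)."""
        return np.maximum(w, 0.0)

    def projected_gradient(A, b, step=None, iters=500):
        """Minimize 0.5*||Aw - b||^2 subject to w >= 0 via projected gradient."""
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the LS gradient
        w = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ w - b)              # gradient of the smooth loss
            w = project_nonneg(w - step * grad)   # step, then project back
        return w
    ```

    With a step of 1/L (L the Lipschitz constant of the gradient), each iterate stays feasible and the objective decreases monotonically, which is the convergence mechanism the abstract alludes to.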

  7. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks.

    PubMed

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2012-02-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.

  8. Robust inverse-consistent affine CT-MR registration in MRI-assisted and MRI-alone prostate radiation therapy.

    PubMed

    Rivest-Hénault, David; Dowson, Nicholas; Greer, Peter B; Fripp, Jurgen; Dowling, Jason A

    2015-07-01

    CT-MR registration is a critical component of many radiation oncology protocols. In prostate external beam radiation therapy, it allows the propagation of MR-derived contours to reference CT images at the planning stage, and it enables dose mapping during dosimetry studies. The use of carefully registered CT-MR atlases allows the estimation of patient-specific electron density maps from MRI scans, enabling MRI-alone radiation therapy planning and treatment adaptation. In all cases, the precision and accuracy achieved by registration influences the quality of the entire process. Most current registration algorithms do not robustly generalize and lack inverse-consistency, increasing the risk of human error and acting as a source of bias in studies where information is propagated in a particular direction, e.g. CT to MR or vice versa. In MRI-based treatment planning where both CT and MR scans serve as spatial references, inverse-consistency is critical, if under-acknowledged. A robust, inverse-consistent, rigid/affine registration algorithm that is well suited to CT-MR alignment in prostate radiation therapy is presented. The presented method is based on a robust block-matching optimization process that utilises a half-way space definition to maintain inverse-consistency. Inverse-consistency substantially reduces the influence of the order of input images, simplifying analysis, and increasing robustness. An open source implementation is available online at http://aehrc.github.io/Mirorr/. Experimental results on a challenging 35 CT-MR pelvis dataset demonstrate that the proposed method is more accurate than other popular registration packages and is at least as accurate as the state of the art, while being more robust and having an order of magnitude higher inverse-consistency than competing approaches. The presented results demonstrate that the proposed registration algorithm is readily applicable to prostate radiation therapy planning.

  9. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    PubMed Central

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-01-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378

  10. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    NASA Astrophysics Data System (ADS)

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.
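    The OSEM-TV core of SMEIR builds on the standard expectation-maximization update for Poisson projection data. A minimal sketch of plain MLEM follows (OSEM with a single subset reduces to this update; the TV penalty and the motion-model/DVF steps are omitted, and the system matrix `A` is a toy stand-in):

    ```python
    import numpy as np

    def mlem(A, y, iters=200):
        """Maximum-likelihood EM for emission tomography, y ~ Poisson(A x).
        Multiplicative update: x <- x * A^T(y / Ax) / A^T 1."""
        x = np.ones(A.shape[1])                    # flat initial activity
        sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
        for _ in range(iters):
            proj = A @ x                           # forward projection
            ratio = np.where(proj > 0, y / proj, 0.0)
            x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x
    ```

    SMEIR wraps this kind of update in an outer loop that re-estimates the deformation vector fields by matching forward projections of the deformed image to the measured projections of each phase.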

  11. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org

    2015-10-15

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.

  12. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.

    PubMed

    Chen, Ming; Yu, Hengyong

    2015-10-01

    This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
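    The truncated fan-beam algorithm itself is beyond a short sketch, but its parallel-beam analogue shows the filter-then-backproject structure common to this family of methods. A hedged illustration (not the authors' triple-source algorithm):

    ```python
    import numpy as np

    def ramp_filter(sinogram):
        """Apply the ramp (|f|) filter to each projection row via the FFT."""
        n = sinogram.shape[1]
        filt = np.abs(np.fft.fftfreq(n))
        return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * filt, axis=1))

    def backproject(sinogram, angles, size):
        """Smear each filtered projection back across the image grid."""
        recon = np.zeros((size, size))
        mid = size // 2
        ys, xs = np.meshgrid(np.arange(size) - mid, np.arange(size) - mid,
                             indexing="ij")
        for proj, theta in zip(sinogram, angles):
            # detector coordinate hit by each pixel at this view angle
            t = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + mid
            recon += proj[np.clip(t, 0, size - 1)]
        return recon * np.pi / len(angles)
    ```

    Because the filtering step is a shift-invariant 1-D convolution per row, it parallelizes trivially across projections, which is the property the abstract credits for the GPU speed of the proposed algorithms.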

  13. Poor textural image tie point matching via graph theory

    NASA Astrophysics Data System (ADS)

    Yuan, Xiuxiao; Chen, Shiyu; Yuan, Wei; Cai, Yang

    2017-07-01

    Feature matching aims to find corresponding points to serve as tie points between images. Robust matching is still a challenging task when input images are characterized by low contrast or contain repetitive patterns, occlusions, or homogeneous textures. In this paper, a novel feature matching algorithm based on graph theory is proposed. This algorithm integrates both geometric and radiometric constraints into an edge-weighted (EW) affinity tensor. Tie points are then obtained by high-order graph matching. Four pairs of poor textural images covering forests, deserts, bare lands, and urban areas are tested. For comparison, three state-of-the-art matching techniques, namely, scale-invariant feature transform (SIFT), speeded up robust features (SURF), and features from accelerated segment test (FAST), are also used. The experimental results show that the matching recall obtained by SIFT, SURF, and FAST varies from 0 to 35% in different types of poor textures. However, through the integration of both geometry and radiometry and the EW strategy, the recall obtained by the proposed algorithm is better than 50% in all four image pairs. The better matching recall improves the number of correct matches, dispersion, and positional accuracy.

  14. Improvement in Visual Target Tracking for a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Madison, Richard

    2006-01-01

    In an improvement of the visual-target-tracking software used aboard a mobile robot (rover) of the type used to explore the Martian surface, an affine-matching algorithm has been replaced by a combination of a normalized- cross-correlation (NCC) algorithm and a template-image-magnification algorithm. Although neither NCC nor template-image magnification is new, the use of both of them to increase the degree of reliability with which features can be matched is new. In operation, a template image of a target is obtained from a previous rover position, then the magnification of the template image is based on the estimated change in the target distance from the previous rover position to the current rover position (see figure). For this purpose, the target distance at the previous rover position is determined by stereoscopy, while the target distance at the current rover position is calculated from an estimate of the current pose of the rover. The template image is then magnified by an amount corresponding to the estimated target distance to obtain a best template image to match with the image acquired at the current rover position.
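    Normalized cross-correlation itself is standard; a minimal sketch of matching a template by exhaustive NCC search follows (illustrative only — the rover software and its magnification step are not shown in the abstract, and `best_match` is a hypothetical helper):

    ```python
    import numpy as np

    def ncc(patch, template):
        """Zero-mean normalized cross-correlation; +1.0 is a perfect match."""
        a = patch - patch.mean()
        b = template - template.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def best_match(image, template):
        """Exhaustive NCC search for the best-matching top-left corner."""
        th, tw = template.shape
        ih, iw = image.shape
        scores = np.array([[ncc(image[r:r + th, c:c + tw], template)
                            for c in range(iw - tw + 1)]
                           for r in range(ih - th + 1)])
        r, c = np.unravel_index(np.argmax(scores), scores.shape)
        return (int(r), int(c)), float(scores[r, c])
    ```

    In the scheme described above, the template would first be magnified by the ratio of the estimated previous and current target distances before this search is run.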

  15. Volumetric display containing multiple two-dimensional color motion pictures

    NASA Astrophysics Data System (ADS)

    Hirayama, R.; Shiraki, A.; Nakayama, H.; Kakue, T.; Shimobaba, T.; Ito, T.

    2014-06-01

    We have developed an algorithm which can record multiple two-dimensional (2-D) gradated projection patterns in a single three-dimensional (3-D) object. Each recorded pattern has its own projection direction and can only be seen from that direction. The proposed algorithm has two important features: the number of recorded patterns is theoretically infinite, and no meaningful pattern can be seen outside of the projected directions. In this paper, we expanded the algorithm to record multiple 2-D projection patterns in color. There are two popular ways of color mixing: additive and subtractive. Additive color mixing, used to mix light, is based on RGB colors; subtractive color mixing, used to mix inks, is based on CMY colors. We made two coloring methods based on additive mixing and subtractive mixing, performed numerical simulations of the coloring methods, and confirmed their effectiveness. We also fabricated two types of volumetric display and applied the proposed algorithm to them. One is a cubic display constructed from light-emitting diodes (LEDs) in an 8×8×8 array, whose lighting patterns are controlled by a microcomputer board. The other is made of a 7×7 array of threads, each illuminated by a projector connected to a PC. As a result of the implementation, we succeeded in recording multiple 2-D color motion pictures in the volumetric displays. Our algorithm can be applied to digital signage, media art, and so forth.
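    The two coloring methods rest on standard color-model identities: additive light sums RGB contributions, while subtractive ink accumulates CMY densities with CMY = 1 − RGB. A small sketch of both mixes (an illustration of the color models only, not the recording algorithm itself):

    ```python
    import numpy as np

    def additive_mix(rgb_layers):
        """Additive (light) mixing: channel-wise sum of RGB layers, clipped."""
        return np.clip(np.sum(rgb_layers, axis=0), 0.0, 1.0)

    def rgb_to_cmy(rgb):
        """Complementary ink densities for a given RGB color."""
        return 1.0 - np.asarray(rgb, dtype=float)

    def subtractive_mix(cmy_layers):
        """Subtractive (ink) mixing: densities accumulate, then back to RGB."""
        cmy = np.clip(np.sum(cmy_layers, axis=0), 0.0, 1.0)
        return 1.0 - cmy
    ```

    Additive mixing suits the LED cube (emitted light), while the subtractive model suits projector-illuminated threads where inks or overlapping filters absorb light.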

  16. CUDA-based high-performance computing of the S-BPF algorithm with no-waiting pipelining

    NASA Astrophysics Data System (ADS)

    Deng, Lin; Yan, Bin; Chang, Qingmei; Han, Yu; Zhang, Xiang; Xi, Xiaoqi; Li, Lei

    2015-10-01

    The backprojection-filtration (BPF) algorithm has become a good solution for local reconstruction in cone-beam computed tomography (CBCT). However, the reconstruction speed of BPF is a severe limitation for clinical applications. The selective-backprojection filtration (S-BPF) algorithm is developed to improve the parallel performance of BPF by selective backprojection. Furthermore, the general-purpose graphics processing unit (GP-GPU) is a popular tool for accelerating reconstruction, and much work has been performed on optimizing the cone-beam back-projection. As the cone-beam back-projection becomes faster, data transportation takes a much larger proportion of the reconstruction time than before. This paper focuses on minimizing the total reconstruction time of the S-BPF algorithm by hiding the data transportation among hard disk, CPU, and GPU. Based on an analysis of the S-BPF algorithm, several strategies are implemented: (1) asynchronous calls are used to overlap the execution of the CPU and GPU, (2) an innovative strategy is applied to obtain the DBP image while effectively hiding the transport time, and (3) the two streams for data transportation and calculation are synchronized by cudaEvent in the inverse finite Hilbert transform on the GPU. Our main contribution is a reconstruction of the S-BPF algorithm in which the GPU computes continuously with no data-transportation time cost: a 512³ volume is reconstructed in less than 0.7 seconds on a single Tesla-based K20 GPU from 182 projection views with 512² pixels per projection. The time cost of our implementation is about half of that without the overlap behavior.
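    The overlap strategy — prefetching the next chunk of projection data while the current chunk is processed — can be illustrated on the CPU with a one-worker prefetch thread (a conceptual analogue of CUDA streams and asynchronous calls; `load` and `backproject_view` are hypothetical stand-ins for disk I/O and the GPU kernel):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def load(view):
        """Stand-in for reading one projection view from disk (I/O-bound)."""
        return [view * 10 + i for i in range(4)]

    def backproject_view(data):
        """Stand-in for the back-projection kernel (compute-bound)."""
        return sum(data)

    def pipelined(views):
        """Prefetch view k+1 while view k is being back-projected,
        hiding transfer time behind compute."""
        results = []
        with ThreadPoolExecutor(max_workers=1) as io:
            nxt = io.submit(load, views[0])
            for k in range(len(views)):
                data = nxt.result()                      # wait for transfer
                if k + 1 < len(views):
                    nxt = io.submit(load, views[k + 1])  # overlap with compute
                results.append(backproject_view(data))
        return results
    ```

    The output is identical to the sequential load-then-compute loop; only the wall-clock time changes, which mirrors the paper's claim of hiding transport behind GPU work.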

  17. The application of new software tools to quantitative protein profiling via isotope-coded affinity tag (ICAT) and tandem mass spectrometry: I. Statistically annotated datasets for peptide sequences and proteins identified via the application of ICAT and tandem mass spectrometry to proteins copurifying with T cell lipid rafts.

    PubMed

    von Haller, Priska D; Yi, Eugene; Donohoe, Samuel; Vaughn, Kelly; Keller, Andrew; Nesvizhskii, Alexey I; Eng, Jimmy; Li, Xiao-jun; Goodlett, David R; Aebersold, Ruedi; Watts, Julian D

    2003-07-01

    Lipid rafts were prepared according to standard protocols from Jurkat T cells stimulated via T cell receptor/CD28 cross-linking and from control (unstimulated) cells. Co-isolating proteins from the control and stimulated cell preparations were labeled with isotopically normal (d0) and heavy (d8) versions of the same isotope-coded affinity tag (ICAT) reagent, respectively. Samples were combined, proteolyzed, and the resultant peptides fractionated via cation exchange chromatography. Cysteine-containing (ICAT-labeled) peptides were recovered via the biotin tag component of the ICAT reagents by avidin-affinity chromatography. On-line micro-capillary liquid chromatography tandem mass spectrometry was performed on both avidin-affinity (ICAT-labeled) and flow-through (unlabeled) fractions. Initial peptide sequence identification was performed by searching recorded tandem mass spectrometry spectra against a human sequence database using SEQUEST software. New statistical data modeling algorithms were then applied to the SEQUEST search results. These allowed for discrimination between likely "correct" and "incorrect" peptide assignments, and from these the inferred proteins that they collectively represented, by calculating estimated probabilities that each peptide assignment and subsequent protein identification was a member of the "correct" population. For convenience, the resultant lists of assigned peptide sequences and the proteins to which they corresponded were filtered at an arbitrarily set cut-off of 0.5 (i.e., 50% likely to be "correct") and above, and compiled into two separate datasets. In total, these datasets contained 7667 individual peptide identifications, representing 2669 unique peptide sequences, corresponding to 685 proteins and related protein groups.

  18. COMPARATIVE EFFECTS OF PARAOXON, CHLORPYRIFOS OXON AND MUSCARINIC AGONISTS ON HIGH AFFINITY CHOLINE UPTAKE IN RAT CORTICAL OR STRIATAL SYNAPTOSOMES. (R825811)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  19. The Art of Open Spaces: Contemporary Sea and Prairiescapes

    ERIC Educational Resources Information Center

    Schultz, Elizabeth

    2007-01-01

    The number of contemporary Kansas prairie artists whose works project an affinity for the sea continues to grow. This article focuses on six in particular: painters Robert Sudlow, Keith Jacobshagen, Lisa Grossman, and Louis Copt, and photographers Terry Evans and Larry Schwarm. Each of these Kansas-connected prairie artists has exhibited…

  20. Ecofeminism as a Pedagogical Project: Women, Nature, and Education

    ERIC Educational Resources Information Center

    Li, Huey-li

    2007-01-01

    In this essay, Huey-li Li inquires into how ecofeminist analyses of the woman-nature affinity call for the rectification of polarized conceptions of "nature." By attending to the interconnections between various forms of oppression, Li argues, ecofeminism sheds light on how gender ideology influences our worldview and the construction of…

  1. The Cutting Edge of Fun: Making Work Play at the New American School

    ERIC Educational Resources Information Center

    Sims, Christopher Otter

    2012-01-01

    This dissertation presents an ethnographic account of the launch of "The Downtown School for Design, Media, and Technology," one of the most prominent American school reform projects in recent years. Drawing on popular accounts about children and young people's pervasive affinity for digital media, and especially video games, the…

  2. Analysis of a new phase and height algorithm in phase measurement profilometry

    NASA Astrophysics Data System (ADS)

    Bian, Xintian; Zuo, Fen; Cheng, Ju

    2018-04-01

    Traditional phase measurement profilometry adopts divergent illumination to obtain the height distribution of a measured object accurately. However, the mapping relation between reference plane coordinates and phase distribution must be calculated before measurement, and the data stored in a computer in the form of a look-up table for later use. This study improved the distribution of the projected fringes and deduced the phase-height mapping algorithm for the case in which the two pupils of the projection and imaging systems are at unequal heights and the projection and imaging axes lie on different planes. With this algorithm, calculating the mapping relation between reference plane coordinates and phase distribution prior to measurement is unnecessary. Thus, the measurement process is simplified, and the construction of an experimental system is made easy. Computer simulation and experimental results confirm the effectiveness of the method.

  3. Generalized Gilat-Raubenheimer method for density-of-states calculation in photonic crystals

    NASA Astrophysics Data System (ADS)

    Liu, Boyuan; Johnson, Steven G.; Joannopoulos, John D.; Lu, Ling

    2018-04-01

    An efficient numerical algorithm is the key for accurate evaluation of density of states (DOS) in band theory. The Gilat-Raubenheimer (GR) method proposed in 1966 is an efficient linear extrapolation method, but it was limited to specific lattices. Here, using an affine transformation, we provide a new generalization of the original GR method to any Bravais lattice and show that it is superior to the tetrahedron method and the adaptive Gaussian broadening method. Finally, we apply our generalized GR method to compute the DOS of various gyroid photonic crystals with topological degeneracies.

  4. Combinatorial Optimization in Project Selection Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Dewi, Sari; Sawaluddin

    2018-01-01

    This paper discusses the problem of project selection in the presence of two objective functions, maximizing profit and minimizing cost, subject to limited resource availability and limited time, so that resources must be allocated to each project. These resources are human resources, machine resources, and raw material resources, and the allocation must not exceed the predetermined budget. The problem can thus be formulated mathematically as a multi-objective function with constraints to be satisfied. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for selecting the right projects. A multi-objective genetic algorithm is then described as one such approach to simplify the project selection process at large scale.
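    A single-objective version of such a genetic algorithm — selecting a 0/1 subset of projects to maximize profit within a budget — can be sketched as follows (the paper's multi-objective variant would track a Pareto front instead of a single fitness value; all parameters here are illustrative):

    ```python
    import random

    def ga_select(profits, costs, budget, pop=40, gens=60, seed=1):
        """Evolve 0/1 project-selection bit strings under a budget constraint."""
        rng = random.Random(seed)
        n = len(profits)

        def fitness(bits):
            c = sum(c_ for b, c_ in zip(bits, costs) if b)
            p = sum(p_ for b, p_ in zip(bits, profits) if b)
            return p if c <= budget else -1   # penalize infeasible selections

        popn = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
        for _ in range(gens):
            popn.sort(key=fitness, reverse=True)
            elite = popn[: pop // 2]          # keep the better half
            children = []
            while len(children) < pop - len(elite):
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, n)
                child = a[:cut] + b[cut:]     # one-point crossover
                child[rng.randrange(n)] ^= 1  # mutation: flip one bit
                children.append(child)
            popn = elite + children
        best = max(popn, key=fitness)
        return best, fitness(best)
    ```

    Encoding each candidate portfolio as a bit string makes crossover and mutation natural, and the budget penalty steers the population toward feasible selections.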

  5. WE-AB-204-09: Respiratory Motion Correction in 4D-PET by Simultaneous Motion Estimation and Image Reconstruction (SMEIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalantari, F; Wang, J; Li, T

    2015-06-15

    Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) reconstruction algorithms do not make full use of all projection statistics; and 2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, originally developed for cone-beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as initial motion vectors. A motion model update is performed to obtain an optimal set of DVFs between the pmc-PET and other phases by matching the forward projection of the deformed pmc-PET and the measured projections of the other phases. Using the updated DVFs, OSEM-TV image reconstruction is repeated and new DVFs are estimated based on the updated images. A 4D XCAT phantom with typical FDG biodistribution and a 10 mm diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: Image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more than 5 times overestimation of the tumor size and a 54% underestimation of the tumor-to-lung contrast ratio. This error was reduced to 37% and 20% for post-reconstruction registration methods and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved since all projection data are combined to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness control parameters in the DVF estimation step.

  6. Research On Vehicle-Based Driver Status/Performance Monitoring; Development, Validation, And Refinement Of Algorithms For Detection Of Driver Drowsiness, Final Report

    DOT National Transportation Integrated Search

    1994-12-01

    This report summarizes the results of a 3-year research project to develop reliable algorithms for the detection of motor vehicle driver impairment due to drowsiness. These algorithms are based on driving performance measures that can potentially be ...

  7. Tomography by iterative convolution - Empirical study and application to interferometry

    NASA Technical Reports Server (NTRS)

    Vest, C. M.; Prikryl, I.

    1984-01-01

    An algorithm for computer tomography has been developed that is applicable to reconstruction from data having incomplete projections because an opaque object blocks some of the probing radiation as it passes through the object field. The algorithm is based on iteration between the object domain and the projection (Radon transform) domain. Reconstructions are computed during each iteration by the well-known convolution method. Although it is demonstrated that this algorithm does not converge, an empirically justified criterion for terminating the iteration when the most accurate estimate has been computed is presented. The algorithm has been studied by using it to reconstruct several different object fields with several different opaque regions. It also has been used to reconstruct aerodynamic density fields from interferometric data recorded in wind tunnel tests.

  8. Towards the chemometric dissection of peptide - HLA-A*0201 binding affinity: comparison of local and global QSAR models

    NASA Astrophysics Data System (ADS)

    Doytchinova, Irini A.; Walshe, Valerie; Borrow, Persephone; Flower, Darren R.

    2005-03-01

    The affinities of 177 nonameric peptides binding to the HLA-A*0201 molecule were measured using a FACS-based MHC stabilisation assay and analysed using chemometrics. Their structures were described by global and local descriptors, QSAR models were derived by genetic algorithm, stepwise regression and PLS. The global molecular descriptors included molecular connectivity χ indices, κ shape indices, E-state indices, molecular properties like molecular weight and log P, and three-dimensional descriptors like polarizability, surface area and volume. The local descriptors were of two types. The first used a binary string to indicate the presence of each amino acid type at each position of the peptide. The second was also position-dependent but used five z-scales to describe the main physicochemical properties of the amino acids forming the peptides. The models were developed using a representative training set of 131 peptides and validated using an independent test set of 46 peptides. It was found that the global descriptors could not explain the variance in the training set nor predict the affinities of the test set accurately. Both types of local descriptors gave QSAR models with better explained variance and predictive ability. The results suggest that, in their interactions with the MHC molecule, the peptide acts as a complicated ensemble of multiple amino acids mutually potentiating each other.
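    The first type of local descriptor — a binary string indicating which amino acid occupies each position — is straightforward to reproduce. A sketch for 9-mers follows (the z-scale variant would replace each 20-bit block with five physicochemical scores per position):

    ```python
    import numpy as np

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

    def binary_descriptor(peptide):
        """Position-dependent one-hot encoding: a 9-mer becomes a
        9 x 20 = 180-bit vector with exactly one bit set per position."""
        mat = np.zeros((len(peptide), len(AMINO_ACIDS)))
        for pos, aa in enumerate(peptide):
            mat[pos, AMINO_ACIDS.index(aa)] = 1.0
        return mat.ravel()
    ```

    Stacking these vectors for a peptide set yields the design matrix on which PLS or stepwise regression can then be run.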

  9. Automatic motion correction for in vivo human skin optical coherence tomography angiography through combined rigid and nonrigid registration

    NASA Astrophysics Data System (ADS)

    Wei, David Wei; Deegan, Anthony J.; Wang, Ruikang K.

    2017-06-01

    When using optical coherence tomography angiography (OCTA), the development of artifacts due to involuntary movements can severely compromise the visualization and subsequent quantitation of tissue microvasculatures. To correct such an occurrence, we propose a motion compensation method to eliminate artifacts from human skin OCTA by means of step-by-step rigid affine registration, rigid subpixel registration, and nonrigid B-spline registration. To accommodate this remedial process, OCTA is conducted using two matching all-depth volume scans. Affine transformation is first performed on the large vessels of the deep reticular dermis, and then the resulting affine parameters are applied to all-depth vasculatures with a further subpixel registration to refine the alignment between superficial smaller vessels. Finally, the coregistration of both volumes is carried out to result in the final artifact-free composite image via an algorithm based upon cubic B-spline free-form deformation. We demonstrate that the proposed method can provide a considerable improvement to the final en face OCTA images with substantial artifact removal. In addition, the correlation coefficients and peak signal-to-noise ratios of the corrected images are evaluated and compared with those of the original images, further validating the effectiveness of the proposed method. We expect that the proposed method can be useful in improving qualitative and quantitative assessment of the OCTA images of scanned tissue beds.

  10. Automatic motion correction for in vivo human skin optical coherence tomography angiography through combined rigid and nonrigid registration.

    PubMed

    Wei, David Wei; Deegan, Anthony J; Wang, Ruikang K

    2017-06-01

    When using optical coherence tomography angiography (OCTA), the development of artifacts due to involuntary movements can severely compromise the visualization and subsequent quantitation of tissue microvasculatures. To correct such an occurrence, we propose a motion compensation method to eliminate artifacts from human skin OCTA by means of step-by-step rigid affine registration, rigid subpixel registration, and nonrigid B-spline registration. To accommodate this remedial process, OCTA is conducted using two matching all-depth volume scans. Affine transformation is first performed on the large vessels of the deep reticular dermis, and then the resulting affine parameters are applied to all-depth vasculatures with a further subpixel registration to refine the alignment between superficial smaller vessels. Finally, the coregistration of both volumes is carried out to result in the final artifact-free composite image via an algorithm based upon cubic B-spline free-form deformation. We demonstrate that the proposed method can provide a considerable improvement to the final en face OCTA images with substantial artifact removal. In addition, the correlation coefficients and peak signal-to-noise ratios of the corrected images are evaluated and compared with those of the original images, further validating the effectiveness of the proposed method. We expect that the proposed method can be useful in improving qualitative and quantitative assessment of the OCTA images of scanned tissue beds.

  11. Physics-based scoring of protein-ligand interactions: explicit polarizability, quantum mechanics and free energies.

    PubMed

    Bryce, Richard A

    2011-04-01

    The ability to accurately predict the interaction of a ligand with its receptor is a key limitation in computer-aided drug design approaches such as virtual screening and de novo design. In this article, we examine current strategies for a physics-based approach to scoring of protein-ligand affinity, as well as outlining recent developments in force fields and quantum chemical techniques. We also consider advances in the development and application of simulation-based free energy methods to study protein-ligand interactions. Fuelled by recent advances in computational algorithms and hardware, there is the opportunity for increased integration of physics-based scoring approaches at earlier stages in computationally guided drug discovery. Specifically, we envisage increased use of implicit solvent models and simulation-based scoring methods as tools for computing the affinities of large virtual ligand libraries. Approaches based on end point simulations and reference potentials allow the application of more advanced potential energy functions to prediction of protein-ligand binding affinities. Comprehensive evaluation of polarizable force fields and quantum mechanical (QM)/molecular mechanical and QM methods in scoring of protein-ligand interactions is required, particularly in their ability to address challenging targets such as metalloproteins and other proteins that make highly polar interactions. Finally, we anticipate increasingly quantitative free energy perturbation and thermodynamic integration methods that are practical for optimization of hits obtained from screened ligand libraries.

  12. Biomacromolecular quantitative structure-activity relationship (BioQSAR): a proof-of-concept study on the modeling, prediction and interpretation of protein-protein binding affinity.

    PubMed

    Zhou, Peng; Wang, Congcong; Tian, Feifei; Ren, Yanrong; Yang, Chao; Huang, Jian

    2013-01-01

    Quantitative structure-activity relationship (QSAR), a regression modeling methodology that establishes statistical correlation between structure feature and apparent behavior for a series of congeneric molecules quantitatively, has been widely used to evaluate the activity, toxicity and property of various small-molecule compounds such as drugs, toxicants and surfactants. However, it is surprising that such a useful technique has had only very limited application to biomacromolecules, even though solved 3D atom-resolution structures of proteins, nucleic acids and their complexes have accumulated rapidly in past decades. Here, we present a proof-of-concept paradigm for the modeling, prediction and interpretation of the binding affinity of 144 sequence-nonredundant, structure-available and affinity-known protein complexes (Kastritis et al. Protein Sci 20:482-491, 2011) using a biomacromolecular QSAR (BioQSAR) scheme. We demonstrate that the modeling performance and predictive power of BioQSAR are comparable to or even better than those of traditional knowledge-based strategies, mechanism-type methods and empirical scoring algorithms, while BioQSAR possesses certain additional features compared to the traditional methods, such as adaptability, interpretability, deep-validation and high-efficiency. The BioQSAR scheme could be readily modified to infer the biological behavior and functions of other biomacromolecules, if their X-ray crystal structures, NMR conformation assemblies or computationally modeled structures are available.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naseri, M; Rajabi, H; Wang, J

    Purpose: Respiration causes lesion smearing, image blurring and quality degradation, affecting lesion contrast and the ability to define correct lesion size. The spatial resolution of current multi pinhole SPECT (MPHS) scanners is sub-millimeter. Therefore, the effect of motion is more noticeable in comparison to conventional SPECT scanners. Gated imaging aims to reduce motion artifacts. A major issue in gating is the lack of statistics, and individual reconstructed frames are noisy. The increased noise in each frame deteriorates the quantitative accuracy of the MPHS images. The objective of this work is to enhance the image quality in 4D-MPHS imaging by 4D image reconstruction. Methods: The new algorithm requires deformation vector fields (DVFs) that are calculated by non-rigid Demons registration. The algorithm is based on the motion-incorporated version of the ordered subset expectation maximization (OSEM) algorithm. This iterative algorithm is capable of making full use of all projections to reconstruct each individual frame. To evaluate the performance of the proposed algorithm, a simulation study was conducted. A fast ray tracing method was used to generate MPHS projections of a 4D digital mouse phantom with a small tumor in the liver in eight different respiratory phases. To evaluate the 4D-OSEM algorithm's potential, the tumor to liver activity ratio was compared with other image reconstruction methods, including 3D-MPHS and post reconstruction registration with Demons-derived DVFs. Results: Image quality of 4D-MPHS is greatly improved by the 4D-OSEM algorithm. When all projections are used to reconstruct a 3D-MPHS, motion blurring artifacts are present, leading to overestimation of the tumor size and 24% tumor contrast underestimation. This error was reduced to 16% and 10% for post reconstruction registration methods and 4D-OSEM, respectively. Conclusion: The 4D-OSEM method can be used for motion correction in 4D-MPHS. The statistics and quantification are improved since all projection data are combined together to update the image.

  14. Algorithm for Overcoming the Curse of Dimensionality for Certain Non-convex Hamilton-Jacobi Equations, Projections and Differential Games

    DTIC Science & Technology

    2016-05-01

    Algorithm for Overcoming the Curse of Dimensionality for Certain Non-convex Hamilton-Jacobi Equations, Projections and Differential Games Yat Tin...subproblems. Our approach is expected to have wide applications in continuous dynamic games, control theory problems, and elsewhere. Mathematics...differential dynamic games, control theory problems, and dynamical systems coming from the physical world, e.g. [11]. An important application is to

  15. The diffusive finite state projection algorithm for efficient simulation of the stochastic reaction-diffusion master equation.

    PubMed

    Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa

    2010-02-21

    We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
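
    The reaction step, handled by the stochastic simulation algorithm, can be sketched for a single well-mixed compartment; the two-reaction network and the rate constants below are invented for illustration and do not model the diffusive coupling between subvolumes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal Gillespie stochastic simulation algorithm (SSA) for two reactions
# in one well-mixed compartment:  A -> B  (rate k1)  and  B -> A  (rate k2).
k1, k2 = 1.0, 0.5
a_count, b_count = 100, 0
t, t_end = 0.0, 50.0

while t < t_end:
    propensities = np.array([k1 * a_count, k2 * b_count])
    total = propensities.sum()
    if total == 0:
        break                                   # no reaction can fire
    t += rng.exponential(1.0 / total)           # time to the next reaction
    if rng.uniform() < propensities[0] / total: # choose which reaction fires
        a_count, b_count = a_count - 1, b_count + 1
    else:
        a_count, b_count = a_count + 1, b_count - 1
```

    In the diffusive FSP framework, a step like this would alternate with a diffusion step as part of the fractional step strategy the abstract describes.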

  16. A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction

    PubMed Central

    Fu, Fang; Zhang, Tao

    2016-01-01

    Poor quality affects project makespan and total costs negatively, but it can be recovered by repair works during construction. We construct a new non-linear programming model based on the classic multi-mode resource constrained project scheduling problem considering repair works. In order to obtain satisfactory quality without a high increase of project cost, the objective is to minimize total quality cost, which consists of the prevention cost and failure cost according to Quality-Cost Analysis. A binary dependent normal distribution function is adopted to describe the activity quality; cumulative quality is defined to determine whether to initiate repair works, according to the different relationships among activity qualities, namely, the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem based on an adaptive serial schedule generation scheme and adjusted activity list. In the program of the algorithm, the frog-leaping process combines the crossover operator of a genetic algorithm and a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance and to assist decision making in searching for the appropriate makespan and quality threshold with minimal cost. PMID:27911939
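
    The shuffled frog-leaping idea can be sketched on a toy continuous objective; the paper's scheduling encoding, crossover operator, and permutation-based local search are not modeled here, and the population sizes, bounds, and test function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Minimal shuffled frog-leaping algorithm (SFLA) minimizing a toy sphere function.
def fitness(frog):
    return np.sum(frog ** 2)

n_frogs, n_memeplexes, dim = 30, 3, 5
frogs = rng.uniform(-5, 5, size=(n_frogs, dim))
init_best = min(fitness(f) for f in frogs)

for _ in range(100):
    order = np.argsort([fitness(f) for f in frogs])
    frogs = frogs[order]                         # global sort, best frog first
    best_global = frogs[0]
    # Shuffle: deal the sorted frogs into memeplexes round-robin.
    for m in range(n_memeplexes):
        idx = np.arange(m, n_frogs, n_memeplexes)
        local = frogs[idx]
        worst_i = idx[np.argmax([fitness(f) for f in local])]
        best_local = frogs[idx[0]]
        # The worst frog leaps toward the local best; failing that, toward the
        # global best; failing both, it is replaced by a random frog.
        for target in (best_local, best_global):
            candidate = frogs[worst_i] + rng.uniform() * (target - frogs[worst_i])
            if fitness(candidate) < fitness(frogs[worst_i]):
                frogs[worst_i] = candidate
                break
        else:
            frogs[worst_i] = rng.uniform(-5, 5, size=dim)

best = min(frogs, key=fitness)
```

    The memeplex partition and leap-toward-the-best update are the core of the metaheuristic; the paper layers a schedule generation scheme and activity-list encoding on top of this skeleton.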

  17. Marginal semi-supervised sub-manifold projections with informative constraints for dimensionality reduction and recognition.

    PubMed

    Zhang, Zhao; Zhao, Mingbo; Chow, Tommy W S

    2012-12-01

    In this work, the problem of sub-manifold projections based semi-supervised dimensionality reduction (DR), learning from partially constrained data, is discussed. Two semi-supervised DR algorithms termed Marginal Semi-Supervised Sub-Manifold Projections (MS³MP) and orthogonal MS³MP (OMS³MP) are proposed. MS³MP in the singular case is also discussed. We also present the weighted least squares view of MS³MP. Based on specifying the types of neighborhoods with pairwise constraints (PC) and the defined manifold scatters, our methods can preserve the local properties of all points and discriminant structures embedded in the localized PC. The sub-manifolds of different classes can also be separated. In PC guided methods, exploring and selecting the informative constraints is challenging, and random constraint subsets significantly affect the performance of algorithms. This paper also introduces an effective technique to select the informative constraints for DR with consistent constraints. The analytic form of the projection axes can be obtained by eigen-decomposition. The connections between this work and other related work are also elaborated. The validity of the proposed constraint selection approach and DR algorithms is evaluated by benchmark problems. Extensive simulations show that our algorithms can deliver promising results over some widely used state-of-the-art semi-supervised DR techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Measuring the self-similarity exponent in Lévy stable processes of financial time series

    NASA Astrophysics Data System (ADS)

    Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.

    2013-11-01

    Geometric method-based procedures, which will be called GM algorithms herein, were introduced in [M.A. Sánchez Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551], to efficiently calculate the self-similarity exponent of a time series. In that paper, the authors showed empirically that these algorithms, based on a geometrical approach, are more accurate than the classical algorithms, especially with short length time series. The authors checked that GM algorithms are good when working with (fractional) Brownian motions. Moreover, in [J.E. Trinidad Segovia, M. Fernández-Martínez, M.A. Sánchez-Granero, A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A 391 (2012) 2209-2214], a mathematical background for the validity of such procedures to estimate the self-similarity index of any random process with stationary and self-affine increments was provided. In particular, they proved theoretically that GM algorithms are also valid to explore long-memory in (fractional) Lévy stable motions. In this paper, we prove empirically by Monte Carlo simulation that GM algorithms are able to calculate accurately the self-similarity index in Lévy stable motions and find empirical evidence that they are more precise than the absolute value exponent (denoted by AVE onwards) and the multifractal detrended fluctuation analysis (MF-DFA) algorithms, especially with a short length time series. We also compare them with the generalized Hurst exponent (GHE) algorithm and conclude that both GM2 and GHE algorithms are the most accurate to study financial series. In addition to that, we provide empirical evidence, based on the accuracy of GM algorithms to estimate the self-similarity index in Lévy motions, that the evolution of the stocks of some international market indices, such as U.S. 
Small Cap and Nasdaq100, cannot be modeled by means of a Brownian motion.
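
    For contrast with the GM procedures, the classical scaling estimator of the self-similarity exponent can be sketched as follows; a plain Brownian motion (H = 0.5) serves as the test signal, and this is not one of the geometric algorithms the paper evaluates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Standard Brownian motion has self-similarity exponent H = 0.5; a simple
# estimator regresses log E|X(t+tau) - X(t)| against log tau.
n = 100_000
x = np.cumsum(rng.normal(size=n))

lags = np.array([1, 2, 4, 8, 16, 32, 64])
mean_abs_incr = np.array([np.mean(np.abs(x[lag:] - x[:-lag])) for lag in lags])
H, _ = np.polyfit(np.log(lags), np.log(mean_abs_incr), 1)  # slope estimates H
```

    For a self-affine process the mean absolute increment scales as tau^H, so the log-log slope recovers the exponent; the GM algorithms pursue the same quantity through a geometric construction.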

  19. Measurement technique for in situ characterizing aberrations of projection optics in lithographic tools.

    PubMed

    Wang, Fan; Wang, Xiangzhao; Ma, Mingying

    2006-08-20

    As the feature size decreases, degradation of image quality caused by wavefront aberrations of projection optics in lithographic tools has become a serious problem in the low-k1 process. We propose a novel measurement technique for in situ characterizing aberrations of projection optics in lithographic tools. Considering the impact of the partial coherence illumination, we introduce a novel algorithm that accurately describes the pattern displacement and focus shift induced by aberrations. Employing the algorithm, the measurement condition is extended from three-beam interference to two-, three-, and hybrid-beam interferences. The experiments are performed to measure the aberrations of projection optics in an ArF scanner.

  20. Lung texture in serial thoracic CT scans: Assessment of change introduced by image registration1

    PubMed Central

    Cunliffe, Alexandra R.; Al-Hallaq, Hania A.; Labby, Zacariah E.; Pelizzari, Charles A.; Straus, Christopher; Sensakovic, William F.; Ludwig, Michelle; Armato, Samuel G.

    2012-01-01

    Purpose: The aim of this study was to quantify the effect of four image registration methods on lung texture features extracted from serial computed tomography (CT) scans obtained from healthy human subjects. Methods: Two chest CT scans acquired at different time points were collected retrospectively for each of 27 patients. Following automated lung segmentation, each follow-up CT scan was registered to the baseline scan using four algorithms: (1) rigid, (2) affine, (3) B-splines deformable, and (4) demons deformable. The registration accuracy for each scan pair was evaluated by measuring the Euclidean distance between 150 identified landmarks. On average, 1432 spatially matched 32 × 32-pixel region-of-interest (ROI) pairs were automatically extracted from each scan pair. First-order, fractal, Fourier, Laws’ filter, and gray-level co-occurrence matrix texture features were calculated in each ROI, for a total of 140 features. Agreement between baseline and follow-up scan ROI feature values was assessed by Bland–Altman analysis for each feature; the range spanned by the 95% limits of agreement of feature value differences was calculated and normalized by the average feature value to obtain the normalized range of agreement (nRoA). Features with small nRoA were considered “registration-stable.” The normalized bias for each feature was calculated from the feature value differences between baseline and follow-up scans averaged across all ROIs in every patient. Because patients had “normal” chest CT scans, minimal change in texture feature values between scan pairs was anticipated, with the expectation of small bias and narrow limits of agreement. Results: Registration with demons reduced the Euclidean distance between landmarks such that only 9% of landmarks were separated by ≥1 mm, compared with rigid (98%), affine (95%), and B-splines (90%). 
Ninety-nine of the 140 (71%) features analyzed yielded nRoA > 50% for all registration methods, indicating that the majority of feature values were perturbed following registration. Nineteen of the features (14%) had nRoA < 15% following demons registration, indicating relative feature value stability. Student's t-tests showed that the nRoA of these 19 features was significantly larger when rigid, affine, or B-splines registration methods were used compared with demons registration. Demons registration yielded greater normalized bias in feature value change than B-splines registration, though this difference was not significant (p = 0.15). Conclusions: Demons registration provided higher spatial accuracy between matched anatomic landmarks in serial CT scans than rigid, affine, or B-splines algorithms. Texture feature changes calculated in healthy lung tissue from serial CT scans were smaller following demons registration compared with all other algorithms. Though registration altered the values of the majority of texture features, 19 features remained relatively stable after demons registration, indicating their potential for detecting pathologic change in serial CT scans. Combined use of accurate deformable registration using demons and texture analysis may allow for quantitative evaluation of local changes in lung tissue due to disease progression or treatment response. PMID:22894392
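
    The Bland–Altman quantities used in the study can be computed as below; the simulated feature values are illustrative, and, following the abstract, the span of the 95% limits of agreement is normalized by the average feature value to obtain nRoA.

```python
import numpy as np

rng = np.random.default_rng(3)

# Baseline and follow-up values of one texture feature across matched ROIs.
baseline = rng.normal(loc=10.0, scale=1.0, size=1000)
followup = baseline + rng.normal(loc=0.05, scale=0.2, size=1000)

# Bland-Altman analysis: 95% limits of agreement = mean(diff) +/- 1.96*std(diff).
diff = followup - baseline
bias = diff.mean()
loa_low, loa_high = bias - 1.96 * diff.std(), bias + 1.96 * diff.std()

# Normalized range of agreement (nRoA), in percent of the mean feature value.
nroa = 100.0 * (loa_high - loa_low) / np.mean((baseline + followup) / 2.0)
```

    A feature with nRoA below the 15% threshold in this sense would count as "registration-stable" in the study's terminology.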

  1. Evaluating ACLS Algorithms for the International Space Station (ISS) - A Paradigm Revisited

    NASA Technical Reports Server (NTRS)

    Alexander, Dave; Brandt, Keith; Locke, James; Hurst, Victor, IV; Mack, Michael D.; Pettys, Marianne; Smart, Kieran

    2007-01-01

    The ISS may have communication gaps of up to 45 minutes during each orbit and therefore it is imperative to have medical protocols, including an effective ACLS algorithm, that can be reliably autonomously executed during flight. The aim of this project was to compare the effectiveness of the current ACLS algorithm with an improved algorithm having a new navigation format.

  2. Web Platform Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulsworth, Ashley; Kurtz, Jim; Brun de Pontet, Stephanie

    Sunvestment Energy Group (previously called Sunvestment Group) was established to create a web application that brings together site hosts, those who will obtain the energy from the solar array, with project developers and funders, including affinity investors. Sunvestment Energy Group (SEG) uses a community-based model that engages with investors who have some affinity with the site host organization. In addition to a financial return, these investors receive non-financial value from their investments and are therefore willing to offer lower cost capital. This enables the site host to enjoy more savings from solar through these less expensive Community Power Purchase Agreements (CPPAs). The purpose of this award was to develop an online platform to bring site hosts and investors together virtually.

  3. Monochromatic-beam-based dynamic X-ray microtomography based on OSEM-TV algorithm.

    PubMed

    Xu, Liang; Chen, Rongchang; Yang, Yiming; Deng, Biao; Du, Guohao; Xie, Honglan; Xiao, Tiqiao

    2017-01-01

    Monochromatic-beam-based dynamic X-ray computed microtomography (CT) was developed to observe the evolution of microstructure inside samples. However, the low flux density results in low efficiency in data collection. Reducing the number of projections is a practical way to increase efficiency, but it degrades image reconstruction quality under the traditional filtered back projection (FBP) algorithm. In this study, an iterative reconstruction method using an ordered subset expectation maximization-total variation (OSEM-TV) algorithm was employed to address this problem. The simulated results demonstrated that the normalized mean square error of the image slices reconstructed by the OSEM-TV algorithm was about 1/4 of that by FBP. Experimental results also demonstrated that the density resolution of OSEM-TV was high enough to resolve different materials with fewer than 100 projections. As a result, with the introduction of OSEM-TV, monochromatic-beam-based dynamic X-ray microtomography is potentially practicable for quantitative and non-destructive analysis of the evolution of microstructure with acceptable efficiency in data collection and reconstructed image quality.
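
    The ordered-subset update at the core of OSEM can be sketched on a toy linear system; the TV regularization step of OSEM-TV is omitted, and the matrix sizes, subset count, and noise-free data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy system: each subset of projection rows drives one multiplicative
# MLEM-style update (the classic OSEM scheme).
A = rng.uniform(0.1, 1.0, size=(40, 10))    # all-positive toy system matrix
x_true = rng.uniform(1.0, 5.0, size=10)
y = A @ x_true                               # noise-free projections

x = np.ones(10)                              # flat initial estimate
subsets = np.array_split(np.arange(40), 4)   # 4 ordered subsets of rays
for _ in range(200):
    for rows in subsets:
        A_s, y_s = A[rows], y[rows]
        # Multiplicative update: forward-project, compare, back-project,
        # and normalize by the subset sensitivity (column sums).
        x *= (A_s.T @ (y_s / (A_s @ x))) / A_s.sum(axis=0)
```

    Cycling the update over subsets instead of the full data is what gives OSEM its speedup over plain MLEM; OSEM-TV would interleave a total-variation denoising step between passes.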

  4. The McGill Interactive Pediatric OncoGenetic Guidelines: An approach to identifying pediatric oncology patients most likely to benefit from a genetic evaluation.

    PubMed

    Goudie, Catherine; Coltin, Hallie; Witkowski, Leora; Mourad, Stephanie; Malkin, David; Foulkes, William D

    2017-08-01

    Identifying cancer predisposition syndromes in children with tumors is crucial, yet few clinical guidelines exist to identify children at high risk of having germline mutations. The McGill Interactive Pediatric OncoGenetic Guidelines project aims to create a validated pediatric guideline in the form of a smartphone/tablet application using algorithms to process clinical data and help determine whether to refer a child for genetic assessment. This paper discusses the initial stages of the project, focusing on its overall structure, the methodology underpinning the algorithms, and the upcoming algorithm validation process. © 2017 Wiley Periodicals, Inc.

  5. Investigation of contrast-enhanced subtracted breast CT images with MAP-EM based on projection-based weighting imaging.

    PubMed

    Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo

    2018-06-01

    Contrast-enhanced subtracted breast computer tomography (CESBCT) images acquired using an energy-resolved photon counting detector can be helpful to enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, thereby possibly leading to high noise in separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging for reconstruction of CESBCT images acquired using an energy-resolving photon counting detector is proposed, and its performance was investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with the FBP method based on projection-based weighting imaging. When compared with the energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows a significant improvement in the CNR of the CESBCT image compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
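
    The contrast-to-noise ratio figure of merit can be computed as below; the image statistics and ROI placement are invented, and one common CNR definition is used, which may differ in detail from the one in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Contrast-to-noise ratio between a lesion ROI and background:
# CNR = |mean_roi - mean_bg| / std_bg.
image = rng.normal(loc=100.0, scale=5.0, size=(64, 64))  # background tissue
image[20:30, 20:30] += 25.0                              # contrast-enhanced lesion

roi = image[20:30, 20:30]
bg = image[40:60, 40:60]
cnr = abs(roi.mean() - bg.mean()) / bg.std()
```

    Percentage CNR improvements like those reported in the abstract would be ratios of this quantity between two reconstructions of the same object.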

  6. Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M.

    2011-01-01

    The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience only ran on an Xbox 360. One of the desires was to take this experience and make it into something that can be put on NASA's educational site for anyone to be able to access. The current code works in the Unity game engine, which has cross-platform capability but is not 100% compatible. The tasks for an intern to complete this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The desire is to create a procedurally generated cloud algorithm to provide dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project was unable to be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.

  7. Binding site and affinity prediction of general anesthetics to protein targets using docking.

    PubMed

    Liu, Renyu; Perez-Aguilar, Jose Manuel; Liang, David; Saven, Jeffery G

    2012-05-01

    The protein targets for general anesthetics remain unclear. A tool to predict anesthetic binding for potential binding targets is needed. In this study, we explored whether a computational method, AutoDock, could serve as such a tool. High-resolution crystal data of water-soluble proteins (cytochrome C, apoferritin, and human serum albumin), and a membrane protein (a pentameric ligand-gated ion channel from Gloeobacter violaceus [GLIC]) were used. Isothermal titration calorimetry (ITC) experiments were performed to determine anesthetic affinity in solution conditions for apoferritin. Docking calculations were performed using DockingServer with the Lamarckian genetic algorithm and the Solis and Wets local search method (http://www.dockingserver.com/web). Twenty general anesthetics were docked into apoferritin. The predicted binding constants were compared with those obtained from ITC experiments for potential correlations. In the case of apoferritin, details of the binding site and their interactions were compared with recent cocrystallization data. Docking calculations for 6 general anesthetics currently used in clinical settings (isoflurane, sevoflurane, desflurane, halothane, propofol, and etomidate) with known 50% effective concentration (EC(50)) values were also performed in all tested proteins. The binding constants derived from docking experiments were compared with known EC(50) values and octanol/water partition coefficients for the 6 general anesthetics. All 20 general anesthetics docked unambiguously into the anesthetic binding site identified in the crystal structure of apoferritin. The binding constants for 20 anesthetics obtained from the docking calculations correlate significantly with those obtained from ITC experiments (P = 0.04). In the case of GLIC, the identified anesthetic binding sites in the crystal structure are among the docking predicted binding sites, but not the top ranked site. 
Docking calculations suggest a most probable binding site located in the extracellular domain of GLIC. The predicted affinities correlated significantly with the known EC(50) values for the 6 frequently used anesthetics in GLIC for the site identified in the experimental crystal data (P = 0.006). However, predicted affinities in apoferritin, human serum albumin, and cytochrome C did not correlate with these 6 anesthetics' known experimental EC(50) values. A weak correlation between the predicted affinities and the octanol/water partition coefficients was observed for the sites in GLIC. We demonstrated that anesthetic binding sites and relative affinities can be predicted using docking calculations in an automatic docking server (AutoDock) for both water-soluble and membrane proteins. Correlation of predicted affinity and EC(50) for 6 frequently used general anesthetics was only observed in GLIC, a member of a protein family relevant to anesthetic mechanism.

  8. Binding Site and Affinity Prediction of General Anesthetics to Protein Targets Using Docking

    PubMed Central

    Liu, Renyu; Perez-Aguilar, Jose Manuel; Liang, David; Saven, Jeffery G.

    2012-01-01

    Background The protein targets for general anesthetics remain unclear. A tool to predict anesthetic binding for potential binding targets is needed. In this study, we explore whether a computational method, AutoDock, could serve as such a tool. Methods High-resolution crystal data of water soluble proteins (cytochrome C, apoferritin and human serum albumin), and a membrane protein (a pentameric ligand-gated ion channel from Gloeobacter violaceus, GLIC) were used. Isothermal titration calorimetry (ITC) experiments were performed to determine anesthetic affinity in solution conditions for apoferritin. Docking calculations were performed using DockingServer with the Lamarckian genetic algorithm and the Solis and Wets local search method (https://www.dockingserver.com/web). Twenty general anesthetics were docked into apoferritin. The predicted binding constants are compared with those obtained from ITC experiments for potential correlations. In the case of apoferritin, details of the binding site and their interactions were compared with recent co-crystallization data. Docking calculations for six general anesthetics currently used in clinical settings (isoflurane, sevoflurane, desflurane, halothane, propofol, and etomidate) with known EC50 were also performed in all tested proteins. The binding constants derived from docking experiments were compared with known EC50s and octanol/water partition coefficients for the six general anesthetics. Results All 20 general anesthetics docked unambiguously into the anesthetic binding site identified in the crystal structure of apoferritin. The binding constants for 20 anesthetics obtained from the docking calculations correlate significantly with those obtained from ITC experiments (p=0.04). In the case of GLIC, the identified anesthetic binding sites in the crystal structure are among the docking predicted binding sites, but not the top ranked site. 
Docking calculations suggest that the most probable binding site is located in the extracellular domain of GLIC. The predicted affinities correlated significantly with the known EC50s for the six commonly used anesthetics in GLIC for the site identified in the experimental crystal data (p=0.006). However, predicted affinities in apoferritin, human serum albumin, and cytochrome C did not correlate with these six anesthetics’ known experimental EC50s. A weak correlation between the predicted affinities and the octanol/water partition coefficients was observed for the sites in GLIC. Conclusion We demonstrated that anesthetic binding sites and relative affinities can be predicted using docking calculations in an automatic docking server (AutoDock) for both water-soluble and membrane proteins. Correlation of predicted affinity and EC50 for six commonly used general anesthetics was only observed in GLIC, a member of a protein family relevant to anesthetic mechanisms. PMID:22392968

  9. Direct Retrieval of Exterior Orientation Parameters Using A 2-D Projective Transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seedahmed, Gamal H.

    2006-09-01

    Direct solutions are very attractive because they obviate the need for the initial approximations associated with non-linear solutions. The Direct Linear Transformation (DLT) has established itself as a method of choice for direct solutions in photogrammetry and other fields. The use of the DLT with coplanar object space points leads to a rank-deficient model. This rank-deficient model leaves the DLT defined only up to a 2-D projective transformation, which makes the direct retrieval of the exterior orientation parameters (EOPs) a non-trivial task. This paper presents a novel direct algorithm to retrieve the EOPs from the 2-D projective transformation. It is based on a direct relationship between the 2-D projective transformation and the collinearity model using a homogeneous coordinate representation. This representation offers a direct matrix correspondence between the 2-D projective transformation parameters and the collinearity model parameters. This correspondence lends itself to a direct matrix factorization to retrieve the EOPs. An important step in the proposed algorithm is a normalization process that provides the actual link between the 2-D projective transformation and the collinearity model. This paper explains the theoretical basis of the proposed algorithm as well as the necessary steps for its practical implementation. In addition, numerical examples are provided to demonstrate its validity.
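    The matrix-factorization idea in this abstract can be illustrated with the standard planar-homography decomposition. This is a generic textbook sketch assuming known camera intrinsics K and object points on the world plane Z = 0; it is not the paper's exact normalization or matrix correspondence, and all names are illustrative.

```python
import numpy as np

def eops_from_homography(H, K):
    """Recover exterior orientation (R, t) from a plane-induced homography
    H ~ K [r1 r2 t], assuming known intrinsics K and a world plane Z = 0.
    Generic planar-homography factorization, not the paper's algorithm."""
    Hn = np.linalg.inv(K) @ H              # strip the camera intrinsics
    Hn = Hn / np.linalg.norm(Hn[:, 0])     # normalize: columns of R have unit length
    r1, r2, t = Hn[:, 0], Hn[:, 1], Hn[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)            # snap to the nearest rotation matrix
    return U @ Vt, t
```

    With noise-free data this recovers (R, t) up to the overall sign of H; real implementations also resolve that sign, for example by requiring a positive depth.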

  10. Computational nuclear quantum many-body problem: The UNEDF project

    NASA Astrophysics Data System (ADS)

    Bogner, S.; Bulgac, A.; Carlson, J.; Engel, J.; Fann, G.; Furnstahl, R. J.; Gandolfi, S.; Hagen, G.; Horoi, M.; Johnson, C.; Kortelainen, M.; Lusk, E.; Maris, P.; Nam, H.; Navratil, P.; Nazarewicz, W.; Ng, E.; Nobre, G. P. A.; Ormand, E.; Papenbrock, T.; Pei, J.; Pieper, S. C.; Quaglioni, S.; Roche, K. J.; Sarich, J.; Schunck, N.; Sosonkina, M.; Terasaki, J.; Thompson, I.; Vary, J. P.; Wild, S. M.

    2013-10-01

    The UNEDF project was a large-scale collaborative effort that applied high-performance computing to the nuclear quantum many-body problem. The primary focus of the project was on constructing, validating, and applying an optimized nuclear energy density functional, which entailed a wide range of pioneering developments in microscopic nuclear structure and reactions, algorithms, high-performance computing, and uncertainty quantification. UNEDF demonstrated that close associations among nuclear physicists, mathematicians, and computer scientists can lead to novel physics outcomes built on algorithmic innovations and computational developments. This review showcases a wide range of UNEDF science results to illustrate this interplay.

  11. Estimation of contour motion and deformation for nonrigid object tracking

    NASA Astrophysics Data System (ADS)

    Shao, Jie; Porikli, Fatih; Chellappa, Rama

    2007-08-01

    We present an algorithm for nonrigid contour tracking in heavily cluttered background scenes. Based on the properties of nonrigid contour movements, a sequential framework for estimating contour motion and deformation is proposed. We solve the nonrigid contour tracking problem by decomposing it into three subproblems: motion estimation, deformation estimation, and shape regulation. First, we employ a particle filter to estimate the global motion parameters of the affine transform between successive frames. Then we generate a probabilistic deformation map to deform the contour. To improve robustness, multiple cues are used for deformation probability estimation. Finally, we use a shape prior model to constrain the deformed contour. This enables us to retrieve the occluded parts of the contours and accurately track them while allowing shape changes specific to the given object types. Our experiments show that the proposed algorithm significantly improves the tracker performance.

  12. info-gibbs: a motif discovery algorithm that directly optimizes information content during sampling.

    PubMed

    Defrance, Matthieu; van Helden, Jacques

    2009-10-15

    Discovering cis-regulatory elements in genome sequences remains a challenging issue. Several methods rely on the optimization of some target scoring function. The information content (IC), or relative entropy, of a motif has proven to be a good estimator of transcription factor DNA binding affinity. However, these information-based metrics are usually used as a posteriori statistics rather than during the motif search process itself. We introduce here info-gibbs, a Gibbs sampling algorithm that efficiently optimizes the IC or the log-likelihood ratio (LLR) of the motif while keeping computation time low. The method compares well with existing methods like MEME, BioProspector, Gibbs or GAME on both synthetic and biological datasets. Our study shows that motif discovery techniques can be enhanced by directly focusing the search on the motif IC or the motif LLR. info-gibbs is available at http://rsat.ulb.ac.be/rsat/info-gibbs
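    The IC objective named above is easy to state concretely. A minimal sketch of motif information content against a uniform background, assuming a 4 x L nucleotide count matrix and a small pseudocount (both illustrative assumptions, not info-gibbs internals):

```python
import numpy as np

def motif_ic(counts, pseudocount=0.25):
    """Information content (relative entropy vs. a uniform background) of a
    motif given a 4 x L nucleotide count matrix. Sketch of the scoring
    target described in the abstract, not the tool's implementation."""
    counts = np.asarray(counts, dtype=float) + pseudocount
    freqs = counts / counts.sum(axis=0, keepdims=True)   # per-column frequencies
    bg = 0.25                                            # uniform A/C/G/T background
    return float(np.sum(freqs * np.log2(freqs / bg)))
```

    A fully uniform column contributes 0 bits, while a perfectly conserved column approaches the 2-bit maximum.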

  13. Development of a New De Novo Design Algorithm for Exploring Chemical Space.

    PubMed

    Mishima, Kazuaki; Kaneko, Hiromasa; Funatsu, Kimito

    2014-12-01

    In the first stage of development of new drugs, various lead compounds with high activity are required. To design such compounds, we focus on chemical space defined by structural descriptors. New compounds close to areas where highly active compounds exist will show the same degree of activity. We have developed a new de novo design system to search a target area in chemical space. First, highly active compounds are manually selected as initial seeds. Then, the seeds are entered into our system, and structures slightly different from the seeds are generated and pooled. Next, seeds are selected from the new structure pool based on the distance from target coordinates on the map. To test the algorithm, we used two datasets of ligand binding affinity and showed that the proposed generator could produce diverse virtual compounds that had high activity in docking simulations. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Combined Tensor Fitting and TV Regularization in Diffusion Tensor Imaging Based on a Riemannian Manifold Approach.

    PubMed

    Baust, Maximilian; Weinmann, Andreas; Wieczorek, Matthias; Lasser, Tobias; Storath, Martin; Navab, Nassir

    2016-08-01

    In this paper, we consider combined TV denoising and diffusion tensor fitting in DTI using the affine-invariant Riemannian metric on the space of diffusion tensors. Instead of first fitting the diffusion tensors and then denoising them, we define a suitable TV-type energy functional which incorporates the measured DWIs (using an inverse problem setup) and which measures the nearness of neighboring tensors in the manifold. To approach this functional, we propose generalized forward-backward splitting algorithms which combine an explicit and several implicit steps performed on a decomposition of the functional. We validate the performance of the derived algorithms on synthetic and real DTI data. In particular, we work on real 3D data. To our knowledge, the present paper describes the first approach to TV regularization in a combined manifold and inverse problem setup.
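    The affine-invariant metric used above has a closed form for SPD tensors, d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F. A minimal sketch of that distance follows (the metric only, not the paper's TV solver); helper names are illustrative.

```python
import numpy as np

def _spd_inv_sqrt(A):
    """Inverse square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def affine_invariant_dist(A, B):
    """Affine-invariant Riemannian distance between SPD diffusion tensors:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F."""
    M = _spd_inv_sqrt(A) @ B @ _spd_inv_sqrt(A)
    w, _ = np.linalg.eigh(M)               # M is SPD, so log via eigenvalues
    return float(np.sqrt(np.sum(np.log(w) ** 2)))
```

    The name comes from invariance under congruence: d(G A Gᵀ, G B Gᵀ) = d(A, B) for any invertible G.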

  15. A quasi-dense matching approach and its calibration application with Internet photos.

    PubMed

    Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei

    2015-03-01

    This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level across multiple views with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.

  16. Performance and policy dimensions in internet routing

    NASA Technical Reports Server (NTRS)

    Mills, David L.; Boncelet, Charles G.; Elias, John G.; Schragger, Paul A.; Jackson, Alden W.; Thyagarajan, Ajit

    1995-01-01

    The Internet Routing Project, referred to in this report as the 'Highball Project', has been investigating architectures suitable for networks spanning large geographic areas and capable of very high data rates. The Highball network architecture is based on a high speed crossbar switch and an adaptive, distributed, TDMA scheduling algorithm. The scheduling algorithm controls the instantaneous configuration and dwell time of the switch, one of which is attached to each node. In order to send a single burst or a multi-burst packet, a reservation request is sent to all nodes. The scheduling algorithm then configures the switches immediately prior to the arrival of each burst, so it can be relayed immediately without requiring local storage. Reservations and housekeeping information are sent using a special broadcast-spanning-tree schedule. Progress to date in the Highball Project includes the design and testing of a suite of scheduling algorithms, construction of software reservation/scheduling simulators, and construction of a strawman hardware and software implementation. A prototype switch controller and timestamp generator have been completed and are in test. Detailed documentation on the algorithms, protocols and experiments conducted are given in various reports and papers published. Abstracts of this literature are included in the bibliography at the end of this report, which serves as an extended executive summary.

  17. Real Time Intelligent Target Detection and Analysis with Machine Vision

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna; Padgett, Curtis; Brown, Kenneth

    2000-01-01

    We present an algorithm for detecting a specified set of targets for an Automatic Target Recognition (ATR) application. ATR involves processing images for detecting, classifying, and tracking targets embedded in a background scene. We address the problem of discriminating between targets and nontarget objects in a scene by evaluating 40x40 image blocks belonging to an image. Each image block is first projected onto a set of templates specifically designed to separate images of targets embedded in a typical background scene from those background images without targets. These filters are found using directed principal component analysis which maximally separates the two groups. The projected images are then clustered into one of n classes based on a minimum distance to a set of n cluster prototypes. These cluster prototypes have previously been identified using a modified clustering algorithm based on prior sensed data. Each projected image pattern is then fed into the associated cluster's trained neural network for classification. A detailed description of our algorithm will be given in this paper. We outline our methodology for designing the templates, describe our modified clustering algorithm, and provide details on the neural network classifiers. Evaluation of the overall algorithm demonstrates that our detection rates approach 96% with a false positive rate of less than 0.03%.
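    The classification pipeline described above (template projection followed by minimum-distance cluster assignment) can be sketched as follows. Shapes and names are illustrative, the templates would come from directed principal component analysis in the paper, and the per-cluster neural-network stage is omitted:

```python
import numpy as np

def assign_cluster(block, templates, prototypes):
    """Project an image block onto a template basis, then assign it to the
    nearest of n cluster prototypes (minimum Euclidean distance).
    Illustrative sketch of the pipeline the abstract describes."""
    feat = templates @ block.ravel()              # projection onto the templates
    d = np.linalg.norm(prototypes - feat, axis=1) # distance to each prototype
    return int(np.argmin(d))
```

    In the full algorithm the returned cluster index selects which trained neural network classifies the projected block.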

  18. Inverse determination of the penalty parameter in penalized weighted least-squares algorithm for noise reduction of low-dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Guan, Huaiqun; Solberg, Timothy

    2011-07-15

    Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCT with a high-mAs protocol for the first session and then a lower-mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high- and low-mAs projections. The parameter thus obtained can then be used in PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to that of the corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for the PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.
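    The PWLS criterion being minimized has a simple closed form in one dimension. The sketch below smooths a single projection profile with W = diag(w) as the weighting and a first-difference roughness penalty; the paper's actual contribution, estimating beta from a paired high-mAs scan, is not reproduced here.

```python
import numpy as np

def pwls_smooth(y, w, beta):
    """Smooth one projection profile by minimizing the PWLS objective
    (x - y)' W (x - y) + beta * ||D x||^2, with W = diag(w) and D the
    first-difference operator. Closed-form sketch of the criterion only."""
    n = len(y)
    W = np.diag(w)
    D = np.diff(np.eye(n), axis=0)               # (n-1) x n first differences
    return np.linalg.solve(W + beta * D.T @ D, W @ y)
```

    Setting beta = 0 returns the data unchanged, while a very large beta drives the profile toward its weighted mean, which is why choosing the penalty parameter well matters.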

  19. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    NASA Astrophysics Data System (ADS)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetration of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It especially shows great advantages in restoring simply connected flow regimes and the shapes of objects. In addition, a CT experiment on two-phase flow phantoms was conducted with an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.
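    The adaptive probabilities Pc and Pm referred to above follow the general pattern of adaptive genetic algorithms, where crossover and mutation rates fall for above-average individuals so good solutions are protected. A common textbook rule (Srinivas-Patnaik style) is sketched below as an assumption; the paper's improved rule differs in detail.

```python
def adaptive_pc(f_prime, f_max, f_avg, k1=1.0, k3=1.0):
    """One common adaptive crossover rule: for an individual with fitness
    f_prime at or above the population average f_avg, scale Pc down
    linearly toward 0 as f_prime approaches the best fitness f_max;
    below-average individuals keep the full rate k3. Illustrative only."""
    if f_prime >= f_avg and f_max > f_avg:
        return k1 * (f_max - f_prime) / (f_max - f_avg)
    return k3
```

    The mutation probability Pm is typically adapted with the same shape of rule, using smaller constants.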

  20. Testing Algorithmic Skills in Traditional and Non-Traditional Programming Environments

    ERIC Educational Resources Information Center

    Csernoch, Mária; Biró, Piroska; Máth, János; Abari, Kálmán

    2015-01-01

    The Testing Algorithmic and Application Skills (TAaAS) project was launched in the 2011/2012 academic year to test first year students of Informatics, focusing on their algorithmic skills in traditional and non-traditional programming environments, and on the transference of their knowledge of Informatics from secondary to tertiary education. The…

  1. The ESA Cloud CCI project: Generation of Multi Sensor consistent Cloud Properties with an Optimal Estimation Based Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Jerg, M.; Stengel, M.; Hollmann, R.; Poulsen, C.

    2012-04-01

    The ultimate objective of the ESA Climate Change Initiative (CCI) Cloud project is to provide long-term coherent cloud property data sets exploiting and improving on the synergetic capabilities of past, existing, and upcoming European and American satellite missions. The synergetic approach allows not only for improved accuracy and extended temporal and spatial sampling of retrieved cloud properties beyond what single instruments alone can provide, but potentially also for improved (inter-)calibration and enhanced homogeneity and stability of the derived time series. Such advances are required by the scientific community to facilitate further progress in satellite-based climate monitoring, which leads to a better understanding of climate. Some of the primary objectives of ESA Cloud CCI are (1) the development of inter-calibrated radiance data sets, so-called Fundamental Climate Data Records, for ESA and non-ESA instruments through an international collaboration, (2) the development of an optimal estimation based retrieval framework for cloud-related essential climate variables like cloud cover, cloud top height and temperature, and liquid and ice water path, and (3) the development of two multi-annual global data sets for the mentioned cloud properties including uncertainty estimates. These two data sets are characterized by different combinations of satellite systems: the AVHRR heritage product comprising (A)ATSR, AVHRR and MODIS, and the novel (A)ATSR - MERIS product, which is based on a synergetic retrieval using both instruments. Both datasets cover the years 2007-2009 in the first project phase. ESA Cloud CCI will also carry out a comprehensive validation of the cloud property products and provide a common database in the framework of the Global Energy and Water Cycle Experiment (GEWEX). 
The presentation will give an overview of the ESA Cloud CCI project, its goals and approaches, and then continue with results from the Round Robin algorithm comparison exercise carried out at the beginning of the project, which included three algorithms. The purpose of the exercise was to assess and compare existing cloud retrieval algorithms in order to choose one of them as the backbone of the retrieval system, and also to identify areas of potential improvement and general strengths and weaknesses of the algorithms. Furthermore, the presentation will elaborate on the optimal estimation algorithm subsequently chosen to derive the heritage product, which is presently being further developed and will be employed for the AVHRR heritage product. The algorithm's capability to coherently and simultaneously process all radiative input and yield retrieval parameters together with associated uncertainty estimates will be presented, together with first results for the heritage product. In the course of the project the algorithm is being developed into a freely and publicly available community retrieval system for interested scientists.

  2. A new pivoting and iterative text detection algorithm for biomedical images.

    PubMed

    Xu, Songhua; Krauthammer, Michael

    2010-12-01

    There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper's key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that our projection histogram-based text detection approach is well suited for text detection in biomedical images, and that the iterative application of the algorithm boosts performance to an F score of 0.60. We provide a C++ implementation of our algorithm freely available for academic use. Copyright © 2010 Elsevier Inc. All rights reserved.
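    The projection-histogram idea is straightforward to sketch: summing ink pixels along rows reveals candidate text bands. One horizontal pass is shown below (the published algorithm applies such passes iteratively, with pivoting, in both axes); names and the threshold are illustrative.

```python
import numpy as np

def text_rows(binary_img, min_ink=1):
    """Locate candidate text rows from the horizontal projection histogram
    of a binary image (1 = ink). Returns half-open (start, end) row ranges.
    A single pass of the projection idea from the abstract."""
    hist = binary_img.sum(axis=1)               # ink pixels per row
    rows, start = [], None
    for i, v in enumerate(hist):
        if v >= min_ink and start is None:
            start = i                           # entering a text band
        elif v < min_ink and start is not None:
            rows.append((start, i)); start = None
    if start is not None:
        rows.append((start, len(hist)))
    return rows
```

    Applying the same scan to the columns of each detected band, and recursing, yields the iterative refinement the abstract describes.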

  3. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Also, a small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts at the cost of slightly decreased gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and the small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
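    The ML-EM update underlying both algorithms examined above can be written in a few lines for a dense system matrix. This is a generic sketch of the multiplicative EM iteration, not the authors' implementation; OS-EM additionally cycles the same update over ordered subsets of the projection data.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """ML-EM iteration for a nonnegative reconstruction problem y ~ A x:
    x <- x * (A' (y / A x)) / (A' 1). Minimal dense sketch; real CT
    systems use sparse projectors and ordered subsets (OS-EM)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                        # sensitivity (column sums)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # measured / predicted
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

    The update preserves nonnegativity and leaves any exact solution fixed, since the measured/predicted ratio is then identically one.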

  4. On-Line Point Positioning with Single Frame Camera Data

    DTIC Science & Technology

    1992-03-15

    tion algorithms and methods will be found in robotics and industrial quality control. 1. Project data The project has been defined as "On-line point...development and use of the OLT algorithms and methods for applications in robotics, industrial quality control and autonomous vehicle navigation...Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile

  5. Fast projection/backprojection and incremental methods applied to synchrotron light tomographic reconstruction.

    PubMed

    de Lima, Camila; Salomão Helou, Elias

    2018-01-01

    Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N^3) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may take too many iterations in order to achieve acceptable images, thereby making the use of these techniques impractical for high-resolution images. Techniques have been developed in the literature in order to reduce the computational cost of the (back)projection operator to O(N^2 log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N^2 log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.

  6. TVR-DART: A More Robust Algorithm for Discrete Tomography From Limited Projection Data With Automated Gray Value Estimation.

    PubMed

    Zhuge, Xiaodong; Palenstijn, Willem Jan; Batenburg, Kees Joost

    2016-01-01

    In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments on simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.

  7. A Local Fast Marching-Based Diffusion Tensor Image Registration Algorithm by Simultaneously Considering Spatial Deformation and Tensor Orientation

    PubMed Central

    Xue, Zhong; Li, Hai; Guo, Lei; Wong, Stephen T.C.

    2010-01-01

    Spatially aligning diffusion tensor images (DTI) is a key step in quantitatively comparing neural images obtained from different subjects or from the same subject at different time points. Different from traditional scalar or multi-channel image registration methods, tensor orientation should be considered in DTI registration. Recently, several DTI registration methods have been proposed in the literature, but their deformation fields depend purely on extracted tensor features, not on the whole tensor information. Other methods, such as the piece-wise affine transformation and the diffeomorphic non-linear registration algorithms, use analytical gradients of the registration objective functions by simultaneously considering the reorientation and deformation of tensors during the registration. However, only relatively local tensor information, such as voxel-wise tensor similarity, is utilized. This paper proposes a new DTI registration algorithm, called local fast marching (FM)-based simultaneous registration. The algorithm not only considers the orientation of tensors during registration but also utilizes the neighborhood tensor information of each voxel to drive the deformation, where such neighborhood tensor information is extracted by a local fast marching algorithm around the voxels of interest. These local fast marching-based tensor features efficiently reflect the diffusion patterns around each voxel within a spherical neighborhood and can capture relatively distinctive features of the anatomical structures. Using simulated and real human brain DTI data, the experimental results show that the proposed algorithm is more accurate compared with FA-based registration and more efficient than its counterpart, the neighborhood tensor similarity-based registration. PMID:20382233

  8. Network intrusion detection based on a general regression neural network optimized by an improved artificial immune algorithm.

    PubMed

    Wu, Jianfa; Peng, Dahao; Li, Zhuping; Zhao, Li; Ling, Huanzhang

    2015-01-01

    To effectively and accurately detect and classify network intrusion data, this paper introduces a general regression neural network (GRNN) based on the artificial immune algorithm with elitist strategies (AIAE). The elitist archive and elitist crossover were combined with the artificial immune algorithm (AIA) to produce the AIAE-GRNN algorithm, with the aim of improving its adaptivity and accuracy. In this paper, the mean square errors (MSEs) were used as the affinity function. The AIAE was used to optimize the smoothing factors of the GRNN; then, the optimal smoothing factor was solved and substituted into the trained GRNN. Thus, the intrusive data were classified. The paper selected GRNNs separately optimized using a genetic algorithm (GA), particle swarm optimization (PSO), and fuzzy C-means clustering (FCM) to enable a comparison of these approaches. As the results show, the AIAE-GRNN achieves a higher classification accuracy than the PSO-GRNN, although its running time is long. FCM and GA-GRNN were eliminated because of their deficiencies in terms of accuracy and convergence. To improve the running speed, the paper adopted principal component analysis (PCA) to reduce the dimensions of the intrusive data. With the reduction in dimensionality, the PCA-AIAE-GRNN loses less accuracy and converges better than the PCA-PSO-GRNN, and its running speed is correspondingly improved. The experimental results show that the AIAE-GRNN has higher robustness and accuracy than the other algorithms considered and can thus be used to classify intrusive data.
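    The GRNN whose smoothing factor the AIAE optimizes is a kernel regression (Nadaraya-Watson) estimator with a single free parameter. A minimal sketch of the prediction step, with sigma playing the role of the smoothing factor; names are illustrative.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma):
    """General regression neural network (Nadaraya-Watson) prediction:
    a Gaussian-kernel weighted average of the training targets, where
    sigma is the smoothing factor being optimized in the abstract."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to x
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # pattern-layer activations
    return float(np.sum(w * y_train) / np.sum(w))
```

    Small sigma makes the GRNN behave like nearest-neighbor lookup, while large sigma averages over the whole training set, which is why the smoothing factor dominates accuracy.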

  9. Data-Parallel Algorithm for Contour Tree Construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sewell, Christopher Meyer; Ahrens, James Paul; Carr, Hamish

    2017-01-19

    The goal of this project is to develop algorithms for additional visualization and analysis filters in order to expand the functionality of the VTK-m toolkit to support less critical but commonly used operators.

  10. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms.

    PubMed

    Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael

    2014-10-01

    This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. 
The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
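
    The BoW quantization step described above can be sketched on a conventional computer. The following is a minimal, illustrative toy, not the authors' pipeline: sliding windows of a single time series are assigned to their nearest codebook entries (a codebook that would in practice be learned, e.g. by k-means), and the series is summarized by a normalized word histogram. All names and parameters here are assumptions.

```python
import numpy as np

def bow_features(series, codebook, window=16, step=8):
    """Quantize a 1-D time series into a bag-of-words histogram.

    Each sliding window is assigned to its nearest codebook entry
    (a "word"); the series is summarized by word frequencies.
    """
    windows = np.array([series[i:i + window]
                        for i in range(0, len(series) - window + 1, step)])
    # nearest codeword for every window (Euclidean distance)
    d = ((windows[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))   # 8 "words"; would be learned by k-means
series = rng.normal(size=200)
h = bow_features(series, codebook)
print(h.shape, round(float(h.sum()), 6))
```

    The resulting histograms are the fixed-length features that a multi-label classifier (or a supervised dimensionality reduction step) would then consume.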

  11. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
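
    The three ideas above can be combined in a few lines. The sketch below is an assumption-laden toy, not the authors' code: it projects a point onto the intersection of a unit ball and a half-space, each set available only through its own easy projection operator. Each MM step minimizes the quadratic surrogate f(x) + (rho/2) * sum_i ||x - P_i(x_k)||^2 in closed form, while rho is gradually increased as in the classical penalty method.

```python
import numpy as np

def proj_ball(x, r=1.0):
    n = np.linalg.norm(x)
    return x if n <= r else x * (r / n)

def proj_halfspace(x, lo=0.3):          # the set {x : x[0] >= lo}
    y = x.copy()
    y[0] = max(y[0], lo)
    return y

def distance_majorize(a, projections, rho=1.0, iters=500):
    """Minimize ||x - a||^2 / 2 over an intersection of sets, each given
    only through its (easy) projection operator.  Every step solves the
    quadratic MM surrogate exactly; rho is slowly increased so the
    penalty eventually enforces the constraints."""
    x = a.copy()
    for _ in range(iters):
        ps = [P(x) for P in projections]
        x = (a + rho * sum(ps)) / (1.0 + rho * len(ps))
        rho *= 1.02                      # classical penalty continuation
    return x

a = np.array([-2.0, 2.0])
x = distance_majorize(a, [proj_ball, proj_halfspace])
print(np.round(x, 3))
```

    Only projections and vector averages appear in the loop, which is why the scheme scales to dimensions where forming and factoring a Newton system would be prohibitive.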

  12. Low dose reconstruction algorithm for differential phase contrast imaging.

    PubMed

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Marco, Stampanoni

    2011-01-01

Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method to reconstruct the distribution of the refractive index, rather than the attenuation coefficient, in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT which builds on compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem for DPCI can be transformed into an already-solved problem from transmission CT. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data. Thus it can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.
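
    The paper's DART solver is specialized to the differential projection matrix, but its algebraic-reconstruction core can be illustrated with the classical Kaczmarz/ART row-action iteration on a generic linear system. The toy below is a hedged sketch under that simplification, not the authors' algorithm:

```python
import numpy as np

def kaczmarz(A, b, iters=500):
    """Algebraic reconstruction technique (ART): sweep over the rows of
    A, projecting the current estimate onto each row's hyperplane
    a_i . x = b_i in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for ai, bi in zip(A, b):
            x += (bi - ai @ x) / (ai @ ai) * ai
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))        # toy stand-in for a projection matrix
x_true = rng.normal(size=10)
b = A @ x_true                       # noise-free "measurements"
x = kaczmarz(A, b)
print(round(float(np.linalg.norm(x - x_true)), 8))
```

    In the undersampled regime the paper targets, this data-consistency iteration would be interleaved with a sparsity-promoting step, which is where the compressed sensing machinery enters.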

  13. ODTbrain: a Python library for full-view, dense diffraction tomography.

    PubMed

    Müller, Paul; Schürmann, Mirjam; Guck, Jochen

    2015-11-04

Analyzing the three-dimensional (3D) refractive index distribution of a single cell makes it possible to describe and characterize its inner structure in a marker-free manner. A dense, full-view tomographic data set is a set of images of a cell acquired for multiple rotational positions, densely distributed from 0 to 360 degrees. The reconstruction is commonly realized by projection tomography, which is based on the inversion of the Radon transform. The reconstruction quality of projection tomography is greatly improved when first order scattering, which becomes relevant when the imaging wavelength is comparable to the characteristic object size, is taken into account. This advanced reconstruction technique is called diffraction tomography. While many implementations of projection tomography are available today, there has so far been no publicly available implementation of diffraction tomography. We present a Python library that implements the backpropagation algorithm for diffraction tomography in 3D. By establishing benchmarks based on finite-difference time-domain (FDTD) simulations, we showcase the superiority of the backpropagation algorithm over the backprojection algorithm. Furthermore, we discuss how measurement parameters influence the reconstructed refractive index distribution, and we give insights into the applicability of diffraction tomography to biological cells. The present software library contains a robust implementation of the backpropagation algorithm. The algorithm is ideally suited for application to biological cells. Furthermore, the implementation is a drop-in replacement for the classical backprojection algorithm and is made available to the large user community of the Python programming language.
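
    ODTbrain itself should be consulted for the backpropagation algorithm; as a point of reference, the classical (unfiltered) backprojection baseline it improves upon can be sketched in a few lines of NumPy. This toy smears each parallel-beam projection back across a 2D grid; all sizes and names are illustrative, and a real reconstruction would additionally apply a ramp filter.

```python
import numpy as np

def backproject(sinogram, angles, n):
    """Unfiltered backprojection of a parallel-beam sinogram onto an
    n x n grid (nearest-neighbour detector sampling)."""
    recon = np.zeros((n, n))
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n] - c
    for proj, th in zip(sinogram, angles):
        t = xx * np.cos(th) + yy * np.sin(th) + c   # detector coordinate
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        recon += proj[idx]
    return recon / len(angles)

n = 33
angles = np.linspace(0.0, np.pi, 16, endpoint=False)
sino = np.zeros((16, n))
sino[:, n // 2] = 1.0            # projections of a point at the grid center
recon = backproject(sino, angles, n)
print(np.unravel_index(recon.argmax(), recon.shape))
```

    Backpropagation replaces the straight-ray smearing step with a propagation kernel derived from the Born/Rytov approximation, which is what accounts for first order scattering.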

  14. Modeling and new equipment definition for the vibration isolation box equipment system

    NASA Technical Reports Server (NTRS)

    Sani, Robert L.

    1993-01-01

Our MSAD-funded research project is to provide numerical modeling support for the VIBES (Vibration Isolation Box Experiment System), an IML2 flight experiment being built by the Japanese research team of Dr. H. Azuma of the Japanese National Aerospace Laboratory. During this reporting period, the following have been accomplished. A semi-consistent mass finite element projection algorithm for 2D and 3D Boussinesq flows has been implemented on Sun, HP, and Cray platforms. The algorithm has better phase speed accuracy than similar finite difference or lumped mass finite element algorithms, an attribute which is essential for addressing realistic g-jitter effects as well as convectively-dominated transient systems. The projection algorithm has been benchmarked against solutions generated via the commercial code FIDAP, and it appears to be accurate as well as computationally efficient. Optimization and potential parallelization studies are underway; our implementation to date has focused on execution of the basic algorithm with at most a concern for vectorization. The initial time-varying gravity Boussinesq flow simulation is being set up: the mesh is being designed and the input file is being generated. Some preliminary 'small mesh' cases will be attempted on our HP9000/735 while our request to MSAD for supercomputing resources is being addressed. The Japanese research team for VIBES was visited, the current setup and status of the physical experiment were obtained, and an ongoing e-mail communication link was established.

  15. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    NASA Astrophysics Data System (ADS)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
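
    The ordered-subsets idea can be sketched generically: the multiplicative ML-EM update is applied block by block over subsets of the projection rows. The code below is a minimal toy on a random nonnegative system matrix, not the authors' SPECT model; all sizes and the subset count are illustrative.

```python
import numpy as np

def os_em(A, y, subsets, iters=500):
    """Ordered-subsets EM: the multiplicative ML-EM update is applied
    using one block of projection rows at a time, which speeds up
    convergence roughly in proportion to the number of subsets."""
    x = np.ones(A.shape[1])
    for _ in range(iters):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            ratio = ys / np.maximum(As @ x, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(40, 8))      # toy nonnegative system matrix
x_true = rng.uniform(0.5, 2.0, size=8)
y = A @ x_true                               # noise-free "projections"
subsets = np.array_split(np.arange(40), 4)   # 4 ordered subsets
x = os_em(A, y, subsets)
print(round(float(np.linalg.norm(A @ x - y) / np.linalg.norm(y)), 6))
```

    With a single subset containing all rows this reduces to plain ML-EM; RBI differs in how each block update is rescaled, which is what the paper's noise analysis compares.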

  16. An Efficient Distributed Compressed Sensing Algorithm for Decentralized Sensor Network.

    PubMed

    Liu, Jing; Huang, Kaiyu; Zhang, Guoxian

    2017-04-20

We consider the joint sparsity Model 1 (JSM-1) in a decentralized scenario, where a number of sensors are connected through a network and there is no fusion center. A novel algorithm, named distributed compact sensing matrix pursuit (DCSMP), is proposed to exploit the computational and communication capabilities of the sensor nodes. In contrast to conventional distributed compressed sensing algorithms, which adopt a random sensing matrix, the proposed algorithm focuses on deterministic sensing matrices built directly on real acquisition systems. The proposed DCSMP algorithm can be divided into two independent parts: the common and innovation support set estimation processes. The goal of the common support set estimation process is to obtain an estimated common support set by fusing the candidate support set information from an individual node and its neighboring nodes. In the following innovation support set estimation process, the measurement vector is projected into a subspace perpendicular to the subspace spanned by the columns indexed by the estimated common support set, to remove the impact of the estimated common support set. We can then search the innovation support set using an orthogonal matching pursuit (OMP) algorithm based on the projected measurement vector and projected sensing matrix. In the proposed DCSMP algorithm, the process of estimating the common component/support set is decoupled from that of estimating the innovation component/support set. Thus, an inaccurately estimated common support set has no impact on estimating the innovation support set. It is proven that, provided the estimated common support set contains the true common support set, the proposed algorithm finds the true innovation support set correctly. 
Moreover, since the innovation support set estimation process is independent of the common support set estimation process, no prior knowledge of the cardinality of either set is required; thus, the proposed DCSMP algorithm is capable of tackling the unknown-sparsity problem successfully.
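
    The OMP step used in the innovation support estimation is standard and can be sketched as follows (a generic textbook OMP, not the DCSMP implementation; dictionary size, sparsity, and the support indices are illustrative):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares on the
    support found so far."""
    support, resid = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ resid)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        resid = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x, sorted(support)

rng = np.random.default_rng(3)
Phi = rng.normal(size=(30, 60))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm columns
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.5, -2.0, 1.0]       # 3-sparse ground truth
y = Phi @ x_true
x_hat, support = omp(Phi, y, 3)
print(support)
```

    In DCSMP the same routine runs on the projected measurement vector and projected sensing matrix, i.e. after the common-support subspace has been removed.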

  17. Measuring Norfloxacin Binding to Trypsin Using a Fluorescence Quenching Assay in an Upper-Division, Integrated Laboratory Course

    ERIC Educational Resources Information Center

    Hicks, Katherine A.

    2016-01-01

    Fluorescence quenching assays are often used to measure dissociation constants that quantify the binding affinity between small molecules and proteins. In an upper-division undergraduate laboratory course, where students work on projects using a guided inquiry-based approach, a binding titration experiment at physiological pH is performed to…

  18. Evaluation of Computer Tools for Idea Generation and Team Formation in Project-Based Learning

    ERIC Educational Resources Information Center

    Ardaiz-Villanueva, Oscar; Nicuesa-Chacon, Xabier; Brene-Artazcoz, Oscar; Sanz de Acedo Lizarraga, Maria Luisa; Sanz de Acedo Baquedano, Maria Teresa

    2011-01-01

    The main objective of this research was to validate the effectiveness of Wikideas and Creativity Connector tools to stimulate the generation of ideas and originality by university students organized into groups according to their indexes of creativity and affinity. Another goal of the study was to evaluate the classroom climate created by these…

  19. EFFECT OF HUMIC ACID ON UPTAKE AND TRANSFER OF COPPER FROM MICROBES TO CILIATES TO COPEPODS

    EPA Science Inventory

    This research is part of an ongoing project designed to determine the effect of humic acid on the uptake and transfer of metals by marine organisms at the lower end of the food chain. Binding affinities for Cu, Cd, Zn, and Cr to Suwannee River humic acid were determined at variou...

  20. Project: Internal communications

    NASA Technical Reports Server (NTRS)

    Black, Lydia

    1994-01-01

The purpose of this study was to ascertain the perceived information needs of NASA Langley employees. One hundred and twelve face-to-face interviews were conducted with a representative sample of aerospace technologists, administrative professionals, technicians, and secretarial/clerical personnel. Results of employee perceptions are analyzed and summarized using affinity diagramming. Particular strategies to maximize use of existing internal communication networks are discussed.

  1. Model reference adaptive control of robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo

    1991-01-01

    This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible joint arm and a Unimation PUMA 560 arm; and these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT based MRAC algorithms had several problems. The original algorithm that was developed guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants. This condition is very restrictive, since most systems do not satisfy this assumption. Further developments to the algorithm led to an expansion of the number of plants that could be controlled, however, a steady state error was introduced in the response. These problems led to the introduction of some modifications to the algorithms so that they would be able to control a wider class of plants and at the same time would asymptotically track the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned before. The results of the simulations are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.

  2. A Node Linkage Approach for Sequential Pattern Mining

    PubMed Central

    Navarro, Osvaldo; Cumplido, René; Villaseñor-Pineda, Luis; Feregrino-Uribe, Claudia; Carrasco-Ochoa, Jesús Ariel

    2014-01-01

    Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, the current approaches prove inefficient when dealing with large input datasets, a large number of different symbols and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state of the art algorithms. PMID:24933123
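
    The pseudo-projection idea, keeping (sequence-id, offset) pointers into the original database instead of materializing projected suffix databases, can be sketched as follows. This is a hedged toy in the spirit of PrefixSpan-style pseudo-projection, not the NLDFT implementation:

```python
def pseudo_project(db, projections, symbol):
    """Extend a pseudo-projected database by one symbol.  Instead of
    copying suffixes, each projected sequence is kept as a
    (sequence-id, offset) pointer into the original database."""
    out = []
    for sid, start in projections:
        seq = db[sid]
        for pos in range(start, len(seq)):
            if seq[pos] == symbol:
                out.append((sid, pos + 1))   # suffix begins after the match
                break
    return out

db = ["abcab", "acbab", "babca"]
full = [(i, 0) for i in range(len(db))]      # every sequence, from the start
p_a = pseudo_project(db, full, "a")          # database projected on prefix "a"
p_ab = pseudo_project(db, p_a, "b")          # ... then on prefix "ab"
print(p_a, p_ab)                             # len(p_ab) is the support of "ab"
```

    Because only pointers are stored, the memory held during a depth-first descent is proportional to the current branch, which is the property the abstract exploits.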

  3. Computer algorithms for automated detection and analysis of local Ca2+ releases in spontaneously beating cardiac pacemaker cells

    PubMed Central

    Kim, Mary S.; Tsutsui, Kenta; Stern, Michael D.; Lakatta, Edward G.; Maltsev, Victor A.

    2017-01-01

    Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D-camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two-dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for affine transform, producing a transformed image series where the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR was detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced-Ca2+-release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. 
While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and embers in muscle cells and Ca2+ puffs and syntillas in neurons. PMID:28683095
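
    The frame-to-frame differential test can be illustrated with a crude stand-in: flag a frame as containing a candidate LCR "birth" when enough pixels brighten by more than a threshold between consecutive frames. The thresholds, array sizes, and synthetic data below are all illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def detect_births(frames, dthresh=0.5, min_area=4):
    """Flag frames containing a candidate 'birth' event: enough pixels
    whose frame-to-frame increase exceeds dthresh (a crude stand-in for
    the paper's per-pixel differential sensitivity test)."""
    births = []
    for t in range(1, len(frames)):
        mask = (frames[t] - frames[t - 1]) > dthresh
        if mask.sum() >= min_area:
            births.append(t)
    return births

rng = np.random.default_rng(4)
frames = rng.normal(0.0, 0.05, size=(10, 16, 16))   # noisy image series
frames[6, 4:7, 4:7] += 1.0                          # synthetic local release
print(detect_births(frames))
```

    The full pipeline additionally stabilizes the contracting cell with the midline-based affine transform and applies spatial filtering before any such per-pixel test.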

  4. Comparison of three methods for registration of abdominal/pelvic volume data sets from functional-anatomic scans

    NASA Astrophysics Data System (ADS)

    Mahmoud, Faaiza; Ton, Anthony; Crafoord, Joakim; Kramer, Elissa L.; Maguire, Gerald Q., Jr.; Noz, Marilyn E.; Zeleznik, Michael P.

    2000-06-01

    The purpose of this work was to evaluate three volumetric registration methods in terms of technique, user-friendliness and time requirements. CT and SPECT data from 11 patients were interactively registered using: a 3D method involving only affine transformation; a mixed 3D - 2D non-affine (warping) method; and a 3D non-affine (warping) method. In the first method representative isosurfaces are generated from the anatomical images. Registration proceeds through translation, rotation, and scaling in all three space variables. Resulting isosurfaces are fused and quantitative measurements are possible. In the second method, the 3D volumes are rendered co-planar by performing an oblique projection. Corresponding landmark pairs are chosen on matching axial slice sets. A polynomial warp is then applied. This method has undergone extensive validation and was used to evaluate the results. The third method employs visualization tools. The data model allows images to be localized within two separate volumes. Landmarks are chosen on separate slices. Polynomial warping coefficients are generated and data points from one volume are moved to the corresponding new positions. The two landmark methods were the least time consuming (10 to 30 minutes from start to finish), but did demand a good knowledge of anatomy. The affine method was tedious and required a fair understanding of 3D geometry.

  5. Detection of Nitrogen Content in Rubber Leaves Using Near-Infrared (NIR) Spectroscopy with Correlation-Based Successive Projections Algorithm (SPA).

    PubMed

    Tang, Rongnian; Chen, Xupeng; Li, Chuang

    2018-05-01

Near-infrared spectroscopy is an efficient, low-cost technology that has potential as an accurate method for detecting the nitrogen content of natural rubber leaves. The successive projections algorithm (SPA) is a widely used variable selection method for multivariate calibration, which uses projection operations to select a variable subset with minimum multi-collinearity. However, due to the fluctuation of correlation between variables, high collinearity may still exist in non-adjacent variables of the subset obtained by the basic SPA. Based on an analysis of the correlation matrix of the spectral data, this paper proposes a correlation-based SPA (CB-SPA) that applies the successive projections algorithm within regions of consistent correlation. The results show that CB-SPA can select variable subsets with more valuable variables and less multi-collinearity. Meanwhile, models established on the CB-SPA subset outperform basic SPA subsets in predicting nitrogen content in terms of both cross-validation and external prediction. Moreover, CB-SPA is more efficient: the time cost of its selection procedure is one-twelfth that of the basic SPA.
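
    The basic SPA projection loop, which CB-SPA then restricts to regions of consistent correlation, can be sketched as follows. This is a generic textbook version with illustrative synthetic data, not the authors' code:

```python
import numpy as np

def spa(X, k, start):
    """Successive projections algorithm: starting from one column,
    repeatedly project all columns onto the orthogonal complement of
    the last selected column and pick the one with the largest
    remaining norm (i.e. the least collinear candidate)."""
    Xp = X.astype(float).copy()
    selected = [start]
    for _ in range(k - 1):
        v = Xp[:, selected[-1]]
        Xp = Xp - np.outer(v, v @ Xp) / (v @ v)   # orthogonal projection
        norms = np.linalg.norm(Xp, axis=0)
        norms[selected] = -1.0                    # never re-select
        selected.append(int(norms.argmax()))
    return selected

rng = np.random.default_rng(5)
base = rng.normal(size=(50, 3))                  # 3 informative "wavelengths"
X = np.hstack([base, base[:, :1] * 1.01 + 0.01]) # column 3 nearly duplicates 0
print(spa(X, 3, start=0))
```

    Because the near-duplicate column loses almost all of its norm once its collinear partner is projected out, SPA skips it, which is exactly the anti-collinearity behavior the abstract describes.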

  6. Beam hardening correction in CT myocardial perfusion measurement

    NASA Astrophysics Data System (ADS)

    So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim

    2009-05-01

This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and of the animals most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings of the correction algorithm, which depend upon the anatomy of the scanned subject, for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.

  7. Online Sequential Projection Vector Machine with Adaptive Data Mean Update

    PubMed Central

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM); it derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the method easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, the adaptive multihyperplane machine (AMM), the primal estimated subgradient solver (Pegasos), the online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of OSPVM. PMID:27143958

  8. Online Sequential Projection Vector Machine with Adaptive Data Mean Update.

    PubMed

    Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei

    2016-01-01

We propose a simple online learning algorithm especially suited to high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM); it derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the method easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, the adaptive multihyperplane machine (AMM), the primal estimated subgradient solver (Pegasos), the online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of OSPVM.

  9. MPL-Net data products available at co-located AERONET sites and field experiment locations

    NASA Astrophysics Data System (ADS)

    Welton, E. J.; Campbell, J. R.; Berkoff, T. A.

    2002-05-01

Micro-pulse lidar (MPL) systems are small, eye-safe lidars capable of profiling the vertical distribution of aerosol and cloud layers. There are now over 20 MPL systems around the world, and they have been used in numerous field experiments. A new project, MPL-Net, was started at NASA Goddard Space Flight Center in 2000: a coordinated network of long-term MPL sites. The network also supports a limited number of field experiments each year. Most MPL-Net sites and field locations are co-located with AERONET sunphotometers. At these locations, the AERONET and MPL-Net data are combined to provide both column and vertically resolved aerosol and cloud measurements. The MPL-Net project coordinates the maintenance and repair of all instruments in the network. In addition, data are archived and processed by the project using common, standardized algorithms that have been developed and utilized over the past 10 years. These procedures ensure that stable, calibrated MPL systems are operating at the sites and that data quality remains high. Rigorous uncertainty calculations are performed on all MPL-Net data products. Automated, real-time level 1.0 data processing algorithms have been developed and are operational; they process the raw MPL data into range-corrected, uncalibrated lidar signals. Automated, real-time level 1.5 algorithms are also operational; they calibrate the MPL systems, determine cloud and aerosol layer heights, and calculate the optical depth and extinction profile of the aerosol boundary layer. The co-located AERONET sunphotometer provides the aerosol optical depth, which is used as a constraint to solve for the extinction-to-backscatter ratio and the aerosol extinction profile. Browse images and data files are available on the MPL-Net website. 
An overview of the processing algorithms and initial results from selected sites and field experiments will be presented. The capability of the MPL-Net project to produce automated real-time (next day) profiles of aerosol extinction will be shown. Finally, early results from Level 2.0 and Level 3.0 algorithms currently under development will be presented. The level 3.0 data provide continuous (day/night) retrievals of multiple aerosol and cloud heights, and optical properties of each layer detected.

  10. ProperCAD: A portable object-oriented parallel environment for VLSI CAD

    NASA Technical Reports Server (NTRS)

    Ramkumar, Balkrishna; Banerjee, Prithviraj

    1993-01-01

Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines for which they were designed. As a result, algorithms designed to date are dependent on the architecture for which they are developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: ProperCAD, a portable object-oriented parallel environment for CAD algorithms. The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being built on a general-purpose platform for portable parallel programming called CARM, together with a truly object-oriented C++ environment specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an NCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for other applications that were developed: test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.

  11. DNA algorithms of implementing biomolecular databases on a biological computer.

    PubMed

    Chang, Weng-Long; Vasilakos, Athanasios V

    2015-01-01

    In this paper, DNA algorithms are proposed to perform eight operations of relational algebra (calculus), which include Cartesian product, union, set difference, selection, projection, intersection, join, and division, on biomolecular relational databases.
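
    For reference, the same relational-algebra operations have a compact conventional-computer form. The sketch below (illustrative relation names and tuples, not from the paper) shows selection, projection, and natural join over lists of dictionaries:

```python
def select(rel, pred):          # sigma: keep tuples satisfying a predicate
    return [t for t in rel if pred(t)]

def project(rel, attrs):        # pi: keep attributes, drop duplicate tuples
    seen, out = set(), []
    for t in rel:
        key = tuple(t[a] for a in attrs)
        if key not in seen:
            seen.add(key)
            out.append({a: t[a] for a in attrs})
    return out

def join(r, s):                 # natural join on shared attribute names
    shared = set(r[0]) & set(s[0]) if r and s else set()
    return [{**t, **u} for t in r for u in s
            if all(t[a] == u[a] for a in shared)]

emp = [{"name": "ada", "dept": 1}, {"name": "alan", "dept": 2}]
dept = [{"dept": 1, "title": "math"}, {"dept": 2, "title": "cs"}]
result = join(select(emp, lambda t: t["dept"] == 1),
              project(dept, ["dept", "title"]))
print(result)
```

    The paper's contribution is realizing each of these set-level operations with DNA strand manipulations rather than with in-memory loops like the ones above.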

  12. Advanced Oil Spill Detection Algorithms For Satellite Based Maritime Environment Monitoring

    NASA Astrophysics Data System (ADS)

    Radius, Andrea; Azevedo, Rui; Sapage, Tania; Carmo, Paulo

    2013-12-01

In recent years, the increasing occurrence of pollution and the alarming deterioration of the environmental health of the sea have led to the need for global monitoring capabilities, namely for marine environment management in terms of oil spill detection and indication of the suspected polluter. The sensitivity of Synthetic Aperture Radar (SAR) to different phenomena on the sea, especially oil spills and vessels, makes it a key instrument for global pollution monitoring. The SAR performance in maritime pollution monitoring is being operationally explored by a set of service providers on behalf of the European Maritime Safety Agency (EMSA), which launched the CleanSeaNet (CSN) project, a pan-European satellite based oil monitoring service, in 2007. EDISOFT, a CSN service provider from the beginning, is continuously investing in R&D activities that will ultimately lead to better algorithms and better performance in oil spill detection from SAR imagery. This strategy is being pursued through EDISOFT's participation in the FP7 EC Sea-U project and in the Automatic Oil Spill Detection (AOSD) ESA project. The Sea-U project aims to improve the current state of oil spill detection algorithms by maximizing informative content through data fusion, exploiting different types of data/sensors, and developing advanced image processing, segmentation, and classification techniques. The AOSD project is closely related to the operational segment, because it is focused on the automation of the oil spill detection processing chain, integrating auxiliary data, like wind information, together with image and geometry analysis techniques. 
The synergy between these different objectives (R&D versus operational) allowed EDISOFT to develop oil spill detection software that combines the operational, automatic aspect, obtained through dedicated integration of the processing chain into the existing open-source NEST software, with new detection, filtering and classification algorithms. In particular, a dedicated filtering algorithm based on wavelet filtering was developed to improve oil spill detection and classification. In this work we present the functionalities of the developed software and the main results supporting the validity of the developed algorithm.

  13. Project risk management in the construction of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Titarenko, Boris; Hasnaoui, Amir; Titarenko, Roman; Buzuk, Liliya

    2018-03-01

    This paper presents project risk management methods that make it possible to better identify risks in the construction of high-rise buildings and to manage them throughout the life cycle of the project. One of the project risk management processes is quantitative risk analysis, which usually includes assessing the potential impact of project risks and their probabilities. This paper reviews the most popular methods of risk probability assessment and indicates the advantages of the robust approach over traditional methods. Within the framework of the project risk management model, the robust approach of P. Huber is applied and extended to the tasks of regression analysis of project data. The suggested algorithms for assessing the parameters of statistical models yield reliable estimates. A review of the theoretical problems in the development of robust models built on the methodology of minimax estimates was carried out, and an algorithm for the situation of asymmetric "contamination" was developed.

  14. Improving Cancer Detection and Dose Efficiency in Dedicated Breast Cancer CT

    DTIC Science & Technology

    2010-02-01

    source trajectory and data truncation, which can however be solved with the back-projection filtration (BPF) algorithm [6,7]. I have used the BPF ... high to low radiation dose levels. I have investigated noise properties in images reconstructed by use of FDK and BPF algorithms at different noise ... analytic algorithms such as the FDK and BPF algorithms are applied to sparse-view data, the reconstruction images will contain artifacts such as streak

  15. Algorithm of OMA for large-scale orthology inference

    PubMed Central

    Roth, Alexander CJ; Gonnet, Gaston H; Dessimoz, Christophe

    2008-01-01

    Background OMA is a project that aims to identify orthologs within publicly available, complete genomes. With 657 genomes analyzed to date, OMA is one of the largest projects of its kind. Results The algorithm of OMA improves upon the standard bidirectional best-hit approach in several respects: it uses evolutionary distances instead of scores, considers distance inference uncertainty, includes many-to-many orthologous relations, and accounts for differential gene losses. Herein, we describe in detail the algorithm for inference of orthology and provide the rationale for parameter selection through multiple tests. Conclusion OMA contains several novel improvements for orthology inference and provides a unique dataset of large-scale orthology assignments. PMID:19055798
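
    The abstract above describes OMA as refining the standard bidirectional best-hit (BBH) approach. The following is a minimal sketch of that BBH baseline only, using a distance table (smaller = evolutionarily closer) as OMA does instead of alignment scores; the gene names and distances are hypothetical, and OMA's refinements (distance uncertainty, many-to-many orthologs, differential gene loss) are not modeled here.

```python
def best_hit(gene, others, dist):
    """Gene in `others` at the smallest evolutionary distance from `gene`."""
    return min(others, key=lambda g: dist[gene, g])

def bbh_pairs(genome_a, genome_b, dist):
    """(a, b) pairs that are mutually each other's best hit."""
    pairs = []
    for a in genome_a:
        b = best_hit(a, genome_b, dist)
        if best_hit(b, genome_a, dist) == a:
            pairs.append((a, b))
    return pairs

# Toy genomes and a symmetric, hypothetical distance table.
A, B = ["a1", "a2", "a3"], ["b1", "b2", "b3"]
base = {("a1", "b1"): 0.1, ("a1", "b2"): 0.9, ("a1", "b3"): 0.7,
        ("a2", "b1"): 0.8, ("a2", "b2"): 0.2, ("a2", "b3"): 0.6,
        ("a3", "b1"): 0.3, ("a3", "b2"): 0.5, ("a3", "b3"): 0.4}
dist = {}
for (a, b), d in base.items():
    dist[a, b] = dist[b, a] = d

print(bbh_pairs(A, B, dist))  # [('a1', 'b1'), ('a2', 'b2')]
```

    Note that a3's best hit is b1, but b1's best hit is a1, so a3 forms no pair; handling such unresolved genes is exactly where OMA's richer model departs from plain BBH.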

  16. Lightning Jump Algorithm and Relation to Thunderstorm Cell Tracking, GLM Proxy and other Meteorological Measurements

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Carey, Larry; Cecil, Dan; Bateman, Monte; Stano, Geoffrey; Goodman, Steve

    2012-01-01

    The objective of this project is to refine, adapt and demonstrate the Lightning Jump Algorithm (LJA) for transition to GOES-R GLM (Geostationary Lightning Mapper) readiness and to establish a path to operations. Ongoing work includes reducing risk in the GLM lightning proxy, cell tracking, LJA algorithm automation, and data fusion (e.g., radar + lightning).

  17. Searching Information Sources in Networks

    DTIC Science & Technology

    2017-06-14

    During the course of this project, we made significant progress in multiple directions of information detection... result on information source detection on non-tree networks; (2) The development of information source localization algorithms to detect multiple... information sources. The algorithms have provable performance guarantees and outperform existing algorithms in

  18. An algorithm for the split-feasibility problems with application to the split-equality problem.

    PubMed

    Chuang, Chih-Sheng; Chen, Chi-Ming

    2017-01-01

    In this paper, we study the split-feasibility problem in Hilbert spaces by using the projected reflected gradient algorithm. As applications, we study the convex linear inverse problem and the split-equality problem in Hilbert spaces, and we give new algorithms for these problems. Finally, numerical results are given for our main results.

  19. Teaching Computation in Primary School without Traditional Written Algorithms

    ERIC Educational Resources Information Center

    Hartnett, Judy

    2015-01-01

    Concerns regarding the dominance of the traditional written algorithms in schools have been raised by many mathematics educators, yet the teaching of these procedures remains a dominant focus in primary schools. This paper reports on a project in one school where the staff agreed to put the teaching of the traditional written algorithm aside,…

  20. An opposite view data replacement approach for reducing artifacts due to metallic dental objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yazdi, Mehran; Lari, Meghdad Asadi; Bernier, Gaston

    Purpose: To present a conceptually new method for metal artifact reduction (MAR) that can be used on patients with multiple objects within the scan plane that are also small along the longitudinal (scanning) direction, such as dental fillings. Methods: The proposed algorithm, named opposite view replacement, achieves MAR by first detecting the projection data affected by metal objects and then replacing the affected projections with the corresponding opposite-view projections, which are not affected by metal objects. The authors also applied a fading process to avoid producing any discontinuities at the boundary of the affected projection areas in the sinogram. A skull phantom with and without a variety of dental metal inserts was made to extract the performance metrics of the algorithm. A head and neck case, typical of IMRT planning, was also tested. Results: The reconstructed CT images based on this new replacement scheme show a significant improvement in image quality for patients with metallic dental objects compared to MAR algorithms based on interpolation. For the phantom, the authors showed that the artifact reduction algorithm can efficiently recover the CT numbers in the area next to the metallic objects. Conclusions: The authors present a new and efficient method for reducing artifacts due to multiple small metallic objects. The results obtained from phantoms and clinical cases fully validate the proposed approach.
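
    The core idea above, swapping metal-corrupted sinogram bins for their opposite-view counterparts with a fade at the boundary, can be sketched as follows. This assumes a parallel-beam sinogram sampled uniformly over 360 degrees, where the ray at angle theta + 180 deg retraces the ray at theta with the detector axis reversed; the paper's fading process is reduced here to a simple neighborhood blend, and all shapes and parameters are illustrative.

```python
import numpy as np

def opposite_view_replace(sino, metal_mask, fade=3):
    """sino, metal_mask: (n_angles, n_det), angles covering 360 degrees."""
    n_angles, n_det = sino.shape
    half = n_angles // 2
    # Opposite view: advance half a turn and flip the detector axis.
    opposite = np.flip(np.roll(sino, -half, axis=0), axis=1)
    # Blend weights: 1 inside the metal trace, decaying over `fade` bins
    # toward the unaffected data to avoid boundary discontinuities.
    w = metal_mask.astype(float)
    for _ in range(fade):
        w = np.maximum(w, 0.5 * (np.roll(w, 1, axis=1) + np.roll(w, -1, axis=1)))
    return (1 - w) * sino + w * opposite
```

    Inside the metal trace the output is the opposite view; far from it the original data pass through untouched, with a smooth transition in between.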

  1. Topometry of technical and biological objects by fringe projection

    NASA Astrophysics Data System (ADS)

    Windecker, R.; Tiziani, H. J.

    1995-07-01

    Fringe projection is a fast and accurate technique for obtaining the topometry of a wide range of surfaces. Here some features of the principle are described, together with the possibilities of adapting this technique for the measurement of vaulted surfaces. We discuss various methods of phase evaluation and compare them with simulated computer data to obtain the resolution limits. Under certain restrictions a semispatial algorithm, called the modified Fourier analysis algorithm, gives the best results. One special subject of interest is the application of fringe projection for the measurement of the three-dimensional surface of the cornea. First results of in vivo measurements are presented.

  2. ProjectQ Software Framework

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Haener, Thomas; Troyer, Matthias

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.

  3. ProjectQ: Compiling quantum programs for various backends

    NASA Astrophysics Data System (ADS)

    Haener, Thomas; Steiger, Damian S.; Troyer, Matthias

    In order to control quantum computers beyond the current generation, a high level quantum programming language and optimizing compilers will be essential. Therefore, we have developed ProjectQ - an open source software framework to facilitate implementing and running quantum algorithms both in software and on actual quantum hardware. Here, we introduce the backends available in ProjectQ. This includes a high-performance simulator and emulator to test and debug quantum algorithms, tools for resource estimation, and interfaces to several small-scale quantum devices. We demonstrate the workings of the framework and show how easily it can be further extended to control upcoming quantum hardware.

  4. Simulated annealing algorithm for solving chambering student-case assignment problem

    NASA Astrophysics Data System (ADS)

    Ghazali, Saadiah; Abdul-Rahman, Syariza

    2015-12-01

    The project assignment problem is a popular practical problem that arises in many settings today. The challenge of solving it grows with the complexity of preferences, the existence of real-world constraints, and the problem size. This study focuses on solving a chambering student-case assignment problem, classified as a project assignment problem, by using a simulated annealing algorithm. The project assignment problem is a hard combinatorial optimization problem, and solving it with a metaheuristic approach is advantageous because such methods can return a good solution in a reasonable time. The problem of assigning chambering students to cases has never been addressed in the literature before. Law graduates must complete chambering before they are qualified to become legal counselors, so assigning chambering students to cases is critically needed, especially when many preferences are involved. Hence, this study presents a preliminary study of the proposed project assignment problem. The objective is to minimize the total completion time for all students in solving the given cases. A minimum-cost greedy heuristic is employed to construct a feasible initial solution, and the search then proceeds with a simulated annealing algorithm for further improvement of solution quality. Analysis of the results shows that the proposed simulated annealing algorithm greatly improves the solution constructed by the minimum-cost greedy heuristic. This research thus demonstrates the advantages of solving the project assignment problem with metaheuristic techniques.
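
    The greedy-seed-plus-simulated-annealing structure described above can be sketched generically. This is not the paper's exact formulation: the cost matrix, the capacity constraint (each student takes at most `cap` cases), the move operator (reassign one case), and the cooling schedule are all illustrative stand-ins.

```python
import math
import random

def total_time(assign, cost):
    """Total completion time: sum over cases of the assigned student's time."""
    return sum(cost[s][c] for c, s in enumerate(assign))

def anneal(cost, cap, T0=10.0, cooling=0.995, steps=20000, seed=0):
    rng = random.Random(seed)
    n_students, n_cases = len(cost), len(cost[0])
    # Greedy seed: each case goes to the fastest student with spare capacity.
    load = [0] * n_students
    assign = []
    for c in range(n_cases):
        s = min((s for s in range(n_students) if load[s] < cap),
                key=lambda s: cost[s][c])
        assign.append(s)
        load[s] += 1
    best, best_val = list(assign), total_time(assign, cost)
    T = T0
    for _ in range(steps):
        c = rng.randrange(n_cases)
        choices = [s for s in range(n_students)
                   if s != assign[c] and load[s] < cap]
        if choices:
            new_s = rng.choice(choices)
            delta = cost[new_s][c] - cost[assign[c]][c]
            # Metropolis rule: always accept improvements, occasionally uphill.
            if delta <= 0 or rng.random() < math.exp(-delta / T):
                load[assign[c]] -= 1
                load[new_s] += 1
                assign[c] = new_s
                val = total_time(assign, cost)
                if val < best_val:
                    best, best_val = list(assign), val
        T *= cooling  # geometric cooling
    return best, best_val
```

    Because the best-so-far solution is tracked separately, the result can never be worse than the greedy seed, mirroring the improvement over the greedy heuristic reported in the abstract.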

  5. Advanced CHP Control Algorithms: Scope Specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katipamula, Srinivas; Brambley, Michael R.

    2006-04-28

    The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.

  6. Testing algorithms for a passenger train braking performance model.

    DOT National Transportation Integrated Search

    2011-09-01

    "The Federal Railroad Administration's Office of Research and Development funded a project to establish a performance model to develop, analyze, and test positive train control (PTC) braking algorithms for passenger train operations. With a good brak...

  7. Optimizing construction quality management of pavements using mechanistic performance analysis.

    DOT National Transportation Integrated Search

    2004-08-01

    This report presents a statistics-based algorithm that was developed to reconcile the results from several pavement performance models used in the state of practice with systematic process control techniques. These algorithms identify project-specif...

  8. A homotopy algorithm for digital optimal projection control GASD-HADOC

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.

    1993-01-01

    The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard solution methods exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations, which characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and its convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described here. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.

  9. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563
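
    The three ingredients named above, the MM principle, the penalty method, and easy per-set projections, combine as in the following sketch for the special case of projecting a point y onto an intersection of convex sets. Each squared distance dist(x, C_i)^2 is majorized at the current iterate by ||x - P_i(x_k)||^2, which gives a closed-form surrogate minimizer, and the penalty rho grows slowly as in the classical penalty method. The sets, growth rate, and iteration count are illustrative, and the quasi-Newton acceleration of the paper is omitted.

```python
import numpy as np

def project_intersection(y, projections, rho=1.0, growth=1.05, iters=500):
    """Approximate the projection of y onto the intersection of convex sets,
    given an easy projection operator P_i for each individual set."""
    x = y.copy()
    m = len(projections)
    for _ in range(iters):
        anchors = sum(P(x) for P in projections)   # sum of P_i(x_k)
        # Minimizer of ||x - y||^2/2 + (rho/2) * sum ||x - P_i(x_k)||^2.
        x = (y + rho * anchors) / (1.0 + rho * m)
        rho *= growth                              # penalty method: rho -> inf
    return x

# Example: intersect the unit ball with the nonnegative orthant.
def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_orthant(x):
    return np.maximum(x, 0.0)

y = np.array([2.0, -1.0])
x = project_intersection(y, [proj_ball, proj_orthant])
```

    For this example the true projection of (2, -1) onto the intersection is (1, 0), and the iterates approach it as rho grows.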

  10. Importing statistical measures into Artemis enhances gene identification in the Leishmania genome project.

    PubMed

    Aggarwal, Gautam; Worthey, E A; McDonagh, Paul D; Myler, Peter J

    2003-06-07

    Seattle Biomedical Research Institute (SBRI), as part of the Leishmania Genome Network (LGN), is sequencing chromosomes of the trypanosomatid protozoan species Leishmania major. At SBRI, chromosomal sequence is annotated using a combination of trained and untrained non-consensus gene-prediction algorithms with ARTEMIS, an annotation platform with rich and user-friendly interfaces. Here we describe a methodology used to import results from three different protein-coding gene-prediction algorithms (GLIMMER, TESTCODE and GENESCAN) into the ARTEMIS sequence viewer and annotation tool. Comparison of these methods, along with the CODONUSAGE algorithm built into ARTEMIS, shows the importance of combining methods to more accurately annotate the L. major genomic sequence. An improved and powerful tool for gene prediction has been developed by importing data from widely used algorithms into an existing annotation platform. This approach is especially fruitful in the Leishmania genome project, where there is a large proportion of novel genes requiring manual annotation.

  11. Sun-Relative Pointing for Dual-Axis Solar Trackers Employing Azimuth and Elevation Rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riley, Daniel; Hansen, Clifford W.

    Dual axis trackers employing azimuth and elevation rotations are common in the field of photovoltaic (PV) energy generation. Accurate sun-tracking algorithms are widely available. However, a steering algorithm has not been available to accurately point the tracker away from the sun such that a vector projection of the sun beam onto the tracker face falls along a desired path relative to the tracker face. We have developed an algorithm which produces the appropriate azimuth and elevation angles for a dual axis tracker when given the sun position, desired angle of incidence, and the desired projection of the sun beam onto the tracker face. Development of this algorithm was inspired by the need to accurately steer a tracker to desired sun-relative positions in order to better characterize the electro-optical properties of PV and CPV modules.
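
    The geometric core behind such a steering algorithm is the relation between azimuth/elevation pairs and the angle of incidence. The sketch below shows only the forward check, converting angles to unit vectors and computing the incidence angle between sun beam and tracker normal; the algorithm in the abstract inverts this relation while also fixing the beam's projected direction on the tracker face. The angle conventions (azimuth clockwise from north, elevation up from the horizon) are assumptions, not taken from the source.

```python
import math

def unit_vector(azimuth_deg, elevation_deg):
    """Unit pointing vector in east-north-up coordinates."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(el) * math.sin(az),   # east
            math.cos(el) * math.cos(az),   # north
            math.sin(el))                  # up

def angle_of_incidence(sun_az, sun_el, trk_az, trk_el):
    """Angle in degrees between the sun beam and the tracker face normal."""
    s = unit_vector(sun_az, sun_el)
    n = unit_vector(trk_az, trk_el)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(s, n))))
    return math.degrees(math.acos(dot))

# Pointing the tracker straight at the sun gives (numerically) zero incidence.
print(round(angle_of_incidence(180.0, 45.0, 180.0, 45.0), 3))  # 0.0
```

    A steering algorithm would search for (trk_az, trk_el) achieving a prescribed nonzero incidence angle, with the extra degree of freedom fixed by the desired beam projection on the tracker face.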

  12. Intelligent Medical Systems for Aerospace Emergency Medical Services

    NASA Technical Reports Server (NTRS)

    Epler, John; Zimmer, Gary

    2004-01-01

    The purpose of this project is to develop a portable, hands-free device for emergency medical decision support to be used in remote or confined settings by non-physician providers. Phase I of the project will entail the development of a voice-activated device that will utilize an intelligent algorithm to provide guidance in establishing an airway in an emergency situation. The interactive, hands-free software will process requests for assistance based on verbal prompts and algorithmic decision-making. The device will allow the CMO to attend to the patient while receiving verbal instruction. The software will also feature graphic representations where these are felt to be helpful in aiding procedures. We will also develop a training program to orient users to the algorithmic approach, the use of the hardware, and specific procedural considerations. We will validate the efficacy of this mode of technology application by testing in the Johns Hopkins Department of Emergency Medicine. Phase I of the project will focus on the validation of the proposed algorithm, testing and validation of the decision-making tool, and modifications of medical equipment. In Phase II, we will produce the first-generation software for hands-free, interactive medical decision-making for use in acute care environments.

  13. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with an error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm.
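
    The alternating two-stage strategy described above can be sketched structurally in 1-D: a POCS stage (one ART sweep plus non-negativity) followed by a TV steepest-descent stage whose step is tied to how much POCS moved the image, so the TV correction vanishes at convergence. The system matrix, relaxation factor, and step heuristic below are toy stand-ins, not the paper's adaptive rules.

```python
import numpy as np

def art_sweep(x, A, b, relax=0.5):
    """POCS stage: one Kaczmarz/ART sweep plus a non-negativity projection."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return np.maximum(x, 0.0)

def tv_grad(x, eps=1e-8):
    """Smoothed gradient of the 1-D total variation sum |x[j+1] - x[j]|."""
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def reconstruct(A, b, n, iters=200):
    x = np.zeros(n)
    for _ in range(iters):
        x_pocs = art_sweep(x, A, b)
        # Toy version of the adaptive rule: scale the TV step by how far
        # the POCS stage moved the image.
        step = 0.2 * np.linalg.norm(x_pocs - x)
        g = tv_grad(x_pocs)
        gn = np.linalg.norm(g)
        if gn > 0:
            x = x_pocs - step * (g / gn)
        else:
            x = x_pocs
    return x
```

    In the sparse-view setting A has far fewer rows than x has unknowns; the TV stage then selects, among the many data-consistent images, one with small total variation.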

  14. The DataBridge: A System For Optimizing The Use Of Dark Data From The Long Tail Of Science

    NASA Astrophysics Data System (ADS)

    Lander, H.; Rajasekar, A.

    2015-12-01

    The DataBridge is a National Science Foundation funded collaborative project (OCI-1247652, OCI-1247602, OCI-1247663) designed to assist in the discovery of dark data sets from the long tail of science. The DataBridge aims to build queryable communities of datasets using sociometric network analysis. This approach is being tested to evaluate the ability to leverage various forms of metadata to facilitate the discovery of new knowledge. Each dataset in the DataBridge has an associated name space used as a first-level partitioning. In addition to testing known algorithms for SNA community building, the DataBridge project has built a message-based platform that allows users to provide their own algorithms for each of the stages in the community-building process. The stages are: Signature Generation (SG), where an SG algorithm creates a metadata signature for a dataset (signature algorithms might use text metadata provided by the dataset creator or derive metadata); Relevance Algorithm (RA), where an RA compares a pair of datasets and produces a similarity value between 0 and 1; and Sociometric Network Analysis (SNA), where the SNA stage operates on the similarity matrix produced by an RA to partition all of the datasets in the name space into a set of clusters, which represent communities of closely related datasets. The DataBridge also includes a web application that produces a visual representation of the clustering. Future work includes a more complete application that will allow different types of searching of the network of datasets. The DataBridge approach is relevant to geoscience research and informatics. In this presentation we outline the project, illustrate the deployment of the approach, and discuss other potential applications and next steps for the research, such as applying this approach to models. In addition, we explore the relevance of the DataBridge to other geoscience projects such as various EarthCube Building Blocks and DIBBS projects.
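
    The three pluggable stages described above (signature generation, relevance, community building) can be sketched as follows. The concrete choices here are assumptions for illustration only: a bag-of-words signature, Jaccard similarity as the relevance value in [0, 1], and connected components over a similarity threshold standing in for the richer sociometric network analysis.

```python
def signature(metadata_text):
    """SG stage: a bag-of-words metadata signature (illustrative)."""
    return set(metadata_text.lower().split())

def relevance(sig_a, sig_b):
    """RA stage: Jaccard similarity, a value between 0 and 1."""
    union = sig_a | sig_b
    return len(sig_a & sig_b) / len(union) if union else 0.0

def communities(datasets, threshold=0.3):
    """Community stage: connected components over a thresholded
    similarity graph, via union-find."""
    sigs = {name: signature(text) for name, text in datasets.items()}
    names = list(datasets)
    parent = {n: n for n in names}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path halving
            n = parent[n]
        return n
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if relevance(sigs[a], sigs[b]) >= threshold:
                parent[find(a)] = find(b)
    groups = {}
    for n in names:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())
```

    Because each stage is a plain function, swapping in a different signature generator or relevance measure, as the DataBridge platform allows, amounts to replacing one callable.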

  15. ROBIN: a platform for evaluating automatic target recognition algorithms: I. Overview of the project and presentation of the SAGEM DS competition

    NASA Astrophysics Data System (ADS)

    Duclos, D.; Lonnoy, J.; Guillerm, Q.; Jurie, F.; Herbin, S.; D'Angelo, E.

    2008-04-01

    The last five years have seen a renewal of Automatic Target Recognition (ATR) applications, mainly because of the latest advances in machine learning techniques. In this context, large collections of image datasets are essential for training algorithms as well as for their evaluation. Indeed, the recent proliferation of recognition algorithms, generally applied to slightly different problems, makes comparing them through clean evaluation campaigns necessary. The ROBIN project tries to fulfil these two needs by putting unclassified datasets, ground truths, competitions and metrics for the evaluation of ATR algorithms at the disposal of the scientific community. The scope of this project includes single- and multi-class generic target detection and generic target recognition, in military and security contexts. To our knowledge, it is the first time that a database of this importance (several hundred thousand visible and infrared hand-annotated images) has been publicly released. Funded by the French Ministry of Defence (DGA) and by the French Ministry of Research, ROBIN is one of the ten Techno-vision projects. Techno-vision is a large and ambitious government initiative for building evaluation means for computer vision technologies, for various application contexts. ROBIN's consortium includes major companies and research centres involved in computer vision R&D in the field of defence: Bertin Technologies, CNES, ECA, DGA, EADS, INRIA, ONERA, MBDA, SAGEM, THALES. This paper, which first gives an overview of the whole project, is focused on one of ROBIN's key competitions, the SAGEM Defence Security database. This dataset contains more than eight hundred ground and aerial infrared images of six different vehicles in cluttered scenes including distracters. Two different sets of data are available for each target. The first set includes different views of each vehicle at close range against a "simple" background, and can be used to train algorithms.
The second set contains many views of the same vehicle in different contexts and situations simulating operational scenarios.

  16. Design of an allosterically modulated doxycycline and doxorubicin drug-binding protein.

    PubMed

    Schmidt, Karin; Gardill, Bernd R; Kern, Alina; Kirchweger, Peter; Börsch, Michael; Muller, Yves A

    2018-05-14

    The allosteric interplay between distant functional sites present in a single protein provides for one of the most important regulatory mechanisms in biological systems. While the design of ligand-binding sites into proteins remains challenging, this holds even truer for the coupling of a newly engineered binding site to an allosteric mechanism that regulates the ligand affinity. Here it is shown how computational design algorithms enabled the introduction of doxycycline- and doxorubicin-binding sites into the serine proteinase inhibitor (serpin) family member α1-antichymotrypsin. Further engineering allowed exploitation of the proteinase-triggered serpin-typical S-to-R transition to modulate the ligand affinities. These design variants follow strategies observed in naturally occurring plasma globulins that allow for the targeted delivery of hormones in the blood. By analogy, we propose that the variants described in the present study could be further developed to allow for the delivery of the antibiotic doxycycline and the anticancer compound doxorubicin to tissues/locations that express specific proteinases, such as bacterial infection sites or tumor cells secreting matrix metalloproteinases.

  17. Pooled protein immunization for identification of cell surface antigens in Streptococcus sanguinis.

    PubMed

    Ge, Xiuchun; Kitten, Todd; Munro, Cindy L; Conrad, Daniel H; Xu, Ping

    2010-07-26

    Available bacterial genomes provide opportunities for screening vaccines by reverse vaccinology. Efficient identification of surface antigens is required to reduce time and animal cost in this technology. We developed an approach to identify surface antigens rapidly in Streptococcus sanguinis, a common infective endocarditis causative species. We applied bioinformatics for antigen prediction and pooled antigens for immunization. Forty-seven surface-exposed proteins including 28 lipoproteins and 19 cell wall-anchored proteins were chosen based on computer algorithms and comparative genomic analyses. Eight proteins among these candidates and 2 other proteins were pooled together to immunize rabbits. The antiserum reacted strongly with each protein and with S. sanguinis whole cells. Affinity chromatography was used to purify the antibodies to 9 of the antigen pool components. Competitive ELISA and FACS results indicated that these 9 proteins were exposed on S. sanguinis cell surfaces. The purified antibodies had demonstrable opsonic activity. The results indicate that immunization with pooled proteins, in combination with affinity purification, and comprehensive immunological assays may facilitate cell surface antigen identification to combat infectious diseases.

  18. Pooled Protein Immunization for Identification of Cell Surface Antigens in Streptococcus sanguinis

    PubMed Central

    Ge, Xiuchun; Kitten, Todd; Munro, Cindy L.; Conrad, Daniel H.; Xu, Ping

    2010-01-01

    Background Available bacterial genomes provide opportunities for screening vaccines by reverse vaccinology. Efficient identification of surface antigens is required to reduce time and animal cost in this technology. We developed an approach to identify surface antigens rapidly in Streptococcus sanguinis, a common infective endocarditis causative species. Methods and Findings We applied bioinformatics for antigen prediction and pooled antigens for immunization. Forty-seven surface-exposed proteins including 28 lipoproteins and 19 cell wall-anchored proteins were chosen based on computer algorithms and comparative genomic analyses. Eight proteins among these candidates and 2 other proteins were pooled together to immunize rabbits. The antiserum reacted strongly with each protein and with S. sanguinis whole cells. Affinity chromatography was used to purify the antibodies to 9 of the antigen pool components. Competitive ELISA and FACS results indicated that these 9 proteins were exposed on S. sanguinis cell surfaces. The purified antibodies had demonstrable opsonic activity. Conclusions The results indicate that immunization with pooled proteins, in combination with affinity purification, and comprehensive immunological assays may facilitate cell surface antigen identification to combat infectious diseases. PMID:20668678

  19. FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector.

    PubMed

    Schäfer, Dirk; Grass, Michael; van de Haar, Peter

    2011-07-01

    Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and to evaluate their image quality compared to the existing state-of-the-art FBP methods. The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation within a single projection. The second uses Katsevich-type differentiation involving two neighboring projections, followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts, inherent to circular BPF algorithms, along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared deviations (RMSDs) from the voxelized phantom for different detector-overlap settings, and by investigating the noise-resolution trade-off with a wire phantom in the full-detector and off-center scenarios. The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods in the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on Katsevich-type differentiation and subsequent redundancy weighting. For wider overlaps of about 40-50 mm, these two algorithms produce similar results, outperforming the other three methods.
The clinical case with a detector overlap of about 17 mm confirms these results. The BPF-type reconstructions with Katsevich differentiation are widely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve correct assessment of lesions in the entire field of view.

  20. Comparison of maximum intensity projection and digitally reconstructed radiographic projection for carotid artery stenosis measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hyde, Derek E.; Habets, Damiaan F.; Fox, Allan J.

    2007-07-15

    Digital subtraction angiography is being supplanted by three-dimensional imaging techniques in many clinical applications, leading to extensive use of maximum intensity projection (MIP) images to depict volumetric vascular data. The MIP algorithm produces intensity profiles that are different from those of conventional angiograms, and can also increase the vessel-to-tissue contrast-to-noise ratio. We evaluated the effect of the MIP algorithm in a clinical application where quantitative vessel measurement is important: internal carotid artery stenosis grading. Three-dimensional computed rotational angiography (CRA) was performed on 26 consecutive symptomatic patients to verify an internal carotid artery stenosis originally found using duplex ultrasound. These volumes of data were visualized using two different postprocessing projection techniques: MIP and digitally reconstructed radiographic (DRR) projection. A DRR is a radiographic image simulating a conventional digitally subtracted angiogram, but it is derived computationally from the same CRA dataset as the MIP. By visualizing a single volume with two different projection techniques, the postprocessing effect of the MIP algorithm is isolated. Vessel measurements were made, according to the NASCET guidelines, and percentage stenosis grades were calculated. The paired t-test was used to determine if the measurement difference between the two techniques was statistically significant. The CRA technique provided an isotropic voxel spacing of 0.38 mm. The MIPs and DRRs had a mean signal-difference-to-noise-ratio of 30:1 and 26:1, respectively. Vessel measurements from MIPs were, on average, 0.17 mm larger than those from DRRs (P<0.0001). The NASCET-type stenosis grades tended to be underestimated on average by 2.4% with the MIP algorithm, although this was not statistically significant (P=0.09). The mean interobserver variability (standard deviation) of both the MIP and DRR images was 0.35 mm. 
It was concluded that the MIP algorithm slightly increased the apparent dimensions of the arteries, when applied to these intra-arterial CRA images. This subpixel increase was smaller than both the voxel size and interobserver variability, and was therefore not clinically relevant.
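The difference between the two projection types in the abstract above can be illustrated on a toy volume. This is a minimal sketch under stated assumptions: the volume, vessel geometry, and the use of a mean projection as a simplified stand-in for a true attenuation-line-integral DRR are all illustrative, not the paper's method.

```python
import numpy as np

# Synthetic 3D "vessel" volume: a bright column in a noisy background.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.1, size=(32, 32, 32))
vol[:, 14:18, 14:18] += 1.0  # vessel intensity

# Maximum intensity projection: keep the brightest voxel along each ray.
mip = vol.max(axis=0)

# DRR-style projection (simplified here as a mean along each ray,
# mimicking the integral formation of a radiograph).
drr = vol.mean(axis=0)

# The max along a ray is always >= the mean along it, which is one reason
# MIP vessel profiles can appear slightly wider than DRR profiles.
print(mip[16, 16] >= drr[16, 16])  # True
```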

  1. Using Radar, Lidar, and Radiometer measurements to Classify Cloud Type and Study Middle-Level Cloud Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhien

    2010-06-29

    The project is mainly focused on the characterization of cloud macrophysical and microphysical properties, especially for mixed-phase clouds and middle level ice clouds, by combining radar, lidar, and radiometer measurements available from the ACRF sites. First, an advanced mixed-phase cloud retrieval algorithm will be developed to cover all mixed-phase clouds observed at the ACRF NSA site. The algorithm will be applied to the ACRF NSA observations to generate a long-term arctic mixed-phase cloud product for model validations and arctic mixed-phase cloud process studies. To improve the representation of arctic mixed-phase clouds in GCMs, an advanced understanding of mixed-phase cloud processes is needed. By combining retrieved mixed-phase cloud microphysical properties with in situ data and large-scale meteorological data, the project aims to better understand the generation of ice crystals in supercooled water clouds, the maintenance mechanisms of the arctic mixed-phase clouds, and their connections with large-scale dynamics. The project will try to develop a new retrieval algorithm to study more complex mixed-phase clouds observed at the ACRF SGP site. Compared with optically thin ice clouds, optically thick middle level ice clouds are less studied because of limited available tools. The project will develop a new two-wavelength radar technique for optically thick ice cloud study at the SGP site by combining the MMCR with the W-band radar measurements. With this new algorithm, the SGP site will have a better capability to study all ice clouds. Another area of the proposal is to generate a long-term cloud type classification product for multiple ACRF sites. 
The cloud type classification product will not only facilitate the generation of the integrated cloud product by applying different retrieval algorithms to different types of clouds operationally, but will also support other research to better understand cloud properties and to validate model simulations. The ultimate goal is to develop our cloud classification algorithm into a VAP.

  2. The Psychopharmacology Algorithm Project at the Harvard South Shore Program: An Algorithm for Generalized Anxiety Disorder.

    PubMed

    Abejuela, Harmony Raylen; Osser, David N

    2016-01-01

    This revision of previous algorithms for the pharmacotherapy of generalized anxiety disorder was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. Algorithms from 1999 and 2010 and associated references were reevaluated. Newer studies and reviews published from 2008-14 were obtained from PubMed and analyzed with a focus on their potential to justify changes in the recommendations. Exceptions to the main algorithm for special patient populations, such as women of childbearing potential, pregnant women, the elderly, and those with common medical and psychiatric comorbidities, were considered. Selective serotonin reuptake inhibitors (SSRIs) are still the basic first-line medication. Early alternatives include duloxetine, buspirone, hydroxyzine, pregabalin, or bupropion, in that order. If response is inadequate, then the second recommendation is to try a different SSRI. Additional alternatives now include benzodiazepines, venlafaxine, kava, and agomelatine. If the response to the second SSRI is unsatisfactory, then the recommendation is to try a serotonin-norepinephrine reuptake inhibitor (SNRI). Other alternatives to SSRIs and SNRIs for treatment-resistant or treatment-intolerant patients include tricyclic antidepressants, second-generation antipsychotics, and valproate. This revision of the GAD algorithm responds to issues raised by new treatments under development (such as pregabalin) and organizes the evidence systematically for practical clinical application.
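The stepwise sequence described in the abstract can be encoded as an ordered ladder, which makes its ordering explicit. This is purely an illustration of the sequence as stated above (names mirror the abstract), not clinical guidance, and the helper `next_step` is a hypothetical convenience function.

```python
# Ordered (step, options) pairs mirroring the sequence in the abstract.
GAD_LADDER = [
    ("first-line", ["SSRI"]),
    ("early alternatives", ["duloxetine", "buspirone", "hydroxyzine",
                            "pregabalin", "bupropion"]),
    ("second trial", ["different SSRI"]),
    ("additional alternatives", ["benzodiazepine", "venlafaxine",
                                 "kava", "agomelatine"]),
    ("third trial", ["SNRI"]),
    ("treatment-resistant", ["tricyclic antidepressant",
                             "second-generation antipsychotic", "valproate"]),
]

def next_step(failed_trials: int) -> tuple:
    """Return the main-line ladder entry after a given number of failed trials."""
    main_line = [GAD_LADDER[0], GAD_LADDER[2], GAD_LADDER[4], GAD_LADDER[5]]
    return main_line[min(failed_trials, len(main_line) - 1)]

print(next_step(0)[0])  # first-line
```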

  3. Coordinated Beamforming for MISO Interference Channel: Complexity Analysis and Efficient Algorithms

    DTIC Science & Technology

    2010-01-01

    The cyclic coordinate descent algorithm is also known as the nonlinear Gauss-Seidel iteration [32]. There are several studies of this type of … It can be shown that the BB gradient projection direction is always a descent direction. The R-linear convergence of the BB method has … (KKT solution) of the inexact pricing algorithm for the MISO interference channel. The latter is interesting since the convergence of the original pricing …

  4. Combining Structural Modeling with Ensemble Machine Learning to Accurately Predict Protein Fold Stability and Binding Affinity Effects upon Mutation

    PubMed Central

    Garcia Lopez, Sebastian; Kim, Philip M.

    2014-01-01

    Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
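Stochastic gradient boosting of decision trees, the model family named in the abstract, can be sketched in a few lines with decision stumps. Everything here is a toy illustration: the synthetic features standing in for energy/conservation terms, the stump learner, and the hyperparameters are assumptions, not ELASPIC's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))  # e.g. stand-ins for [energy term, conservation, burial]
y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=200)  # synthetic target

def fit_stump(X, r):
    """Best single-feature threshold split minimizing squared error on residual r."""
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def predict_stump(stump, X):
    j, t, lv, rv = stump
    return np.where(X[:, j] <= t, lv, rv)

# Boosting loop: each stump fits the current residual on a random subsample
# (the "stochastic" part of SGB).
pred, stumps, lr = np.zeros_like(y), [], 0.1
for _ in range(100):
    idx = rng.choice(len(y), size=len(y) // 2, replace=False)
    stump = fit_stump(X[idx], (y - pred)[idx])
    stumps.append(stump)
    pred += lr * predict_stump(stump, X)

corr = np.corrcoef(pred, y)[0, 1]
print(corr > 0.5)  # the ensemble tracks the target well
```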

  5. Engineering of an Extremely Thermostable Alpha/Beta Barrel Scaffold to Serve as a High Affinity Molecular Recognition Element for Use in Sensor Applications

    DTIC Science & Technology

    2015-12-23

    papers submitted or published that acknowledge ARO support from the start of the project to the date of this printing. List the papers, including...1. Koide, S. & Sidhu, S.S. The importance of being tyrosine: lessons in molecular recognition from minimalist synthetic binding proteins. ACS

  6. "Fraught with Wonderful Possibilities": Father Jimmy Tompkins and the Struggle for a Catholic Progressivism, 1912?1922

    ERIC Educational Resources Information Center

    Welton, Michael R.

    2005-01-01

    This essay probes beneath the "folk hero" image of Father J. J. Tompkins, the animating presence behind the world-famous Antigonish Movement. I argue that Father Tompkins' project can be rendered as an illustration of early modern progressive adult education with theological affinity to the liberation theologies of the 1960s and 1970s. My…

  7. MODEL FOR NON-EQUILIBRIUM BINDING AND AFFINITY CHROMATOGRAPHY WITH CHARACTERIZATION OF 8-HYDROXYQUINOLINE IMMOBILIZED ON CONTROLLED PORE GLASS USING A FLOW INJECTION SYSTEM WITH A PACKED MICRO-COLUMN. (R826694C651)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  8. Investigation of the application of remote sensing technology to environmental monitoring

    NASA Technical Reports Server (NTRS)

    Rader, M. L. (Principal Investigator)

    1980-01-01

    Activities and results are reported of a project to investigate the application of remote sensing technology developed for the LACIE, AgRISTARS, Forestry and other NASA remote sensing projects for the environmental monitoring of strip mining, industrial pollution, and acid rain. Following a remote sensing workshop for EPA personnel, the EOD clustering algorithm CLASSY was selected for evaluation by EPA as a possible candidate technology. LANDSAT data acquired for a North Dakota test site was clustered in order to compare CLASSY with other algorithms.

  9. Phase retrieval with Fourier-weighted projections.

    PubMed

    Guizar-Sicairos, Manuel; Fienup, James R

    2008-03-01

    In coherent lensless imaging, the presence of image sidelobes, which arise as a natural consequence of the finite nature of the detector array, was recognized early as a convergence issue for phase retrieval algorithms that rely on an object support constraint. To mitigate the problem of truncated far-field measurements, a controlled analytic continuation by means of an iterative transform algorithm with weighted projections is proposed and tested. This approach avoids the use of sidelobe reduction windows and achieves full-resolution reconstructions.
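The weighted-projection idea can be sketched with a basic error-reduction loop: where the weight is 1 the measured Fourier modulus is enforced, and where it is 0 (unmeasured, truncated frequencies) the current estimate is left free, i.e. a controlled analytic continuation. This is a minimal sketch assuming a simple synthetic object, a corner-band weight mask, and plain error reduction; it is in the spirit of, not identical to, the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
obj = np.zeros((n, n))
obj[24:40, 24:40] = rng.random((16, 16))      # true object inside its support
support = np.zeros((n, n), dtype=bool)
support[24:40, 24:40] = True

F_meas = np.abs(np.fft.fft2(obj))             # "measured" Fourier modulus

# Weight mask: 1 where the modulus is trusted, 0 in an unmeasured
# high-frequency corner band (mimicking detector truncation).
f = np.fft.fftfreq(n)
FX, FY = np.meshgrid(f, f, indexing="ij")
W = np.where((np.abs(FX) > 0.4) & (np.abs(FY) > 0.4), 0.0, 1.0)

def fourier_err(g):
    return (np.linalg.norm(W * (np.abs(np.fft.fft2(g)) - F_meas))
            / np.linalg.norm(W * F_meas))

g = rng.random((n, n)) * support              # random start inside the support
err0 = fourier_err(g)
for _ in range(200):
    G = np.fft.fft2(g)
    # Weighted Fourier projection: enforce the modulus only where W = 1;
    # where W = 0 the estimate is left free.
    G = W * F_meas * np.exp(1j * np.angle(G)) + (1 - W) * G
    g = np.fft.ifft2(G).real
    g = np.where(support & (g > 0), g, 0.0)   # support + positivity constraints
print(fourier_err(g) < err0)                  # error decreases over the iterations
```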

  10. Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.

    PubMed

    Zhang, Man; Wang, Guanyong; Zhang, Lei

    2017-10-26

    Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer from declining robustness when strong motion errors are involved in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part must be extended, inevitably degrading efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA compensates the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are directly fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By avoiding the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.
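The chirp-Z transform used by FDFBPA to evaluate the sub-aperture back-projection integrals can be implemented via Bluestein's algorithm, which turns the CZT into an FFT-sized convolution. The sketch below is a generic Bluestein-style CZT (with defaults reducing it to the DFT), not the authors' SAR code; parameter names follow common CZT conventions.

```python
import numpy as np

def czt(x, m=None, w=None, a=1.0 + 0j):
    """Chirp-Z transform X_k = sum_n x[n] a^(-n) w^(nk), k = 0..m-1 (Bluestein)."""
    n = len(x)
    m = n if m is None else m
    w = np.exp(-2j * np.pi / m) if w is None else w
    k = np.arange(max(m, n))
    chirp = w ** (k ** 2 / 2.0)                       # w^(k^2/2)
    nfft = 1
    while nfft < n + m - 1:
        nfft *= 2
    # Bluestein: nk = (n^2 + k^2 - (k-n)^2)/2 turns the CZT into a convolution
    # of chirp-premultiplied data with a conjugate chirp.
    xp = x * a ** (-np.arange(n)) * chirp[:n]
    ick = np.zeros(nfft, dtype=complex)
    ick[:m] = 1.0 / chirp[:m]                         # kernel for j = 0..m-1
    ick[nfft - n + 1:] = 1.0 / chirp[n - 1:0:-1]      # kernel for j = -1..-(n-1)
    y = np.fft.ifft(np.fft.fft(xp, nfft) * np.fft.fft(ick))
    return y[:m] * chirp[:m]

# With default parameters the CZT reduces to the DFT:
x = np.random.default_rng(3).normal(size=17) + 0j
print(np.allclose(czt(x), np.fft.fft(x)))  # True
```

The practical appeal for SAR processing is that `m`, `w`, and `a` let the spectrum be evaluated on an arbitrary zoomed frequency grid at FFT-like cost.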

  11. Atmospheric River Tracking Method Intercomparison Project (ARTMIP): Science Goals and Preliminary Analysis

    NASA Astrophysics Data System (ADS)

    Shields, C. A.; Rutz, J. J.; Wehner, M. F.; Ralph, F. M.; Leung, L. R.

    2017-12-01

    The Atmospheric River Tracking Method Intercomparison Project (ARTMIP) is a community effort whose purpose is to quantify uncertainties in atmospheric river (AR) research due solely to different identification and tracking techniques. Atmospheric rivers transport significant amounts of moisture in long, narrow filamentary bands, typically travelling from the subtropics to the mid-latitudes. They are an important source of regional precipitation impacting local hydroclimate, and in extreme cases cause severe flooding and infrastructure damage in local communities. Our understanding of ARs, from forecast skill to future climate projections, hinges on how we define ARs. By comparing a diverse set of detection algorithms, the uncertainty in our definition of ARs (including statistics and climatology), and the implications of those uncertainties, can be analyzed and quantified. ARTMIP is divided into two broad phases that aim to answer science questions impacted by the choice of detection algorithm. How robust are AR metrics such as climatology, storm duration, and relationship to extreme precipitation? How are the AR metrics in future climate projections impacted by the choice of algorithm? Some algorithms rely on threshold values for water vapor. In a warmer world, the background state, by definition, is moister due to the Clausius-Clapeyron relationship, which could potentially skew results. Can uncertainty bounds be accurately placed on each metric? Tier 1 participants will apply their algorithms to a high-resolution common dataset (MERRA2) and provide the broader group with AR metrics (frequency, location, duration, etc.). Tier 2 research will encompass sensitivity studies regarding resolution, reanalysis choice, and future climate change scenarios. ARTMIP is currently in the Tier 1 phase and will begin Tier 2 in 2018. Preliminary metrics and analysis from Tier 1 will be presented.
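The threshold-sensitivity issue raised in the abstract can be made concrete with a toy example: many detection algorithms flag grid cells where integrated vapor transport (IVT) exceeds a fixed value (250 kg m⁻¹ s⁻¹ is a common choice). The synthetic field and the uniform ~7% moistening below are illustrative assumptions, not ARTMIP data.

```python
import numpy as np

# Synthetic IVT field: one filamentary band on a moist background.
rng = np.random.default_rng(4)
ivt = 150 + 200 * np.exp(-((np.arange(100) - 50) ** 2) / 50.0)
ivt = ivt[None, :] + rng.normal(0, 5, size=(40, 100))

threshold = 250.0                      # fixed IVT threshold (kg m^-1 s^-1)
frac_now = (ivt >= threshold).mean()   # fraction of cells flagged as "AR"

# Uniformly moister state (~7% per K, Clausius-Clapeyron-like scaling):
ivt_warm = ivt * 1.07
frac_warm = (ivt_warm >= threshold).mean()

# A fixed threshold flags more area in the moister state even though the
# circulation pattern is unchanged -- the skew the abstract warns about.
print(frac_warm > frac_now)  # True
```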

  12. i-rDNA: alignment-free algorithm for rapid in silico detection of ribosomal gene fragments from metagenomic sequence data sets.

    PubMed

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Chadaram, Sudha; Mande, Sharmila S

    2011-11-30

    Obtaining accurate estimates of microbial diversity using rDNA profiling is the first step in most metagenomics projects. Consequently, most metagenomic projects spend considerable amounts of time, money and manpower for experimentally cloning, amplifying and sequencing the rDNA content in a metagenomic sample. In the second step, the entire genomic content of the metagenome is extracted, sequenced and analyzed. Since DNA sequences obtained in this second step also contain rDNA fragments, rapid in silico identification of these rDNA fragments would drastically reduce the cost, time and effort of current metagenomic projects by entirely bypassing the experimental steps of primer based rDNA amplification, cloning and sequencing. In this study, we present an algorithm called i-rDNA that can facilitate the rapid detection of 16S rDNA fragments from amongst millions of sequences in metagenomic data sets with high detection sensitivity. Performance evaluation with data sets/database variants simulating typical metagenomic scenarios indicates the significantly high detection sensitivity of i-rDNA. Moreover, i-rDNA can process a million sequences in less than an hour on a simple desktop with modest hardware specifications. In addition to the speed of execution, high sensitivity and low false positive rate, the utility of the algorithmic approach discussed in this paper is immense given that it would help in bypassing the entire experimental step of primer-based rDNA amplification, cloning and sequencing. Application of this algorithmic approach would thus drastically reduce the cost, time and human efforts invested in all metagenomic projects. A web-server for the i-rDNA algorithm is available at http://metagenomics.atc.tcs.com/i-rDNA/
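The alignment-free idea behind such detection can be sketched as shared-k-mer screening: a read is flagged as a candidate 16S rDNA fragment if enough of its k-mers occur in a reference k-mer set. The reference sequence, reads, k, and the `min_shared_frac` cutoff below are toy assumptions; i-rDNA's actual scoring is more elaborate.

```python
# Alignment-free candidate detection by k-mer set overlap.
def kmers(seq, k=8):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Toy stand-in for a reference k-mer database built from known 16S sequences.
reference_16s = "GTGCCAGCAGCCGCGGTAATACGGAGGGTGCAAGCGTTAATCGGAATTACTGGGC"
ref_set = kmers(reference_16s)

def is_rdna_candidate(read, min_shared_frac=0.5, k=8):
    """Flag a read whose k-mers mostly occur in the reference set."""
    rk = kmers(read, k)
    return len(rk & ref_set) / max(len(rk), 1) >= min_shared_frac

rdna_read = reference_16s[10:45]                      # simulated rDNA fragment
other_read = "ATATATATGCGCGCATTTTTAAACCCGGGTTTAAA"    # unrelated sequence
print(is_rdna_candidate(rdna_read), is_rdna_candidate(other_read))  # True False
```

Because set lookups are O(1), this style of screening scales to millions of reads, which is the speed argument the abstract makes.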

  13. Final Report on DOE Project entitled Dynamic Optimized Advanced Scheduling of Bandwidth Demands for Large-Scale Science Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramamurthy, Byravamurthy

    2014-05-05

    In this project, we developed scheduling frameworks and algorithms for dynamic bandwidth demands of large-scale science applications. Apart from theoretical approaches such as Integer Linear Programming, Tabu Search and Genetic Algorithm heuristics, we utilized practical data from the ESnet OSCARS project (from our DOE lab partners) to conduct realistic simulations of our approaches. We have disseminated our work through conference paper presentations, journal papers and a book chapter. In this project we addressed the problem of scheduling of lightpaths over optical wavelength division multiplexed (WDM) networks, publishing several conference and journal papers on this topic. We also addressed the problems of joint allocation of computing, storage and networking resources in Grid/Cloud networks and proposed energy-efficient mechanisms for operating optical WDM networks.

  14. An improved filtering algorithm for big read datasets and its application to single-cell assembly.

    PubMed

    Wedemeyer, Axel; Kliemann, Lasse; Srivastav, Anand; Schielke, Christian; Reusch, Thorsten B; Rosenstiel, Philip

    2017-07-03

    For single-cell or metagenomic sequencing projects, it is necessary to sequence with a very high mean coverage in order to make sure that all parts of the sample DNA get covered by the reads produced. This leads to huge datasets with lots of redundant data. A filtering of this data prior to assembly is advisable. Brown et al. (2012) presented the algorithm Diginorm for this purpose, which filters reads based on the abundance of their k-mers. We present Bignorm, a faster and quality-conscious read filtering algorithm. An important new algorithmic feature is the use of phred quality scores together with a detailed analysis of the k-mer counts to decide which reads to keep. We qualify and recommend parameters for our new read filtering algorithm. Guided by these parameters, we remove a median of 97.15% of the reads while keeping the mean phred score of the filtered dataset high. Using the SPAdes assembler, we produce assemblies of high quality from these filtered datasets in a fraction of the time needed for an assembly from the datasets filtered with Diginorm. We conclude that read filtering is a practical and efficient method for reducing read data and for speeding up the assembly process. This applies not only to single cell assembly, as shown in this paper, but also to other projects with high mean coverage datasets like metagenomic sequencing projects. Our Bignorm algorithm allows assemblies of competitive quality in comparison to Diginorm, while being much faster. Bignorm is available for download at https://git.informatik.uni-kiel.de/axw/Bignorm .
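The core abundance-filtering idea (shared by Diginorm and Bignorm) can be sketched in a streaming loop: discard a read whose k-mers have already been seen often enough, since it adds mostly redundant coverage. The toy reads, k, and cutoff C below are illustrative assumptions, and this sketch omits the phred-score weighting that distinguishes Bignorm.

```python
from collections import Counter

def filter_reads(reads, k=5, C=3):
    """Keep a read only if the median count of its k-mers so far is below C."""
    counts, kept = Counter(), []
    for read in reads:
        kms = [read[i:i + k] for i in range(len(read) - k + 1)]
        med = sorted(counts[km] for km in kms)[len(kms) // 2]
        if med < C:                # read still contributes new coverage
            kept.append(read)
            counts.update(kms)     # only kept reads update the counts
    return kept

# 10 copies of a redundant read plus one novel read: the redundant read stops
# being kept once its k-mer coverage saturates, the novel read is kept.
reads = ["ACGTACGTAC"] * 10 + ["TTGGCCAATT"]
kept = filter_reads(reads)
print(len(kept))
```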

  15. A new linear back projection algorithm to electrical tomography based on measuring data decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang

    2015-12-01

    As an advanced measurement technique that is non-radiative, non-intrusive, rapid in response, and low in cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. But the LBP algorithm has low spatial resolution due to the natural ‘soft field’ effect and ‘ill-posed solution’ problems; thus its applicable range is greatly limited. In this paper, an original data decomposition method is proposed, in which each ET measurement is decomposed into two independent new measurements based on the positive and negative sensing areas of that measurement. Consequently, the total number of measurements is doubled, effectively reducing the ‘ill-posed solution’ problem. On the other hand, an index to measure the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish between the different contributions of various units (pixels) for any ET measurement, and can efficiently reduce the ‘soft field’ effect in the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and improved spatial resolution.
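Classical LBP itself is a one-step linear operation: back-project the measurement changes through the transposed sensitivity matrix and normalize per pixel. The sketch below uses a random sensitivity matrix purely for illustration (in practice S comes from a forward model of the sensor geometry) and does not include the paper's data-decomposition step.

```python
import numpy as np

rng = np.random.default_rng(5)
n_meas, n_pix = 28, 100
# Sensitivity matrix S: how strongly each pixel influences each measurement.
# Random positive entries here; a real S comes from the electrode forward model.
S = np.abs(rng.normal(size=(n_meas, n_pix)))

true_img = np.zeros(n_pix)
true_img[37] = 1.0                   # one perturbed pixel in the domain
lam = S @ true_img                   # linearized change in the measurements

# LBP: back-project the measurements through S^T, normalized per pixel.
img = (S.T @ lam) / (S.T @ np.ones(n_meas))

# The perturbed pixel stands out above the background, though blurred --
# the low-spatial-resolution behavior the abstract describes.
print(img[37] > img.mean())  # True
```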

  16. Rapid, topology-based particle tracking for high-resolution measurements of large complex 3D motion fields.

    PubMed

    Patel, Mohak; Leggett, Susan E; Landauer, Alexander K; Wong, Ian Y; Franck, Christian

    2018-04-03

    Spatiotemporal tracking of tracer particles or objects of interest can reveal localized behaviors in biological and physical systems. However, existing tracking algorithms are most effective for relatively low numbers of particles that undergo displacements smaller than their typical interparticle separation distance. Here, we demonstrate a single particle tracking algorithm to reconstruct large complex motion fields with large particle numbers, orders of magnitude larger than previously tractably resolvable, thus opening the door for attaining very high Nyquist spatial frequency motion recovery in the images. Our key innovations are feature vectors that encode nearest neighbor positions, a rigorous outlier removal scheme, and an iterative deformation warping scheme. We test this technique for its accuracy and computational efficacy using synthetically and experimentally generated 3D particle images, including non-affine deformation fields in soft materials, complex fluid flows, and cell-generated deformations. We augment this algorithm with additional particle information (e.g., color, size, or shape) to further enhance tracking accuracy for high gradient and large displacement fields. These applications demonstrate that this versatile technique can rapidly track unprecedented numbers of particles to resolve large and complex motion fields in 2D and 3D images, particularly when spatial correlations exist.
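The "feature vectors that encode nearest neighbor positions" idea can be sketched as topology-based matching: describe each particle by the sorted distances to its nearest neighbors (translation-invariant), then match particles across frames by descriptor similarity instead of absolute position. The point counts, neighbor count, and pure-translation test case below are illustrative assumptions; the real algorithm adds outlier removal and iterative deformation warping.

```python
import numpy as np

rng = np.random.default_rng(6)
pts0 = rng.random((30, 2)) * 100
pts1 = pts0 + np.array([15.0, -8.0])   # displacement larger than typical spacing

def descriptors(pts, n_neigh=4):
    """Sorted distances to the n_neigh nearest neighbors of each particle."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, 1:1 + n_neigh]         # drop the zero self-distance

D0, D1 = descriptors(pts0), descriptors(pts1)
# Match each frame-0 particle to the frame-1 particle with the closest descriptor.
match = np.argmin(np.linalg.norm(D0[:, None, :] - D1[None, :, :], axis=-1), axis=1)

# Nearest-POSITION matching would fail for this large uniform displacement,
# but the neighborhood topology survives it:
print(np.all(match == np.arange(30)))  # True
```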

  17. Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci

    NASA Astrophysics Data System (ADS)

    Kosmale, Miriam; Popp, Thomas

    2016-04-01

    Ensemble techniques are widely used in the modelling community, combining different modelling results in order to reduce uncertainties. This approach can also be adapted to satellite measurements. Aerosol_cci is an ESA-funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as it makes sense, but remain essentially different. Datasets are compared with ground-based measurements and with each other. Three AATSR algorithms (the Swansea university aerosol retrieval, the ADV aerosol retrieval by FMI, and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records within this project. Each of these algorithms also provides uncertainty information on the pixel level. In the presented work, an ensemble of the three AATSR algorithms is formed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per gridbox. Validation against ground-based AERONET measurements shows that the ensemble retains a good correlation compared to the single algorithms. Annual mean maps show the global aerosol distribution based on a combination of the three aerosol algorithms. In addition, the pixel-level uncertainties of each algorithm are used to weight the contributions, in order to reduce the uncertainty of the ensemble. Results of different versions of the ensembles for aerosol optical depth will be presented and discussed, validated against ground-based AERONET measurements. A higher spatial coverage on a daily basis allows better results in annual mean maps. The benefit of using pixel-level uncertainties is analysed.
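Weighting the contributions by pixel-level uncertainty typically means an inverse-variance weighted mean, whose combined uncertainty is lower than that of any single member. The numbers below are illustrative, not aerosol_cci values; whether the three retrievals' errors are truly independent (which this formula assumes) is a separate question.

```python
import numpy as np

# One pixel: AOD retrieved by three algorithms with reported 1-sigma uncertainties.
aod = np.array([0.21, 0.25, 0.18])
sigma = np.array([0.04, 0.06, 0.05])

# Inverse-variance weighting: more certain retrievals contribute more.
w = 1.0 / sigma ** 2
ensemble = np.sum(w * aod) / np.sum(w)
ensemble_sigma = np.sqrt(1.0 / np.sum(w))

# Assuming independent errors, the ensemble is less uncertain than any member.
print(ensemble_sigma < sigma.min())  # True
```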

  18. A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images

    PubMed Central

    Xu, Songhua; Krauthammer, Michael

    2010-01-01

    There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper’s key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. We demonstrate that a projection histogram-based text detection approach is well suited for text detection in biomedical images, achieving an F score of 0.60, and performs better than comparable approaches. Further, we show that iterative application of the algorithm boosts overall detection performance. A C++ implementation of our algorithm is freely available through email request for academic use. PMID:20887803
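A projection histogram for text detection is simply the sum of "ink" pixels along each row (or column): contiguous runs of high row-sums are candidate text lines, which can then be split recursively by column histograms. The synthetic two-band image and the 10% threshold below are illustrative assumptions, not the paper's tuned pipeline.

```python
import numpy as np

# Synthetic binarized image with two dark horizontal "text line" bands.
img = np.zeros((60, 80), dtype=int)
img[10:16, 5:70] = 1
img[30:36, 5:60] = 1

row_hist = img.sum(axis=1)                 # horizontal projection histogram
is_text_row = row_hist > 0.1 * row_hist.max()

# Extract [start, end) bands of consecutive text rows from the boolean profile.
edges = np.flatnonzero(
    np.diff(np.concatenate(([0], is_text_row.view(np.int8), [0]))))
bands = [(int(a), int(b)) for a, b in zip(edges[::2], edges[1::2])]
print(bands)  # [(10, 16), (30, 36)]
```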

  19. Projections for fast protein structure retrieval

    PubMed Central

    Bhattacharya, Sourangshu; Bhattacharyya, Chiranjib; Chandra, Nagasuma R

    2006-01-01

    Background In recent times, there has been an exponential rise in the number of protein structures in databases such as the PDB. Consequently, the design of fast algorithms capable of querying such databases is becoming an increasingly important research issue. This paper reports an algorithm, motivated by spectral graph matching techniques, for retrieving protein structures similar to a query structure from a large protein structure database. Each protein structure is specified by the 3D coordinates of the residues of the protein. The algorithm is based on a novel characterization of the residues, called projections, leading to a similarity measure between the residues of two proteins. This measure is exploited to efficiently compute the optimal equivalences. Results Experimental results show that the current algorithm outperforms the state of the art on benchmark datasets in terms of speed without losing accuracy. Search results on the SCOP 95% nonredundant database, for fold similarity with 5 proteins from different SCOP classes, show that the current method performs competitively with the standard algorithm CE. The algorithm is also capable of detecting non-topological similarities between two proteins, which is not possible with most state-of-the-art tools like Dali. PMID:17254310

  20. Evaluation of the influence of dominance rules for the assembly line design problem under consideration of product design alternatives

    NASA Astrophysics Data System (ADS)

    Oesterle, Jonathan; Lionel, Amodeo

    2018-06-01

    The current competitive situation increases the importance of realistically estimating product costs during the early phases of product and assembly line planning projects. In this article, several multi-objective algorithms using different dominance rules are proposed to solve the problem associated with the selection of the most effective combination of products and assembly lines. The list of developed algorithms includes variants of ant colony algorithms, evolutionary algorithms and imperialist competitive algorithms. The performance of each algorithm and dominance rule is analysed by five multi-objective quality indicators and fifty problem instances. The algorithms and dominance rules are ranked using a non-parametric statistical test.
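The baseline rule underlying all such multi-objective comparisons is Pareto dominance: for minimization, solution a dominates b if a is no worse in every objective and strictly better in at least one. The objective vectors below (e.g. [cost, cycle time]) are illustrative, not instances from the article.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a <= b everywhere, a < b somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

solutions = [[10, 4], [8, 6], [12, 3], [9, 5], [11, 5]]
print(pareto_front(solutions))  # [[10, 4], [8, 6], [12, 3], [9, 5]]
```

Relaxed or strengthened variants of this rule (epsilon-dominance, weighted dominance, etc.) change which solutions survive, which is precisely why the article evaluates different dominance rules.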
